ε-strict subdifferentials of set-valued maps and optimality conditions

Nonlinear Analysis 75 (2012) 3761–3775

Contents lists available at SciVerse ScienceDirect

Nonlinear Analysis journal homepage: www.elsevier.com/locate/na

ε-strict subdifferentials of set-valued maps and optimality conditions✩

Zhi-Ang Zhou a, Xin-Min Yang b, Jian-Wen Peng b,∗

a Department of Applied Mathematics, Chongqing University of Technology, Chongqing 400054, PR China
b School of Mathematics, Chongqing Normal University, Chongqing 400047, PR China

Article history: Received 18 May 2011; Accepted 31 January 2012; Communicated by Enzo Mitidieri

MSC: 90C26; 90C29; 90C30

Abstract. In this paper, first, a new notion of ε-strict subdifferentials of set-valued maps is introduced in a locally convex space. Second, an existence result for the ε-strict subdifferential of a set-valued map and an equivalent characterization of the ε-strict subgradient of a set-valued map are presented. Third, a generalized Moreau–Rockafellar theorem for set-valued maps is obtained. Finally, optimality conditions for vector optimization problems with set-valued maps are established in terms of the ε-strict subdifferential. © 2012 Elsevier Ltd. All rights reserved.

Keywords: Set-valued map; ε-strict subdifferential; Optimality conditions; Near-subconvexlikeness

1. Introduction

In vector optimization theory, the efficient point of a set plays an important role. For example, by scalarization, we can relate the efficient points of a set to the optimal points of the scalarized set. However, some authors found that the sets of efficient points of some sets are fairly large, which in turn leads to some undesirable properties. To overcome this flaw, building on the notion of an efficient point, some authors [1–4] introduced various kinds of properly efficient points. Among these, the super efficient point not only refines the efficient point and the other kinds of properly efficient points but can also be scalarized by a strictly positive functional. However, as Example 4 in [4] shows, it is very difficult to guarantee the existence of a super efficient point of a set even if the set is compact and convex. This fact implies that the existence conditions for a super efficient point are very strong. To remedy this, Cheng and Fu [5] introduced the notion of the strictly efficient point, which retains the main properties of the super efficient point while requiring weaker existence conditions. On the other hand, from a computational point of view, the algorithms used in the literature to solve optimization problems often produce only approximate solutions. Moreover, it may happen that a (properly) efficient point of a set does not exist. Therefore, it is necessary to weaken the notion of the (properly) efficient point. Loridan [6] introduced the ε-efficient point (i.e., the approximately efficient point) in finite dimensional spaces, and Vályi [7] extended

✩ This work was supported by the National Natural Science Foundation of China (10831009 and 11171363), the Natural Science Foundation of Chongqing (CSTC 2009BB8240, CSTC 2011jjA00022), the Special Fund of Chongqing Key Laboratory (CSTC 2011KLORSE01) and the Science and Technology Project of Chongqing Municipal Education Commission (KJ110827).
∗ Corresponding author. Tel.: +86 2165642118. E-mail address: [email protected] (J.-W. Peng).

0362-546X/$ – see front matter © 2012 Elsevier Ltd. All rights reserved. doi:10.1016/j.na.2012.01.030


the ε-efficient point from finite dimensional spaces to infinite dimensional spaces and introduced the ε-weakly efficient point. Li et al. [8] and Tuan [9] introduced the ε-strictly efficient point and the ε-Benson properly efficient point in locally convex spaces, respectively. Recently, with the development of set-valued analysis, many authors have become increasingly interested in the (proper) efficiency of set-valued optimization problems. Under assumptions of generalized cone convexity of the set-valued maps, the (proper) efficiency of set-valued optimization problems has been investigated in the literature (see Refs. [10–12] and the references therein). At the same time, the (proper) subdifferential of a set-valued map, induced by the (proper) subgradients of the set-valued map, has been introduced. For example, based on Sawaragi and Tanino [13], Taa [14] introduced the weak subdifferential of set-valued maps, and Li and Xu [15] and Yu and Liu [16] introduced the strict subdifferential and the Henig subdifferential of set-valued maps, respectively. However, the (proper) subdifferential of a set-valued map may be empty (see Example 4.1 in [17]). To avoid this, some authors introduced various kinds of approximate (proper) subdifferentials. For example, Taa [17] and Tuan [9] introduced the ε-weak subdifferential and the ε-Benson subdifferential of set-valued maps, respectively, which generalize the weak subdifferential and the Benson subdifferential of set-valued maps, and established a series of optimality conditions characterized by the ε-weak subdifferential and the ε-Benson subdifferential. Bao and Mordukhovich [18–22] also obtained optimality conditions for set-valued optimization problems by means of normal and mixed subdifferentials of set-valued mappings.
Inspired by the above results, we introduce a new notion of ε-strict subdifferential, which generalizes the strict subdifferential introduced in [15]. This paper is organized as follows. In Section 2, we give some preliminaries, including the new notion of the ε-strict subdifferential of a set-valued map. In Section 3, an existence result for the ε-strict subdifferential of a set-valued map and an equivalent characterization of the ε-strict subgradient of a set-valued map are presented. A generalized Moreau–Rockafellar theorem in the sense of the ε-strict subdifferential is also obtained. In Section 4, optimality conditions for vector optimization problems with set-valued maps are established in terms of the ε-strict subdifferential. Our results improve some known results in the literature.

2. Preliminaries

Let X, Y and Z be three locally convex spaces, and let 0 stand for the zero element of every space. Let K be a nonempty subset of Y. The cone generated by K is defined as coneK := {λk | λ ≥ 0, k ∈ K}. A cone K ⊆ Y is said to be pointed if K ∩ (−K) = {0}. Let Y∗ and Z∗ stand for the topological dual spaces of Y and Z, respectively. Let C and D be pointed closed convex cones with nonempty interior in Y and Z, respectively. The topological dual cone C+ and the strict topological dual cone C+i of C are defined as

C+ := {y∗ ∈ Y∗ | ⟨y, y∗⟩ ≥ 0, ∀y ∈ C}

and

C+i := {y∗ ∈ Y∗ | ⟨y, y∗⟩ > 0, ∀y ∈ C \ {0}},

where ⟨y, y∗⟩ denotes the value of the continuous linear functional y∗ at the point y. The meanings of D+ and D+i are similar to those of C+ and C+i. Let K be a nonempty subset of Y. We denote the closure and interior of K by clK and intK, respectively. Let L(X, Y) stand for the set of all continuous linear operators from X to Y. The meanings of L(X, Z) and L(Z, Y) are similar to that of L(X, Y). Write L+(D, C) := {T ∈ L(Z, Y) | T(D) ⊆ C}. Let N(0) be the set of all neighborhoods of 0 in Y.
Write R := ]−∞, +∞[ and R+ := [0, +∞[.

Definition 2.1. A convex subset B of C is said to be a base of C if C = coneB and 0 ∉ clB. In the sequel, B denotes a base of C.

Remark 2.1. If B is a bounded base of C, then there exists ϕ ∈ Y∗ \ {0} such that r := inf{⟨b, ϕ⟩ | b ∈ B} > 0.

Write Bst := {y∗ ∈ Y∗ | there exists t > 0 such that ⟨b, y∗⟩ ≥ t for all b ∈ B}.

Lemma 2.1 ([5]). Let y∗ ∈ Y∗. Then y∗ ∈ Bst if and only if there exists a convex neighborhood U of 0 in Y such that ⟨b − u, y∗⟩ ≥ 0 for all b ∈ B and all u ∈ U.

Write KU(B) := cone(B + U), where U is an open convex symmetric neighborhood of 0 in Y. Clearly, KU(B) is a convex cone in Y.

Definition 2.2. Let M be a nonempty subset of Y. ȳ ∈ M is called an efficient point (resp. a weakly efficient point) of M with respect to C, written ȳ ∈ E(M, C) (resp. ȳ ∈ WE(M, C)), if

(M − ȳ) ∩ (−C \ {0}) = ∅ (resp. (M − ȳ) ∩ (−intC) = ∅).
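For a concrete feel for Remark 2.1 and the set Bst above, consider the assumed finite-dimensional illustration (not part of the original paper) Y = R², C = R²₊, with base B the unit simplex. The functional ϕ = (1, 1) realizes the positive infimum of Remark 2.1, and any functional with strictly positive components lies in Bst, while one with a negative component does not. A minimal numerical sketch:

```python
# Assumed illustration: Y = R^2, C = R^2_+,
# base B = {(t, 1 - t) | 0 <= t <= 1} (the unit simplex).

def inf_over_base(ystar, n=1001):
    """Infimum of <b, y*> over a dense sample of the base B."""
    return min(ystar[0] * t + ystar[1] * (1 - t)
               for t in (i / (n - 1) for i in range(n)))

# Remark 2.1: phi = (1, 1) gives r = inf <b, phi> = 1 > 0.
print(inf_over_base((1.0, 1.0)))   # approximately 1.0

# B^st membership: y* = (2, 3) admits t = 2 > 0 with <b, y*> >= t on B,
# while y* = (1, -1) fails (it is negative at b = (0, 1)), so it is not in B^st.
print(inf_over_base((2.0, 3.0)))   # approximately 2.0
print(inf_over_base((1.0, -1.0)))  # negative
```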


Definition 2.3. Let M be a nonempty subset of Y, and let ε ∈ C. ȳ ∈ M is called an ε-efficient point (resp. an ε-weakly efficient point) of M with respect to C, written ȳ ∈ ε − E(M, C) (resp. ȳ ∈ ε − WE(M, C)), if

(M + ε − ȳ) ∩ (−C \ {0}) = ∅ (resp. (M + ε − ȳ) ∩ (−intC) = ∅).

Definition 2.4 ([5]). Let M be a nonempty subset of Y. ȳ ∈ M is called a strictly efficient point of M with respect to B, written ȳ ∈ FE(M, B), if there exists U ∈ N(0) such that

cl(cone(M − ȳ)) ∩ (U − B) = ∅.  (2.1)

Definition 2.5 ([8]). Let M be a nonempty subset of Y, and let ε ∈ C. ȳ ∈ M is called an ε-strictly efficient point of M with respect to B, written ȳ ∈ ε − FE(M, B), if there exists U ∈ N(0) such that

cl(cone(M + ε − ȳ)) ∩ (U − B) = ∅.  (2.2)

Remark 2.2. Conditions (2.1) and (2.2) are equivalent to cone(M − ȳ) ∩ (U − B) = ∅ and cone(M + ε − ȳ) ∩ (U − B) = ∅, respectively. When necessary, we may assume that U is an open convex symmetric neighborhood of 0 in Y.

Remark 2.3. By Definitions 2.2 and 2.4, the following inclusions hold: FE(M, B) ⊆ E(M, C) ⊆ WE(M, C). However, the following examples show that the converse inclusions do not hold in general.

Example 2.1. Let Y = R², M = {(x, y) | x ≠ 0, y > 0} ∪ {(x, y) | x = 0, y ≥ 0} ⊆ R², C = {(x, y) | x ≥ 0, y ≥ 0}, and B = {(x, y) | x + y = 1, x ≥ 0, y ≥ 0}. Clearly, (0, 0) ∈ E(M, C) and (0, 0) ∉ FE(M, B).

Example 2.2. Let Y = R², M = {(x, y) | x ∈ R, y ≥ 0} ⊆ R², and C = {(x, y) | x ≥ 0, y ≥ 0}. Clearly, (0, 0) ∈ WE(M, C) and (0, 0) ∉ E(M, C).

Remark 2.4. By Definitions 2.3–2.5, the following inclusions hold: FE(M, B) ⊆ ε − FE(M, B) ⊆ ε − E(M, C) ⊆ ε − WE(M, C). However, the following examples show that the converse inclusions do not hold in general.

Example 2.3. Let Y = R², M = {(x, y) | x ≥ −2, y ≥ 0} ⊆ R², C = {(x, y) | x ≥ 0, y ≥ 0}, B = {(x, y) | x + y = 1, x ≥ 0, y ≥ 0}, and ε = (1, 0). Clearly, (−1, 0) ∈ ε − FE(M, B) and (−1, 0) ∉ FE(M, B).

Example 2.4. In Example 2.1, let ε = (1, 0). Clearly, (0, 0) ∈ ε − E(M, C) and (0, 0) ∉ ε − FE(M, B). In Example 2.2, let ε = (1, 0). Clearly, (0, 0) ∈ ε − WE(M, C) and (0, 0) ∉ ε − E(M, C).

Let F : X ⇒ 2^Y be a set-valued map on X. In the sequel, we denote the domain domF, the graph grF and the epigraph epiF of F, respectively, by

domF := {x ∈ X | F(x) ≠ ∅},  grF := {(x, y) ∈ X × Y | y ∈ F(x)},  epiF := {(x, y) ∈ X × Y | y ∈ F(x) + C}.

Let A be a nonempty subset of X. Write F(A) := ⋃_{x∈A} F(x).
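The claims of Example 2.2 can be checked on a finite grid (an added numerical sketch, not part of the original paper): (0, 0) is weakly efficient because no point of M lies in −intC, yet it is not efficient because a nonzero point of M, such as (−1, 0), lies in −C.

```python
# Finite sanity check of Example 2.2: M = {(x, y) | y >= 0}, C = R^2_+.

def in_M(p):
    return p[1] >= 0

def in_minus_C_nonzero(p):
    return p[0] <= 0 and p[1] <= 0 and p != (0.0, 0.0)

def in_minus_intC(p):
    return p[0] < 0 and p[1] < 0

grid = [(x / 10.0, y / 10.0) for x in range(-30, 31) for y in range(-30, 31)]
M = [p for p in grid if in_M(p)]

# (0,0) is not efficient: a witness such as (-1, 0) lies in M ∩ (-C \ {0}).
witnesses = [p for p in M if in_minus_C_nonzero(p)]
print(len(witnesses) > 0)          # True

# (0,0) is weakly efficient: no sampled point of M lies in -int C.
violations = [p for p in M if in_minus_intC(p)]
print(len(violations) == 0)        # True
```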

Definition 2.6 ([13]). Let A be a nonempty subset of X, and let F : X ⇒ 2^Y be a set-valued map on A. T ∈ L(X, Y) is called a subgradient (resp. a weak subgradient) of F at (x̄, ȳ) ∈ grF if

ȳ − T(x̄) ∈ E(⋃_{x∈A}(F(x) − T(x)), C)  (resp. ȳ − T(x̄) ∈ WE(⋃_{x∈A}(F(x) − T(x)), C)).


The set of all subgradients (resp. weak subgradients) of F at (x̄, ȳ), denoted by

∂^C F(x̄; ȳ) := {T ∈ L(X, Y) | ȳ − T(x̄) ∈ E(⋃_{x∈A}(F(x) − T(x)), C)}

(resp. ∂^C_W F(x̄; ȳ) := {T ∈ L(X, Y) | ȳ − T(x̄) ∈ WE(⋃_{x∈A}(F(x) − T(x)), C)}),

is called the subdifferential (resp. weak subdifferential).

Definition 2.7 ([15]). Let A be a nonempty subset of X, and let F : X ⇒ 2^Y be a set-valued map on A. T ∈ L(X, Y) is called a strict subgradient of F at (x̄, ȳ) ∈ grF with respect to B if

ȳ − T(x̄) ∈ FE(⋃_{x∈A}(F(x) − T(x)), B).

The set of all strict subgradients of F at (x̄, ȳ) with respect to B, denoted by

∂^B_{FE} F(x̄; ȳ) := {T ∈ L(X, Y) | ȳ − T(x̄) ∈ FE(⋃_{x∈A}(F(x) − T(x)), B)},

is called the strict subdifferential.

Definition 2.8 ([17]). Let A be a nonempty subset of X, and let F : X ⇒ 2^Y be a set-valued map on A. Let ε ∈ C. T ∈ L(X, Y) is called an ε-subgradient (resp. an ε-weak subgradient) of F at (x̄, ȳ) ∈ grF if

ȳ − T(x̄) ∈ ε − E(⋃_{x∈A}(F(x) − T(x)), C)  (resp. ȳ − T(x̄) ∈ ε − WE(⋃_{x∈A}(F(x) − T(x)), C)).

The set of all ε-subgradients (resp. ε-weak subgradients) of F at (x̄, ȳ), denoted by

∂^C_ε F(x̄; ȳ) := {T ∈ L(X, Y) | ȳ − T(x̄) ∈ ε − E(⋃_{x∈A}(F(x) − T(x)), C)}

(resp. ∂^C_{εW} F(x̄; ȳ) := {T ∈ L(X, Y) | ȳ − T(x̄) ∈ ε − WE(⋃_{x∈A}(F(x) − T(x)), C)}),

is called the ε-subdifferential (resp. ε-weak subdifferential). If F : X → R ∪ {+∞} is a real-valued function on A and ε ≥ 0, then the ε-subdifferential in Definition 2.8 reduces to

∂_ε F(x̄) = {x∗ ∈ X∗ | ⟨x − x̄, x∗⟩ − ε ≤ F(x) − F(x̄), ∀x ∈ A}.

Now, we introduce a new notion of ε-strict subdifferential, which generalizes the strict subdifferential.

Definition 2.9. Let A be a nonempty subset of X, and let F : X ⇒ 2^Y be a set-valued map on A. Let ε ∈ C. T ∈ L(X, Y) is called an ε-strict subgradient of F at (x̄, ȳ) ∈ grF with respect to B if

ȳ − T(x̄) ∈ ε − FE(⋃_{x∈A}(F(x) − T(x)), B).

The set of all ε-strict subgradients of F at (x̄, ȳ) with respect to B, denoted by

∂^B_{εFE} F(x̄; ȳ) := {T ∈ L(X, Y) | ȳ − T(x̄) ∈ ε − FE(⋃_{x∈A}(F(x) − T(x)), B)},

is called the ε-strict subdifferential of F at (x̄, ȳ) with respect to B.

Remark 2.5. Clearly, if ȳ ∈ F(x̄) ∩ ε − FE(⋃_{x∈A} F(x), B), then 0 ∈ ∂^B_{εFE} F(x̄; ȳ).
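The scalar reduction of the ε-subdifferential in Definition 2.8 can be made concrete (an added numerical sketch with assumed data, not from the paper): for F(x) = x² on A = R with x̄ = 0, the defining inequality ⟨x, x∗⟩ − ε ≤ x² holds for all x exactly when (x∗)² ≤ 4ε, so ∂_ε F(0) = [−2√ε, 2√ε].

```python
# Assumed illustrative data: F(x) = x^2 on A = R, xbar = 0, eps = 0.25.
# Then d_eps F(0) = [-2*sqrt(eps), 2*sqrt(eps)] = [-1, 1], since
# x*.x - eps <= x^2 for all x  <=>  x^2 - x*.x + eps >= 0  <=>  (x*)^2 <= 4*eps.
eps = 0.25

def is_eps_subgradient(xstar, grid):
    return all(xstar * x - eps <= x * x for x in grid)

grid = [i / 100.0 for i in range(-500, 501)]

print(is_eps_subgradient(1.0, grid))    # True: boundary slope 2*sqrt(eps) = 1
print(is_eps_subgradient(-1.0, grid))   # True by symmetry
print(is_eps_subgradient(1.1, grid))    # False: just outside the interval
```

At the boundary slope x∗ = 1 the inequality is tight at x = 1/2, matching the closed interval predicted by the formula.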


Remark 2.6. Remarks 2.3–2.4 and Definitions 2.6–2.9 show that

∂^B_{FE} F(x̄; ȳ) ⊆ ∂^C F(x̄; ȳ) ⊆ ∂^C_W F(x̄; ȳ)

and

∂^B_{FE} F(x̄; ȳ) ⊆ ∂^B_{εFE} F(x̄; ȳ) ⊆ ∂^C_ε F(x̄; ȳ) ⊆ ∂^C_{εW} F(x̄; ȳ).

Definition 2.10. Let A be a nonempty subset of X, and let ε ∈ C. The set

N^{εFE}_A(x̄) = {T ∈ L(X, Y) | ∃U ∈ N(0) such that cl(cone(T(A) − ε − T(x̄))) ∩ (B − U) = ∅}

(resp. N^ε_A(x̄) = {T ∈ L(X, Y) | (T(A) − ε − T(x̄)) ∩ C \ {0} = ∅})

is called the ε-strictly normal set (resp. the ε-normal set) of A at x̄ ∈ A with respect to B, where U ∈ N(0) is an open convex symmetric neighborhood in Y.

Let A be a nonempty subset of X. The generalized indicator function δA : X ⇒ 2^Y, which is a set-valued map, is defined as

δA(x) := {0} if x ∈ A, and δA(x) := ∅ if x ∉ A.

Remark 2.7. It is easy to check that ∂^B_{εFE} δA(x̄; 0) = N^{εFE}_A(x̄).

Definition 2.11. Let A be a nonempty subset of X. A set-valued map F : X ⇒ 2^Y is called C-convex on A if, for any x1, x2 ∈ A and any λ ∈ ]0, 1[,

λF(x1) + (1 − λ)F(x2) ⊆ F(λx1 + (1 − λ)x2) + C.

Remark 2.8. If A is a nonempty convex set in X and the set-valued map F is C-convex on A, then F(A) + C is a convex set.

Definition 2.12 ([23]). Let A be a nonempty subset of X. A set-valued map F : A ⇒ 2^Y is called nearly C-convexlike on A if cl(F(A) + C) is a convex set.

Remark 2.9. Let y0 ∈ Y. Since cl(F(A) + y0 + C) = cl(F(A) + C) + y0, it is easy to see that F is nearly C-convexlike on A if and only if F + y0 is nearly C-convexlike on A.

Definition 2.13 ([11]). Let A be a nonempty subset of X. A set-valued map F : X ⇒ 2^Y is called nearly C-subconvexlike on A if cl(cone(F(A) + C)) is a convex set.

Remark 2.10. Since cl(F(A) + C) = cl(F(A) + intC), Definition 2.12 is equivalent to Definition 2.3 in [17]. Example 3.1 in [11] shows that near C-subconvexlikeness generalizes near C-convexlikeness.

3. ε-strict subdifferentials of set-valued maps

In this section, we first give an existence theorem for the ε-strict subdifferential of a set-valued map. To reach this aim, we need the following definitions and lemmas.

Definition 3.1 ([13]). Let A be a nonempty subset of X, and let F : X ⇒ 2^Y be a set-valued map on A. F is said to be connected at x̄ ∈ A if there exist a neighborhood U of x̄ and a function f : A → Y such that f is continuous at x̄ and f(x) ∈ F(x) for all x ∈ U.

Definition 3.2. Let A be a nonempty subset of X, and let F : X ⇒ 2^Y be a set-valued map on A. F is said to be weakly lower semi-continuous at x̄ ∈ A if there exists ȳ ∈ F(x̄) such that, for any neighborhood U of ȳ, there exists a neighborhood V of x̄ such that

F(x) ∩ U ≠ ∅, ∀x ∈ V.

Remark 3.1. If F is connected at x̄ ∈ A, then F is weakly lower semi-continuous at x̄. However, the converse is not true in general. Indeed, since F is connected at x̄ ∈ A, there exist a neighborhood U1 of x̄ and a function f : A → Y such that f is continuous at x̄ and

f(x) ∈ F(x), ∀x ∈ U1.  (3.1)


Let ȳ := f(x̄). Clearly, ȳ ∈ F(x̄). Since f is continuous at x̄, for any neighborhood V of ȳ, there exists a neighborhood U2 of x̄ such that

f(U2) ⊆ V.  (3.2)

Take

U := U1 ∩ U2.  (3.3)

By (3.1)–(3.3), we have f(x) ∈ F(x) ∩ V for all x ∈ U, i.e.,

F(x) ∩ V ≠ ∅, ∀x ∈ U.

Therefore, F is weakly lower semi-continuous at x̄.

Lemma 3.1 ([16]). Let A be a nonempty subset of X, and let F : X ⇒ 2^Y be a set-valued map on A. Let x̄ ∈ domF. If F is weakly lower semi-continuous at x̄, then int(epiF) ≠ ∅.

By Lemma 3.1 in [14] and Lemma 3.1, we have the following lemma.

Lemma 3.2. Let F1 : X ⇒ 2^Y and F2 : X ⇒ 2^Y be two set-valued maps. Let A := {x ∈ X | F1(x) ≠ ∅ and F2(x) ≠ ∅}. Let F1 and F2 be C-convex on A. If F1 is weakly lower semi-continuous at x̄ ∈ intA, then int(epiF1) ∩ epiF2 ≠ ∅.

Now, we give a sufficient condition for the existence of an ε-strict subdifferential of a set-valued map.

Theorem 3.1. Let A be a nonempty subset of X, and let F : X ⇒ 2^Y be a C-convex set-valued map on A. Let ε ∈ C, and let B be a base of C. Suppose that the following conditions are satisfied:
(i) there exists x̄ ∈ domF such that ȳ ∈ ε − FE(F(x̄), B);
(ii) F is weakly lower semi-continuous at x̄.
Then ∂^B_{εFE} F(x̄; ȳ) ≠ ∅.

Proof. Since ȳ ∈ ε − FE(F(x̄), B), by Definition 2.5, there exists an open convex symmetric neighborhood U ∈ N(0) in Y such that

cone(F(x̄) + ε − ȳ) ∩ (U − B) = ∅.  (3.4)

Let

Ω1 := {(x, y) ∈ A × Y | y ∈ F(x) + ε + KU(B)},  Ω2 := {(x, y) ∈ A × Y | y ∈ F(x) + KU(B)}.

Clearly,

Ω1 = Ω2 + {(0, ε)}.  (3.5)

So,

intΩ1 = intΩ2 + {(0, ε)}.  (3.6)

It follows from (3.6) and Lemma 3.1 that intΩ1 ≠ ∅. We now show that Ω1 is a convex set. Since B is a base of C, for any c ∈ C \ {0}, there exist α > 0 and b ∈ B such that c = αb ∈ αB ⊆ αB − αU = α(B − U) ⊆ KU(B). Therefore, C ⊆ KU(B). Because F is C-convex on A, F is KU(B)-convex on A. Thus, by Definition 2.11, we easily see that Ω2 is a convex set. It follows from (3.5) that Ω1 is a convex set. We assert that (x̄, ȳ) ∉ intΩ1. Otherwise, there exists U1 ∈ N(0) in Y such that (x̄, ȳ + U1) ⊆ Ω1. Since KU(B) is a cone, we have

−d := λ(b − u) ∈ KU(B), ∀λ > 0, ∀b ∈ B, ∀u ∈ U.

For any λ > 0, when b ≠ u, we have −d ∈ KU(B) \ {0}. Since U1 is absorbent, d ∈ U1 when λ > 0 is sufficiently small. It follows from the definition of Ω1 that

ȳ + d ∈ F(x̄) + ε + KU(B).


Thus, there exist y1 ∈ F(x̄) and d1 ∈ KU(B) such that ȳ + d = y1 + ε + d1, which implies y1 + ε − ȳ = d − d1 ∈ −KU(B) \ {0} = cone(U − B) \ {0}. Accordingly, there exists λ′ > 0 such that (1/λ′)(y1 + ε − ȳ) ∈ U − B, which contradicts (3.4). Hence, (x̄, ȳ) ∉ intΩ1. By the separation theorem, there exists (x∗, y∗) ∈ X∗ × Y∗ with (x∗, y∗) ≠ (0, 0) such that

⟨x, x∗⟩ + ⟨y, y∗⟩ ≥ ⟨x̄, x∗⟩ + ⟨ȳ, y∗⟩, ∀x ∈ A, ∀y ∈ F(x) + ε + KU(B).  (3.7)

Letting x = x̄ and y = ȳ + ε + d2 in (3.7), we have

⟨ε + d2, y∗⟩ ≥ 0, ∀d2 ∈ KU(B).  (3.8)

We assert that

⟨d2, y∗⟩ ≥ 0, ∀d2 ∈ KU(B).  (3.9)

Otherwise, there exists d3 ∈ KU(B) such that ⟨d3, y∗⟩ < 0, which implies ⟨λd3, y∗⟩ < 0 for all λ > 0. Thus, when λ is sufficiently large, we have

⟨ε + λd3, y∗⟩ < 0,

which contradicts (3.8). Therefore, (3.9) holds. Next, we show that y∗ ∈ Bst. It follows from (3.9) that

⟨b − u, y∗⟩ ≥ 0, ∀b ∈ B, ∀u ∈ U.  (3.10)

By (3.10) and Lemma 2.1, y∗ ∈ Bst. Since y∗ ∈ Bst, we can find a point y2 ∈ C such that ⟨y2, y∗⟩ = 1. Define

T(x) = −⟨x, x∗⟩y2, ∀x ∈ X.  (3.11)

Clearly, T ∈ L(X, Y) and ⟨T(x), y∗⟩ = −⟨x, x∗⟩ for all x ∈ X. Because y∗ ∈ Bst, there exists t > 0 such that

⟨b, y∗⟩ ≥ t, ∀b ∈ B.  (3.12)

Let U2 := {u2 ∈ Y | |⟨u2, y∗⟩| < t/3}. Clearly, U2 is an open convex symmetric neighborhood of 0 in Y. By (3.12) and the definition of U2, we have

⟨u2 − b, y∗⟩ < 0, ∀u2 ∈ U2, ∀b ∈ B.  (3.13)

We assert that there exists U3 ∈ N(0) such that

cone(⋃_{x∈A}(F(x) − T(x)) + ε − ȳ + T(x̄)) ∩ (U3 − B) = ∅.  (3.14)

Otherwise, for the above U2 ∈ N(0), we have

cone(⋃_{x∈A}(F(x) − T(x)) + ε − ȳ + T(x̄)) ∩ (U2 − B) ≠ ∅.  (3.15)

By (3.13) and (3.15), there exist r′ > 0, x1 ∈ A and y3 ∈ F(x1) such that

r′⟨y3 − T(x1) + ε − ȳ + T(x̄), y∗⟩ < 0.  (3.16)

Since y3 + ε ∈ F(x1) + ε + KU(B), it follows from (3.7) and (3.11) that

r′⟨y3 − T(x1) + ε − ȳ + T(x̄), y∗⟩ = r′[(⟨x1, x∗⟩ + ⟨y3 + ε, y∗⟩) − (⟨x̄, x∗⟩ + ⟨ȳ, y∗⟩)] ≥ 0,

which contradicts (3.16). Therefore, (3.14) holds. This implies that

ȳ − T(x̄) ∈ ε − FE(⋃_{x∈A}(F(x) − T(x)), B).

Consequently, T ∈ ∂^B_{εFE} F(x̄; ȳ). Thus, we obtain that ∂^B_{εFE} F(x̄; ȳ) ≠ ∅. □


Remark 3.2. When ε = 0, Theorem 3.1 reduces to Theorem 3.1 in [15]. Moreover, Remark 3.1 shows that condition (ii) is weaker than the condition that F is connected at x̄.

Lemma 3.3. Let M be a nonempty subset of Y, and let ε ∈ C. Let B be a base of C. Then

ε − FE(M, B) ⊆ ε − FE(M + C, B).

Proof. Let m̄ ∈ ε − FE(M, B). Then there exists a neighborhood U ∈ N(0) such that

cone(M + ε − m̄) ∩ (U − B) = ∅.  (3.17)

Clearly, m̄ ∈ M ⊆ M + C. Therefore, we only need to show that

cone(M + C + ε − m̄) ∩ (U − B) = ∅.  (3.18)

In fact, if (3.18) does not hold, then there exist λ1 ≥ 0, m1 ∈ M, c ∈ C, u ∈ U and b1 ∈ B such that

λ1(m1 + c + ε − m̄) = u − b1.

Case one. If λ1 = 0, then 0 ∈ cone(M + ε − m̄) ∩ (U − B), which contradicts (3.17). Case two. If λ1 > 0, then λ1(m1 + ε − m̄) = u − b1 − λ1c. By (3.17), c ≠ 0. Since B is a base of C, there exist λ2 > 0 and b2 ∈ B such that c = λ2b2. Therefore,

λ1(m1 + ε − m̄) = u − b1 − λ1λ2b2 = (1 + λ1λ2)[(1/(1 + λ1λ2))u − ((1/(1 + λ1λ2))b1 + (λ1λ2/(1 + λ1λ2))b2)].  (3.19)

Since B is a convex set, it follows from (3.19) that

(λ1/(1 + λ1λ2))(m1 + ε − m̄) ∈ cone(M + ε − m̄) ∩ (U − B),

which contradicts (3.17). Thus, cases one and two show that (3.18) holds. □
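Lemma 3.3 can be illustrated in finite dimensions (an added numerical sketch with assumed data, not from the paper): take Y = R², C = R²₊, B the unit simplex, M = {(−2, 0)}, ε = (1, 0) and m̄ = (−2, 0). Then M + ε − m̄ = {(1, 0)} and M + C + ε − m̄ = (1, 0) + C, and both generated cones stay at distance at least √(1/2) from −B. Hence, for a small ball U, both intersections with U − B are empty, i.e., m̄ ∈ ε − FE(M, B) and m̄ ∈ ε − FE(M + C, B), as the lemma predicts.

```python
import math

# Sample both cones and verify they keep a positive distance from
# -B = {(-t, -(1 - t)) : t in [0, 1]}.

def dist_to_minus_B(p, n=501):
    return min(math.hypot(p[0] + t, p[1] + 1 - t)
               for t in (i / (n - 1) for i in range(n)))

# cone(M + eps - mbar) = cone({(1, 0)}) = the nonnegative x-axis.
ray = [(s / 10.0, 0.0) for s in range(0, 101)]

# cone(M + C + eps - mbar) = cone((1, 0) + R^2_+): sample lam * (1 + c1, c2).
cone2 = [(lam * (1.0 + c1), lam * c2)
         for lam in (l / 10.0 for l in range(0, 21))
         for c1 in (i / 5.0 for i in range(0, 11))
         for c2 in (j / 5.0 for j in range(0, 11))]

d1 = min(dist_to_minus_B(p) for p in ray)
d2 = min(dist_to_minus_B(p) for p in cone2)
print(d1 > 0.5 and d2 > 0.5)   # True: both cones miss U - B for small U
```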

Remark 3.3. The following example shows that the inclusion ε − FE(M + C, B) ⊆ ε − FE(M, B) does not hold in general.

Example 3.1. Let Y = R², M = {(−2, 0)} ⊆ R², C = {(x, y) | x ≥ 0, y ≥ 0}, B = {(x, y) | x + y = 1, x ≥ 0, y ≥ 0} and ε = (1, 0). Clearly, (−1, 0) ∈ ε − FE(M + C, B). However, (−1, 0) ∉ ε − FE(M, B).

Taa [17] and Tuan [9] studied equivalent characterizations of the ε-weak subgradient and the ε-Benson subgradient of set-valued maps, respectively. We now study the equivalent characterization of the ε-strict subgradient of a set-valued map.

Theorem 3.2. Let F : X ⇒ 2^Y be a C-convex set-valued map on X. Let (x̄, ȳ) ∈ grF and ε ∈ C. Then T ∈ ∂^B_{εFE} F(x̄; ȳ) if and only if there exists y∗ ∈ Bst such that

⟨y − ȳ + ε − T(x − x̄), y∗⟩ ≥ 0, ∀(x, y) ∈ grF.  (3.20)

Proof. Necessity. Since T ∈ ∂^B_{εFE} F(x̄; ȳ), we have ȳ − T(x̄) ∈ ε − FE(⋃_{x∈X}(F(x) − T(x)), B). It follows from Lemma 3.3 that

ȳ − T(x̄) ∈ ε − FE(⋃_{x∈X}(F(x) − T(x)) + C, B).

By Definition 2.5, there exists an open convex symmetric neighborhood U ∈ N(0) such that

cone(⋃_{x∈X}(F(x) − T(x)) + C + ε − ȳ + T(x̄)) ∩ (U − B) = ∅.

Since F is C-convex on X and T is a linear operator, by Remark 2.8, ⋃_{x∈X}(F(x) − T(x)) + C is a convex set. Therefore, cone(⋃_{x∈X}(F(x) − T(x)) + C + ε − ȳ + T(x̄)) is a convex set. Clearly, U − B is a convex set, and int(U − B) = U − B ≠ ∅. Thus, the conditions of the separation theorem are satisfied. So, there exists y∗ ∈ Y∗ \ {0} such that

λ⟨y − T(x) + c + ε − ȳ + T(x̄), y∗⟩ ≥ ⟨u − b, y∗⟩, ∀(x, y) ∈ grF, ∀λ ≥ 0, ∀c ∈ C, ∀u ∈ U, ∀b ∈ B.  (3.21)


Letting λ = 0 in (3.21), we have

⟨b − u, y∗⟩ ≥ 0, ∀u ∈ U, ∀b ∈ B.  (3.22)

It follows from (3.22) and Lemma 2.1 that y∗ ∈ Bst. Letting c = 0 in (3.21), we have

λ⟨y − T(x) + ε − ȳ + T(x̄), y∗⟩ ≥ ⟨u − b, y∗⟩, ∀(x, y) ∈ grF, ∀λ ≥ 0, ∀u ∈ U, ∀b ∈ B.  (3.23)

We assert that (3.20) holds. Otherwise, there exists (x1, y1) ∈ grF such that

⟨y1 − ȳ + ε − T(x1 − x̄), y∗⟩ < 0.

Letting λ → +∞, we obtain

λ⟨y1 − ȳ + ε − T(x1 − x̄), y∗⟩ → −∞.  (3.24)

It follows from (3.23) and (3.24) that

−∞ ≥ ⟨u − b, y∗⟩, ∀u ∈ U, ∀b ∈ B,

which is a contradiction. Therefore, (3.20) holds.

Sufficiency. Because y∗ ∈ Bst, there exists t > 0 such that

⟨b, y∗⟩ ≥ t, ∀b ∈ B.

Let V := {y ∈ Y | ⟨y, y∗⟩ < t}. Since y∗ is continuous at 0, there exists U ∈ N(0) in Y such that U ⊆ V. Clearly,

⟨u − b, y∗⟩ < 0, ∀u ∈ U, ∀b ∈ B.  (3.25)

We assert that T ∈ ∂^B_{εFE} F(x̄; ȳ). Otherwise, for the above U ∈ N(0), we have

cone(⋃_{x∈X}(F(x) − T(x)) + ε − ȳ + T(x̄)) ∩ (U − B) ≠ ∅.

So, there exist r′ > 0 and (x1, y1) ∈ grF such that

r′(y1 − ȳ + ε − T(x1 − x̄)) ∈ U − B.  (3.26)

By (3.25) and (3.26), we have r′⟨y1 − ȳ + ε − T(x1 − x̄), y∗⟩ < 0, i.e.,

⟨y1 − ȳ + ε − T(x1 − x̄), y∗⟩ < 0,

which contradicts (3.20). Therefore, T ∈ ∂^B_{εFE} F(x̄; ȳ). □

We close this section by giving a generalized Moreau–Rockafellar type theorem in the sense of the ε-strict subdifferential.

Lemma 3.4 ([17]). Let ε ∈ C and y∗ ∈ C+. Then

⟨C ∩ (ε − C), y∗⟩ = [0, ⟨ε, y∗⟩],

where ⟨C ∩ (ε − C), y∗⟩ = {⟨y, y∗⟩ | y ∈ C ∩ (ε − C)}.

Lemma 3.5 ([17]). Assume that ϕ, ψ : X → R ∪ {+∞} are convex and that there exists x0 ∈ domψ such that ϕ is continuous at x0. Then for every ε ≥ 0 and x ∈ X, we have

∂_ε(ϕ + ψ)(x) = ⋃_{ε1,ε2≥0, ε1+ε2≤ε} [∂_{ε1}ϕ(x) + ∂_{ε2}ψ(x)].
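A scalar sanity check of the sum rule in Lemma 3.5 (an added illustration with assumed data, not from the paper): take ϕ(x) = ψ(x) = x²/2 at x = 0 with ε = 1. Directly, ∂_ε(ϕ + ψ)(0) = ∂_ε(x²)(0) = [−2√ε, 2√ε], while the right-hand side is a union of intervals [−(√(2ε1) + √(2ε2)), √(2ε1) + √(2ε2)] over ε1 + ε2 ≤ ε, whose largest endpoint √(2ε1) + √(2ε2) is maximized at ε1 = ε2 = ε/2, giving the same value 2√ε.

```python
import math

# Assumed data: phi(x) = psi(x) = x^2 / 2, xbar = 0, eps = 1.
eps = 1.0

# Endpoint of the left-hand side: d_eps(x^2)(0) = [-2*sqrt(eps), 2*sqrt(eps)].
lhs = 2.0 * math.sqrt(eps)

# Endpoint of the right-hand side, maximized over a grid of splits e1 + e2 = eps
# (splits with e1 + e2 < eps only give smaller intervals).
rhs = max(math.sqrt(2 * e1) + math.sqrt(2 * (eps - e1))
          for e1 in (i * eps / 1000 for i in range(1001)))

print(abs(lhs - rhs) < 1e-9)   # True: both endpoints equal 2*sqrt(eps)
```

The maximum is attained at the symmetric split e1 = e2 = eps/2, which the grid contains exactly.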


Remark 3.4. In Lemma 3.5, if ε = 0, then we have

∂(ϕ + ψ)(x) = ∂ϕ(x) + ∂ψ(x), ∀x ∈ X.

Let A be a nonempty subset of X. The indicator function σA : X → R ∪ {+∞} is defined as follows:

σA(x) := 0 if x ∈ A, and σA(x) := +∞ if x ∉ A.

Definition 3.3 ([24,25]). Let M be a nonempty subset of Y, and let y∗ ∈ Y∗ be fixed. Let ε ∈ C. The ε-optimal solution set of M with respect to y∗ is defined as

ε − S(M, y∗) = {ȳ ∈ M | ⟨ȳ, y∗⟩ ≤ ⟨y, y∗⟩ + ⟨ε, y∗⟩, ∀y ∈ M}.

Lemma 3.6 ([24]). Let B be a bounded base of C, and let ε ∈ C. Let M be a nonempty convex subset of Y. Then ȳ ∈ ε − FE(M, B) if and only if there exists y∗ ∈ Bst such that ȳ ∈ ε − S(M, y∗).

Remark 3.5. From the proof of Lemma 3.6, we find that M need not be a convex set for the sufficiency part of Lemma 3.6.

Inspired by Theorem 4.2 in [17] and Theorem 5.1 in [9], we obtain the following Moreau–Rockafellar theorem in the sense of the ε-strict subdifferential of a set-valued map.

Theorem 3.3. Let B be a bounded base of C. Let F1 : X ⇒ 2^Y and F2 : X ⇒ 2^Y be two C-convex set-valued maps on X. Let A := {x ∈ X | F1(x) ≠ ∅ and F2(x) ≠ ∅}. Let F1 be weakly lower semi-continuous at x̄ ∈ intA. Then for ε ∈ C, ȳ1 ∈ F1(x̄), ȳ2 ∈ F2(x̄), we have

∂^B_{εFE}(F1 + F2)(x̄; ȳ1 + ȳ2) ⊆ ⋃_{ε1,ε2∈C, ε1+ε2∈ε+Y\intC} [∂^B_{ε1FE} F1(x̄; ȳ1) + ∂^B_{ε2FE} F2(x̄; ȳ2)].  (3.27)

Proof. Let T ∈ ∂^B_{εFE}(F1 + F2)(x̄; ȳ1 + ȳ2). Define two set-valued maps on X by

H1(x) = F1(x) − ȳ1 − T(x − x̄), ∀x ∈ X,

and

H2(x) = F2(x) − ȳ2, ∀x ∈ X.

Clearly, 0 ∈ H1(x̄) + H2(x̄). Since T ∈ ∂^B_{εFE}(F1 + F2)(x̄; ȳ1 + ȳ2), we have

ȳ1 + ȳ2 − T(x̄) ∈ ε − FE(⋃_{x∈X}(F1(x) + F2(x) − T(x)), B).

Thus, there exists an open convex symmetric neighborhood U of 0 in Y such that

cone(⋃_{x∈X}(F1(x) + F2(x) − T(x)) + ε − (ȳ1 + ȳ2 − T(x̄))) ∩ (U − B) = ∅,

i.e.,

cone(⋃_{x∈X}(F1(x) − ȳ1 − T(x − x̄) + F2(x) − ȳ2) + ε − 0) ∩ (U − B) = ∅,

which implies

0 ∈ ε − FE(⋃_{x∈X}(H1(x) + H2(x)), B).  (3.28)

It follows from (3.28) and Lemma 3.3 that

0 ∈ ε − FE(⋃_{x∈X}(H1(x) + H2(x)) + C, B).  (3.29)


Since F1 and F2 are C-convex on X, H1 + H2 is C-convex on X. It follows from Remark 2.8 that ⋃_{x∈X}(H1(x) + H2(x)) + C is a convex set. Thus, by (3.29) and Lemma 3.6, there exists y∗ ∈ Bst such that 0 ∈ ε − S(⋃_{x∈X}(H1(x) + H2(x)) + C, y∗). Therefore,

⟨y, y∗⟩ + ⟨ε, y∗⟩ ≥ 0, ∀(x, y) ∈ epi(H1 + H2).  (3.30)

Since C + C ⊆ C, we have

(x, y1 + y2 − (ȳ1 + ȳ2) − T(x − x̄)) ∈ epi(H1 + H2), ∀(x, y1) ∈ epiF1, ∀(x, y2) ∈ epiF2.  (3.31)

By (3.30) and (3.31), for any (x, y1) ∈ epiF1 and (x, y2) ∈ epiF2, we have

σepiF1(x, y1) + σepiF2(x, y2) + ⟨y1 + y2 − (ȳ1 + ȳ2) − T(x − x̄), y∗⟩ + ⟨ε, y∗⟩ ≥ 0.  (3.32)

Let

f1(x, y1, y2) = σepiF1(x, y1) + ⟨y1 − ȳ1, y∗⟩, ∀(x, y1) ∈ epiF1,

and

f2(x, y1, y2) = σepiF2(x, y2) + ⟨y2 − ȳ2 − T(x − x̄), y∗⟩, ∀(x, y2) ∈ epiF2.

Because F1 and F2 are C-convex on X, epiF1 and epiF2 are convex sets. Therefore, f1(x, y1, y2) and f2(x, y1, y2) are convex on X × Y × Y. Thus, by (3.32), we have

(0, 0, 0) ∈ ∂_{⟨ε,y∗⟩}(f1 + f2)(x̄, ȳ1, ȳ2).  (3.33)

Since F1 is weakly lower semi-continuous at x̄ ∈ intA, it follows from Lemma 3.2 that int(epiF1) ∩ epiF2 ≠ ∅. So, there exists (x0, y0) ∈ int(epiF1) ∩ epiF2 such that σepiF1(x, y1) is continuous at (x0, y0). Clearly, f1 is continuous at (x0, y0, y0) and (x0, y0, y0) ∈ domf2. By (3.33) and Lemma 3.5, there exist α1 ≥ 0 and α2 ≥ 0 with α1 + α2 ≤ ⟨ε, y∗⟩ such that

(0, 0, 0) ∈ ∂_{α1} f1(x̄, ȳ1, ȳ2) + ∂_{α2} f2(x̄, ȳ1, ȳ2).

Thus, there exists (x∗1, y∗1, y∗2) ∈ ∂_{α1} f1(x̄, ȳ1, ȳ2) such that

(−x∗1, −y∗1, −y∗2) ∈ ∂_{α2} f2(x̄, ȳ1, ȳ2).

Therefore, we have

⟨x − x̄, x∗1⟩ − ⟨y1 − ȳ1, y∗⟩ − α1 ≤ 0, ∀(x, y1) ∈ grF1,  (3.34)

and

−⟨x − x̄, x∗1⟩ − ⟨y2 − ȳ2, y∗⟩ + ⟨T(x − x̄), y∗⟩ − α2 ≤ 0, ∀(x, y2) ∈ grF2.  (3.35)

Since B is a base of C and y∗ ∈ Bst, we can find a point c ∈ C \ {0} such that ⟨c, y∗⟩ = 1. We define the operator L : X → Y by

L(x) = ⟨x, x∗1⟩c, ∀x ∈ X.  (3.36)

By (3.34)–(3.36), we have

⟨L(x − x̄), y∗⟩ − ⟨y1 − ȳ1, y∗⟩ − α1 ≤ 0, ∀(x, y1) ∈ grF1,  (3.37)

and

−⟨L(x − x̄), y∗⟩ − ⟨y2 − ȳ2, y∗⟩ + ⟨T(x − x̄), y∗⟩ − α2 ≤ 0, ∀(x, y2) ∈ grF2.  (3.38)

Since α1, α2 ∈ [0, ⟨ε, y∗⟩], by Lemma 3.4, there exist ε1, ε2 ∈ C ∩ (ε − C) such that ⟨ε1, y∗⟩ = α1 and ⟨ε2, y∗⟩ = α2. Thus, we have

⟨ε1 + ε2 − ε, y∗⟩ = α1 + α2 − ⟨ε, y∗⟩ ≤ 0,

which implies ε1 + ε2 − ε ∉ intC. Hence, ε1 + ε2 ∈ ε + Y \ intC. By (3.37) and (3.38), we have

⟨y1 − ȳ1 + ε1 − L(x − x̄), y∗⟩ ≥ 0, ∀(x, y1) ∈ grF1,  (3.39)

and

⟨y2 − ȳ2 + ε2 − (T − L)(x − x̄), y∗⟩ ≥ 0, ∀(x, y2) ∈ grF2.  (3.40)

By (3.39), (3.40) and Theorem 3.2, we obtain L ∈ ∂^B_{ε1FE} F1(x̄; ȳ1) and T − L ∈ ∂^B_{ε2FE} F2(x̄; ȳ2). Therefore, (3.27) holds. □

Remark 3.6. It follows from Lemma 3.2 that, if the conditions of Theorem 3.3 hold, then int(epiF1) ∩ epiF2 ≠ ∅.


Remark 3.7. If ε = 0, then Theorems 3.2 and 3.3 recover Theorems 3.2 and 3.3 in [15], respectively.

Remark 3.8. The proof method of Theorem 3.3 is similar to that of Theorem 4.2 in [17] but different from that of Theorem 5.1 in [9], which is based on a cone separation theorem.

4. Optimality conditions

First, we consider the following vector optimization problem:

(VP1)  min F(x)  s.t. x ∈ A,

where F : X ⇒ 2^Y is a set-valued map on X and A is a nonempty convex subset of X.

Lemma 4.1. Let y ∈ Y, and let C be a cone with nonempty interior in Y. Then (y + C) ∩ C ≠ ∅.

Proof. Since intC ≠ ∅, there exists c ∈ intC. Thus, there exists U ∈ N(0) such that

c + U ⊆ C.  (4.1)

Because U is absorbent, λy ∈ U when λ > 0 is sufficiently small. It follows from (4.1) that c + λy ∈ C. Since C is a cone, (1/λ)c + y ∈ C. We note that (1/λ)c + y ∈ y + C. Therefore, (y + C) ∩ C ≠ ∅. □

Definition 4.1. Let B be a base of C and ε ∈ C. Let x̄ ∈ A and ȳ ∈ F(x̄). (x̄, ȳ) is called an ε-strictly efficient element of (VP1) with respect to B if ȳ ∈ ε − FE(F(A), B).

Theorem 4.1. Let B be a bounded base of C and ε ∈ C. Let F : X ⇒ 2^Y be a C-convex set-valued map on A. Suppose that the following conditions are satisfied:
(i) int(domF) ∩ A ≠ ∅;
(ii) (x̄, ȳ) is an ε-strictly efficient element of (VP1) with respect to B.
Then there exist ε1, ε2 ∈ C such that ε1 + ε2 ∈ ε + Y \ intC and

0 ∈ ∂^B_{ε1FE} F(x̄; ȳ) + N^{ε2FE}_A(x̄).  (4.2)

Proof. Since (x̄, ȳ) is an ε-strictly efficient element of (VP1) with respect to B, it follows from Remark 2.5 that

0 ∈ ∂^{FE}_{ε,B}(F + δA)(x̄; ȳ).  (4.3)

Take x̂ ∈ int(domF) ∩ A. Then there exists a neighborhood V of x̂ in X such that

x̂ ∈ V ⊆ domF.  (4.4)

It follows from Lemma 4.1 that

(F(x̂) + C) ∩ C ≠ ∅.

Take ŷ ∈ (F(x̂) + C) ∩ C. Thus, by (4.4), we obtain ŷ ∈ F(V) + C, which implies (x̂, ŷ) ∈ V × {ŷ} ⊆ epiF. Therefore,

(x̂, ŷ) ∈ int(epiF).  (4.5)

Since x̂ ∈ A and ŷ ∈ C, we have

(x̂, ŷ) ∈ epiδA.  (4.6)

By (4.5) and (4.6), int(epiF) ∩ epiδA ≠ ∅. By Theorem 3.3 and Remark 3.6, there exist ε1, ε2 ∈ C such that ε1 + ε2 ∈ ε + Y \ intC and

∂^{FE}_{ε,B}(F + δA)(x̄; ȳ) ⊆ ∂^{FE}_{ε1,B}F(x̄; ȳ) + ∂^{FE}_{ε2,B}δA(x̄; 0).  (4.7)

By Remark 2.7, we have

∂^{FE}_{ε2,B}δA(x̄; 0) = N^{ε2,FE}_A(x̄).  (4.8)

By (4.3), (4.7) and (4.8), (4.2) holds. □
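Lemma 4.1, used above to produce the interior point in (4.5), can be checked in a concrete case. The following illustration is ours, not from the paper; it takes Y = ℝ² and C = ℝ²₊, which has nonempty interior.

```latex
% Lemma 4.1 with Y = \mathbb{R}^2, C = \mathbb{R}^2_{+},
% y = (-3, 5), c = (1,1) \in \operatorname{int} C.
% For \lambda = 1/4 (small enough that c + \lambda y \in C):
\[
  c + \lambda y = (1,1) + \tfrac{1}{4}(-3,5)
    = \bigl(\tfrac{1}{4}, \tfrac{9}{4}\bigr) \in C,
  \qquad
  \tfrac{1}{\lambda}\, c + y = (4,4) + (-3,5) = (1,9) \in C .
\]
% Since \tfrac{1}{\lambda} c \in C, the point (1,9) also lies in y + C,
% so (1,9) \in (y + C) \cap C and the intersection is nonempty.
```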

Remark 4.1. Since ∂^{FE}_{ε1,B}F(x̄; ȳ) ⊆ ∂^{C}_{ε1}F(x̄; ȳ) and N^{ε2,FE}_A(x̄) ⊆ N^{ε2}_A(x̄), it follows from (4.2) that 0 ∈ ∂^{C}_{ε1}F(x̄; ȳ) + N^{ε2}_A(x̄), which improves the necessary condition of Corollary 6.2 in [9], since the set ∂^{FE}_{ε1,B}F(x̄; ȳ) + N^{ε2,FE}_A(x̄) is smaller than the set ∂^{C}_{ε1}F(x̄; ȳ) + N^{ε2}_A(x̄).

Let A be a nonempty subset of X. Now, we consider the following vector optimization problem with set-valued maps:

(VP2)  min F(x)  s.t. G(x) ∩ (−D) ≠ ∅,  x ∈ A,

where F : X → 2^Y and G : X → 2^Z are two set-valued maps on A. The feasible set of (VP2) is defined as

X0 = {x ∈ A | G(x) ∩ (−D) ≠ ∅}.

Definition 4.2. Let B be a base of C and ε ∈ C. Let x̄ ∈ X0 and ȳ ∈ F(x̄). (x̄, ȳ) is called an ε-strictly efficient element of (VP2) with respect to B if ȳ ∈ ε-FE(F(X0), B).

The set-valued map I : X → 2^{Y×Z} is defined by

I(x) := F(x) × G(x),  ∀x ∈ X.

By Definition 2.13, we say that the set-valued map I : X → 2^{Y×Z} is nearly C × D-subconvexlike on A if cl(cone(I(A) + C × D)) is a convex set in Y × Z. For simplicity, write P = C × D, P^+ = C^+ × D^+, Q = Y × Z, and Q∗ = Y∗ × Z∗. Let ε ∈ C and (x̄, ȳ) ∈ grF. The set-valued map Iε := (F − ȳ + ε) × G : X → 2^Q is defined by

Iε(x) := (F(x) − ȳ + ε) × G(x),  ∀x ∈ X.

Lemma 4.2 ([26]). Let B be a base of C. Let ε ∈ C and x̄ ∈ X0. Suppose that the following conditions are satisfied:
(i) (x̄, ȳ) is an ε-strictly efficient element of (VP2) with respect to B;
(ii) there exists x0 ∈ A such that G(x0) ∩ (−intD) ≠ ∅;
(iii) Iε is nearly P-subconvexlike on A.
Then there exists (y∗, z∗) ∈ B^{st} × D^+ such that

⟨y, y∗⟩ + ⟨ε, y∗⟩ + ⟨z, z∗⟩ ≥ ⟨ȳ, y∗⟩,  ∀x ∈ A, ∀y ∈ F(x), ∀z ∈ G(x).  (4.9)

Under the assumption of cone nearly convexlikeness, Taa [17] stated necessary optimality conditions for ε-weak efficient elements of (VP2) in terms of Lagrange–Fritz–John multipliers. Now, we will present necessary optimality conditions for ε-strictly efficient elements of (VP2) in terms of Lagrange–Fritz–John multipliers under the assumption of cone nearly subconvexlikeness, which is a generalization of cone nearly convexlikeness.

Theorem 4.2. Let B be a bounded base of C. Let ε ∈ C and x̄ ∈ X0. Suppose that the following conditions are satisfied:
(i) (x̄, ȳ) is an ε-strictly efficient element of (VP2) with respect to B;
(ii) there exists x0 ∈ A such that G(x0) ∩ (−intD) ≠ ∅;
(iii) Iε is nearly P-subconvexlike on A.
Then
(a) there exists (y∗, z∗) ∈ B^{st} × D^+ such that

0 ∈ ∂^{R+}_{⟨ε,y∗⟩}(Fy∗ + Gz∗)(x̄; ⟨ȳ, y∗⟩ + ⟨z̄, z∗⟩),

where the scalar set-valued maps Fy∗ and Gz∗ are given by Fy∗(x) = {⟨y, y∗⟩ | y ∈ F(x)} and Gz∗(x) = {⟨z, z∗⟩ | z ∈ G(x)} for x ∈ A;
(b) there exists T ∈ L^+(D, C) such that

0 ∈ ∂^{FE}_{ε,B}(F + T(G))(x̄; ȳ + T(z̄)).

Proof. First, we will prove (a). By Lemma 4.2, there exists (y∗, z∗) ∈ B^{st} × D^+ such that (4.9) holds. Since x̄ ∈ X0, there exists z̄ ∈ G(x̄) ∩ (−D). Therefore,

⟨z̄, z∗⟩ ≤ 0.  (4.10)

By (4.9) and (4.10), we have

⟨y, y∗⟩ + ⟨ε, y∗⟩ + ⟨z, z∗⟩ ≥ ⟨ȳ, y∗⟩ + ⟨z̄, z∗⟩,  ∀x ∈ A, ∀y ∈ F(x), ∀z ∈ G(x).  (4.11)

So, 0 ∈ ∂^{R+}_{⟨ε,y∗⟩}(Fy∗ + Gz∗)(x̄; ⟨ȳ, y∗⟩ + ⟨z̄, z∗⟩).
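The step from (4.9) and (4.10) to (4.11) only adds the nonpositive number ⟨z̄, z∗⟩ to the right-hand side; the chain below is our added verification.

```latex
% For all x \in A, y \in F(x), z \in G(x):
\[
  \langle y, y^{*}\rangle + \langle \varepsilon, y^{*}\rangle
    + \langle z, z^{*}\rangle
  \;\overset{(4.9)}{\geq}\; \langle \bar{y}, y^{*}\rangle
  \;\overset{(4.10)}{\geq}\; \langle \bar{y}, y^{*}\rangle
    + \langle \bar{z}, z^{*}\rangle,
\]
% the second inequality holding because (4.10) gives
% \langle \bar{z}, z^{*}\rangle \leq 0. This is exactly (4.11).
```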


Now, we will prove (b). Since B is a base of C and y∗ ∈ B^{st}, we can find a point c′ ∈ C \ {0} such that ⟨c′, y∗⟩ = 1. We define the operator T : Z → Y by

T(z) = ⟨z, z∗⟩c′,  ∀z ∈ Z.  (4.12)

Clearly, T ∈ L^+(D, C). By (4.11) and (4.12), we have

⟨y + T(z), y∗⟩ + ⟨ε, y∗⟩ ≥ ⟨ȳ + T(z̄), y∗⟩,  ∀x ∈ A, ∀y ∈ F(x), ∀z ∈ G(x),

i.e.,

ȳ + T(z̄) ∈ ε-S(∪_{x∈A}(F(x) + T(G(x))), y∗).  (4.13)

By (4.13), Lemma 3.6 and Remark 3.5, we have

ȳ + T(z̄) ∈ ε-FE(∪_{x∈A}(F(x) + T(G(x))), B).

Therefore, 0 ∈ ∂^{FE}_{ε,B}(F + T(G))(x̄; ȳ + T(z̄)). □
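The claim "Clearly, T ∈ L^+(D, C)" after (4.12) can be verified in one line; the check below is our addition, and it also records the identity behind the passage from (4.11) to (4.13).

```latex
% T(z) = <z, z^*> c' with c' \in C \setminus \{0\} and z^* \in D^{+}.
% Linearity of T follows from linearity of z^*; positivity:
\[
  z \in D \;\Longrightarrow\; \langle z, z^{*}\rangle \geq 0
  \;\Longrightarrow\; T(z) = \langle z, z^{*}\rangle\, c' \in C,
\]
% since C is a cone, so nonnegative multiples of c' stay in C.
% Moreover \langle T(z), y^{*}\rangle = \langle z, z^{*}\rangle because
% \langle c', y^{*}\rangle = 1, which is the identity used with (4.11)
% to obtain (4.13).
```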

Theorem 4.3. Let F : X → 2^Y be a C-convex set-valued map on A, and let G : X → 2^Z be a D-convex set-valued map on A. Let E := {x | F(x) ≠ ∅, G(x) ≠ ∅}. Let B be a bounded base of C and ε ∈ C, and let z̄ ∈ G(x̄) ∩ (−D). Suppose that the following conditions are satisfied:
(i) A is a nonempty convex subset of X;
(ii) F is weakly lower semi-continuous at x′ ∈ intE;
(iii) there exists x0 ∈ A such that G(x0) ∩ (−intD) ≠ ∅;
(iv) (x̄, ȳ) is an ε-strictly efficient element of (VP2) with respect to B.
Then there exist T ∈ L^+(D, C) and ε1, ε2 ∈ C such that ε1 + ε2 ∈ ε + Y \ intC and

0 ∈ ∂^{FE}_{ε1,B}F(x̄; ȳ) + ∂^{FE}_{ε2,B}T(G)(x̄; T(z̄)).

Proof. Since F is a C-convex set-valued map on A and G is a D-convex set-valued map on A, by condition (i), Remark 2.8 and Definition 2.13, Iε is nearly C × D-subconvexlike on A. It follows from Theorem 4.2 that there exists T ∈ L^+(D, C) such that

0 ∈ ∂^{FE}_{ε,B}(F + T(G))(x̄; ȳ + T(z̄)).  (4.14)

Since G is D-convex on A and T is a linear operator, T(G) is C-convex on A. By condition (ii) and Lemma 3.2, we have int(epiF) ∩ epi(T(G)) ≠ ∅. Therefore, there exist ε1, ε2 ∈ C such that ε1 + ε2 ∈ ε + Y \ intC and

∂^{FE}_{ε,B}(F + T(G))(x̄; ȳ + T(z̄)) ⊆ ∂^{FE}_{ε1,B}F(x̄; ȳ) + ∂^{FE}_{ε2,B}T(G)(x̄; T(z̄)).  (4.15)

It follows from (4.14) and (4.15) that 0 ∈ ∂^{FE}_{ε1,B}F(x̄; ȳ) + ∂^{FE}_{ε2,B}T(G)(x̄; T(z̄)). □

Remark 4.2. Theorem 4.3, which is different from Theorem 6.7 in [9] based on the ε-Benson subdifferential, is obtained in the sense of the ε-strict subdifferential.

References

[1] J.M. Borwein, Proper efficient points for maximizations with respect to cones, SIAM J. Control Optim. 15 (1977) 57–63.
[2] H.P. Benson, An improved definition of proper efficiency for vector maximization with respect to cones, J. Math. Anal. Appl. 71 (1979) 232–241.
[3] M.I. Henig, Proper efficiency with respect to cones, J. Optim. Theory Appl. 36 (1982) 387–407.
[4] J.M. Borwein, D.M. Zhuang, Super efficiency in convex optimization, Math. Meth. Oper. Res. 35 (1991) 175–184.
[5] Y.H. Cheng, W.T. Fu, Strong efficiency in a locally convex space, Math. Meth. Oper. Res. 50 (1999) 373–384.
[6] P. Loridan, ε-solutions in vector minimization problems, J. Optim. Theory Appl. 43 (1984) 265–276.
[7] I. Vályi, Approximate saddle-point theorems in vector optimization, J. Optim. Theory Appl. 55 (1987) 435–448.
[8] T.Y. Li, Y.H. Xu, C.X. Zhu, ε-strictly efficient solutions of vector optimization problems with set-valued maps, Asia-Pac. J. Oper. Res. 24 (6) (2007) 841–854.
[9] L.A. Tuan, ε-optimality conditions for vector optimization problems with set-valued maps, Numer. Funct. Anal. Optim. 31 (1) (2010) 78–95.
[10] W.D. Rong, Y.N. Wu, Characterizations of super efficiency in cone-convexlike vector optimization with set-valued maps, Math. Meth. Oper. Res. 48 (1998) 247–258.
[11] X.M. Yang, D. Li, S.Y. Wang, Near-subconvexlikeness in vector optimization with set-valued functions, J. Optim. Theory Appl. 110 (2) (2001) 413–427.
[12] Y.H. Xu, S.Y. Liu, On strict efficiency in set-valued optimization with nearly cone-subconvexlikeness, J. Syst. Sci. Math. Sci. 24 (2004) 311–317.
[13] Y. Sawaragi, T. Tanino, Conjugate maps and duality in multiobjective optimization, J. Optim. Theory Appl. 31 (4) (1980) 473–499.
[14] A. Taa, Subdifferentials of multifunctions and Lagrange multipliers for multiobjective optimization, J. Math. Anal. Appl. 283 (2003) 398–415.
[15] T.Y. Li, Y.H. Xu, The strictly efficient subgradient of set-valued optimization, Bull. Austral. Math. Soc. 75 (2007) 361–371.
[16] G.L. Yu, S.Y. Liu, The Henig efficient subdifferential of set-valued mapping and stability, Acta Math. Sin. 28A (3) (2008) 438–446.
[17] A. Taa, ε-subdifferentials of set-valued maps and ε-weak Pareto optimality for multiobjective optimization, Math. Meth. Oper. Res. 62 (2005) 187–209.
[18] T.Q. Bao, B.S. Mordukhovich, Variational principles for set-valued mappings with applications to multiobjective optimization, Control Cybern. 36 (2007) 531–562.
[19] T.Q. Bao, B.S. Mordukhovich, Existence of minimizers and necessary conditions in set-valued optimization with equilibrium constraints, Appl. Math. 52 (2007) 452–562.
[20] T.Q. Bao, B.S. Mordukhovich, Necessary conditions for super minimizers in constrained multiobjective optimization, J. Global Optim. 43 (2009) 533–552.
[21] T.Q. Bao, B.S. Mordukhovich, Relative Pareto minimizers in multiobjective optimization: existence and optimality conditions, Math. Program. 122 (2010) 301–347.
[22] T.Q. Bao, B.S. Mordukhovich, Set-valued optimization in welfare economics, Adv. Math. Econ. 13 (2010) 113–153.
[23] W. Song, Lagrangian duality for minimization of nonconvex multifunctions, J. Optim. Theory Appl. 93 (1) (1997) 167–182.
[24] C.Y. Zhao, W.D. Rong, Scalarization of ε-strong strict efficient point set, in: Proceedings of the Eighth National Conference of Operations Research Society of China, 2006, pp. 220–225.
[25] W.D. Rong, Y.N. Wu, ε-weak minimal solutions of vector optimization problems with set-valued maps, J. Optim. Theory Appl. 106 (3) (2000) 569–579.
[26] Q.L. Wang, Y. Wu, Optimality conditions of ε-strictly efficient solutions for set-valued optimization problems, J. Southwest China Norm. Univ. (Natur. Sci. Ed.) 31 (4) (2006) 40–43.
Rong, Scalarization of ε -strong strict efficient point set, in: Proceedings of the Eighth National Conference of Operations Research Socity of China, 2006, pp. 220–225. W.D. Rong, Y.N. Wu, ε -weak minimal solutions of vector optimization problems with set-valued maps, J. Optim. Theory Appl. 106 (3) (2000) 569–579. Q.L. Wang, Y. Wu, Optimality conditions of ε -strictly efficient solutions for set-valued optimization problems, J. Southwest China Norm. Univ. (Natur. Sci. Ed.) 31 (4) (2006) 40–43.