Acta Mathematica Scientia 2008,28B(4):843–850 http://actams.wipm.ac.cn
THE TANGENT CONES ON CONSTRAINT QUALIFICATIONS IN OPTIMIZATION PROBLEMS∗
Huang Longguang
School of Science, Jimei University, Xiamen 361021, China
E-mail: [email protected]
Abstract  This article proposes a few tangent cones that are related to the constraint qualifications of optimization problems. Using the upper and lower directional derivatives of an objective function, characterizations of these cones in terms of the constraint qualifications are presented. The interrelations among the constraint qualifications, the cones involved, and the level sets of the upper and lower directional derivatives are derived.
Key words  Constraint qualifications, upper directional derivatives, lower directional derivatives, strongly directionally differentiable, concave functions
2000 MR Subject Classification  49J52, 90C30
∗ Received March 17, 2006; revised December 25, 2006. Supported by the Natural Science Foundation of Fujian Province of China (S0650021, 2006J0215) and by the National Natural Science Foundation of China (10771086)

1 Introduction
In research on infinite dimensional optimization problems, the main focus is generally on the existence and convergence of solutions; the properties and characteristics of the constraint qualifications of such problems have received far less attention. So far, only a few references [1–5] deal with these problems, and they work under different assumptions on the directional differentiability of the objective functions involved. Merkovsky and Ward [1] used directional differentiability in the Hadamard sense; Ward [2] as well as Kuntz and Scholtes [3] used quasi-differentiability (i.e., Dini directional differentiability in which the directional derivative can be written as the difference of two sublinear functions); Jourani [4] assumed Clarke directional differentiability. In another article, Kuntz and Scholtes [5] centered on B-differentiable functions (i.e., Dini directionally differentiable functions whose directional derivative is locally a first-order approximation to the function); Li, Nahak, and Singer [6] discussed constraint qualifications for semi-infinite systems of convex inequalities. Crespi, Ginchev, and Rocca [7] studied Minty variational inequalities, the increase-along-rays property, and optimization problems. Xu and Liu [8] studied some properties of convex cones, which were used to obtain an equivalent condition and another important property for nearly cone-subconvexlike set-valued functions. In contrast to those approaches, the present article makes no directional differentiability assumptions; it only assumes that the feasible sets are lower-level sets of upper semicontinuous functions. It introduces a few tangent cones that are related to the constraint qualifications of optimization problems. With the upper and lower directional derivatives of an objective function, characterizations of these cones in terms of the constraint qualifications are presented. The interrelations among the constraint qualifications, the cones introduced, and the level sets of the upper and lower directional derivatives are shown.
Let E be a nonempty subset of a real Banach space X and f : E → R a real valued function. In this article, we consider the constrained optimization problem
    min f(x),   x ∈ E.
A few well-known concepts are needed in what follows. If there exists a neighborhood V of x0 ∈ E such that
(i) f(x) ≥ f(x0) (∀x ∈ E ∩ V),
then x0 is called a local minimum point for f. If, in addition to (i),
(ii) there exists a constant c > 0 such that f(x) ≥ f(x0) + c‖x − x0‖ (∀x ∈ E ∩ V),
then x0 is called a local minimum point of order one for f. Obviously, a local minimum point of order one for f is a local minimum point, and it is locally unique.
Recall that f is called a concave function on E if, for all x, y ∈ E and λ ∈ [0, 1], f(λx + (1 − λ)y) ≥ λf(x) + (1 − λ)f(y).
In the sequel we shall use the following two cones:
    C(x, E) = {u ∈ X | ∃ tn ↓ 0, un → u (n → ∞) such that x + tn un ∈ E (∀n ∈ N)}
and
    T(x, E) = {u ∈ X | ∃ δ > 0 and a neighborhood V of u such that x + td ∈ E, ∀t ∈ (0, δ), d ∈ V},
called the contingent cone and the inner tangent cone to E at x ∈ X, respectively.
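To fix ideas, here is a small finite-dimensional illustration of the difference between the two cones (the example, the set E, and the helper names below are ours, not from the paper). For E = {(x, y) ∈ R² : y ≥ |x|} and x = 0, the contingent cone C(0, E) is E itself, whereas the inner tangent cone T(0, E) is the open cone {(x, y) : y > |x|}; a boundary direction such as (1, 1) therefore lies in C(0, E) but not in T(0, E). A rough numerical check of this behaviour:

```python
import numpy as np

def in_E(p):
    """Membership test for E = {(x, y) in R^2 : y >= |x|}."""
    x, y = p
    return y >= abs(x)

def ray_stays_in_E(d, ts):
    """Check whether 0 + t*d stays in E for the sampled step sizes t."""
    return all(in_E(t * np.asarray(d)) for t in ts)

ts = [10.0**(-k) for k in range(1, 8)]

# d = (1, 1): the ray (t, t) lies in E for every t, so (1, 1) is in C(0, E).
print(ray_stays_in_E((1.0, 1.0), ts))        # True

# A slight perturbation of (1, 1) leaves E for every t > 0, so (1, 1) has no
# whole neighborhood of admissible directions: (1, 1) is not in T(0, E).
print(ray_stays_in_E((1.001, 0.999), ts))    # False

# A direction pointing into the interior survives such perturbations,
# consistent with (0.001, 1) (and (0, 1)) lying in T(0, E).
print(ray_stays_in_E((0.001, 1.0), ts))      # True
```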
2 Optimality Conditions and Tangent Cones
The following lemma is easily obtained from the definitions.
Lemma 1  (i) Both C(x, E) and T(x, E) are cones, and they are, respectively, closed and open sets.
(ii) T(x, E) ⊆ C(x, E).
(iii) X \ T(x, E) = C(x, X \ E).
Denote
    f+(x, d) = lim sup_{t↓0, h→d} [f(x + th) − f(x)]/t,
    f−(x, d) = lim inf_{t↓0, h→d} [f(x + th) − f(x)]/t.
If
    f+(x, d) = lim sup_{t↓0, h→d} [f(x + th) − f(x)]/t = lim_{t↓0} [f(x + td) − f(x)]/t   (∀d ∈ X),
then we call f upper directionally differentiable at x. If
    f−(x, d) = lim inf_{t↓0, h→d} [f(x + th) − f(x)]/t = lim_{t↓0} [f(x + td) − f(x)]/t   (∀d ∈ X),
then we call f lower directionally differentiable at x. If
    f+(x, d) = f−(x, d) = lim_{t↓0} [f(x + td) − f(x)]/t   (∀d ∈ X),
denoted by f(x, d), then we call f strongly directionally differentiable at x.
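To see how the upper and lower quantities can differ, consider the classical one-dimensional function f(0) = 0, f(x) = x sin(1/x) for x ≠ 0 (an illustration of ours, not taken from the paper): the ordinary limit in the definitions above does not exist at x = 0, yet f+(0, d) = |d| and f−(0, d) = −|d|. The sketch below samples the difference quotients, keeping h fixed at d, which for this f already attains the lim sup and lim inf:

```python
import math

def f(x):
    """f(0) = 0 and f(x) = x*sin(1/x) otherwise; the ordinary directional
    derivative at 0 does not exist, but the upper/lower Dini quantities do."""
    return 0.0 if x == 0.0 else x * math.sin(1.0 / x)

def dini_bounds(f, x0, d, n=20000):
    """Crude estimates of f+(x0, d) and f-(x0, d): the max and min of the
    difference quotients over many small step sizes t (h is kept equal to d)."""
    ts = [1e-3 * k / n for k in range(1, n + 1)]
    qs = [(f(x0 + t * d) - f(x0)) / t for t in ts]
    return max(qs), min(qs)

upper, lower = dini_bounds(f, 0.0, 1.0)
print(round(upper, 3), round(lower, 3))   # close to +1 and -1: f+(0, 1) = 1, f-(0, 1) = -1
```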
Theorem 1  (i) If x0 is a local minimum point for f, then
    {d ∈ X | f+(x0, d) < 0} ∩ C(x0, E) = ∅.
(ii) If x0 is a local minimum point of order one for f, then
    {d ∈ X | f+(x0, d) ≤ 0} ∩ C(x0, E) = {0}.
Therefore, from (i),
    {d ∈ X | f+(x0, d) = 0} ∩ C(x0, E) = {0}.
Proof  (i) If d ∈ C(x0, E), then there exist tn ↓ 0, dn → d (n → ∞) such that x0 + tn dn ∈ E (∀n ∈ N). Since x0 is a local minimum point for f, when n is sufficiently large, f(x0 + tn dn) ≥ f(x0). So
    f+(x0, d) = lim sup_{t↓0, h→d} [f(x0 + th) − f(x0)]/t ≥ lim sup_{n→∞} [f(x0 + tn dn) − f(x0)]/tn ≥ 0.
(ii) For d ∈ C(x0, E) \ {0}, because x0 is a local minimum point of order one for f, there exist tn ↓ 0, dn → d (n → ∞), and c > 0 such that x0 + tn dn ∈ E (∀n ∈ N) and, when n is sufficiently large, f(x0 + tn dn) − f(x0) ≥ c tn‖dn‖. Thus,
    lim sup_{n→∞} [f(x0 + tn dn) − f(x0)]/tn ≥ c‖d‖ > 0,
and hence f+(x0, d) > 0.
Remark 1  If f+(x0, d) ≥ 0 (∀d ∈ X), then (i) holds obviously. When x0 ∈ int E is a local minimum point for f, where int denotes the interior of a set, f+(x0, d) ≥ 0 (∀d ∈ X).
Theorem 2  If x0 is a local minimum point for f, then
    {d ∈ X | f−(x0, d) < 0} ∩ T(x0, E) = ∅.
Proof  For d ∈ T(x0, E), there exist δ > 0 and a spherical neighborhood B(d, ε) of d such that
    x0 + th ∈ E,   ∀t ∈ (0, δ), h ∈ B(d, ε).
Since x0 is a local minimum point for f, when t > 0 and ε > 0 are sufficiently small,
    f(x0 + th) − f(x0) ≥ 0   (∀h ∈ B(d, ε)).
Thus,
    f−(x0, d) = lim inf_{t↓0, h→d} [f(x0 + th) − f(x0)]/t ≥ 0.
Remark 2  Because T(x0, E) ⊆ C(x0, E) and
    {d ∈ X | f+(x0, d) < 0} ⊆ {d ∈ X | f−(x0, d) < 0},
Theorem 1 (i) and Theorem 2 are, in general, mutually independent. If f+(x0, d) = f−(x0, d) (∀d ∈ X), then the result in Theorem 2 is weaker than that in Theorem 1 (i).
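A simple illustration of Theorem 1 (our own example, not from the paper): take X = E = R and f(x) = |x|, so x0 = 0 is a local minimum point of order one (with c = 1). Here f+(0, d) = f−(0, d) = |d|, so {d ∈ X | f+(0, d) ≤ 0} = {0}, and since C(0, R) = R the intersection in Theorem 1 (ii) is exactly {0}. The sketch below just evaluates the difference quotients, which for this f equal |d| for every t > 0:

```python
def f(x):
    return abs(x)               # x0 = 0 is a local minimum of order one on E = R

x0 = 0.0
ts = [10.0**(-k) for k in range(1, 7)]

for d in (-2.0, -0.5, 0.0, 1.0, 3.0):
    qs = [(f(x0 + t * d) - f(x0)) / t for t in ts]
    print(d, qs[-1])            # prints |d|: the quotient is constant in t

# The quotients are nonnegative for every direction and vanish only at d = 0,
# so {d : f+(0, d) <= 0} = {0}; intersected with C(0, E) = R this gives {0},
# matching the conclusion of Theorem 1 (ii).
```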
3 Characteristics of Tangent Cones on Constraint Qualifications
In the following, we assume that all the gi : X → R (i ∈ I = {1, 2, · · · , m}) are upper semi-continuous functions on X, and that E is defined by the inequality constraints gi, that is,
    E = {x ∈ X | gi(x) ≤ 0, ∀i ∈ I}.
For x0 ∈ E, denote
    I(x0) = {i ∈ I | gi(x0) = 0},
    M(x0, E) = {d ∈ X | (gi)−(x0, d) ≤ 0, i ∈ I(x0)},
    G(x0, E) = {d ∈ X | (gi)+(x0, d) < 0, i ∈ I(x0)}.
It is clear that both M(x0, E) and G(x0, E) are cones.
Lemma 2  If x0 ∈ E, then G(x0, E) ⊆ T(x0, E) ⊆ C(x0, E) ⊆ M(x0, E).
Proof  For d ∉ T(x0, E), from Lemma 1 (iii), d ∈ C(x0, X \ E), so there exist tn ↓ 0, dn → d (n → ∞) such that x0 + tn dn ∈ X \ E (∀n ∈ N). By the definition of E, there exists kn ∈ I such that gkn(x0 + tn dn) > 0 (∀n ∈ N). Since I is a finite set, {kn} has a constant subsequence; without loss of generality, suppose kn ≡ p. From x0 ∈ E, gp(x0) ≤ 0; from gp(x0 + tn dn) > 0 and the upper semi-continuity of gp, gp(x0) = 0. Thus, p ∈ I(x0) and
    [gp(x0 + tn dn) − gp(x0)]/tn = gp(x0 + tn dn)/tn > 0   (∀n ∈ N).
It follows that (gp)+(x0, d) ≥ 0, that is, d ∉ G(x0, E). So G(x0, E) ⊆ T(x0, E).
For d ∉ M(x0, E), there exists s ∈ I(x0) such that (gs)−(x0, d) > 0. Therefore, for all tn ↓ 0 and dn → d (n → ∞), when n is sufficiently large,
    [gs(x0 + tn dn) − gs(x0)]/tn > 0.
By s ∈ I(x0) and gs(x0 + tn dn) > 0, it follows that x0 + tn dn ∉ E when n is sufficiently large. That is, d ∉ C(x0, E); hence C(x0, E) ⊆ M(x0, E).
The second inclusion in Lemma 2 follows from Lemma 1 (ii). The first inclusion in Lemma 2 may be a proper inclusion, as the following example shows.
Example 1  Let the real Banach space X = C[0, 1], where C[0, 1] is the space of all continuous functions on the closed interval [0, 1], endowed with the usual supremum norm, and let
    g(x) = [x(0) − |x(0)|]/2   (∀x = x(t) ∈ X).
Take x0 = 0; then E = {x ∈ X | g(x) ≤ 0} = C[0, 1], but
    G(0, E) = {d ∈ X | g+(0, d) < 0} = {d ∈ X | d(0) < 0} ≠ X,
    T(0, E) = X = C[0, 1].
It is easy to see from Lemma 1 (i) and the connectedness of a Banach space that the second inclusion in Lemma 2 may also be a proper inclusion (a set that is both open and closed in a connected space must be ∅ or the whole space, so T(x, E) ⊊ C(x, E) whenever ∅ ≠ C(x, E) ≠ X). If we set g(x) = ‖x‖² in Example 1, then E = {0} and
    C(0, E) = {0} ≠ C[0, 1] = X = M(0, E).
Namely, the third inclusion in Lemma 2 may also be a proper inclusion.
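Example 1 lives in C[0, 1]; the following finite-dimensional smooth counterpart (our own illustration, not part of the paper) shows the typical shape of the four cones. For the single constraint g(x, y) = x² + y² − 1 and the boundary point x0 = (1, 0) of the unit disk, the directional derivatives reduce to 2d1 for d = (d1, d2), so G(x0, E) = T(x0, E) = {d | d1 < 0} and C(x0, E) = M(x0, E) = {d | d1 ≤ 0}, in accordance with Lemma 2 (here the second inclusion is the strict one). A minimal numerical check of the G and M memberships:

```python
import numpy as np

def g(p):
    """Single constraint g(x, y) = x^2 + y^2 - 1; E is the closed unit disk."""
    return p[0]**2 + p[1]**2 - 1.0

x0 = np.array([1.0, 0.0])                 # boundary point, so I(x0) = {1}

def dir_deriv(d, t=1e-8):
    """g is smooth, so the upper and lower directional derivatives coincide and
    a one-sided difference quotient approximates (g)+(x0, d) = (g)-(x0, d) = 2*d[0]."""
    return (g(x0 + t * np.asarray(d)) - g(x0)) / t

for d in ([-1.0, 0.0], [0.0, 1.0], [1.0, 0.0]):
    dd = dir_deriv(d)
    print(d, round(dd, 5),
          "in G" if dd < -1e-6 else "not in G",
          "in M" if dd <= 1e-6 else "not in M")

# Output: (-1, 0) is in G and M; the tangential direction (0, 1) is in M but not
# in G; the outward direction (1, 0) is in neither, matching the cones above.
```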
Theorem 3  Suppose that I = {1, 2, · · · , m}, that gi : X → R are upper directionally differentiable and upper semi-continuous on X for all i ∈ I, and that E = {x ∈ X | gi(x) ≤ 0, ∀i ∈ I} ≠ ∅. If there exists x ∈ E such that G(x, E) ≠ ∅, then E satisfies the Slater constraint qualification, i.e., there exists x ∈ E such that gi(x) < 0 for all i ∈ I.
Proof  Assume that z ∈ E and G(z, E) ≠ ∅, and take d ∈ G(z, E). If I(z) = ∅, then gi(z) < 0 for all i ∈ I and the result is obtained. If I(z) ≠ ∅, then, from the upper directional differentiability of gi,
    0 > (gi)+(z, d) = lim_{t↓0} [gi(z + td) − gi(z)]/t = lim_{t↓0} gi(z + td)/t   (∀i ∈ I(z)).
Thus, for each i ∈ I(z) there exists δi > 0 such that gi(z + td) < 0 (∀t ∈ (0, δi)). Given δ = min_{i∈I(z)} δi, then
    gi(z + td) < 0   (∀t ∈ (0, δ), i ∈ I(z)).
For i ∉ I(z), gi(z) < 0 and gi is upper semi-continuous on X, so there exists γ ∈ (0, δ) such that gi(z + td) < 0 for all t ∈ (0, γ) and all i ∉ I(z). Let x0 = z + (γ/2)d; then
    gi(x0) < 0   (∀i ∈ I).
The proof is completed.
Theorem 4  Let gi : X → R be convex and upper directionally differentiable on X for all i ∈ I, and let E satisfy the Slater constraint qualification; then G(x, E) ≠ ∅ (∀x ∈ E).
Proof  E satisfies the Slater constraint qualification, i.e., there exists x0 ∈ E such that gi(x0) < 0 (∀i ∈ I). For all x ∈ E and i ∈ I(x), since gi is convex and upper semi-continuous,
    0 > gi(x0) = gi(x + (x0 − x)) ≥ gi(x) + [gi(x + t(x0 − x)) − gi(x)]/t = [gi(x + t(x0 − x)) − gi(x)]/t   (∀t ∈ (0, 1)),
using gi(x) = 0. Letting t ↓ 0, 0 > gi(x0) ≥ (gi)+(x, x0 − x). Consequently, x0 − x ∈ G(x, E), that is, G(x, E) ≠ ∅.
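As a one-dimensional illustration of Theorem 4 (our own example, not from the paper), let g(x) = |x| − 1 on R, so E = [−1, 1] and x0 = 0 is a Slater point with g(x0) = −1 < 0. At the active point x = 1, the theorem says that x0 − x = −1 belongs to G(x, E), and the difference quotients appearing in the proof are indeed bounded above by g(x0) < 0:

```python
def g(x):
    """Convex constraint g(x) = |x| - 1, with Slater point x0 = 0 (g(0) = -1 < 0)."""
    return abs(x) - 1.0

x0 = 0.0           # Slater point
x = 1.0            # feasible point with g(x) = 0, i.e., the constraint is active

# Difference quotients [g(x + t*(x0 - x)) - g(x)] / t from the proof of Theorem 4:
for t in (0.5, 0.1, 0.01, 0.001):
    q = (g(x + t * (x0 - x)) - g(x)) / t
    print(t, q)    # each quotient is (approximately) -1, i.e., <= g(x0) < 0

# Letting t tend to 0 gives (g)+(x, x0 - x) = -1 < 0, so the direction
# x0 - x = -1 lies in G(x, E), exactly as Theorem 4 asserts.
```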
In the following, we present a result on an optimization problem with a parameter. Let X and Y be real Banach spaces, let gi : X × Y → R be upper semicontinuous for all i ∈ I = {1, 2, · · · , m}, and let
    E(u) = {x ∈ X | gi(x, u) ≤ 0, i ∈ I},
    I(x, u) = {i ∈ I | gi(x, u) = 0},
    G∗(x, E(u)) = {d ∈ X | (gi)+((x, u), (d, 0)) < 0, i ∈ I(x, u)};
then we have the following result.
Theorem 5  If u ∈ Y and G∗(x, E(u)) ≠ ∅ for all x ∈ E(u), then the set-valued mapping E(·) is semicontinuous at u, which means that, for every x ∈ E(u) and every sequence un → u (n → ∞), there exist xn ∈ E(un) with xn → x (n → ∞).
Proof  Let x ∈ E(u) and un → u (n → ∞). Take d ∈ G∗(x, E(u)) and set
    tn = ‖un − u‖^{1/2},   xn = x + tn d;   (1)
then tn → 0 and xn → x (n → ∞). For i ∉ I(x, u), gi(x, u) < 0. By the upper semicontinuity of gi, there exists ki such that
    gi(xn, un) < 0   (∀n > ki).   (2)
For i ∈ I(x, u), let N1 = {n ∈ N | un = u} and N2 = N \ N1. From (1), xn = x and un = u for n ∈ N1, so
    gi(xn, un) = gi(x, u) = 0.   (3)
For n ∈ N2, from (1),
    gi(xn, un) = gi((x, u) + tn(d, (un − u)/tn)) − gi(x, u).   (4)
If N2 is finite, then there exists k such that every n > k belongs to N1, so (3) holds for all n > k. If N2 is infinite, then
    lim_{n→∞, n∈N2} (un − u)/tn = lim_{n→∞, n∈N2} (un − u)/‖un − u‖^{1/2} = 0.
Dividing (4) by tn and using d ∈ G∗(x, E(u)),
    lim sup_{n→∞, n∈N2} gi(xn, un)/tn ≤ lim sup_{t↓0, (d, ξ)→(d, 0)} [gi((x, u) + t(d, ξ)) − gi(x, u)]/t = (gi)+((x, u), (d, 0)) < 0.
Thus, there exists ki such that
    gi(xn, un) < 0   (∀n > ki, n ∈ N2).   (5)
From (2), (3), and (5), gi(xn, un) ≤ 0 for all i ∈ I when n is sufficiently large, i.e., xn ∈ E(un) when n is sufficiently large.
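A one-dimensional sketch of what Theorem 5 provides (the constraint, the point, and the step-size choice below are our own illustrative assumptions, not taken from the paper): take the single parametric constraint g(x, u) = x − u, so E(u) = (−∞, u]. At the active point x = u, the direction d = −1 satisfies (g)+((x, u), (d, 0)) = −1 < 0, so G∗(x, E(u)) ≠ ∅, and any step sizes tn ↓ 0 with ‖un − u‖/tn → 0 make xn = x + tn d feasible for E(un) eventually:

```python
import math

u = 0.0
x = 0.0                                  # active point of E(u) = (-inf, u]
d = -1.0                                 # (g)+((x, u), (d, 0)) = -1 < 0, so d is in G*(x, E(u))

def in_E(x_val, u_val):                  # constraint g(x, u) = x - u <= 0
    return x_val - u_val <= 0.0

u_seq = [((-1) ** n) / n for n in range(1, 21)]   # un -> u from both sides
for n, un in enumerate(u_seq, start=1):
    tn = math.sqrt(abs(un - u))          # tn -> 0 and |un - u| / tn -> 0
    xn = x + tn * d
    print(n, round(un, 4), round(xn, 4), in_E(xn, un))

# Every xn lies in E(un) and xn -> x, which is the semicontinuity of the
# feasible-set map u -> E(u) asserted in Theorem 5.
```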
Theorem 6  Let x ∈ E and, for every i ∈ I(x), let gi be strongly directionally differentiable at x with gi(x, ·) subadditive. If G(x, E) ≠ ∅, then C(x, E) = M(x, E).
Proof  From Lemma 2, it is only necessary to prove M(x, E) ⊆ C(x, E). Let d ∈ M(x, E) and i ∈ I(x). Given d0 ∈ G(x, E) and ε > 0, by the subadditivity and positive homogeneity of gi(x, ·),
    gi(x, d + εd0) ≤ gi(x, d) + εgi(x, d0) < 0.   (6)
From
    gi(x, d + εd0) = lim sup_{t↓0, h→d+εd0} [gi(x + th) − gi(x)]/t
and (6), there exist δ > 0 and a neighborhood V of d + εd0 such that
    gi(x + th) − gi(x) < 0   (∀t ∈ (0, δ), h ∈ V).   (7)
Since x ∈ E, i.e., gi(x) ≤ 0, it follows from (7) that gi(x + th) < gi(x) ≤ 0; taking δ and V common to the finitely many i ∈ I(x), and noting that, for i ∉ I(x), gi(x) < 0 and the upper semi-continuity of gi also give gi(x + th) < 0 for all sufficiently small t and all h near d + εd0, we obtain d + εd0 ∈ T(x, E). Letting ε → 0, from Lemma 1 (i) and (ii),
    d ∈ cl T(x, E) ⊆ cl C(x, E) = C(x, E),
where cl denotes the closure of a set.
Example 2  Let the real Banach space X = C[0, 1], and let g : X → R be defined, for x = x(t) ∈ X, by
    g(x) = [x(0) − |x(0)|]/2 + (1/4)(x(0) + |x(0)|)²;
then E = {x ∈ X | g(x) ≤ 0} = {x ∈ C[0, 1] | x(0) ≤ 0}, and
    g(0, d) = [d(0) − |d(0)|]/2   (∀d = d(t) ∈ X).
Here g(0, ·) is not subadditive, and G(0, E) = {d ∈ C[0, 1] | d(0) < 0} ≠ ∅, but
    C(0, E) = E ≠ C[0, 1] = M(0, E).
In this case, the conclusion of Theorem 6 does not hold.
Lemma 3  Let x ∈ E and suppose that, for every i ∈ I(x), gi is a concave function in some neighborhood of x and is strongly directionally differentiable at x. Then
(i) for all d ∈ M(x, E), there exists δ > 0 such that x + td ∈ E (∀t ∈ (0, δ));
(ii) C(x, E) = M(x, E).
Proof  For i ∈ I(x), gi is strongly directionally differentiable at x and concave near x, so for every d ∈ M(x, E) there exists δ > 0 such that
    gi(x, d) ≥ [gi(x + td) − gi(x)]/t = gi(x + td)/t   (∀t ∈ (0, δ)),
hence
    gi(x + td) ≤ t gi(x, d) ≤ 0   (∀t ∈ (0, δ)).
For i ∉ I(x), gi(x) < 0 and the upper semi-continuity of gi give gi(x + td) < 0 for all sufficiently small t; taking δ common to the finitely many i ∈ I, we obtain x + td ∈ E (∀t ∈ (0, δ)), which proves (i).
For d ∈ M(x, E), by (i), x + (1/n)d ∈ E when n is sufficiently large; hence d ∈ C(x, E). Thus, M(x, E) ⊆ C(x, E), and from Lemma 2, M(x, E) = C(x, E).
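A finite-dimensional illustration of Lemma 3 (the concave constraint and the code below are our own choices, not from the paper): take the single concave constraint g(x) = x(1 − x) on R, so E = {x | g(x) ≤ 0} = (−∞, 0] ∪ [1, ∞). At the active point x = 0 one has g(0, d) = d, hence M(0, E) = {d | d ≤ 0}, every such direction stays feasible for small steps as in part (i), and C(0, E) = M(0, E) as in part (ii):

```python
def g(x):
    """Concave constraint g(x) = x*(1 - x); E = {x : g(x) <= 0} = (-inf, 0] U [1, inf)."""
    return x * (1.0 - x)

def in_E(x):
    return g(x) <= 0.0

x0 = 0.0                                 # active point: g(x0) = 0
ts = [10.0**(-k) for k in range(1, 7)]

for d in (-2.0, -0.5, 0.0, 0.5, 2.0):
    deriv = d                            # g is smooth with g'(0) = 1, so g(x0, d) = d
    in_M = deriv <= 0.0
    stays = all(in_E(x0 + t * d) for t in ts)
    print(d, in_M, stays)

# Directions with d <= 0 belong to M(0, E), and x0 + t*d stays feasible for small
# t (Lemma 3 (i)); directions with d > 0 fail both tests, so here
# C(0, E) = M(0, E) = (-inf, 0], as Lemma 3 (ii) asserts.
```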
References
1  Merkovsky R R, Ward D E. General constraint qualifications in nondifferentiable programming. Mathematical Programming, 1990, 47: 389–405
2  Ward D E. A constraint qualification in quasidifferentiable programming. Optimization, 1991, 22: 661–668
3  Kuntz L, Scholtes S. Constraint qualifications in quasidifferentiable optimization. Mathematical Programming, 1993, 60: 339–347
4  Jourani A. Constraint qualifications and Lagrange multipliers in nondifferentiable programming problems. Journal of Optimization Theory and Applications, 1994, 81: 533–548
5  Kuntz L, Scholtes S. A nonsmooth variant of the Mangasarian–Fromovitz constraint qualification. Journal of Optimization Theory and Applications, 1994, 82: 59–75
6  Li W, Nahak C, Singer I. Constraint qualifications for semi-infinite systems of convex inequalities. SIAM Journal on Optimization, 2000, 11: 31–52
7  Crespi G P, Ginchev I, Rocca M. Minty variational inequalities, increase-along-rays property and optimization. Journal of Optimization Theory and Applications, 2004, 123: 479–496
8  Xu Yihong, Liu Sanyang. Super efficiency in the nearly cone-subconvexlike vector optimization with set-valued functions. Acta Math Sci, 2005, 25B(1): 152–160