Accepted Manuscript

Constraint Qualifications for Convex Optimization without Convexity of Constraints: New Connections and Applications to Best Approximation

N.H. Chieu, V. Jeyakumar, G. Li, H. Mohebi

PII: S0377-2217(17)30665-3
DOI: 10.1016/j.ejor.2017.07.038
Reference: EOR 14588

To appear in: European Journal of Operational Research

Received date: 8 November 2016
Revised date: 24 April 2017
Accepted date: 13 July 2017

Please cite this article as: N.H. Chieu, V. Jeyakumar, G. Li, H. Mohebi, Constraint Qualifications for Convex Optimization without Convexity of Constraints: New Connections and Applications to Best Approximation, European Journal of Operational Research (2017), doi: 10.1016/j.ejor.2017.07.038

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Highlights

• Examining convex optimization without the standard convexity of constraint functions
• Presenting constraint qualifications for optimality and their new connections
• The weakest constraint qualification for optimality of convex optimization is given
• Characterizing best approximation from a convex set without convexity of constraints
Constraint Qualifications for Convex Optimization without Convexity of Constraints: New Connections and Applications to Best Approximation

July 18, 2017
N. H. Chieu∗, V. Jeyakumar†, G. Li‡ and H. Mohebi§¶

Abstract
We study constraint qualifications and necessary and sufficient optimality conditions for a convex optimization problem with inequality constraints where the constraint functions are continuously differentiable but they are not assumed to be convex. We present constraint qualifications under which the Karush-Kuhn-Tucker conditions are necessary and sufficient for optimality without the convexity of the constraint functions and establish new links among various known constraint qualifications that guarantee necessary Karush-Kuhn-Tucker conditions. We also present a new constraint qualification which is the weakest qualification for the Karush-Kuhn-Tucker conditions to be necessary for optimality of the convex optimization problem. Consequently, we present Lagrange multiplier characterizations for the best approximation from a convex set in the face of nonconvex inequality constraints, extending corresponding known results in the literature. We finally give a table summarizing various links among the constraint qualifications.
Key words: Convex programming, nonconvex constraints, constraint qualifications, best approximation, necessary and sufficient optimality conditions.
∗ Department of Mathematics, Vinh University, Nghe An, Vietnam. E-mail: [email protected]. Present address: Department of Applied Mathematics, University of New South Wales, Sydney 2052, Australia. E-mail: [email protected]
† Department of Applied Mathematics, University of New South Wales, Sydney 2052, Australia. E-mail: [email protected]
‡ Department of Applied Mathematics, University of New South Wales, Sydney 2052, Australia. E-mail: [email protected]
§ Department of Mathematics, Shahid Bahonar University of Kerman, P.O. Box 76169133, Postal Code 7616914111, Kerman, Iran. E-mail: [email protected]
¶ Present address: Department of Applied Mathematics, University of New South Wales, Sydney 2052, Australia. E-mail: [email protected]
1 Introduction
Consider the convex optimization problem

(P)   min_{x∈R^n} {f(x) : x ∈ C ∩ K},

where C is a nonempty closed convex subset of the Euclidean space R^n and f : R^n → R is a continuous convex function. The set K, defined by

K := {x ∈ R^n : g_j(x) ≤ 0, j = 1, 2, ..., m},

is a nonempty convex subset of R^n, and the functions g_j : R^n → R, j = 1, 2, ..., m, are continuously differentiable, but they are not assumed to be convex functions.
The problem (P) is commonly referred to as a convex programming problem whenever the g_j's are also convex functions (see [1, 8, 21]). It covers a broad class of nonlinear programming problems, including the classical convex programming problems as well as convex minimization problems with quasi-convex constraint functions g_j, j = 1, 2, ..., m, since the quasi-convexity [1, 18] of the g_j's guarantees that K is a convex set.
Constraint qualifications are cornerstones for the study of the classical convex programming problems, and they guarantee necessary and sufficient conditions for optimality [3, 8, 10, 12]. Over the years, various constraint qualifications have been employed to study convex programming problems in the literature [1, 8, 21]. In particular, Slater's condition [12] is commonly used to obtain the so-called Karush-Kuhn-Tucker conditions, which are necessary and sufficient for optimality. Unfortunately, the characterization of optimality of the problem (P) by the Karush-Kuhn-Tucker conditions may fail under Slater's condition when the g_j's are not convex (see Example 3.4). Recently, Slater's condition together with an additional condition on the constraints has been shown to guarantee that the Karush-Kuhn-Tucker conditions are necessary and sufficient for optimality of the problem (P) in the case where C = R^n [7, 9, 14, 19].
The purpose of this paper is to present constraint qualifications under which the Karush-Kuhn-Tucker conditions are necessary and sufficient for optimality of the problem (P) without the convexity of the constraint functions g_j, j = 1, 2, ..., m, and to establish new connections among various known constraint qualifications that guarantee necessary Karush-Kuhn-Tucker conditions for the problem (P), such as the Robinson constraint qualification, the Mangasarian-Fromovitz constraint qualification [2] and the strong conical hull intersection property [10]. We also present a new constraint qualification which is the weakest constraint qualification for the Karush-Kuhn-Tucker conditions to be necessary for optimality of our convex problem (P). As an application, we establish Lagrange multiplier characterizations for the best approximation from the convex set C ∩ K in the face of non-convex constraint functions g_j, j = 1, 2, ..., m, extending corresponding known results in finite dimensions (see [4, 10, 11] and other references therein).

The outline of the paper is as follows. Section 2 presents relationships among various known constraint qualifications of the convex optimization problem (P). Section 3 establishes the weakest constraint qualification under which the Karush-Kuhn-Tucker conditions are necessary for the problem (P) and obtains constraint qualifications guaranteeing the Karush-Kuhn-Tucker conditions to be necessary and sufficient for optimality of the problem (P). Section 4 provides Lagrange multiplier characterizations for the best approximation from the convex set C ∩ K.
2 CQs Revisited: New Connections
In this section, we examine the constraint system

{x ∈ R^n : x ∈ C, g_j(x) ≤ 0, j = 1, 2, ..., m},   (2.1)

where C is a closed convex subset of the Euclidean space R^n,

K := {x ∈ R^n : g_j(x) ≤ 0, j = 1, 2, ..., m},   (2.2)

each g_j : R^n → R, j = 1, 2, ..., m, is a continuously differentiable function, and C ∩ K ≠ ∅. Note that K is closed, due to the continuity of the functions g_j. We present the constraint qualifications that are used to study optimization over the constraint system (2.1) and their connections when the constraint system is convex in the sense that K is a convex set while the g_j, j = 1, 2, ..., m, are not convex.

We begin by fixing the notation and preliminaries that are used throughout the paper. Let g : R^n → R^m be the mapping defined by

g(x) := (g_1(x), g_2(x), ..., g_m(x)),  ∀ x ∈ R^n.   (2.3)

It is clear that g is a continuously differentiable mapping. Let

I(x) := {j ∈ {1, 2, ..., m} : g_j(x) = 0}   (2.4)

be the active index set at x ∈ K. For a subset W of R^n, define the polar cone of W by

W* := {u ∈ R^n : ⟨u, w⟩ ≤ 0, ∀ w ∈ W}.   (2.5)
The nonnegative orthant of R^m is denoted by R^m_+ and is defined by

R^m_+ := {(x_1, x_2, ..., x_m) ∈ R^m : x_i ≥ 0, i = 1, 2, ..., m}.

For a nonempty subset D of R^n, the distance from a given point x ∈ R^n to D is defined by d(x, D) := inf_{u∈D} ‖x − u‖. We say that a point u ∈ D is a best approximation of x ∈ R^n if ‖x − u‖ = d(x, D). The set of all best approximations of x in D is denoted by P_D(x), that is,

P_D(x) := {u ∈ D : ‖x − u‖ = d(x, D)}.

The following characterization of best approximations is well known (see [4]).

Lemma 2.1. Let D be a closed convex subset of R^n, x ∈ R^n, and u ∈ D. Then, u = P_D(x) if and only if x − u ∈ (D − u)*.

In the following, we recall the notion of the strong conical hull intersection property (the strong CHIP) (see, for example, [4, 5, 6]).

Definition 2.1. (Strong CHIP [4, 5]) Let C_1, C_2, ..., C_m be closed convex sets in R^n and let x ∈ ⋂_{j=1}^m C_j. Then, the collection {C_1, C_2, ..., C_m} is said to have the strong CHIP at x if

(⋂_{j=1}^m C_j − x)* = Σ_{j=1}^m (C_j − x)*.

The collection {C_1, C_2, ..., C_m} is said to have the strong CHIP if it has the strong CHIP at each x ∈ ⋂_{j=1}^m C_j.
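The characterization in Lemma 2.1 is easy to test numerically on sets whose projections are known in closed form. The following Python sketch (an illustration added here, not taken from the paper) uses the Euclidean unit ball as D, for which P_D(x) = x/‖x‖ when ‖x‖ > 1, and checks the polar-cone inequality ⟨x − u, d − u⟩ ≤ 0 on randomly sampled points d of D:

```python
import numpy as np

def project_unit_ball(x):
    """P_D(x) for the closed Euclidean unit ball D = {u : ||u|| <= 1}."""
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

rng = np.random.default_rng(0)
x = np.array([2.0, 1.0])      # a point outside D
u = project_unit_ball(x)      # its best approximation in D

# Lemma 2.1: u = P_D(x) iff <x - u, d - u> <= 0 for every d in D.
for _ in range(1000):
    d = rng.normal(size=2)
    d = rng.uniform(0.0, 1.0) * d / np.linalg.norm(d)   # random point of D
    assert np.dot(x - u, d - u) <= 1e-12

print(u)   # [0.894..., 0.447...], i.e. x / ||x||
```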
Definition 2.2. (Robinson constraint qualification [2]). Let g be as in (2.3), and let Ω := {x ∈ C : −g(x) ∈ R^m_+} = C ∩ K and x̄ ∈ Ω. One says that the set Ω satisfies the Robinson constraint qualification at x̄ (with respect to the given representation) if

0 ∈ int{g(x̄) + ∇g(x̄)(C − x̄) + R^m_+}.

Definition 2.3. (i) (Mangasarian-Fromovitz constraint qualification [2]). The set K is said to satisfy the Mangasarian-Fromovitz constraint qualification (MFCQ) at x̄ ∈ K (with respect to the given representation) if there exists v ∈ R^n such that ⟨∇g_j(x̄), v⟩ < 0 for each j ∈ I(x̄).

(ii) (Nondegeneracy condition [13]). One says that K satisfies the nondegeneracy condition at x̄ ∈ K if for each j ∈ {1, 2, ..., m}, ∇g_j(x̄) ≠ 0 whenever j ∈ I(x̄). If the nondegeneracy condition holds at every point x ∈ K, one says that K satisfies the nondegeneracy condition.

(iii) (Slater's condition [1]). The set Ω := C ∩ K is said to satisfy the Slater condition if there exists x_0 ∈ C such that g_j(x_0) < 0 for all j = 1, ..., m.

Note that (MFCQ) holds at x̄ ∈ K if and only if {∇g_j(x̄)}_{j∈I(x̄)} is positively linearly independent in the sense that if Σ_{j∈I(x̄)} λ_j ∇g_j(x̄) = 0 with λ_j ≥ 0, j ∈ I(x̄), then λ_j = 0 for all j ∈ I(x̄). The latter means that for each λ := (λ_1, λ_2, ..., λ_m) ∈ R^m_+\{0} with λ_j g_j(x̄) = 0, j = 1, 2, ..., m, one has Σ_{j=1}^m λ_j ∇g_j(x̄) ≠ 0.
The following result was proved in [13] using the supporting hyperplane characterization of convex sets from [22, Theorem 1.3.3].

Lemma 2.2. Let K be as in (2.2). If K is convex, then, for each j ∈ {1, 2, ..., m},

⟨∇g_j(x), u − x⟩ ≤ 0,  ∀ x, u ∈ K with j ∈ I(x),   (2.6)

where I(x) is defined by (2.4). Moreover, if the Slater condition holds and the nondegeneracy condition is satisfied, then (2.6) implies that K is convex.

The following example shows that condition (2.6) alone does not guarantee the convexity of K.

Example 2.1. Let g(x) := x²(x − 1)² for all x ∈ R, and K := {x ∈ R : g(x) ≤ 0} = {0, 1}. We see that ∇g(x) = 0 for each x ∈ K. Hence, condition (2.6) is satisfied. However, K is not a convex set. Note that neither the Slater condition nor the nondegeneracy condition holds.

Theorem 2.1. (A Comparison of Constraint Qualifications) Let K be as in (2.2), let C be a closed convex subset of R^n, and let x̄ ∈ C ∩ K. Then, the following assertions are equivalent:

(a) For each λ := (λ_1, λ_2, ..., λ_m) ∈ R^m_+\{0} with λ_j g_j(x̄) = 0, j = 1, 2, ..., m, one has ⟨Σ_{j=1}^m λ_j ∇g_j(x̄), v − x̄⟩ < 0 for some v ∈ C;
(b) Robinson's constraint qualification holds at x̄.

Consequently, if one of the assertions (a) and (b) holds, then there exists x_0 ∈ C such that g_j(x_0) < 0 for all j = 1, 2, ..., m (Slater's condition). Furthermore, if we assume that K is convex and there exists x_0 ∈ C such that g_j(x_0) < 0 for all j = 1, 2, ..., m, then (a) (and thus (b)) is equivalent to the following assertions:

(c) For each j ∈ {1, 2, ..., m}, ∇g_j(x̄) ≠ 0 whenever j ∈ I(x̄) (nondegeneracy condition).

(d) For each j ∈ {1, 2, ..., m}, ⟨∇g_j(x̄), v − x̄⟩ ≠ 0 for some v ∈ K with j ∈ I(x̄), where I(x̄) is defined by (2.4).
Proof: (a) =⇒ (b). Let (a) be satisfied. Suppose on the contrary that the Robinson constraint qualification does not hold at x̄. That is,

0 ∉ int{g(x̄) + ∇g(x̄)(C − x̄) + R^m_+},

where g is defined by (2.3). So, by the convex separation theorem, there exists λ̄ := (λ̄_1, λ̄_2, ..., λ̄_m) ∈ R^m\{0} such that

⟨λ̄, g(x̄) + ∇g(x̄)(v − x̄) + y⟩ ≥ 0,  ∀ v ∈ C, ∀ y ∈ R^m_+.   (2.7)

This together with the fact that R^m_+ is a cone implies that λ̄ ∈ R^m_+\{0} and λ̄_j g_j(x̄) = 0, j = 1, 2, ..., m. Therefore, it follows from (2.7) that ⟨Σ_{j=1}^m λ̄_j ∇g_j(x̄), v − x̄⟩ ≥ 0 for all v ∈ C, which contradicts the validity of (a). Therefore, the implication (a) =⇒ (b) has been justified.
(b) =⇒ (a). Suppose that Robinson's constraint qualification is satisfied at x̄. Let λ := (λ_1, λ_2, ..., λ_m) ∈ R^m_+\{0} with λ_j g_j(x̄) = 0, j = 1, 2, ..., m. We show that there exists v ∈ C such that ⟨Σ_{j=1}^m λ_j ∇g_j(x̄), v − x̄⟩ < 0. Since Robinson's constraint qualification is fulfilled at x̄ and int R^m_+ ≠ ∅, by [2, p. 71], there exists v ∈ C such that

−[g(x̄) + ∇g(x̄)(v − x̄)] ∈ int R^m_+.   (2.8)
We claim that ⟨λ, g(x̄) + ∇g(x̄)(v − x̄)⟩ < 0. Indeed, suppose on the contrary that

⟨λ, g(x̄) + ∇g(x̄)(v − x̄)⟩ ≥ 0.   (2.9)

Since λ ∈ R^m_+, by (2.8), we have ⟨λ, g(x̄) + ∇g(x̄)(v − x̄)⟩ ≤ 0. This together with (2.9) implies that

⟨λ, y⟩ ≤ 0 = ⟨λ, g(x̄) + ∇g(x̄)(v − x̄)⟩,  ∀ y ∈ −R^m_+.

This guarantees that

g(x̄) + ∇g(x̄)(v − x̄) ∈ argmax{⟨λ, y⟩ : y ∈ −int R^m_+}.

So, by the classical Fermat rule, λ = 0, which is a contradiction. Hence, ⟨λ, g(x̄) + ∇g(x̄)(v − x̄)⟩ < 0. On the other hand, λ_j g_j(x̄) = 0, j = 1, 2, ..., m. Thus, we have ⟨λ, ∇g(x̄)(v − x̄)⟩ < 0. That is, ⟨Σ_{j=1}^m λ_j ∇g_j(x̄), v − x̄⟩ < 0 for some v ∈ C. The latter shows that (a) holds.
We now show that (a) (or (b)) implies the Slater condition. It follows from (2.8) and the differentiability of g at x̄ that

lim_{t↓0} [g(x̄ + t(v − x̄)) − g(x̄)]/t = ∇g(x̄)(v − x̄) ∈ −int R^m_+ − g(x̄).

So, for some t_0 > 0 sufficiently small, it holds that

g(x̄ + t_0(v − x̄)) ∈ (1 − t_0)g(x̄) − t_0 int R^m_+ ⊂ −int R^m_+.

Put x_0 := x̄ + t_0(v − x̄) ∈ C. Then, −g(x_0) ∈ int R^m_+. That is, there exists x_0 ∈ C such that g_j(x_0) < 0 for all j = 1, 2, ..., m.

Now, assume that K is convex and there exists x_0 ∈ C such that g_j(x_0) < 0 for all j = 1, 2, ..., m.
(a) =⇒ (c). Assume that (a) holds. Let j ∈ {1, 2, ..., m} be arbitrary such that j ∈ I(x̄). Then, g_j(x̄) = 0. Let λ := e_j, where e_j is the unit vector in R^m whose j-th component is 1 and whose other components are 0. So, λ := (λ_1, λ_2, ..., λ_m) ∈ R^m_+\{0} and λ_i g_i(x̄) = 0, i = 1, 2, ..., m. Therefore, in view of the hypothesis (a), there exists v ∈ C such that ⟨Σ_{i=1}^m λ_i ∇g_i(x̄), v − x̄⟩ < 0. This implies that ⟨∇g_j(x̄), v − x̄⟩ < 0 for some v ∈ C. Hence, ∇g_j(x̄) ≠ 0. That is, (c) holds.

(c) =⇒ (a). Suppose that (c) holds. Assume, if possible, that there exists λ̄ := (λ̄_1, λ̄_2, ..., λ̄_m) ∈ R^m_+\{0} with λ̄_j g_j(x̄) = 0, j = 1, 2, ..., m, such that

⟨Σ_{j=1}^m λ̄_j ∇g_j(x̄), v − x̄⟩ ≥ 0,  ∀ v ∈ C.   (2.10)

But λ̄ = (λ̄_1, λ̄_2, ..., λ̄_m) ∈ R^m_+\{0}, so there exists j_0 ∈ {1, 2, ..., m} such that λ̄_{j_0} > 0. Let J := {j ∈ {1, 2, ..., m} : λ̄_j > 0}. Clearly, J ≠ ∅ because j_0 ∈ J. Since λ̄_j g_j(x̄) = 0, j = 1, 2, ..., m, it follows that g_j(x̄) = 0 for all j ∈ J. Moreover, in view of (2.10), we conclude that

⟨Σ_{j∈J} λ̄_j ∇g_j(x̄), v − x̄⟩ ≥ 0,  ∀ v ∈ C.   (2.11)
On the other hand, by the assumption, there exists x_0 ∈ C such that g_j(x_0) < 0 for all j = 1, 2, ..., m. Then, since each g_j (j = 1, 2, ..., m) is continuous at x_0, there exists r > 0 such that g_j(x_0 + ru) < 0 for all j = 1, 2, ..., m and all u ∈ B := {x ∈ R^n : ‖x‖ ≤ 1}. That is, x_0 + ru ∈ K for all u ∈ B. So, since x̄ ∈ K and K is convex, we conclude from Lemma 2.2 that

⟨∇g_j(x̄), x_0 + ru − x̄⟩ ≤ 0,  ∀ u ∈ B, ∀ j ∈ J.   (2.12)

In particular, for u = 0 ∈ B, one has

⟨∇g_j(x̄), x_0 − x̄⟩ ≤ 0,  ∀ j ∈ J.   (2.13)

This together with (2.11) and the fact that x_0 ∈ C implies that

⟨Σ_{j∈J} λ̄_j ∇g_j(x̄), x_0 − x̄⟩ = 0.   (2.14)

Again, by using (2.13) and the fact that λ̄_j > 0 for all j ∈ J, we deduce from (2.14) that ⟨∇g_j(x̄), x_0 − x̄⟩ = 0 for all j ∈ J. So, it follows from (2.12) that

⟨∇g_j(x̄), u⟩ ≤ 0,  ∀ u ∈ B, ∀ j ∈ J,

which implies that ∇g_j(x̄) = 0 for all j ∈ J. This contradicts the validity of (c), because g_j(x̄) = 0 for all j ∈ J. Hence, the implication (c) =⇒ (a) has been justified.
(c) =⇒ (d). Assume that (c) holds. Suppose that there exists j_0 ∈ {1, 2, ..., m} with g_{j_0}(x̄) = 0 (i.e., j_0 ∈ I(x̄)) such that

⟨∇g_{j_0}(x̄), v − x̄⟩ = 0,  ∀ v ∈ K.   (2.15)

Since, by the assumption, there exists x_0 ∈ C such that g_j(x_0) < 0 for all j = 1, 2, ..., m, and each g_j (j = 1, 2, ..., m) is continuous at x_0, there exists r > 0 such that g_j(x_0 + ru) < 0 for all j = 1, 2, ..., m and all u ∈ B. This implies that x_0 + ru ∈ K for all u ∈ B. In view of (2.15), one has

⟨∇g_{j_0}(x̄), x_0 + ru − x̄⟩ = 0,  ∀ u ∈ B.   (2.16)

Putting u = 0 in (2.16), we conclude that ⟨∇g_{j_0}(x̄), x_0 − x̄⟩ = 0. This together with (2.16) implies that

⟨∇g_{j_0}(x̄), u⟩ = 0,  ∀ u ∈ B.

This guarantees that ∇g_{j_0}(x̄) = 0 with j_0 ∈ I(x̄), which contradicts (c). So, (d) holds.

Clearly, (d) implies (c) without the validity of Slater's condition and the convexity of K. □
Remark 2.1. Note that in Theorem 2.1, if each function g_j, j = 1, 2, ..., m, is convex and the Slater condition holds, then ∇g_j(x̄) ≠ 0 for each j ∈ {1, 2, ..., m} with g_j(x̄) = 0; that is, the nondegeneracy condition is satisfied at x̄. Indeed, if there exists j ∈ {1, 2, ..., m} with g_j(x̄) = 0 such that ∇g_j(x̄) = 0, then, by the convexity of the function g_j, x̄ is a global minimizer of g_j. So, the Slater condition does not hold, which is a contradiction.
Remark 2.2. From the proof of Theorem 2.1, we see that the implications (a) =⇒ (c) and (c) ⇐⇒ (d) do not require the convexity of K. However, the convexity of K is essential for the validity of the implication (c) =⇒ (a) (see Example 2.2). Also, even in the case where the g_j, j = 1, ..., m, are convex, it may happen that ∇g_j(x) ≠ 0 whenever x ∈ K and j ∈ {1, 2, ..., m} with g_j(x) = 0, while there is no x_0 ∈ R^n such that g_j(x_0) < 0 for all j = 1, 2, ..., m; for example, let C := R² and g(x_1, x_2) := (x_1 − x_2, x_2 − x_1) for all x_1, x_2 ∈ R.
Corollary 2.1. If K is a closed convex set given by (2.2) and x̄ ∈ K, then the following assertions are equivalent:
(i) K satisfies (MFCQ) at x̄.
(ii) Slater's condition holds and the nondegeneracy condition is satisfied at x̄.
Proof: For C := R^n, it is well known (see the proof of Theorem 2.1, the equivalence (a) ⇐⇒ (b)) that Robinson's constraint qualification is equivalent to the Mangasarian-Fromovitz constraint qualification. So, the result follows from Theorem 2.1 (the equivalence (b) ⇐⇒ (c)). □

Example 2.2. Let K := {x ∈ R² : g_j(x) ≤ 0, j = 1, 2}, where g_1(x) := x_1³ − x_2 and g_2(x) := −x_1² + x_2 for all x := (x_1, x_2) ∈ R². We see that g_1(1/2, 3/16) = g_2(1/2, 3/16) = −1/16 < 0, that is, Slater's condition holds. Also, one has ∇g_1(x) = (3x_1², −1) and ∇g_2(x) = (−2x_1, 1), which implies that the nondegeneracy condition holds at x̄ := (0, 0) ∈ K. We note that K is not convex and (MFCQ) is invalid at x̄ = (0, 0). So, in the absence of the convexity of K, the validity of both the Slater and the nondegeneracy conditions at x̄ does not guarantee the validity of (MFCQ) at x̄ ∈ K.
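Whether (MFCQ) holds at a given point can be checked by a small linear program: one seeks a direction v and a margin s > 0 with ⟨∇g_j(x̄), v⟩ ≤ −s for every active j. The Python sketch below (an added illustration; scipy's linprog is assumed as the LP solver) applies this test to Example 2.2 at x̄ = (0, 0) and returns an optimal margin of 0, confirming that (MFCQ) fails there:

```python
import numpy as np
from scipy.optimize import linprog

# Example 2.2 at xbar = (0, 0): both constraints are active, with
# grad g1(0,0) = (0, -1) and grad g2(0,0) = (0, 1).
grads = np.array([[0.0, -1.0],
                  [0.0,  1.0]])

# (MFCQ) holds iff some v gives <grad g_j(xbar), v> < 0 for all active j.
# LP test: maximize s subject to grads @ v + s <= 0, -1 <= v_i <= 1, 0 <= s <= 1;
# a positive optimal margin s certifies (MFCQ).  Variables z = (v1, v2, s).
res = linprog(c=[0.0, 0.0, -1.0],
              A_ub=np.hstack([grads, np.ones((2, 1))]),
              b_ub=np.zeros(2),
              bounds=[(-1, 1), (-1, 1), (0, 1)])

print(res.x[2])  # 0.0: the margin cannot be made positive, so (MFCQ) fails at (0, 0)
```

Here the two opposite active gradients (0, −1) and (0, 1) make any common strict descent direction impossible, exactly as the example asserts.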
In the absence of the convexity of K, the Slater condition and the nondegeneracy condition at x̄ do not guarantee the validity of Robinson's constraint qualification at x̄ either (see Example 2.3).

Example 2.3. Let K and g_1, g_2 be as in Example 2.2. Let C := R²_+. Clearly, C is a closed convex subset of R² and K is not convex. Also, both the Slater and the nondegeneracy conditions are satisfied at x̄ := (0, 0) ∈ C ∩ K. It is easy to check that g(x̄) + ∇g(x̄)(C − x̄) + R²_+ ⊆ R × R_+. Therefore,

0 ∉ int[g(x̄) + ∇g(x̄)(C − x̄) + R²_+],

and so, Robinson's constraint qualification does not hold at x̄ ∈ C ∩ K.

3 Weakest CQ for Necessary Optimality

In this section, we present a new constraint qualification guaranteeing necessary and sufficient optimality conditions for the problem (P), where each g_j, j = 1, 2, ..., m, is a continuously differentiable function, but is not necessarily convex, while K is a convex set given by (2.2). This CQ is an extension of the sharpened strong CHIP, which was introduced in [10] for inequality constraints.
Definition 3.1. (G-S Strong CHIP). The pair {C, K} is said to have the generalized sharpened strong conical hull intersection property (the G-S strong CHIP) at x ∈ C ∩ K if

(C ∩ K − x)* = ⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x) : λ_j g_j(x) = 0, j = 1, 2, ..., m } + (C − x)*,   (3.17)

where λ := (λ_1, λ_2, ..., λ_m). The pair {C, K} is said to have the G-S strong CHIP if it has the G-S strong CHIP at every x ∈ C ∩ K.
The above definition of the G-S strong CHIP generalizes the notion of the sharpened strong CHIP of [10, Definition 3.2] for the case S := R^m_+, which states that

(C ∩ K − x)* = N g(x)⁰ + (C − x)*,   (3.18)

where the set

N g(x)⁰ := {u ∈ R^n : (u, ⟨u, x⟩) ∈ ⋃_{λ∈R^m_+} epi(λg)*}

appears in place of the set

⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x) : λ_j g_j(x) = 0, j = 1, 2, ..., m }

used in (3.17), and g was assumed to be R^m_+-convex. Here, for an arbitrarily given function f : R^n → R, f* denotes its conjugate function, that is, f*(u) := sup_{x∈R^n}{⟨u, x⟩ − f(x)}. If the g_j, j = 1, 2, ..., m, are differentiable and convex functions, then both definitions coincide.
We note that, even if K is convex, it may happen that

⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x) : λ_j g_j(x) = 0, j = 1, 2, ..., m } ≠ N g(x)⁰

if the g_j's (j = 1, 2, ..., m) are not convex.

Example 3.1. Let g(x) := x + x³ for all x ∈ R, C := R, and K := {x ∈ R : g(x) ≤ 0} = R_−, which is convex. We see that, for each λ ∈ R_+ and each v ∈ R,

(λg)*(v) = 0 if λ = 0 and v = 0, and (λg)*(v) = +∞ otherwise.

This implies that

⋃_{λ∈R_+} epi(λg)* = {0} × R_+,

and thus, N g(0)⁰ = {0}. On the other hand,

⋃_{λ∈R_+} {λ∇g(0) : λg(0) = 0} = R_+.

So,

⋃_{λ∈R_+} {λ∇g(0) : λg(0) = 0} ≠ N g(0)⁰.
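The conjugate computation in Example 3.1 can be observed numerically; the short Python sketch below (added for illustration) shows that the supremum defining (λg)*(v) is unbounded for any λ > 0, because the cubic term dominates as x → −∞:

```python
import numpy as np

# g(x) = x + x**3.  For lam > 0 the function x -> v*x - lam*(x + x**3) is
# unbounded above (the term -lam*x**3 dominates as x -> -infinity), so
# (lam g)*(v) = +infinity; only lam = 0 and v = 0 give the finite value 0.
v, lam = 1.0, 0.5
for x in [-10.0, -100.0, -1000.0]:
    print(x, v * x - lam * (x + x**3))   # 495.0, 499950.0, ...

# Consequently N g(0)^0 = {0}, whereas {lam * g'(0) : lam >= 0} = R_+ since g'(0) = 1.
```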
For each continuous convex function f : R^n → R, denote by (P_f) the following optimization problem:

min_{x∈R^n} f(x)  subject to  x ∈ C ∩ K,   (3.19)

where the set K is convex, given by (2.2), and C is a closed convex subset of R^n such that C ∩ K ≠ ∅.
We have the following result, which can be viewed as a counterpart of [10, Theorem 3.3] for the case where K is convex while the g_j, j = 1, 2, ..., m, are not necessarily convex.

Theorem 3.1. (Weakest CQ for Necessary Optimality Conditions) Let x̄ ∈ C ∩ K. Then the following assertions are equivalent:

(i) {C, K} has the G-S strong CHIP at x̄;

(ii) For each continuous convex function f : R^n → R attaining its global minimizer over C ∩ K at x̄, one has

0 ∈ ∂f(x̄) + ⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x̄) : λ_j g_j(x̄) = 0, j = 1, 2, ..., m } + (C − x̄)*,   (3.20)

where λ := (λ_1, λ_2, ..., λ_m) and ∂f(x̄) := {v ∈ R^n : ⟨v, x − x̄⟩ ≤ f(x) − f(x̄) ∀x ∈ R^n}.
Proof: (i) =⇒ (ii). Suppose that (i) holds. Let f be any continuous convex function such that x̄ ∈ C ∩ K is a global minimizer of (P_f). Using the Fermat rule and the Moreau-Rockafellar theorem, we get

0 ∈ ∂f(x̄) + N_{C∩K}(x̄) = ∂f(x̄) + (C ∩ K − x̄)*.

So, in view of (3.17), it follows that (3.20) holds.

(ii) =⇒ (i). Suppose that (ii) holds. Let u ∈ (C ∩ K − x̄)* be arbitrary. Then, by the definition of the polar cone (2.5), ⟨u, x − x̄⟩ ≤ 0 for all x ∈ C ∩ K. So, noting that x̄ ∈ C ∩ K, we see that f(x) := −⟨u, x⟩, x ∈ R^n, is a continuous convex function attaining its global minimizer over C ∩ K at x̄. By (3.20),

0 ∈ {−u} + ⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x̄) : λ_j g_j(x̄) = 0, j = 1, 2, ..., m } + (C − x̄)*.

In other words,

u ∈ ⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x̄) : λ_j g_j(x̄) = 0, j = 1, 2, ..., m } + (C − x̄)*.

This shows that

(C ∩ K − x̄)* ⊆ ⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x̄) : λ_j g_j(x̄) = 0, j = 1, 2, ..., m } + (C − x̄)*.   (3.21)

To justify the converse inclusion, let us take any

u ∈ ⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x̄) : λ_j g_j(x̄) = 0, j = 1, 2, ..., m } + (C − x̄)*.

Then there exist λ := (λ_1, λ_2, ..., λ_m) ∈ R^m_+ with λ_j g_j(x̄) = 0, j = 1, 2, ..., m, and x*_0 ∈ (C − x̄)* such that u = Σ_{j=1}^m λ_j ∇g_j(x̄) + x*_0. Let x ∈ C ∩ K be arbitrary. Since K is convex, x̄ + t(x − x̄) ∈ K for all t ∈ (0, 1). The latter means that g_j(x̄ + t(x − x̄)) ≤ 0 for all j = 1, 2, ..., m and all t ∈ (0, 1). This together with the fact that the function Σ_{j=1}^m λ_j g_j is differentiable at x̄ and that Σ_{j=1}^m λ_j g_j(x̄) = 0 implies that ⟨Σ_{j=1}^m λ_j ∇g_j(x̄), x − x̄⟩ ≤ 0 (indeed, the function t ↦ Σ_{j=1}^m λ_j g_j(x̄ + t(x − x̄)) is nonpositive on (0, 1) and vanishes at t = 0, so its right derivative at 0 is nonpositive). Hence,

⟨u, x − x̄⟩ = ⟨Σ_{j=1}^m λ_j ∇g_j(x̄) + x*_0, x − x̄⟩ = ⟨Σ_{j=1}^m λ_j ∇g_j(x̄), x − x̄⟩ + ⟨x*_0, x − x̄⟩ ≤ 0.

Hence, u ∈ (C ∩ K − x̄)*, and consequently,

⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x̄) : λ_j g_j(x̄) = 0, j = 1, 2, ..., m } + (C − x̄)* ⊆ (C ∩ K − x̄)*.   (3.22)

Therefore, from (3.21) and (3.22), it follows that the G-S strong CHIP holds at x̄. □
Definition 3.2. (KKT Condition). Let K be as in (2.2) and, for the problem (P_f), let x̄ ∈ C ∩ K. One says that the KKT condition holds at x̄ whenever

0 ∈ ∂f(x̄) + Σ_{j=1}^m λ_j ∇g_j(x̄) + (C − x̄)*,   (3.23)

for some λ_j ≥ 0 with λ_j g_j(x̄) = 0, j = 1, 2, ..., m.

When f is a differentiable function and C := R^n, the condition (3.23) is called the Karush-Kuhn-Tucker condition, and the λ_j, j = 1, 2, ..., m, in (3.23) are called Lagrange multipliers at x̄. Now, we present some sufficient conditions for the G-S strong CHIP to be valid.
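For a differentiable f and C = R^n, verifying the KKT condition (3.23) at a candidate point reduces to a nonnegative least-squares problem over the active gradients, since complementarity forces λ_j = 0 for inactive constraints. The following Python sketch (an added illustration; the helper names are hypothetical, and scipy's nnls is assumed) implements this test and applies it to Example 3.2 below with the assumed objective f(x) = x_1 + x_2:

```python
import numpy as np
from scipy.optimize import nnls

def is_kkt_point(grad_f, grad_g, g, xbar, tol=1e-8):
    """Test the smooth KKT condition (3.23) with C = R^n at xbar:
    grad f(xbar) + sum_j lam_j grad g_j(xbar) = 0 for some lam_j >= 0
    with lam_j g_j(xbar) = 0.  Complementarity forces lam_j = 0 off the
    active set, so it suffices to seek a nonnegative combination of the
    active gradients equal to -grad f(xbar)."""
    gx = np.asarray(g(xbar), dtype=float)
    active = np.where(np.abs(gx) <= tol)[0]              # active index set I(xbar)
    target = -np.asarray(grad_f(xbar), dtype=float)
    if active.size == 0:                                 # no active constraints
        return bool(np.linalg.norm(target) <= tol)
    A = np.asarray(grad_g(xbar), dtype=float)[active].T  # columns: active gradients
    lam, residual = nnls(A, target)                      # min ||A lam - target||, lam >= 0
    return bool(residual <= tol)

# Illustration on Example 3.2 with C = R^2 and the assumed objective f(x) = x1 + x2:
print(is_kkt_point(
    grad_f=lambda x: np.array([1.0, 1.0]),
    grad_g=lambda x: np.array([[-x[1], -x[0]], [-1.0, 0.0], [1.0, -1.0]]),
    g=lambda x: np.array([1 - x[0] * x[1], 1 - x[0], x[0] - x[1]]),
    xbar=np.array([1.0, 1.0])))                          # True, e.g. lam = (1, 0, 0)
```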
Theorem 3.2. (CQs for Necessary and Sufficient Optimality Conditions) Let K be as in (2.2), and let x̄ ∈ C ∩ K. Then, each of the following conditions is sufficient for {C, K} to have the G-S strong CHIP at x̄:

(i) K is a convex set, there exists x_0 ∈ C with g_j(x_0) < 0 for all j = 1, 2, ..., m, and the nondegeneracy condition is satisfied at x̄.

(ii) K is a convex set, there exist x̂ ∈ C and v ∈ C such that ⟨∇g_j(x̂), v⟩ < 0 for all j ∈ I(x̂), and the nondegeneracy condition is satisfied at x̄.

Consequently, under either of the conditions (i) and (ii), x̄ ∈ C ∩ K is a global minimizer of the problem (P_f) if and only if the KKT condition holds at x̄ for (P_f), where the problem (P_f) is defined by (3.19).
Proof: Assume that (i) holds. Then, by Theorem 2.1 (the implication (c) =⇒ (b)), Robinson's constraint qualification is fulfilled at x̄ ∈ Ω := C ∩ K. Let G : R^n → R^m × R^n be defined by G(x) := (−g(x), x) for all x ∈ R^n, where g is given in (2.3). We have Ω = {x ∈ R^n : G(x) ∈ R^m_+ × C}. Moreover, by [2, Lemma 2.100],

0 ∈ int{G(x̄) + ∇G(x̄)R^n − (R^m_+ × C)}.

So, from [20, Corollaries 1.15 & 3.9], it follows that

N_Ω(x̄) = ∇G(x̄)^T N_{R^m_+ × C}(−g(x̄), x̄) = −∇g(x̄)^T N_{R^m_+}(−g(x̄)) + N_C(x̄).   (3.24)

Since C and K are convex sets and R^m_+ is a convex cone, it holds that

N_Ω(x̄) = (C ∩ K − x̄)*,  N_C(x̄) = (C − x̄)*,   (3.25)

and

−N_{R^m_+}(−g(x̄)) = {λ := (λ_1, λ_2, ..., λ_m) ∈ R^m_+ : λ_j g_j(x̄) = 0, j = 1, 2, ..., m}.   (3.26)

Thus, it follows from (3.24), (3.25) and (3.26) that

(C ∩ K − x̄)* = ⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x̄) : λ_j g_j(x̄) = 0, j = 1, 2, ..., m } + (C − x̄)*.

This shows that {C, K} has the G-S strong CHIP at x̄.
Now, we prove that (ii) implies the validity of the G-S strong CHIP at x̄. Suppose that (ii) holds. By a similar argument as in the proof of Theorem 2.1 (the implication (b) =⇒ (a)), one can show that there exists x_0 ∈ C such that g_j(x_0) < 0 for all j = 1, 2, ..., m. Hence, by (i), {C, K} has the G-S strong CHIP at x̄.

Finally, suppose that one of (i) and (ii) holds. By the above, {C, K} has the G-S strong CHIP at x̄. If x̄ is a global minimizer of the problem (P_f), then, by Theorem 3.1 (the implication (i) =⇒ (ii)), there exists λ := (λ_1, λ_2, ..., λ_m) ∈ R^m_+ with λ_j g_j(x̄) = 0 for all j = 1, 2, ..., m such that 0 ∈ ∂f(x̄) + Σ_{j=1}^m λ_j ∇g_j(x̄) + (C − x̄)*. That is, the KKT condition holds at x̄. Conversely, assume that the KKT condition holds at x̄. Then there exists λ := (λ_1, λ_2, ..., λ_m) ∈ R^m_+ with λ_j g_j(x̄) = 0 for all j = 1, 2, ..., m such that

0 ∈ ∂f(x̄) + Σ_{j=1}^m λ_j ∇g_j(x̄) + (C − x̄)*.

On the other hand, we have

Σ_{j=1}^m λ_j ∇g_j(x̄) + (C − x̄)* ⊆ ⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x̄) : λ_j g_j(x̄) = 0, j = 1, 2, ..., m } + (C − x̄)* = (C ∩ K − x̄)*.

Therefore, by using Moreau-Rockafellar's theorem, we get

0 ∈ ∂f(x̄) + (C ∩ K − x̄)* = ∂f(x̄) + ∂δ_{C∩K}(x̄) = ∂(f + δ_{C∩K})(x̄).

So, by convexity, x̄ is a global minimizer of the problem (P_f). □
Example 3.2. Let K := {x ∈ R² : g_j(x) ≤ 0, j = 1, 2, 3}, where g_1(x) := 1 − x_1x_2, g_2(x) := 1 − x_1 and g_3(x) := x_1 − x_2 for all x := (x_1, x_2) ∈ R². Let C := {x := (x_1, x_2) ∈ R² : x_1² + x_2² ≤ 25}. Then, it is easy to check that C and K are closed convex subsets of R² such that C ∩ K ≠ ∅. Moreover, g_1(2, 4) = −7 < 0, g_2(2, 4) = −1 < 0 and g_3(2, 4) = −2 < 0; that is, g_j(x_0) < 0 for all j = 1, 2, 3, for x_0 := (2, 4) ∈ C. Also, it is easy to see that g_1(x̄) = g_2(x̄) = g_3(x̄) = 0 for x̄ := (1, 1), and moreover, ∇g_j(x̄) ≠ 0 with g_j(x̄) = 0, j = 1, 2, 3. Hence, the nondegeneracy condition holds at x̄. Thus, condition (i) in Theorem 3.2 is satisfied. So, by Theorem 3.2, {C, K} has the G-S strong CHIP at x̄.

By choosing C := R^n, the following result follows from Theorem 3.2.

Corollary 3.1. (CQ for Optimality when C = R^n [13, Theorem 3]). Consider the problem (P) with K given by (2.2) and C := R^n. Let the Slater condition hold and let the nondegeneracy condition be satisfied at x̄ ∈ K. Suppose that K is a convex set and f is a differentiable convex function. Then, x̄ is a global minimizer of the problem (P) if and only if it is a KKT point.

Proof: This is an immediate consequence of Theorem 3.2. □
The following example shows that in the absence of the Slater condition the conclusion of Corollary 3.1 may fail.
Example 3.3. Let g_1(x_1, x_2) := x_1² − x_2, g_2(x_1, x_2) := x_1² + x_2 and f(x_1, x_2) := x_1 + x_2 for all x_1, x_2 ∈ R, and let C := R². We see that K := {(x_1, x_2) ∈ R² : g_j(x_1, x_2) ≤ 0, j = 1, 2} = {(0, 0)}, which is convex. Moreover, x̄ := (0, 0) ∈ K is a global minimizer of the problem (P) at which the nondegeneracy condition is satisfied. However, x̄ is not a KKT point. The reason is that Slater's condition does not hold.

The following example shows that the nondegeneracy condition is essential for the validity of Corollary 3.1.
Example 3.4. Let g(x) := x³, f(x) := −x for all x ∈ R, and C := R. We see that K := {x ∈ R : g(x) ≤ 0} = R_−, which is convex. Moreover, x̄ := 0 ∈ K is a global minimizer of the problem (P). Also, it is easy to see that Slater's condition is satisfied. However, x̄ is not a KKT point. The reason is that the nondegeneracy condition does not hold at x̄.
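Both failures can be seen by direct computation, as in the Python sketch below (added for illustration):

```python
import numpy as np

# Example 3.4: f(x) = -x, g(x) = x**3, xbar = 0.  A multiplier would need
# -1 + lam * g'(0) = 0 with lam >= 0, but g'(0) = 0 (nondegeneracy fails),
# so the stationarity residual equals -1 for every lam:
for lam in [0.0, 1.0, 10.0, 1e6]:
    print(lam, -1.0 + lam * 3 * 0.0**2)      # always -1.0, never 0

# Example 3.3 fails differently: grad f(0,0) = (1, 1), while the active
# gradients (0, -1) and (0, 1) only span the x2-axis, so the first component
# of grad f + lam1*(0,-1) + lam2*(0,1) is always 1 (here Slater fails).
```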
In the sequel, for each x ∈ K, put

M(x) := ⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x) : λ_j g_j(x) = 0, j = 1, 2, ..., m },   (3.27)

where λ := (λ_1, λ_2, ..., λ_m). It is easy to check that M(x) is a convex cone.
Lemma 3.1. (Dual Characterization of Polar Cones) Let K be closed and convex, given by (2.2), and let x̄ ∈ K and M(x̄) be as in (3.27). Assume that Robinson's constraint qualification holds at x̄. Then, M(x̄) = (K − x̄)*.
Proof: Suppose that Robinson's constraint qualification holds at x̄ ∈ K. Let C := R^n. Then, Robinson's constraint qualification holds at x̄ ∈ C ∩ K. Hence, by Theorem 3.2 (i), {C, K} has the G-S strong CHIP at x̄. Since C = R^n, it follows from Definition 3.1 that M(x̄) = (K − x̄)*. □
It is worth noting that in Lemma 3.1 we have M(x̄) ⊆ (K − x̄)* without the validity of Robinson's constraint qualification at x̄.
Proposition 3.1. (Robinson's CQ and Necessary and Sufficient Optimality) Let K be closed and convex, given by (2.2), and let C be a closed convex subset of R^n. Let x̄ ∈ C ∩ K. Assume that Robinson's constraint qualification holds at x̄. Then, the following assertions hold:

(i) {C, K} has the strong CHIP at x̄.

(ii) {C, K} has the G-S strong CHIP at x̄.

(iii) For each continuous convex function f : R^n → R, x̄ is a global minimizer of the problem (P_f) if and only if the KKT condition holds at x̄, where (P_f) is defined by (3.19).

Proof: Suppose that Robinson's constraint qualification holds at x̄. By Theorem 2.1, the Slater condition is fulfilled. In particular, C ∩ int K ≠ ∅. Hence, by the Moreau-Rockafellar theorem, we have

(C ∩ K − x̄)* = (C − x̄)* + (K − x̄)*.
On the other hand, by Lemma 3.1,

(K − x̄)* = ⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x̄) : λ_j g_j(x̄) = 0, j = 1, 2, ..., m }.

So, we have

(C ∩ K − x̄)* = (C − x̄)* + (K − x̄)* = (C − x̄)* + ⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x̄) : λ_j g_j(x̄) = 0, j = 1, 2, ..., m }.
This means that {C, K} has both the strong CHIP and the G-S strong CHIP at x̄; that is, (i) and (ii) hold. Finally, since (ii) holds, (iii) follows from Theorem 3.2. □

Now, we summarize the constraint qualifications for the convex optimization problem (P).

CQs in Convex Optimization – Summary of New Connections

Without convexity of g_j, j = 1, 2, ..., m:

• Robinson's CQ ⇔ Slater and nondegeneracy conditions ⇒ G-S strong CHIP;
• Robinson's CQ ⇒ nondegeneracy condition, but nondegeneracy condition ⇏ Robinson's CQ;
• Slater and nondegeneracy conditions ⇒ Slater condition, but Slater condition ⇏ Slater and nondegeneracy conditions;
• Slater condition ⇒ strong CHIP, and G-S strong CHIP ⇒ strong CHIP, but strong CHIP ⇏ G-S strong CHIP;
• nondegeneracy condition ⇏ G-S strong CHIP and G-S strong CHIP ⇏ nondegeneracy condition;
• Slater condition ⇏ G-S strong CHIP and G-S strong CHIP ⇏ Slater condition.

With convexity of g_j, j = 1, 2, ..., m:

• Slater condition ⇒ nondegeneracy condition, but nondegeneracy condition ⇏ Slater condition;
• Slater condition ⇔ Robinson's CQ ⇒ G-S strong CHIP.

Note that the notation ⇏ means that the implication does not always hold.

4 Characterizing Best Approximation without Convexity of Constraints

In this section, we present characterizations of best approximation from the convex set C ∩ K whenever {C, K} has the G-S strong CHIP at some point x̄ ∈ C ∩ K.
Theorem 4.1. (Perturbation Property and Lagrange Multiplier Characterization of Best Approximation) Let K be closed and convex, given by (2.2), and let C be a closed convex set in R^n. Let x ∈ R^n and x̄ ∈ K̃ := C ∩ K. Assume that {C, K} has the G-S strong CHIP at x̄. Then the following assertions are equivalent:

(i) x̄ = P_K̃(x);

(ii) There exist λ_j ≥ 0 with λ_j g_j(x̄) = 0 for all j = 1, 2, ..., m such that

x̄ = P_C( x − Σ_{j=1}^m λ_j ∇g_j(x̄) );

(iii) There exist λ_j ≥ 0 with λ_j g_j(x̄) = 0 for all j = 1, 2, ..., m such that

0 ∈ ∂‖x − ·‖(x̄) + (C − x̄)* + Σ_{j=1}^m λ_j ∇g_j(x̄).
Proof: Suppose that {C, K} has the G-S strong CHIP at x̄. Then, by Definition 3.1,

(K̃ − x̄)* = (C − x̄)* + ⋃_{λ∈R^m_+} { Σ_{j=1}^m λ_j ∇g_j(x̄) : λ_j g_j(x̄) = 0, j = 1, 2, ..., m }.   (4.28)

[(i) ⇐⇒ (ii)]. Now, let x ∈ R^n, and x̄ = P_K̃(x). By Lemma 2.1, one has x − x̄ ∈ (K̃ − x̄)*. Therefore, in view of (4.28), there exist λ_j ≥ 0 with λ_j g_j(x̄) = 0 for all j = 1, 2, ..., m such that

[x − Σ_{j=1}^m λ_j ∇g_j(x̄)] − x̄ ∈ (C − x̄)*.

So, by using Lemma 2.1, it follows that x̄ = P_C(x − Σ_{j=1}^m λ_j ∇g_j(x̄)) for some λ_j ≥ 0 with λ_j g_j(x̄) = 0, j = 1, 2, ..., m. Therefore, the following implication holds:

x̄ = P_K̃(x) =⇒ x̄ = P_C(x − Σ_{j=1}^m λ_j ∇g_j(x̄)) for some λ_j ≥ 0 with λ_j g_j(x̄) = 0, j = 1, 2, ..., m.

Conversely, assume that there exist λ_j ≥ 0 with λ_j g_j(x̄) = 0 for all j = 1, 2, ..., m such that

x̄ = P_C(x − Σ_{j=1}^m λ_j ∇g_j(x̄)).

By using Lemma 2.1,

[x − Σ_{j=1}^m λ_j ∇g_j(x̄)] − x̄ ∈ (C − x̄)*, for some λ_j ≥ 0 with λ_j g_j(x̄) = 0, j = 1, 2, ..., m.

Then, we conclude that

x − x̄ ∈ (C − x̄)* + Σ_{j=1}^m λ_j ∇g_j(x̄), for some λ_j ≥ 0 with λ_j g_j(x̄) = 0, j = 1, 2, ..., m.

Therefore, by (4.28), x − x̄ ∈ (K̃ − x̄)*. Again, by using Lemma 2.1, x̄ = P_K̃(x).

[(i) ⇐⇒ (iii)]. We see that x̄ = P_K̃(x) if and only if x̄ is a global minimizer of the problem (P_f), where f : R^n → R is defined by f(v) := ‖v − x‖ for all v ∈ R^n. Since f is a continuous convex function and C ∩ K is a closed convex set, the necessary and sufficient condition for x̄ to be a global minimizer of the problem (P_f) is that

0 ∈ ∂‖x − ·‖(x̄) + (C ∩ K − x̄)* = ∂‖x − ·‖(x̄) + (K̃ − x̄)*.

Now, the equivalence (i) ⇐⇒ (iii) follows from (4.28). □
This theorem extends the corresponding known results for best approximation where the constraint functions g_j, j = 1, 2, ..., m, are assumed to be convex (cf. [10, 11]). For various other related characterizations of best approximations in infinite dimensional spaces, the reader is referred to [15, 16, 17].
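Assertion (ii) of Theorem 4.1 can also be observed numerically on a toy instance. The Python sketch below (added for illustration; the data are assumed, not from the paper) uses C = R²_+ and a nonconvex g whose level set K = {x_1 + x_2 ≤ 1} is convex, computes x̄ = P_{C∩K}(x) with a generic solver, and checks that x̄ = P_C(x − λ∇g(x̄)) for a suitable Lagrange multiplier λ ≥ 0:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed data: C = R^2_+ and g(x) = (x1 + x2 - 1) * (2 + sin(x1)); g is not
# convex, yet K = {g <= 0} = {x1 + x2 <= 1} is convex, g(0,0) = -2 < 0 gives
# Slater, and grad g never vanishes on the boundary of K, so by Theorem 3.2(i)
# {C, K} has the G-S strong CHIP.
g = lambda u: (u[0] + u[1] - 1.0) * (2.0 + np.sin(u[0]))

def grad_g(u):
    s = 2.0 + np.sin(u[0])
    return np.array([s + (u[0] + u[1] - 1.0) * np.cos(u[0]), s])

x = np.array([1.5, 1.5])

# (i): xbar = P_{C ∩ K}(x), computed with a generic constrained solver.
res = minimize(lambda u: np.sum((u - x) ** 2), x0=np.zeros(2),
               constraints=[{"type": "ineq", "fun": lambda u: -g(u)}],
               bounds=[(0, None), (0, None)])
xbar = res.x                                        # approx (0.5, 0.5)

# (ii): with lam = 1/(2 + sin(0.5)) >= 0 and g(xbar) = 0 one recovers
# xbar = P_C(x - lam * grad g(xbar)), P_C being the componentwise positive part.
lam = 1.0 / (2.0 + np.sin(0.5))
print(xbar, np.maximum(x - lam * grad_g(xbar), 0.0))   # both approx [0.5, 0.5]
```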
Acknowledgements. The research of the first three authors was partially supported by a grant from the Australian Research Council.
References
[1] S. Boyd and L. Vandenberghe, Convex optimization, Cambridge University Press, Cambridge, 2004.

[2] J.F. Bonnans and A. Shapiro, Perturbation analysis of optimization problems, Springer, New York, 2000.

[3] M.A. Goberna, F. Guerra-Vazquez and M.I. Todorov, Constraint qualifications in convex vector semi-infinite optimization, European J. Oper. Res., 249 (2016), 32-40.

[4] F. Deutsch, Best approximation in inner product spaces, Springer-Verlag, New York, 2000.

[5] F. Deutsch, The role of conical hull intersection property in convex optimization and approximation, in Approximation Theory IX, C.K. Chui and L.L. Schumaker, eds., Vanderbilt University Press, Nashville, TN, 1998.

[6] F. Deutsch, W. Li and J. Swetits, Fenchel duality and the strong conical hull intersection property, J. Optim. Theory Appl., 102 (1999), 681-695.

[7] J. Dutta and C.S. Lalitha, Optimality conditions in convex optimization revisited, Optim. Lett., 7 (2013), 221-229.

[8] J.B. Hiriart-Urruty and C. Lemarechal, Convex Analysis and Minimization Algorithms I, Grundlehren der mathematischen Wissenschaften, Springer, 1993.

[9] Q. Ho, Necessary and sufficient KKT optimality conditions in non-convex optimization, Optim. Lett., 11 (2017), 41-46.

[10] V. Jeyakumar, The strong conical hull intersection property for convex programming, Math. Program., 106 (2006), 81-92.

[11] V. Jeyakumar and H. Mohebi, A global approach to nonlinearly constrained best approximation, Numer. Funct. Anal. Optim., 26 (2005), 205-227.

[12] V. Jeyakumar and H. Wolkowicz, Generalizations of Slater's constraint qualification for infinite convex programs, Math. Program., 57 (1992), 85-101.

[13] J.-B. Lasserre, On representations of the feasible set in convex optimization, Optim. Lett., 4 (2010), 1-5.

[14] J.-B. Lasserre, On convex optimization without convex representation, Optim. Lett., 5 (2011), 549-556.

[15] C. Li and X. Jin, Nonlinearly constrained best approximation in Hilbert spaces: the strong CHIP and the basic constraint qualification, SIAM J. Optim., 13 (2002), 228-239.

[16] C. Li and K.F. Ng, On best approximation by nonconvex sets and perturbation of nonconvex inequality systems in Hilbert spaces, SIAM J. Optim., 13 (2002), 726-744.

[17] C. Li and K.F. Ng, Constraint qualification, the strong CHIP and best approximation with convex constraints in Banach spaces, SIAM J. Optim., 14 (2003), 584-607.

[18] O.L. Mangasarian, Nonlinear Programming, Classics in Applied Mathematics, SIAM Publications, 1994.

[19] J.-E. Martinez-Legaz, Optimality conditions for pseudoconvex minimization over convex sets defined by tangentially convex constraints, Optim. Lett., 9 (2015), 1017-1023.

[20] B.S. Mordukhovich, Variational analysis and generalized differentiation, I: Basic theory, Springer, Berlin, 2006.

[21] R.T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, 1970.

[22] R. Schneider, Convex bodies: the Brunn-Minkowski theory, Cambridge University Press, Cambridge, 2014.