Generalized sequential normal compactness in Banach spaces

Nonlinear Analysis 79 (2013) 221–232


Nonlinear Analysis journal homepage: www.elsevier.com/locate/na

Bingwu Wang a,b,∗, Mingming Zhu a, Yali Zhao a
a Bohai University, Jinzhou, PR China
b Eastern Michigan University, Ypsilanti, MI 48197, USA

Article info

Article history: Received 24 September 2012; Accepted 21 November 2012; Communicated by S. Carl.
Keywords: Sequential normal compactness; Normal cone; w∗-extensibility; Restrictive metric regularity; Generalized differentiation; Variational analysis.

Abstract

Normal compactness conditions are important properties of sets and set-valued mappings in variational analysis; they are generalized versions of the classical Lipschitzian property and are essential for the calculus of the generalized differentiation theory. In this paper we propose the notion of generalized sequential normal compactness (GSNC), and establish its basic properties and calculus in general Banach spaces.

© 2012 Elsevier Ltd. All rights reserved.

1. Introduction

The Lipschitzian property has long been recognized as one of the most important and valuable ingredients in the development of variational analysis; in particular, the celebrated Moreau–Rockafellar theorem in convex analysis,
\[
\partial(\varphi_1+\varphi_2)(\bar x)=\partial\varphi_1(\bar x)+\partial\varphi_2(\bar x),
\]
typically requires one of the convex functions to be continuous at the point in question, and it is well known that the continuity of a convex function is equivalent to its Lipschitz continuity. In [1], Rockafellar introduced the so-called epi-Lipschitzian property of sets that are locally isomorphic to the epigraphs of Lipschitzian functions. In the case of convex sets, this property actually means the nonemptiness of the interior of the set, which relates to the typical assumptions in the classical convex separation theorems. The epi-Lipschitzian property was further extended to the compactly epi-Lipschitzian (CEL) property by Borwein and Strójwas [2]. The CEL property holds for any set in finite-dimensional spaces, which is not the case for the epi-Lipschitzian property. In [3], Loewen proved that the CEL condition implies a normal compactness condition, which was used to show the closedness of the graph of the basic normal cone in the norm × w∗-topology in reflexive spaces; here w∗ denotes the weak-star topology on the dual space X∗ of X. Later the normal compactness condition was extended to the sequential normal compactness (SNC) by Mordukhovich and Shao, who employed it successfully in Asplund spaces to establish calculus rules in generalized differentiation theory and stability results for set-valued mappings [4,5]. A set Ω is SNC at x̄ ∈ Ω if
\[
\Big[\forall\, x^*_k \in \widehat N_{\varepsilon_k}(x_k;\Omega) \text{ with } x_k \xrightarrow{\Omega} \bar x,\ x^*_k \xrightarrow{w^*} 0,\ \varepsilon_k \downarrow 0\ (\text{as } k\to\infty)\Big] \Longrightarrow \big[\,x^*_k \to 0\ (\text{as } k\to\infty)\,\big], \tag{1.1}
\]

✩ This work was partly conducted during the first author’s visit to Bohai University in July 2012.
∗ Corresponding author. E-mail addresses: [email protected] (B. Wang), [email protected] (M. Zhu), [email protected] (Y. Zhao).
0362-546X/$ – see front matter © 2012 Elsevier Ltd. All rights reserved. doi:10.1016/j.na.2012.11.019

where x →Ω x̄ means x → x̄ in the norm topology with x ∈ Ω, εk ↓ 0 means the sequence {εk} is nonincreasing and converges to 0, x∗k →w∗ 0 means x∗k converges to 0 in the weak-star topology of X∗, x∗k → 0 means x∗k converges to 0 in the norm topology of X∗, and N̂ε(x; Ω) stands for the ε-normal cone (see Section 2).

The partial sequential normal compactness (PSNC, [6,7]) is closely related to the graphs of Lipschitz-like mappings (see Section 3). The form below in product spaces was formulated by Mordukhovich and Wang [8–10]: let J = {1, . . . , n}, J1 ⊂ J, J2 = J \ J1, let Xi (i ∈ J) be Banach spaces, and let Ω be a subset of X1 × ··· × Xn; then Ω is said to be PSNC at x̄ ∈ Ω with respect to {Xi | i ∈ J1} if for all x∗k = (x∗1k, . . . , x∗nk) ∈ N̂εk(xk; Ω) with xk →Ω x̄, εk ↓ 0,
\[
\Big[\,x^*_{ik} \xrightarrow{w^*} 0\ (i\in J_1),\ \ x^*_{ik} \to 0\ (i\in J_2)\,\Big] \Longrightarrow \Big[\,x^*_{ik} \to 0\ (i\in J_1)\,\Big]. \tag{1.2}
\]
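A remark for orientation (ours, not in the original text): in finite dimensions the implication (1.1) is automatic, since weak-star and norm convergence coincide on the dual of a finite-dimensional space; so every set is SNC at each of its points there, and these compactness conditions only become restrictive in infinite dimensions. A sketch:

```latex
% Sketch, under the assumption X = \mathbb{R}^m with standard basis e_1,\dots,e_m.
% Weak-star convergence tests against each e_i, and in finite dimensions
% coordinatewise convergence of the x_k^* already gives norm convergence:
x_k^* \xrightarrow{w^*} 0
  \;\iff\; \langle x_k^*, e_i\rangle \to 0 \quad (i = 1,\dots,m)
  \;\iff\; \|x_k^*\| \to 0,
% so the premise of (1.1) forces its conclusion, whatever \Omega and the
% \varepsilon_k-normals are.
```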

The strong partial sequential normal compactness (strong PSNC, or SPSNC) was proposed in [8–10] in the further development of generalized differentiation theory. It turns out to be a crucial addition for establishing more general calculus rules of generalized variational constructions and of the compactness properties themselves. We say that Ω is SPSNC with respect to {Xi | i ∈ J1} at x̄ if (1.2) is replaced by
\[
\Big[\,x^*_{ik} \xrightarrow{w^*} 0\ (i\in J)\,\Big] \Longrightarrow \Big[\,x^*_{ik} \to 0\ (i\in J_1)\,\Big]. \tag{1.3}
\]

SNC, PSNC, and SPSNC are now integral parts of the generalized differentiation theory in variational analysis; see [8–11] and the references therein for an extensive exposition of this matter; the reader is also referred to [12–14] for further developments in this direction. In the above definitions of the compactness properties, the product space X1 × ··· × Xn contains three kinds of components: the enforcing components {Xj | j ∈ J2} in the definition of PSNC, the free components {Xj | j ∈ J2} in the definition of SPSNC, and the target components {Xj | j ∈ J1} in both definitions. In the talk [15], Wang introduced the mixed sequential normal compactness, renamed the generalized sequential normal compactness (GSNC) here, which unifies all the above compactness notions. In this definition, the product space contains all three kinds of components. It turns out that GSNC is not just a simple generalization of the SNC/PSNC/SPSNC properties; in fact, it can unify almost all the existing calculus results on normal compactness and the related calculus results of generalized differentiation, and improve them to the next level. In this paper, we present basic properties and new characterizations of GSNC in Banach spaces, as well as calculus of GSNC including the sum rule, chain rule, etc. The calculus of GSNC and its applications to generalized differentiation theory can be greatly improved in Asplund spaces; we will discuss this issue in another paper [16], where almost all results in [8–10] are improved and developed in a more unified manner under the new framework of GSNC. The paper is organized as follows. In Section 2, we first recall some basic concepts and notation, then introduce the GSNC property with some preliminary discussions, including relations to existing notions.
Section 3 contains further examples and some characterizations of the GSNC property; in particular, we establish characterizations of the PSNC/SPSNC properties of smooth functions. Section 4 provides calculus rules for the GSNC property under various set/mapping operations in Banach spaces.

2. The generalized sequential normal compactness (GSNC)

We basically follow [11] for notation. All spaces in consideration are Banach spaces. By X∗ we mean the dual space of X, and by BX the closed unit ball of X (when there is no ambiguity, we omit the subscript X for simplicity). B(x̄; δ) stands for the closed ball in X with center x̄ and radius δ. For a product space, the norm is the usual ℓ1-norm, π denotes the usual projection of a product space onto its subspace (also the quotient mapping onto a quotient space), and I denotes the embedding of a subspace into the product space. For example, for the product space X × Y, ∥(x, y)∥ = ∥x∥ + ∥y∥, πX(x, y) = x, πY(x, y) = y, IX(x) = (x, 0), IY(y) = (0, y) for all x ∈ X, y ∈ Y. For any u∗ ∈ (X × Y)∗ and (x, y) ∈ X × Y, ⟨u∗, (x, y)⟩ = ⟨u∗ ∘ IX, x⟩ + ⟨u∗ ∘ IY, y⟩ with u∗ ∘ IX ∈ X∗, u∗ ∘ IY ∈ Y∗. Conversely, we can regard each functional in X∗ × Y∗ as a member of (X × Y)∗ in a similar manner. In this way we treat (X × Y)∗ as X∗ × Y∗. We use B(X, Y) to denote the space of all continuous linear operators from X to Y. For a given T ∈ B(X, Y), T∗: Y∗ → X∗ denotes its adjoint. Let T ∈ B(X × Y, Z); then we can treat T∗ ∈ B(Z∗, X∗ × Y∗) since we treat (X × Y)∗ as X∗ × Y∗. In this case T∗ = (I∗X ∘ T∗, I∗Y ∘ T∗) with ⟨T∗(z∗), (x, y)⟩ = ⟨I∗X(T∗(z∗)), x⟩ + ⟨I∗Y(T∗(z∗)), y⟩ for all z∗ ∈ Z∗ and x ∈ X, y ∈ Y. Given f: X → Y, f is said to be strictly differentiable at x̄ with derivative ∇f(x̄) if
\[
\lim_{x,u\to\bar x,\ x\neq u} \frac{\|f(u)-f(x)-\nabla f(\bar x)(u-x)\|}{\|u-x\|} = 0. \tag{2.4}
\]

When x is fixed as x̄ in (2.4), we say that f is Fréchet differentiable at x̄. When f is differentiable, the rate of strict differentiability rf(x̄; δ) [12] at x̄ with radius δ is defined as
\[
r_f(\bar x;\delta) := \sup_{x,u\in B(\bar x;\delta),\ x\neq u} \frac{\|f(u)-f(x)-\nabla f(\bar x)(u-x)\|}{\|u-x\|}.
\]
It is clear that rf(x̄; δ) → 0 as δ ↓ 0 when f is strictly differentiable at x̄. For a given nonempty set Ω ⊂ X with x̄ ∈ Ω, the set N̂ε(x̄; Ω) of ε-normals of Ω at x̄ is given by
\[
\widehat N_\varepsilon(\bar x;\Omega) := \Big\{ x^* \in X^* \ \Big|\ \limsup_{x \xrightarrow{\Omega} \bar x} \frac{\langle x^*, x-\bar x\rangle}{\|x-\bar x\|} \le \varepsilon \Big\}.
\]

When ε = 0, the set is a cone, called the Fréchet normal cone of Ω at x̄ and denoted by N̂(x̄; Ω). If an equivalent norm is used in the above definition, the Fréchet normal cone does not change. It is clear that N̂(x̄; Ω) ⊂ N̂ε(x̄; Ω) for any ε ≥ 0. When the space is an Asplund space (i.e., every continuous convex function on the space is Fréchet differentiable on a dense subset of its domain), for a closed set Ω ⊂ X one has [17]
\[
\widehat N_\varepsilon(\bar x;\Omega) \subset \bigcup \big\{ \widehat N(x;\Omega) \mid x \in B(\bar x;\delta)\cap\Omega \big\} + (\varepsilon+\gamma)B_{X^*} \tag{2.5}
\]
for any ε ≥ 0 and δ, γ > 0. Let Ωi ⊂ Xi with x̄i ∈ Ωi (i = 1, 2); then it is easy to verify that the following relations hold for any ε ≥ 0:
\[
\widehat N_\varepsilon\big((\bar x_1,\bar x_2);\Omega_1\times\Omega_2\big) \subset \widehat N_\varepsilon(\bar x_1;\Omega_1)\times\widehat N_\varepsilon(\bar x_2;\Omega_2) \subset \widehat N_{2\varepsilon}\big((\bar x_1,\bar x_2);\Omega_1\times\Omega_2\big). \tag{2.6}
\]

For set-valued mappings/multifunctions Fi: Xi ⇒ Yi (i = 1, 2), F1 ⊕ F2: X1 × X2 ⇒ Y1 × Y2 is defined by (F1 ⊕ F2)(x1, x2) = F1(x1) × F2(x2) for all x1 ∈ X1, x2 ∈ X2. When X1 = X2 = X, F1 × F2: X ⇒ Y1 × Y2 is defined by (F1 × F2)(x) = F1(x) × F2(x) for all x ∈ X. Now we introduce the generalized sequential normal compactness (GSNC), which is the main object of this paper.

Definition 2.1. Let J = {1, . . . , n}, J1, J2 ⊂ J, and Ω ⊂ X := ∏i∈J Xi with x̄ ∈ Ω. Then we say Ω is generalized sequentially normally compact (GSNC) at x̄ ∈ Ω with respect to {Xi | i ∈ J1} through {Xi | i ∈ J2} if (1.2) holds for all x∗k = (x∗1k, . . . , x∗nk) ∈ N̂εk(xk; Ω) with xk →Ω x̄, εk ↓ 0.

It is easy to observe from (2.5) that we can replace the ε-normal cones in Definition 2.1 by the corresponding Fréchet normal cones when X is an Asplund space; this is particularly the case when all the Xi's are Asplund spaces. Note that, in contrast to the definitions of PSNC/SPSNC in Section 1, here we do not require J1 ∪ J2 = J. Therefore, the product space X1 × ··· × Xn may now contain all three kinds of components discussed in Section 1: the enforcing components {Xj | j ∈ J2}, the free components {Xj | j ∈ J \ (J1 ∪ J2)}, and the target components {Xj | j ∈ J1}. When there is no free component (i.e., J1 ∪ J2 = J), GSNC reduces to PSNC; when there is no enforcing component (i.e., J2 = ∅), GSNC reduces to SPSNC; when neither enforcing nor free components are present (i.e., J1 = J, J2 = ∅), GSNC reduces to SNC. In this way, the notion of GSNC unifies the SNC, the PSNC, and the SPSNC. For simplicity, we also say Ω is GSNC with respect to J1 through J2 when it is GSNC with respect to {Xi | i ∈ J1} through {Xi | i ∈ J2}. Next we give some examples to illustrate the relations between GSNC and the existing notions. Let Ω := {(u, u, 0) ∈ X × X × X | u ∈ X} with X an arbitrary infinite-dimensional Banach space, and x̄ = (0, 0, 0). Then

\[
\widehat N_\varepsilon\big((u,u,0);\Omega\big) = \big\{(x^*_1, x^*_2) \in X^*\times X^* \ \big|\ \|x^*_1 + x^*_2\| \le 2\varepsilon\big\} \times X^* \tag{2.7}
\]

for all u ∈ X. If we consider the second component of X × X × X as the enforcing component, then by (2.7), Ω is GSNC at x̄ with respect to the first component; however, we can show that Ω is not PSNC at x̄ with the same enforcing component. In fact, due to the infinite dimensionality of X and the classical Josefson–Nissenzweig theorem ([18,19], or [20, Chapter 12]), there is a w∗-null sequence {x∗k} in X∗ with ∥x∗k∥ = 1 for each k ∈ N. Then for any given sequence εk ↓ 0, one can check by (2.7) that (εk x∗k, −εk x∗k, x∗k) ∈ N̂εk(x̄; Ω). This shows that Ω is not PSNC with respect to the first and the third components. Therefore, in general GSNC is weaker than PSNC with the same enforcing component. On the other hand, it can also be checked by (2.7) that (x∗k, −x∗k, 0) ∈ N̂εk(x̄; Ω), and hence Ω is not SPSNC at x̄ with respect to the first component; this shows that GSNC is generally weaker than SPSNC with the same target components. Next, consider the set Λ := {(u, u, u) | u ∈ X} with X an infinite-dimensional Banach space and x̄ = (0, 0, 0). Then

\[
\widehat N_\varepsilon\big((u,u,u);\Lambda\big) = \big\{(x^*_1, x^*_2, x^*_3) \in X^*\times X^*\times X^* \ \big|\ \|x^*_1 + x^*_2 + x^*_3\| \le 3\varepsilon\big\} \tag{2.8}
\]

for all u ∈ X. It is clear that Λ is PSNC at x̄ with respect to the first component. On the other hand, for a given sequence εk ↓ 0, as in the above example we can choose a w∗-null sequence {x∗k} consisting of unit vectors. Then by (2.8) it follows that (x∗k, εk x∗k, −x∗k) ∈ N̂εk(x̄; Λ). Hence Λ is not GSNC with respect to the first component through the second component. This shows that GSNC is generally stronger than PSNC when the target components are the same. By the discussions above, we conclude that, in general, (i) GSNC is weaker than SPSNC, but stronger than PSNC, when the target components are the same, and (ii) GSNC is weaker than PSNC when the enforcing components are the same.
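As a sanity check of formula (2.7) (a routine computation we spell out here; it is not part of the original text): points of Ω near (u, u, 0) are exactly those of the form (v, v, 0), so for x∗ = (x∗1, x∗2, x∗3),

```latex
% With the \ell^1-norm on X \times X \times X,
\frac{\langle x^*, (v,v,0) - (u,u,0)\rangle}{\|(v,v,0) - (u,u,0)\|}
  = \frac{\langle x_1^* + x_2^*,\ v - u\rangle}{2\,\|v - u\|}.
% Taking \limsup as v \to u and the supremum over the directions of v - u
% gives the condition \|x_1^* + x_2^*\| \le 2\varepsilon, while x_3^* is
% unconstrained because the third coordinate of every point of \Omega is 0.
```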


3. Further examples and characterizations

This section contains further examples and some characterizations of the GSNC property; in particular, we establish characterizations of the PSNC/SPSNC properties of smooth mappings. We first consider the properties involving CEL. For a given set Ω ⊂ X × Y, denote Ωy = {x ∈ X | (x, y) ∈ Ω}. The following concept first appeared in [13]:

Definition 3.1. A set Ω ⊂ X × Y is said to be uniformly compactly epi-Lipschitzian (uniformly CEL) with respect to y around (x̄, ȳ) ∈ Ω if there are neighborhoods U of x̄, V of ȳ, and O of the origin of X, a compact set C ⊂ X, and a number ty > 0 for each y ∈ V such that
\[
\Omega_y \cap U + tO \subset \Omega_y + tC \quad \text{whenever } t \in (0, t_y),\ y \in V \text{ with } \Omega_y \neq \emptyset. \tag{3.9}
\]

If the compact set C in the above can be chosen to be a singleton, then Ω is said to be uniformly epi-Lipschitzian (uniformly EL) with respect to y around (x̄, ȳ). With almost the same argument as in the proof of Proposition 5.4(ii) in [13], we have the following result, which gives an important case of uniform CEL/EL sets when convexity is involved. It easily implies the uniform CEL/EL property of a convex set with nonempty interior.

Proposition 3.2. Let Ω ⊂ X × Y with (x̄, ȳ) ∈ Ω. Assume that there is a neighborhood V of ȳ such that Ωy is convex for all y ∈ V with Ωy ≠ ∅, and that the interior of ∩{Ωy | y ∈ V, Ωy ≠ ∅} is nonempty. Then Ω is uniformly epi-Lipschitzian (and hence uniformly CEL) with respect to y around (x̄, ȳ).

For general uniformly CEL sets, the following result holds:

Proposition 3.3. If Ω ⊂ X × Y is uniformly CEL with respect to y around (x̄, ȳ), then it is SPSNC with respect to X at (x̄, ȳ).

Proof. Let U, V, O, C be given as in Definition 3.1. Since C is compact, it is bounded, so we can choose M > 0 with ∥v∥ ≤ M for all v ∈ C. Pick εk ↓ 0, (xk, yk) →Ω (x̄, ȳ), and (x∗k, y∗k) →w∗ (0, 0) with (x∗k, y∗k) ∈ N̂εk((xk, yk); Ω). We need to show that x∗k → 0 (as k → ∞). By definitions, there are δk, γk ↓ 0 such that
\[
\langle x^*_k, x - x_k\rangle + \langle y^*_k, y - y_k\rangle \le (\varepsilon_k+\gamma_k)\big(\|x-x_k\| + \|y-y_k\|\big) \tag{3.10}
\]

holds for all (x, y) ∈ B((xk, yk); δk) ∩ Ω. Without loss of generality, we assume that (xk, yk) ∈ U × V for all k. Choose η > 0 such that ηBX ⊂ O. Since xk ∈ Ωyk ∩ U, for each h ∈ BX there are 0 < t < δk/(η + M) and vh ∈ C such that xk + tηh − tvh ∈ Ωyk due to (3.9), which means (xk + tηh − tvh, yk) ∈ Ω. Moreover,
\[
\|t\eta h - t v_h\| = t\|\eta h - v_h\| < \frac{\delta_k}{\eta+M}(\eta+M) = \delta_k;
\]

hence (xk + tηh − tvh, yk) ∈ B((xk, yk); δk) ∩ Ω. Now substituting (x, y) in (3.10) by (xk + tηh − tvh, yk) and canceling t, we derive
\[
\langle x^*_k, \eta h - v_h\rangle \le (\varepsilon_k+\gamma_k)\|\eta h - v_h\| \quad \forall h \in B_X.
\]
Consequently,
\[
\langle x^*_k, h\rangle \le (\varepsilon_k+\gamma_k)\Big\|h - \frac{v_h}{\eta}\Big\| + \frac{1}{\eta}\langle x^*_k, v_h\rangle \le (\varepsilon_k+\gamma_k)\Big(1+\frac{M}{\eta}\Big) + \frac{1}{\eta}\sup_{v\in C}\langle x^*_k, v\rangle \quad \forall h \in B_X.
\]
This implies
\[
\|x^*_k\| \le (\varepsilon_k+\gamma_k)\Big(1+\frac{M}{\eta}\Big) + \frac{1}{\eta}\sup_{v\in C}\langle x^*_k, v\rangle.
\]

It remains to show that supv∈C⟨x∗k, v⟩ → 0 as k → ∞. Suppose to the contrary that there are ε̃ > 0, a sequence kj → ∞, and vj ∈ C such that ⟨x∗kj, vj⟩ > ε̃ for all j ∈ N. On the other hand, since C is compact, we may assume without loss of generality that vj → v̄. Then
\[
|\langle x^*_{k_j}, v_j\rangle| \le |\langle x^*_{k_j}, \bar v\rangle| + |\langle x^*_{k_j}, v_j - \bar v\rangle| \le |\langle x^*_{k_j}, \bar v\rangle| + \|x^*_{k_j}\|\cdot\|v_j - \bar v\| \to 0
\]

as j → ∞, due to the fact that x∗kj →w∗ 0 (hence {x∗kj} is also bounded, by the uniform boundedness principle). This is a contradiction, and the proof is complete. □

To proceed, we involve another notion proposed in [13], which is a generalization of the well-known Lipschitz-like property originally proposed by Aubin [21]. The Lipschitz-like property is crucial in many aspects of theory and applications in variational analysis; see [11] for more discussions and references.

Definition 3.4. F: X × Y ⇒ Z is uniformly Lipschitz-like with respect to y around (x̄, ȳ, z̄) if there exist ℓ > 0 and neighborhoods U of x̄, V of ȳ, W of z̄ such that
\[
F(x_1, y) \cap W \subset F(x_2, y) + \ell\|x_1 - x_2\|B_Z \tag{3.11}
\]
for all x1, x2 ∈ U, y ∈ V.

When we omit Y and y in the definition, it becomes the usual Lipschitz-like property. The next result relates the uniform Lipschitz-like property to the GSNC property.

Theorem 3.5. If F is uniformly Lipschitz-like around (x̄, ȳ, z̄), then gph F is GSNC with respect to X through Z.

Proof. By the definition of the uniform Lipschitz-like property, there is δ > 0 such that U = B(x̄; δ), V = B(ȳ; δ), W = B(z̄; δ) are the neighborhoods in Definition 3.4. Let εk ↓ 0, (xk, yk, zk) ∈ gph F, (x∗k, y∗k, z∗k) ∈ N̂εk((xk, yk, zk); gph F), with (xk, yk, zk) → (x̄, ȳ, z̄), (x∗k, y∗k) →w∗ 0, ∥z∗k∥ → 0 as k → ∞. Without loss of generality, assume ∥xk − x̄∥, ∥yk − ȳ∥, ∥zk − z̄∥ < δ/2 for all k ∈ N. Now by the definition of the ε-normal cones, there are δ′k < δ/2, γk ↓ 0 such that
\[
\langle x^*_k, x - x_k\rangle + \langle y^*_k, y - y_k\rangle + \langle z^*_k, z - z_k\rangle \le (\varepsilon_k+\gamma_k)\big(\|x-x_k\| + \|y-y_k\| + \|z-z_k\|\big) \tag{3.12}
\]

for all x ∈ B(xk; δ′k), y ∈ B(yk; δ′k), z ∈ B(zk; δ′k) with z ∈ F(x, y). Let ηk := min{ℓ⁻¹δ′k, δ′k}. Then for each u ∈ B(xk; ηk) ⊂ U, by (3.11) it follows that F(xk, yk) ∩ W ⊂ F(u, yk) + ℓ∥u − xk∥BZ. Therefore, for zk ∈ F(xk, yk) ∩ W there exists w ∈ F(u, yk) such that
\[
\|w - z_k\| \le \ell\|u - x_k\| \le \ell\eta_k \le \ell(\ell^{-1}\delta'_k) = \delta'_k. \tag{3.13}
\]
Now letting x := u ∈ B(xk; ηk), y := yk, and z := w ∈ F(u, yk) satisfying (3.13) in (3.12), we derive
\[
\langle x^*_k, u - x_k\rangle \le (\varepsilon_k+\gamma_k)\big(\|u-x_k\| + \|w-z_k\|\big) + \|z^*_k\|\cdot\|w-z_k\| \le \big[(\varepsilon_k+\gamma_k)(1+\ell) + \ell\|z^*_k\|\big]\|u-x_k\| \quad \forall u \in B(x_k;\eta_k);
\]
hence
\[
\|x^*_k\| \le (\varepsilon_k+\gamma_k)(1+\ell) + \ell\|z^*_k\| \to 0
\]
as k → ∞ since ∥z∗k∥ → 0. □

Next we study the GSNC property of smooth mappings. Recall that a closed subspace L ⊂ X is w∗-extensible [12] in X if any w∗-null sequence in L∗ contains a subsequence that can be extended to a w∗-null sequence in X∗. It can be shown that any complemented subspace is w∗-extensible, and that if the closed unit ball of X∗ is w∗-sequentially compact, then every closed subspace is w∗-extensible in X. The class of spaces with w∗-sequentially compact dual balls is quite broad and contains all separable spaces, WCG spaces, Asplund spaces, weak-Asplund spaces, spaces with smooth renorms, etc. For more discussions and new developments of w∗-extensibility, we refer the reader to the recent paper [22]. The lemma below [22, part of Theorem 5.2] gives some technical tools for our study; one can find the proof and further results in [22].

Lemma 3.6. Let T ∈ B(X, Y) with T(X) closed in Y. Then the following assertions are equivalent:
(a) dim T(X) < ∞.
(b) T(X) is w∗-extensible in Y, and for any sequence {y∗k} ⊂ Y∗ with y∗k →w∗ 0, one has T∗y∗k → 0 (as k → ∞).

To establish the characterization of the SPSNC property, we first show the following lemma, which is also interesting in its own right.


Lemma 3.7. Let f: X → Y. Then the following assertions hold:
(i) If f is Fréchet differentiable at x̄ and gph f is SPSNC at (x̄, f(x̄)) with respect to X, then for any sequence {y∗k} ⊂ Y∗,
\[
y^*_k \xrightarrow{w^*} 0 \Longrightarrow \nabla f(\bar x)^*(y^*_k) \to 0 \quad (k\to\infty). \tag{3.14}
\]
(ii) If (3.14) holds for all sequences {y∗k} ⊂ Y∗ and f is strictly differentiable at x̄, then gph f is SPSNC at (x̄, f(x̄)) with respect to X.


Proof. For (i), if f is Fréchet differentiable at x̄, then we can directly check by the definition of the Fréchet normal cone that
\[
\big(-\nabla f(\bar x)^*(y^*),\, y^*\big) \in \widehat N\big((\bar x, f(\bar x)); \operatorname{gph} f\big) \quad \forall y^* \in Y^*. \tag{3.15}
\]
Then (3.14) follows from the SPSNC assumption on f.
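The inclusion (3.15) ("we can directly check") can be verified as follows; this computation is standard and is recorded here only for convenience:

```latex
% For (x, f(x)) \in \operatorname{gph} f near (\bar x, f(\bar x)),
\big\langle (-\nabla f(\bar x)^*(y^*),\, y^*),\ (x, f(x)) - (\bar x, f(\bar x)) \big\rangle
  = \big\langle y^*,\ f(x) - f(\bar x) - \nabla f(\bar x)(x - \bar x) \big\rangle
  \le \|y^*\| \cdot o(\|x - \bar x\|)
% by Fr\'echet differentiability, while
% \|(x, f(x)) - (\bar x, f(\bar x))\| \ge \|x - \bar x\|,
% so the \limsup in the definition of \widehat N is \le 0.
```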

For (ii), let εk ↓ 0, xk → x̄, yk := f(xk), and (x∗k, y∗k) ∈ N̂εk((xk, yk); gph f) with (x∗k, y∗k) →w∗ 0. We need to show x∗k → 0. Choose δk, γk ↓ 0 with ∥xk − x̄∥ < δk/2. By the definition of ε-normal cones, there is δ′k < δk/2 such that
\[
\frac{\langle x^*_k, x - x_k\rangle + \langle y^*_k, f(x) - y_k\rangle}{\|x-x_k\| + \|f(x)-y_k\|} \le \varepsilon_k + \gamma_k \quad \forall x \in B(x_k;\delta'_k). \tag{3.16}
\]

Since f is strictly differentiable at x̄, it is Lipschitzian around this point; let ℓ denote the Lipschitz modulus. By the definition of the rate of strict differentiability,
\[
\nabla f(\bar x)(x - x_k) \in f(x) - y_k + r_f(\bar x;\delta_k)\|x - x_k\|B_Y \quad \forall x \in B(x_k;\delta'_k). \tag{3.17}
\]
By (3.16) and (3.17), for all x ∈ B(xk; δ′k),
\[
\begin{aligned}
\langle x^*_k, x - x_k\rangle + \langle \nabla f(\bar x)^*(y^*_k), x - x_k\rangle &= \langle x^*_k, x - x_k\rangle + \langle y^*_k, \nabla f(\bar x)(x - x_k)\rangle \\
&\le \langle x^*_k, x - x_k\rangle + \langle y^*_k, f(x) - y_k\rangle + r_f(\bar x;\delta_k)\|y^*_k\|\cdot\|x - x_k\| \\
&\le (\varepsilon_k+\gamma_k)\big(\|x - x_k\| + \|f(x) - y_k\|\big) + r_f(\bar x;\delta_k)\|y^*_k\|\cdot\|x - x_k\| \\
&\le (\varepsilon_k+\gamma_k)(1+\ell)\|x - x_k\| + r_f(\bar x;\delta_k)\|y^*_k\|\cdot\|x - x_k\|.
\end{aligned}
\]
Therefore
\[
\|x^*_k + \nabla f(\bar x)^*(y^*_k)\| \le \tilde\varepsilon_k := (\varepsilon_k+\gamma_k)(1+\ell) + r_f(\bar x;\delta_k)\|y^*_k\|. \tag{3.18}
\]
Since {y∗k} w∗-converges, it is bounded by the uniform boundedness principle; hence ε̃k → 0 as k → ∞. By (3.14), we also have ∇f(x̄)∗(y∗k) → 0 as k → ∞. So x∗k → 0 by (3.18). □

Now we present the characterization of the SPSNC property of smooth functions.

Theorem 3.8. If f: X → Y is strictly differentiable at x̄ and
\[
\dim \nabla f(\bar x)(X) < \infty, \tag{3.19}
\]
then gph f is SPSNC with respect to X. Conversely, if gph f is SPSNC with respect to X, f is Fréchet differentiable at x̄, and ∇f(x̄)(X) is closed and w∗-extensible in Y, then (3.19) holds.

Proof. It follows from Lemmas 3.6 and 3.7.
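For the reader's convenience, here is how the two lemmas combine in the proof of Theorem 3.8 (a sketch of the chain of implications, not spelled out in the original):

```latex
% (3.19) => SPSNC:
%   \dim \nabla f(\bar x)(X) < \infty
%     => (Lemma 3.6, (a) => (b))  every w^*-null \{y_k^*\} has
%        \nabla f(\bar x)^*(y_k^*) \to 0, i.e. (3.14) holds
%     => (Lemma 3.7(ii), strict differentiability)
%        \operatorname{gph} f is SPSNC w.r.t. X.
% SPSNC => (3.19):
%   Lemma 3.7(i) gives (3.14); since \nabla f(\bar x)(X) is closed and
%   w^*-extensible in Y, Lemma 3.6, (b) => (a), yields
%   \dim \nabla f(\bar x)(X) < \infty.
```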

□

Recall that a linear operator T ∈ B(X, Y) is compact if the closure of T(BX) is compact. It is well known that, if T(X) is closed in Y, then T is compact if and only if dim T(X) < ∞ (see, e.g., [23]); therefore we have the following corollary.

Corollary 3.9. Let f be strictly differentiable at x̄ with ∇f(x̄)(X) closed and w∗-extensible in Y. Then gph f is SPSNC with respect to X at (x̄, f(x̄)) if and only if the operator ∇f(x̄) is compact.

More results along this direction can be found in the next section (Theorem 4.6). Next, we study the PSNC property of the graph of a smooth mapping with respect to the image/range space. It is well known that a set-valued mapping F is Lipschitz-like if and only if its inverse F⁻¹ is metrically regular; therefore the graph of a metrically regular mapping is PSNC with respect to the image space. In the case of strictly differentiable mappings, metric regularity is equivalent to the surjectivity of the derivative of the mapping due to the famous Lyusternik–Graves theorem. Hence the graph of a strictly differentiable mapping with surjective derivative is PSNC with respect to the image space. In the following we establish a complete characterization of this PSNC property for smooth mappings; in particular, it shows that the surjectivity of the derivative can be relaxed in this case. First we establish the following lemma.


Lemma 3.10. Let T ∈ B(X, Y) with T(X) closed in Y. Then the following assertions are equivalent:
(a) For any sequence {y∗k} ⊂ Y∗,
\[
\Big[\,y^*_k \xrightarrow{w^*} 0,\ T^*(y^*_k) \to 0\,\Big] \Longrightarrow \big[\,y^*_k \to 0\,\big] \quad (k\to\infty). \tag{3.20}
\]
(b) T(X) is finite-codimensional in Y.

Proof. To prove (a) ⇒ (b), assume to the contrary that dim Y/T(X) = ∞. By the Josefson–Nissenzweig theorem, there is a sequence {z∗k} ⊂ (Y/T(X))∗ such that z∗k →w∗ 0 as k → ∞ and ∥z∗k∥ = 1 for all k ∈ N. Let π be the quotient mapping from Y onto the quotient space Y/T(X), and y∗k := z∗k ∘ π. Then y∗k ∈ Y∗, y∗k|T(X) = 0, y∗k →w∗ 0, and ∥y∗k∥ = 1. It can be verified that T∗(y∗k) = 0: indeed, ⟨T∗(y∗k), x⟩ = ⟨z∗k, π(Tx)⟩ = 0 for all x ∈ X, since Tx ∈ T(X) = ker π. So (a) fails.

We proceed to prove (b) ⇒ (a). If T(X) is finite-codimensional, it is complemented in Y by a finite-dimensional space Z. Arbitrarily pick {y∗k} ⊂ Y∗ with y∗k →w∗ 0 and T∗(y∗k) → 0 (k → ∞). Then y∗k = (y∗k|T(X), y∗k|Z) ∈ T(X)∗ × Z∗. It is clear that y∗k|Z → 0 (k → ∞) because y∗k|Z →w∗ 0 and dim Z < ∞. It remains to show that y∗k|T(X) → 0. Regarding T as a surjective operator from X onto T(X) (which is closed by assumption, hence a Banach space itself), there is µ > 0 such that ∥T∗y∗∥ ≥ µ∥y∗∥ for all y∗ ∈ T(X)∗ (see, e.g., [23, Theorem 4.13]). Now we have T∗(y∗k|T(X)) = T∗(y∗k) → 0; it follows that y∗k|T(X) → 0 (as k → ∞). □

Using the above lemma, we now have:

Theorem 3.11. If f: X → Y is strictly differentiable at x̄ and
\[
\operatorname{codim} \nabla f(\bar x)(X) < \infty, \tag{3.21}
\]
then gph f is PSNC with respect to Y. Conversely, if gph f is PSNC with respect to Y and f is Fréchet differentiable at x̄, then (3.21) holds.

Proof. For the first assertion, following the proof of Lemma 3.7(ii), we pick xk, x∗k, y∗k and assume x∗k → 0, y∗k →w∗ 0 (k → ∞). Then by (3.18), ∇f(x̄)∗(y∗k) → 0. Hence y∗k → 0 due to Lemma 3.10. For the second assertion, we notice that (3.15) holds in this case; so does (3.20), due to the PSNC assumption; and then the conclusion follows from Lemma 3.10. □

4. GSNC calculus in Banach spaces

In this section we explore calculus rules for the generalized sequential normal compactness under set/mapping operations. To start with, we consider set/mapping products.

Theorem 4.1. Let Ωi ⊂ Xi1 × Xi2 × Xi3 and assume that Ωi is GSNC at x̄i ∈ Ωi with respect to Xi1 through Xi2 (i = 1, 2). Then Ω1 × Ω2 is GSNC at (x̄1, x̄2) with respect to {X11, X21} through {X12, X22}.

Proof. The result follows from (2.6). □

In the case of set-valued mappings, Theorem 4.1 takes the following form:

Theorem 4.2. Let Fi: Xi1 × Xi2 × Xi3 ⇒ Yi1 × Yi2 × Yi3 and assume that gph Fi is GSNC at (x̄i, ȳi) ∈ gph Fi with respect to {Xi1, Yi1} through {Xi2, Yi2} (i = 1, 2). Then gph(F1 ⊕ F2) is GSNC at (x̄1, x̄2, ȳ1, ȳ2) with respect to {X11, X21, Y11, Y21} through {X12, X22, Y12, Y22}.

Proof. Noticing that gph(F1 ⊕ F2) can be regarded as gph F1 × gph F2 up to a permutation of the components, the result follows from Theorem 4.1. □

The next result involves the GSNC property under projection mappings. Recall that a set-valued mapping F: X ⇒ Y is called inner semicontinuous at (x̄, ȳ) ∈ gph F if for any sequence xk → x̄ with F(xk) ≠ ∅, there is a sequence yk ∈ F(xk) that contains a subsequence converging to ȳ.

Theorem 4.3. Let m, n ∈ N, J1 = {1, . . . , n}, J2 = {1, . . . , m}, Jij ⊂ Ji (i, j = 1, 2), J̃2 ⊂ J2, X = X1 × ··· × Xn, Y = Y1 × ··· × Ym, F: X ⇒ Y with ȳ ∈ F(x̄), and let π be the projection of Y onto Z := ∏j∈J̃2 Yj. Assume that gph F is GSNC with respect to {Xj | j ∈ J11} ∪ {Yj | j ∈ J21} through {Xj | j ∈ J12} ∪ {Yj | j ∈ J22} at (x̄, ȳ), and that the mapping G: X × Z ⇒ Y defined by G(x, z) := F(x) ∩ π⁻¹(z) (∀x ∈ X, z ∈ Z) is inner semicontinuous at (x̄, π(ȳ), ȳ). Then gph(π ∘ F) is GSNC with respect to {Xj | j ∈ J11} ∪ {Yj | j ∈ J21 ∩ J̃2} through {Xj | j ∈ J12} ∪ {Yj | j ∈ J22 ∩ J̃2} at (x̄, π(ȳ)).

Proof. Let W be the product space of the components of Y with indices not in J̃2. Without loss of generality, we write Y = Z × W. Pick any εk ↓ 0, (xk, zk) → (x̄, z̄), (x∗k, z∗k) ∈ N̂εk((xk, zk); gph(π ∘ F)) with z̄ = π(ȳ) and zk ∈ (π ∘ F)(xk). By the inner semicontinuity assumption, one can find a sequence yk ∈ F(xk) ∩ π⁻¹(zk) that contains a subsequence converging to ȳ. By the same reasoning as at the end of the proof of Theorem 4.11 later in this section, it is sufficient to consider this subsequence for the conclusion. On the other hand, it can be verified by the definition of the ε-normal cones that (x∗k, z∗k, 0) ∈ N̂εk((xk, yk); gph F); then the conclusion follows. □


In the case of Cartesian products of mappings, we have:

Theorem 4.4. Let X = X1 × X2 × X3, Y = Y1 × Y2 × Y3, F1: X ⇒ Y, F2: X ⇒ Z. Suppose gph F1 is GSNC at (x̄, ȳ) ∈ gph F1 with respect to {X1, Y1} through {X2, Y2}, and F2 is Lipschitz-like around (x̄, z̄) ∈ gph F2. Then gph F is GSNC at (x̄, ȳ, z̄) with respect to {X1, Y1} through {X2, Y2, Z}, where F = F1 × F2.

Proof. First choose δ > 0 and ℓ > 0 such that for all u1, u2 ∈ B(x̄; δ),
\[
F_2(u_1) \cap B(\bar z;\delta) \subset F_2(u_2) + \ell\|u_2 - u_1\|B_Z. \tag{4.22}
\]
Pick εk ↓ 0, (xk, yk, zk) ∈ gph F with (xk, yk, zk) → (x̄, ȳ, z̄), and (x∗k, y∗k, z∗k) ∈ N̂εk((xk, yk, zk); gph F) with (x∗k, y∗k, z∗k) →w∗ (0, 0, 0) and (x∗2k, y∗2k, z∗k) → (0, 0, 0) (k → ∞), where x∗k = (x∗1k, x∗2k, x∗3k), y∗k = (y∗1k, y∗2k, y∗3k). Without loss of generality, assume ∥xk − x̄∥, ∥yk − ȳ∥, ∥zk − z̄∥ < δ/2. Then we can find δ′k < δ/2, γk ↓ 0 such that (3.12) holds for all x ∈ B(xk; δ′k), y ∈ B(yk; δ′k), z ∈ B(zk; δ′k) with (y, z) ∈ F(x). Let ηk := min{ℓ⁻¹δ′k, δ′k}. Then for u ∈ B(xk; ηk), applying (4.22) with u1 replaced by xk and u2 by u, and noticing that zk ∈ F2(xk) ∩ B(z̄; δ), there exists z ∈ F2(u) such that
\[
\|z - z_k\| \le \ell\|u - x_k\| \le \ell\eta_k \le \delta'_k. \tag{4.23}
\]
Consequently, for u ∈ B(xk; ηk), any v ∈ F1(u) ∩ B(yk; δ′k), and z ∈ F2(u) satisfying (4.23), substituting (x, y, z) in (3.12) by (u, v, z), we have
\[
\begin{aligned}
\langle x^*_k, u - x_k\rangle + \langle y^*_k, v - y_k\rangle &\le (\varepsilon_k+\gamma_k)\big(\|u - x_k\| + \|v - y_k\| + \|z - z_k\|\big) + \|z^*_k\|\cdot\|z - z_k\| \\
&\le (\varepsilon_k+\gamma_k)\big(\|u - x_k\| + \|v - y_k\| + \ell\|u - x_k\|\big) + \ell\|z^*_k\|\cdot\|u - x_k\| \\
&\le \big[(\varepsilon_k+\gamma_k)(1+\ell) + \ell\|z^*_k\|\big]\big(\|u - x_k\| + \|v - y_k\|\big)
\end{aligned}
\]
for all u ∈ B(xk; ηk) and v ∈ F1(u) ∩ B(yk; δ′k), which implies that
\[
(x^*_k, y^*_k) \in \widehat N_{\tilde\varepsilon_k}\big((x_k, y_k); \operatorname{gph} F_1\big) \quad \text{with } \tilde\varepsilon_k := (\varepsilon_k+\gamma_k)(1+\ell) + \ell\|z^*_k\|.
\]
Noticing that ε̃k → 0 since ∥z∗k∥ → 0 as k → ∞, by the GSNC property of gph F1 and the assumption (x∗2k, y∗2k) → (0, 0), it follows that (x∗1k, y∗1k) → 0; and the proof is complete. □

We can derive many special cases from Theorem 4.4; for instance, the following result holds.

Corollary 4.5. Let F1: X ⇒ Y, F2: X ⇒ Z, and assume that gph F1 is SPSNC with respect to X at (x̄, ȳ) ∈ gph F1 and that F2 is Lipschitz-like around (x̄, z̄) ∈ gph F2. Then gph(F1 × F2) is GSNC at (x̄, ȳ, z̄) with respect to X through Z.

Proof. This is the case of Theorem 4.4 when X2 = X3 = Y1 = Y2 = {0}.
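The bookkeeping behind this specialization, implicit in the original proof, is the following (identifying trivial {0} factors):

```latex
% In Theorem 4.4 take X_1 = X,\ X_2 = X_3 = \{0\},\ Y_1 = Y_2 = \{0\},\ Y_3 = Y.
% Then the hypothesis "gph F_1 is GSNC w.r.t. \{X_1, Y_1\} through \{X_2, Y_2\}"
% reads: target = X, enforcing = \emptyset, free = Y,
% i.e. gph F_1 is SPSNC w.r.t. X at (\bar x, \bar y);
% and the conclusion "gph(F_1 \times F_2) is GSNC w.r.t. \{X_1, Y_1\}
% through \{X_2, Y_2, Z\}" reads: gph(F_1 \times F_2) is GSNC w.r.t. X through Z.
```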

□

Combining Theorem 4.3, Corollary 4.5, and Theorem 3.8, we reach the following characterization of the GSNC property involving differentiable mappings.

Theorem 4.6. Let f: X → Y, G: X ⇒ Z, and F: X ⇒ Y × Z with F = f × G. Then the following hold:
(i) If f is strictly differentiable at x̄ with dim ∇f(x̄)(X) < ∞, and G is Lipschitz-like around (x̄, z̄) ∈ gph G, then gph F is GSNC with respect to X through Z at (x̄, f(x̄), z̄).
(ii) If gph F is GSNC with respect to X through Z at (x̄, f(x̄), z̄), f is Fréchet differentiable at x̄ with ∇f(x̄)(X) closed and w∗-extensible in Y, and G is inner semicontinuous at (x̄, z̄), then dim ∇f(x̄)(X) < ∞ and gph G is PSNC with respect to X at (x̄, z̄).

Proof. Assertion (i) comes from Corollary 4.5 and Theorem 3.8. Assertion (ii) comes from Theorems 3.8 and 4.3. Note that the inner semicontinuity property of F ∩ π⁻¹ holds under the assumptions made, where π stands for πY or πZ, the projection from Y × Z onto Y or Z, respectively. □

Next we establish a sum rule involving set-valued mappings and strictly differentiable functions.

Theorem 4.7. Let X = X1 × X2 × X3, Y = Y1 × Y2, Z = Z1 × Z2, and let F: X ⇒ Y × Z, g1: X → Y, g2: X → Z, g = g1 × g2 be such that g1, g2 are strictly differentiable at x̄ = (x̄1, x̄2, x̄3) and gph g1 is SPSNC with respect to X. Then the following two statements are equivalent, with ȳ = (ȳ1, ȳ2), z̄ = (z̄1, z̄2):
(a) gph F is GSNC with respect to Z1 through Z2, and GSNC with respect to {X1, Y1, Z1} through {X2, Z2}, at (x̄, ȳ, z̄);
(b) gph(F + g) is GSNC with respect to Z1 through Z2, and GSNC with respect to {X1, Y1, Z1} through {X2, Z2}, at (x̄, ȳ + g1(x̄), z̄ + g2(x̄)).


Proof. We only need to show (a)⇒(b) because F = (F + g) + (−g). Pick any εk ↓ 0 and

(xk, yk + g1(xk), zk + g2(xk)) → (x̄, ȳ + g1(x̄), z̄ + g2(x̄)) in gph(F + g),
(x∗k, y∗k, z∗k) ∈ N̂εk((xk, yk + g1(xk), zk + g2(xk)); gph(F + g))    (4.24)

with x∗k = (x∗1k, x∗2k, x∗3k), y∗k = (y∗1k, y∗2k), z∗k = (z∗1k, z∗2k), and (x∗k, y∗k, z∗k) → 0 in the w∗-topology. Clearly (yk, zk) ∈ F(xk). Choose δk ↓ 0 such that xk ∈ B(x̄; δk). By the definition of ε-normals and (4.24), there are sequences ηk, γk ↓ 0 such that for all (u, v, w) ∈ gph F ∩ B((xk, yk, zk); ηk) ⊂ X × Y × Z,

⟨x∗k, u − xk⟩ + ⟨y∗k, v + g1(u) − yk − g1(xk)⟩ + ⟨z∗k, w + g2(u) − zk − g2(xk)⟩
  ≤ (εk + γk)(∥u − xk∥ + ∥v + g1(u) − yk − g1(xk)∥ + ∥w + g2(u) − zk − g2(xk)∥).    (4.25)

By the strict differentiability of g1 and g2,

gi(u) − gi(xk) ∈ ∇gi(x̄)(u − xk) + Ri B  (i = 1, 2),    (4.26)

where Ri := rgi(x̄; δk + ηk)∥u − xk∥ (i = 1, 2). Letting u∗1k = y∗k, u∗2k = z∗k, then

⟨u∗ik, gi(u) − gi(xk)⟩ ≥ ⟨u∗ik, ∇gi(x̄)(u − xk)⟩ − ∥u∗ik∥Ri = ⟨∇gi(x̄)∗(u∗ik), u − xk⟩ − ∥u∗ik∥Ri  (i = 1, 2).    (4.27)

On the other hand, let ℓ be a common Lipschitz modulus of g1, g2 around x̄; then for k large enough,

∥v + g1(u) − yk − g1(xk)∥ ≤ ∥v − yk∥ + ℓ∥u − xk∥,    (4.28)
∥w + g2(u) − zk − g2(xk)∥ ≤ ∥w − zk∥ + ℓ∥u − xk∥.    (4.29)

Combining (4.27)–(4.29), it follows from (4.25) that

⟨x∗k + ∇g1(x̄)∗(y∗k) + ∇g2(x̄)∗(z∗k), u − xk⟩ + ⟨y∗k, v − yk⟩ + ⟨z∗k, w − zk⟩
  ≤ ε̃k(∥u − xk∥ + ∥v − yk∥ + ∥w − zk∥)    (4.30)

for all (u, v, w) on gph F that are close to (x̄, ȳ, z̄), where

ε̃k := (2ℓ + 1)(εk + γk) + ∥y∗k∥rg1(x̄; δk + ηk) + ∥z∗k∥rg2(x̄; δk + ηk).

Consequently,

(x∗k + ∇g1(x̄)∗(y∗k) + ∇g2(x̄)∗(z∗k), y∗k, z∗k)
  = (x∗1k + I∗X1(∇g1(x̄)∗(y∗k)) + I∗X1(∇g2(x̄)∗(z∗k)), x∗2k + I∗X2(∇g1(x̄)∗(y∗k)) + I∗X2(∇g2(x̄)∗(z∗k)),
     x∗3k + I∗X3(∇g1(x̄)∗(y∗k)) + I∗X3(∇g2(x̄)∗(z∗k)), y∗1k, y∗2k, z∗1k, z∗2k) ∈ N̂ε̃k((xk, yk, zk); gph F),    (4.31)

where IXi is the embedding of Xi into X (i = 1, 2, 3). It is clear that ε̃k ↓ 0 because rgi(x̄; δk + ηk) ↓ 0 (i = 1, 2) and {∥y∗k∥}, {∥z∗k∥} are both bounded, due to the uniform boundedness principle and the w∗-convergence of {y∗k}, {z∗k}.

Now if z∗2k → 0, then by (4.31) and the GSNC property of gph F at (x̄, ȳ, z̄) with respect to Z1 through Z2, we have z∗1k → 0, and therefore gph(F + g) is GSNC at (x̄, ȳ + g1(x̄), z̄ + g2(x̄)) with respect to Z1 through Z2. In this case, it also holds that

I∗X1(∇g2(x̄)∗(z∗k)) → 0,  I∗X2(∇g2(x̄)∗(z∗k)) → 0 (k → ∞).    (4.32)

Since y∗k → 0 in the w∗-topology, by the SPSNC assumption on g1 and Lemma 3.7, we have ∇g1(x̄)∗(y∗k) → 0; so

I∗X1(∇g1(x̄)∗(y∗k)) → 0,  I∗X2(∇g1(x̄)∗(y∗k)) → 0 (k → ∞).    (4.33)

It follows that, if we in addition assume x∗2k → 0 (k → ∞), then

x∗2k + I∗X2(∇g1(x̄)∗(y∗k)) + I∗X2(∇g2(x̄)∗(z∗k)) → 0 (k → ∞).

Consequently, the GSNC property of gph F with respect to {X1, Y1, Z1} through {X2, Z2} at (x̄, ȳ, z̄) implies

x∗1k + I∗X1(∇g1(x̄)∗(y∗k)) + I∗X1(∇g2(x̄)∗(z∗k)) → 0,  y∗1k → 0.    (4.34)

By (4.32)–(4.34), x∗1k → 0 (as k → ∞). Therefore gph(F + g) is GSNC with respect to {X1, Y1, Z1} through {X2, Z2} at (x̄, ȳ + g1(x̄), z̄ + g2(x̄)). The proof is complete. □


Note that in Theorem 4.7, the assumption ''GSNC with respect to Z1 through Z2'' on gph F or gph(F + g) is separate from and additional to the main GSNC assumption. Next we single out two situations in which this assumption can be omitted. Corollary 4.8 corresponds to the case of Theorem 4.7 when Y2 = {0}, and Corollary 4.9 to the case when Z1 = {0}.

Corollary 4.8. Let X = X1 × X2, Y = Y1 × Y2, Z = Z1 × Z2, F: X ⇒ Y × Z, g1: X → Y, g2: X → Z, g = g1 × g2, where g1, g2 are strictly differentiable at x̄ and gph g1 is SPSNC with respect to X. Then gph(F + g) is GSNC with respect to {X1, Y1, Z1} through Z2 at (x̄, ȳ + g1(x̄), z̄ + g2(x̄)) ∈ gph(F + g) if and only if gph F is GSNC with respect to {X1, Y1, Z1} through Z2 at (x̄, ȳ, z̄).

Corollary 4.9. Let X = X1 × X2 × X3, Y = Y1 × Y2, F: X ⇒ Y × Z, g1: X → Y, g2: X → Z, g = g1 × g2, where g1, g2 are strictly differentiable at x̄ and gph g1 is SPSNC with respect to X. Then gph(F + g) is GSNC with respect to {X1, Y1} through {X2, Z} at (x̄, ȳ + g1(x̄), z̄ + g2(x̄)) ∈ gph(F + g) if and only if gph F is GSNC with respect to {X1, Y1} through {X2, Z} at (x̄, ȳ, z̄).

Setting X2 = Y1 = Z1 = {0} in Theorem 4.7, we also get the following special case.

Corollary 4.10. Let X = X1 × X2, F: X ⇒ Y × Z, g1: X → Y, g2: X → Z, g = g1 × g2, where g1, g2 are strictly differentiable at x̄ and gph g1 is SPSNC with respect to X. Then gph(F + g) is GSNC with respect to {X1} through Z at (x̄, ȳ + g1(x̄), z̄ + g2(x̄)) ∈ gph(F + g) if and only if gph F is GSNC with respect to {X1} through Z at (x̄, ȳ, z̄).

We should mention that Theorem 4.7 also covers results in the non-GSNC (i.e., PSNC/SNC) situations such as Theorem 1.70 in [11]. In fact, setting X2 = X3 = Y1 = Y2 = Z1 = {0} in Theorem 4.7 yields the PSNC case of Theorem 1.70 in [11], and setting X2 = X3 = Y1 = Y2 = Z2 = {0} yields the SNC case.
We proceed to establish chain rules that involve a set-valued mapping and a differentiable function. First we consider the case when the outer function is differentiable.

Theorem 4.11. Let X = X1 × X2 × X3, Y = Y1 × Y2 × Y3, Z = Z1 × Z2 × Z3, F: X ⇒ Y, gi: Yi → Zi (i = 1, 2, 3), g = g1 ⊕ g2 ⊕ g3, and ȳ = (ȳ1, ȳ2, ȳ3) ∈ F(x̄), z̄ = g(ȳ). Assume that gi is strictly differentiable at ȳi (i = 1, 2, 3), ∇g1(ȳ1)(Y1) is finite codimensional in Z1, gph F is GSNC at (x̄, ȳ) with respect to {X1, Y1} through {X2, Y2}, and F ∩ g−1: X × Z ⇒ Y (defined by (F ∩ g−1)(x, z) = F(x) ∩ g−1(z) for all x ∈ X, z ∈ Z) is inner semicontinuous at (x̄, z̄, ȳ). Then gph(g ◦ F) is GSNC at (x̄, z̄) with respect to {X1, Z1} through {X2, Z2}.

Proof. Pick any εk ↓ 0, (xk, zk) ∈ gph(g ◦ F), and (x∗k, z∗k) ∈ N̂εk((xk, zk); gph(g ◦ F)) with (xk, zk) → (x̄, z̄) and (x∗k, z∗k) → (0, 0) in the w∗-topology. Then ∇g(ȳ)∗(z∗k) → 0 in the w∗-topology. Since (F ∩ g−1)(xk, zk) ≠ ∅, by the inner semicontinuity of F ∩ g−1 at (x̄, z̄, ȳ), there exist yk ∈ F(xk) ∩ g−1(zk) and a subsequence ykj converging to ȳ. Choose δj ↓ 0 such that ykj ∈ B(ȳ; δj) for all j ∈ N. By the definition of ε-normals, there are sequences ηj, γj ↓ 0 such that

⟨x∗kj, u − xkj⟩ + ⟨z∗kj, w − zkj⟩ ≤ (εkj + γj)(∥u − xkj∥ + ∥w − zkj∥)    (4.35)

holds for all u ∈ B(xkj; ηj), w ∈ B(zkj; ηj) with w ∈ (g ◦ F)(u). By the continuity of g around ȳ, there exists η′j ∈ (0, ηj) such that g(v) ∈ B(zkj; ηj) for all v ∈ B(ykj; η′j) when j is large. By (4.35) it follows that

⟨x∗kj, u − xkj⟩ + ⟨z∗kj, g(v) − g(ykj)⟩ ≤ (εkj + γj)(∥u − xkj∥ + ∥g(v) − g(ykj)∥)    (4.36)

holds for all large j and all u ∈ B(xkj; η′j), v ∈ B(ykj; η′j) with v ∈ F(u). Now applying to g arguments similar to those in (4.26)–(4.27), we derive from (4.36) that, for all large j and all u ∈ B(xkj; η′j), v ∈ B(ykj; η′j) with v ∈ F(u),

⟨x∗kj, u − xkj⟩ + ⟨∇g(ȳ)∗(z∗kj), v − ykj⟩ ≤ ε̃j(∥u − xkj∥ + ∥v − ykj∥),

where ℓ is the Lipschitz modulus of g around ȳ and

ε̃j := (ℓ + 1)(εkj + γj) + ∥z∗kj∥rg(ȳ; δj + η′j) ↓ 0 (j → ∞).

Consequently,

(x∗kj, ∇g(ȳ)∗(z∗kj)) ∈ N̂ε̃j((xkj, ykj); gph F)    (4.37)

holds for all large j. We can check that

∇g(ȳ)∗(z∗kj) = (∇g1(ȳ1)∗(z∗1kj), ∇g2(ȳ2)∗(z∗2kj), ∇g3(ȳ3)∗(z∗3kj));

so (4.37) can be written in the form

(x∗1kj, x∗2kj, x∗3kj, ∇g1(ȳ1)∗(z∗1kj), ∇g2(ȳ2)∗(z∗2kj), ∇g3(ȳ3)∗(z∗3kj)) ∈ N̂ε̃j((xkj, ykj); gph F),    (4.38)

which holds for all large j.


Now consider the case when x∗2k, z∗2k → 0 as k → ∞; then ∇g2(ȳ2)∗(z∗2k) → 0 (k → ∞). By the GSNC assumption on F and (4.38), it follows that x∗1kj → 0 and ∇g1(ȳ1)∗(z∗1kj) → 0 (j → ∞). Due to the finite codimensionality of ∇g1(ȳ1)(Y1) in Z1 and Lemma 3.10, z∗1kj → 0 as j → ∞. Since the original sequences were chosen arbitrarily, this suffices to conclude that x∗1k → 0 and z∗1k → 0 as k → ∞ (otherwise, choose subsequences x∗kl of x∗k and z∗kl of z∗k satisfying ∥x∗1kl∥, ∥z∗1kl∥ ≥ ε0 for some ε0 > 0 and all l ∈ N; applying the above procedure to the sequences x∗kl, z∗kl yields a contradiction). □

Note that Theorem 4.11 implies Theorem 4.3. In fact, in Theorem 4.11 replace Xi by ∏j∈Ji1 Xj, Yi by ∏j∈Ji2 Yj, and Zi by ∏j∈Ji2 Ỹj (i = 1, 2, 3), where Ji3 = Ji \ (Ji1 ∪ Ji2) (i = 1, 2), Ỹj = Yj if j ∈ J2, and Ỹj = {0} if j ∉ J2, and let g1, g2, g3 be the appropriate direct sums of identity and zero functions; then Theorem 4.3 follows.

In the case when Y and Z do not carry product structures, we have the following special case of Theorem 4.11.

Corollary 4.12. Let X = X1 × X2 × X3, F: X ⇒ Y, and g: Y → Z such that g is strictly differentiable at ȳ, and F ∩ g−1 is inner semicontinuous at (x̄, z̄, ȳ) with z̄ = g(ȳ) and ȳ ∈ F(x̄). Then the following two assertions hold:
(i) If gph F is GSNC with respect to X1 through {X2, Y} at (x̄, ȳ), then gph(g ◦ F) is GSNC at (x̄, z̄) with respect to X1 through {X2, Z}.
(ii) If ∇g(ȳ)(Y) is finite codimensional in Z and gph F is GSNC at (x̄, ȳ) with respect to {X1, Y} through X2, then gph(g ◦ F) is GSNC at (x̄, z̄) with respect to {X1, Z} through X2.

Proof. Assertion (i) corresponds to the case of Theorem 4.11 when Y1 = Y3 = Z1 = Z3 = {0}, and (ii) to the case when Y2 = Y3 = Z2 = Z3 = {0}. □

Regarding the non-GSNC case: when X2 = X3 = Y1 = Y3 = Z1 = Z3 = {0}, Theorem 4.11 reduces to Theorem 1.72(i) in [11]. However, when X2 = X3 = Y2 = Y3 = Z2 = Z3 = {0}, Theorem 4.11 actually gives an enhanced version of Theorem 1.72(ii) in [11], where the surjectivity of the derivative mapping ∇g(ȳ) is replaced by the finite codimensionality of its range. We formulate the result below.

Corollary 4.13. Let F: X ⇒ Y and g: Y → Z such that g is strictly differentiable at ȳ, and F ∩ g−1 is inner semicontinuous at (x̄, z̄, ȳ) with z̄ = g(ȳ) and ȳ ∈ F(x̄). Assume that ∇g(ȳ)(Y) is finite codimensional in Z and gph F is SNC at (x̄, ȳ). Then gph(g ◦ F) is SNC at (x̄, z̄).

We move on to consider chain rules in which the inner mapping is smooth. This relates to the GSNC property of inverse images under smooth mappings. Recall that a mapping f: X → Y is restrictively metrically regular (RMR) around x̄ ∈ X if the restricted mapping f: X → f(X) is metrically regular around x̄ when f(X) is regarded as a metric space.
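For orientation, the metric regularity of the restricted mapping in the RMR definition can be sketched by the standard distance estimate below; the modulus μ and the neighborhoods are as in the usual definition, and the precise formulation is that of [12]:

```latex
% f is RMR around \bar{x}: there exist \mu > 0 and neighborhoods
% U of \bar{x} in X, V of f(\bar{x}) in Y such that
\operatorname{dist}\bigl(x;\, f^{-1}(y)\bigr)
  \le \mu \,\operatorname{dist}\bigl(y;\, f(x)\bigr)
\quad \text{for all } x \in U \text{ and } y \in V \cap f(X).
```

The point of the restriction is that y ranges only over the image f(X), which is strictly weaker than the classical metric regularity of f into Y.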
The RMR property was proposed in [12], where it was demonstrated that this property is crucial for many calculus rules in generalized differentiation theory; in particular, such mappings preserve the SNC/PSNC property. The following result generalizes Theorem 5.1 in [12] to the GSNC case involving inverse images; here the cases of PSNC and SPSNC are unified.

Theorem 4.14. Let X = X1 × X2 × X3, Y = Y1 × Y2 × Y3, fi: Xi → Yi (i = 1, 2, 3), f = f1 ⊕ f2 ⊕ f3, and Ω ⊂ Y with f(x̄) ∈ Ω, x̄ = (x̄1, x̄2, x̄3) ∈ X. Assume that fi is strictly differentiable at x̄i and RMR around this point (i = 1, 2, 3). Then the following hold:
(i) If Ω ∩ f(X) is GSNC at f(x̄) with respect to Y1 through Y2, and ∇f3(x̄3)(X3) is w∗-extensible in Y3, then f−1(Ω) is GSNC at x̄ with respect to X1 through X2.
(ii) If f−1(Ω) is GSNC at x̄ with respect to X1 through X2, and ∇f1(x̄1)(X1) is finite codimensional in Y1, then Ω ∩ f(X) is GSNC at f(x̄) with respect to Y1 through Y2.

Proof. The proof can be obtained by following the proof of Theorem 5.1 in [13], combining the PSNC and SPSNC parts together. □

To conclude the study, we present the chain rule where the inner mapping is strictly differentiable.

Theorem 4.15. Let X = X1 × X2 × X3, Y = Y1 × Y2 × Y3, Z = Z1 × Z2 × Z3, fi: Xi → Yi (i = 1, 2, 3), f = f1 ⊕ f2 ⊕ f3, and G: Y ⇒ Z with ȳ = f(x̄) and z̄ ∈ G(ȳ), where x̄ = (x̄1, x̄2, x̄3) ∈ X, ȳ ∈ Y, z̄ ∈ Z. Assume that fi is strictly differentiable at x̄i and RMR around this point (i = 1, 2, 3). Then the following hold:
(i) If gph Gf is GSNC at (ȳ, z̄) with respect to {Y1, Z1} through {Y2, Z2}, where Gf is the restriction of G to f(X), and ∇f3(x̄3)(X3) is w∗-extensible in Y3, then gph(G ◦ f) is GSNC at (x̄, z̄) with respect to {X1, Z1} through {X2, Z2}.
(ii) If gph(G ◦ f) is GSNC at (x̄, z̄) with respect to {X1, Z1} through {X2, Z2}, and ∇f1(x̄1)(X1) is finite codimensional in Y1, then gph Gf is GSNC at (ȳ, z̄) with respect to {Y1, Z1} through {Y2, Z2}.

Proof. Let E be the identity mapping on Z; then gph(G ◦ f) = (f ⊕ E)−1(gph G), so the results follow from Theorem 4.14. □
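The graph identity invoked in the proof of Theorem 4.15 can be verified pointwise:

```latex
(x, z) \in \operatorname{gph}(G \circ f)
  \iff z \in G(f(x))
  \iff (f(x), z) \in \operatorname{gph} G
  \iff (f \oplus E)(x, z) \in \operatorname{gph} G
  \iff (x, z) \in (f \oplus E)^{-1}(\operatorname{gph} G).
```

This reduces the chain rule to the inverse-image result, since f ⊕ E inherits the strict differentiability and RMR assumptions componentwise (E being the identity).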


Acknowledgment

The research of the third author was supported by the Doctoral Initiating Foundation of Liaoning Province (20071097) and partially supported by the National Natural Science Foundation of China (61070242).

References

[1] R.T. Rockafellar, Directional Lipschitzian functions and subdifferential calculus, Proc. Lond. Math. Soc. (3) 39 (1979) 331–355.
[2] J.M. Borwein, H.M. Strójwas, Tangential approximations, Nonlinear Anal. 9 (1985) 1347–1366.
[3] P.D. Loewen, Limits of Fréchet normals in nonsmooth analysis, in: A.D. Ioffe, et al. (Eds.), Optimization and Nonlinear Analysis, in: Pitman Res. Notes Math. Ser., vol. 244, Longman, Harlow, UK, 1992, pp. 178–188.
[4] B.S. Mordukhovich, Y. Shao, Nonsmooth sequential analysis in Asplund spaces, Trans. Amer. Math. Soc. 348 (1996) 1235–1280.
[5] B.S. Mordukhovich, Y. Shao, Stability of set-valued mappings in infinite dimensions: point criteria and applications, SIAM J. Control Optim. 35 (1) (1997) 285–314.
[6] B.S. Mordukhovich, Coderivatives of set-valued mappings: calculus and applications, Nonlinear Anal. 30 (1997) 3059–3070.
[7] B.S. Mordukhovich, Y. Shao, Nonconvex coderivative calculus for infinite-dimensional multifunctions, Set-Valued Anal. 4 (1996) 205–236.
[8] B.S. Mordukhovich, B. Wang, Sequential normal compactness in variational analysis, Nonlinear Anal. 47 (2001) 717–728.
[9] B.S. Mordukhovich, B. Wang, Extensions of generalized differential calculus in Asplund spaces, J. Math. Anal. Appl. 272 (2002) 164–186.
[10] B.S. Mordukhovich, B. Wang, Calculus of sequential normal compactness in variational analysis, J. Math. Anal. Appl. 282 (2003) 63–84.
[11] B.S. Mordukhovich, Variational Analysis and Generalized Differentiation, I: Basic Theory, Springer, Berlin, 2006.
[12] B.S. Mordukhovich, B. Wang, Restrictive metric regularity and generalized differential calculus in Banach spaces, Int. J. Math. Math. Sci. 50 (2004) 2650–2683.
[13] B.S. Mordukhovich, B. Wang, Generalized differentiation of parameter-dependent sets and mappings, Optimization 57 (1) (2008) 17–40.
[14] N.M. Nam, B. Wang, Metric regularity, tangential distances and generalized differentiation in Banach spaces, Nonlinear Anal. 75 (2012) 1496–1506.
[15] B. Wang, Mixed sequential normal compactness conditions in variational analysis, in: The Sixth Midwest Optimization Seminar, Wayne State University, October 2004.
[16] B. Wang, Generalized sequential normal compactness in Asplund spaces, preprint.
[17] M. Fabian, Subdifferentiability and trustworthiness in the light of a new variational principle of Borwein and Preiss, Acta Univ. Carolin. Ser. Math. Phys. 30 (1989) 51–56.
[18] B. Josefson, Weak sequential convergence in the dual of a Banach space does not imply norm convergence, Ark. Mat. 13 (1975) 79–89.
[19] A. Nissenzweig, w∗ sequential convergence, Israel J. Math. 22 (1975) 266–272.
[20] J. Diestel, Sequences and Series in Banach Spaces, Springer-Verlag, 1984.
[21] J.-P. Aubin, Lipschitz behavior of solutions to convex minimization problems, Math. Oper. Res. 9 (1984) 87–111.
[22] B. Wang, Y. Zhao, W. Qian, On the weak-star extensibility, Nonlinear Anal. 74 (2011) 2109–2115.
[23] W. Rudin, Functional Analysis, McGraw-Hill, 1991.