Generalized Hukuhara Gâteaux and Fréchet Derivatives of Interval-valued Functions and their Application in Optimization with Interval-valued Functions

Debdas Ghosh (a), Ram Surat Chauhan (a), Radko Mesiar (b,c,*), Amit Kumar Debnath (a)

(a) Department of Mathematical Sciences, Indian Institute of Technology (BHU) Varanasi, Uttar Pradesh 221005, India
(b) Faculty of Civil Engineering, Slovak University of Technology, Radlinského 11, 810 05, Bratislava
(c) Palacký University Olomouc, Faculty of Science, Department of Algebra and Geometry, 17. listopadu 12, 771 46 Olomouc, Czech Republic

Abstract

In this article, the notions of gH-directional derivative, gH-Gâteaux derivative and gH-Fréchet derivative for interval-valued functions are proposed. The existence of the gH-Fréchet derivative is shown to imply the existence of the gH-Gâteaux derivative, and the existence of the gH-Gâteaux derivative is observed to imply the existence of the gH-directional derivative. For an interval-valued gH-Lipschitz function, it is proved that the existence of the gH-Gâteaux derivative implies the existence of the gH-Fréchet derivative. It is observed that for an interval-valued convex function on a linear space, the gH-directional derivative exists at any point in every direction. Concepts of linear and monotonic interval-valued functions are studied in the sequel. Further, it is shown that the proposed derivatives are useful to check the convexity of an interval-valued function and to characterize efficient points of an optimization problem with an interval-valued objective function. It is observed that at an efficient point of an interval-valued function, none of its gH-directional derivatives dominates zero and the gH-Gâteaux derivative must contain zero. The entire study is supported by suitable illustrative examples.

Keywords: Interval-valued functions, gH-continuity, gH-directional derivative, gH-Gâteaux derivative, gH-Fréchet derivative, Chain rule, Efficient point.

AMS Mathematics Subject Classification (2010): 26A24 · 90C30 · 65K05.

1. Introduction

Nowadays, due to the inherent uncertainty in many real-world phenomena, functions with interval coefficients demand significant study. Moore [26], [27] introduced the concept of interval analysis to deal with interval-valued data and functions with interval

Corresponding author Email addresses: [email protected] (Debdas Ghosh), [email protected] (Ram Surat Chauhan), [email protected] (Radko Mesiar), [email protected] (Amit Kumar Debnath)


coefficients. Thereafter, many researchers have developed the theory and applications of interval arithmetic and interval-valued functions (IVFs); for instance, see [2], [4], [10], [19], [23], [24], [35], [37], and their references. Based on Moore's interval arithmetic [27], many research attempts have endeavoured to develop a calculus and an algebra of intervals. In [29], it is shown that the set of intervals with Moore's interval arithmetic does not form a group, because for a nonzero interval A there does not exist any interval B such that A ⊕ B = 0. For the same reason, many conventional results of the calculus of real-valued functions are not true for interval-valued functions; for instance, a constant interval-valued function is not differentiable [13]. Thus, for the theoretical framework of the calculus of interval-valued functions and interval analysis, Hukuhara [17] introduced a new concept of difference of two intervals, namely the Hukuhara difference. Although the Hukuhara difference satisfies A ⊖_H A = 0, the difference A ⊖_H B can be calculated only when the length of A is greater than that of B. To overcome this limitation of the Hukuhara difference, Stefanini and Bede [33] introduced a generalized concept of the Hukuhara difference, namely the gH-difference, which satisfies A ⊖_gH A = 0 and can be calculated for any pair of intervals. Later, apart from Moore's interval arithmetic, Piegat and Landowski [31], [32] introduced another concept of interval arithmetic, namely RDM interval arithmetic, which also ensures that A − A = 0 for any interval A. Most of the results of RDM interval arithmetic are similar to those of Moore's interval arithmetic, except for the subtraction of an interval from itself. However, in this article we use Moore's interval arithmetic with the gH-difference instead of RDM interval arithmetic (see Note 2 for the reason).

Calculus plays an important role in studying the characteristics of a function. For the calculus of IVFs, the concept of differentiability of IVFs based on the Hukuhara difference of intervals was initially introduced by Hukuhara [17]. But this definition of Hukuhara differentiability is restrictive [10]. In general, if F(x) = C ⊙ f(x), where C is a closed and bounded interval and f(x) is a real-valued function with f′(x) < 0, then F is not Hukuhara differentiable [5]. In order to refine the calculus of IVFs, the ideas of generalized Hukuhara continuity (gH-continuity), gH-differentiability and the gH-gradient [13], [23], [33], [36] have been introduced with the help of Moore's interval arithmetic and the gH-difference of intervals [33]. With the help of the existing calculus of IVFs, a few concepts concerning optimization problems and differential equations with IVFs have been discussed in [1], [8], [18], [5], [23] and [37]. Wu [37], [38] used the concept of Hukuhara differentiability to study KKT conditions of optimization problems with IVFs. Wu [37], [38] also illustrated the solution concept of optimization problems with IVFs by imposing a partial ordering on the set of all closed intervals and applying the existing calculus of IVFs. In 2013, the KKT conditions, based on gH-differentiability, of optimization problems with IVFs were illustrated by Chalco-Cano and others [9]. After that, Bhurji and Panda [7] defined interval-valued functions in parametric form, studied their properties, and developed a methodology to study the existence of efficient solutions of an optimization problem with IVFs.
Ghosh [13] introduced a new definition of gH-differentiability and proposed a Newton method [13] and an updated Newton method [14] to capture efficient solutions of an optimization problem with IVFs.

Recently, Ghosh et al. [40] have proposed a theory of alternatives and hence the KKT optimality conditions for interval optimization problems (IOPs). Importantly, it is shown in [40] that KKT optimality conditions appear with inclusion relations instead of equations. A brief review, with corrections, of the existing literature on the algebra of gH-differentiable interval-valued functions has been reported in [12]. Apart from gH-differentiability of IVFs, a granular derivative based on so-called horizontal membership functions has been introduced in [25]. An interesting notion of the H-difference for Z-number-valued analysis has been introduced in [3]. Alongside the analysis of interval-valued functions, set-valued functions are also studied in the literature; for instance, see [16], [39], and the references therein. An abstract local tangency property for gH-differentiable interval-valued functions and KKT-like optimality conditions for nondominated solutions of constrained problems with IVFs have been developed in [34]. A KKT optimality condition for interval and fuzzy optimization in several variables under total and directional generalized differentiability has been derived in [11]. A few optimality conditions with inclusion relations for gH-differentiable interval-valued functions have been derived in [30]. For a detailed survey on optimality conditions for IVFs, one can refer to the references of [34] and [15].

In this article, we endeavor to develop the notions of generalized derivatives, namely the gH-Gâteaux derivative and the gH-Fréchet derivative, for IVFs. The main motivation behind the study lies in the following two points:

• An interval-valued function can be observed as a bunch of infinitely many real-valued functions [6], and for a vector-valued (possibly infinite dimensional) function the appropriate idea of a derivative is the Fréchet derivative.

• To develop numerical algorithms that capture the complete set of efficient solutions of an IOP, the ideas of the directional derivative and the Gâteaux derivative inevitably arise. Although the authors of [4] defined a gH-directional derivative, the definition proposed in this article is simpler, and we prove that the two have identical meaning.

To define the gH-Gâteaux and gH-Fréchet derivatives and to study their properties, we propose the concepts of linear IVFs and bounded linear IVFs. In the sequel, we also define monotonic and bounded IVFs. Throughout the paper we use the ordering of intervals given in [6], which plays an important role in investigating efficient solutions of an interval optimization problem [7], [13], [15], [21], [22].

The rest of the article is organized as follows. The next section provides the basic terminologies and definitions on intervals and IVFs. In Section 3, we define the gH-directional derivative of IVFs and prove that the gH-directional derivative of an interval-valued convex function always exists. Further, with the help of the gH-directional derivative, we provide a necessary and sufficient condition for an efficient point of an interval optimization problem (IOP) with IVFs. The concept of linearity and the gH-Gâteaux derivative for IVFs are illustrated in Section 4. In Section 4, we also prove that at an efficient point of an interval-valued function its gH-Gâteaux derivative must contain zero. After that, the notion of the gH-Fréchet

derivative and its interrelation with the gH-Gâteaux derivative are discussed in Section 5. It is also shown there that the chain rule holds for the gH-Fréchet derivative of IVFs. Finally, the last section draws future directions of the study.

2. Preliminaries and Terminologies

In this section, we provide basic terminologies and notations on intervals and IVFs that are used throughout the paper.

2.1. Interval Arithmetic

Let R be the set of real numbers, R⁺ be the set of nonnegative real numbers, and I(R) be the set of all closed and bounded intervals. We use bold capital letters A, B, C, ... to denote elements of I(R). Consider two intervals A = [a̲, ā] and B = [b̲, b̄]. The addition of A and B, denoted A ⊕ B, is defined by

A ⊕ B = [a̲ + b̲, ā + b̄].

The subtraction of B from A, denoted A ⊖ B, is defined by

A ⊖ B = [a̲ − b̄, ā − b̲].

The multiplication of A by a real number λ, denoted λ ⊙ A or A ⊙ λ, is defined by

λ ⊙ A = A ⊙ λ = [λa̲, λā] if λ ≥ 0, and [λā, λa̲] if λ < 0.

We use the following definition for the difference between a pair of intervals since it is the most general definition of difference (see [13] for the details on why it is the most general).

Definition 2.1. (gH-difference of intervals [33]). Let A and B be two elements of I(R). The gH-difference between A and B, denoted A ⊖_gH B, is defined as the interval C such that A = B ⊕ C or B = A ⊕ ((−1) ⊙ C).

Note 1. (See [33]). It is to be noted that for two intervals A = [a̲, ā] and B = [b̲, b̄],

A ⊖_gH B = [min{a̲ − b̲, ā − b̄}, max{a̲ − b̲, ā − b̄}]  and  (−1) ⊙ (A ⊖_gH B) = B ⊖_gH A.

Definition 2.2. (Dominance of intervals [6]). Let A = [a̲, ā] and B = [b̲, b̄] be two elements of I(R). Note that A and B can be presented by

A = [a̲, ā] = {a(t) | a(t) = a̲ + t(ā − a̲), 0 ≤ t ≤ 1} and B = [b̲, b̄] = {b(t) | b(t) = b̲ + t(b̄ − b̲), 0 ≤ t ≤ 1},   (2.1)

respectively.

(i) B is said to be dominated by A if a(t) ≤ b(t) for all t ∈ [0, 1], and then we write A ≼ B;
(ii) B is said to be strictly dominated by A if A ≼ B and there exists a t0 ∈ [0, 1] such that a(t0) ≠ b(t0), and then we write A ≺ B;
(iii) B is said to be not dominated by A if a(t) > b(t) for at least one t ∈ [0, 1], and then we write A ⊀ B;
(iv) if neither A ≺ B nor B ≺ A, we say that none of A and B dominates the other, or that A and B are not comparable.

Lemma 2.1. For two elements A and B of I(R), (i) A ≼ B ⟺ A ⊖_gH B ≼ 0 and (ii) A ⊀ B ⟺ A ⊖_gH B ⊀ 0.

Proof. See Appendix A.

Note 2. Throughout the article we use Moore's interval arithmetic with the gH-difference. One may attempt to apply RDM interval arithmetic [31], [32] instead of Moore's interval arithmetic with the gH-difference. However, it is noteworthy that RDM interval arithmetic does not satisfy Lemma 2.1, which is used to characterize the efficient points of an IOP in Sections 3 and 4. For instance, consider the intervals A = [1, 3] and B = [2, 5]. Here A ≺ B. But according to RDM interval arithmetic, we have [1, 3] = {1 + 2α1 : α1 ∈ [0, 1]} and [2, 5] = {2 + 3α2 : α2 ∈ [0, 1]}. Therefore, according to RDM interval arithmetic, the difference between A and B is

[1, 3] − [2, 5] = {−1 + 2α1 − 3α2 : α1, α2 ∈ [0, 1]} = [−4, 1] ⊀ 0.

Hence, A ≺ B does not imply A − B ≺ 0, i.e., property (i) of Lemma 2.1 does not hold with respect to RDM interval arithmetic. Further, for the same pair of intervals A = [1, 3] and B = [2, 5], we see that A − B ⊀ 0 but A ≺ B. Hence, property (ii) of Lemma 2.1 is also not true with respect to RDM interval arithmetic.

Definition 2.3. (Norm on I(R) [26]). For an A = [a̲, ā] in I(R), the function ‖·‖_{I(R)} : I(R) → R⁺, defined by ‖A‖_{I(R)} = max{|a̲|, |ā|},
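To make the interval operations above concrete, the following is a minimal computational sketch (not part of the original paper) of Moore's arithmetic and the gH-difference on closed, bounded intervals; the class name Interval and all identifiers are our own illustrative choices.

```python
# A minimal sketch (our own illustration) of Moore's interval arithmetic with
# the gH-difference, following the formulas of Section 2.1 and Note 1.

class Interval:
    """Closed, bounded interval [lo, up] with lo <= up."""

    def __init__(self, lo, up):
        assert lo <= up, "lower endpoint must not exceed the upper endpoint"
        self.lo, self.up = float(lo), float(up)

    def __add__(self, other):                 # A ⊕ B = [a̲+b̲, ā+b̄]
        return Interval(self.lo + other.lo, self.up + other.up)

    def __sub__(self, other):                 # Moore difference A ⊖ B = [a̲−b̄, ā−b̲]
        return Interval(self.lo - other.up, self.up - other.lo)

    def scale(self, lam):                     # λ ⊙ A
        vals = (lam * self.lo, lam * self.up)
        return Interval(min(vals), max(vals))

    def gh_sub(self, other):                  # A ⊖_gH B, as in Note 1
        d1, d2 = self.lo - other.lo, self.up - other.up
        return Interval(min(d1, d2), max(d1, d2))

    def __repr__(self):
        return f"[{self.lo}, {self.up}]"


if __name__ == "__main__":
    A, B = Interval(1, 3), Interval(2, 5)
    print(A + B)                      # [3.0, 8.0]
    print(A - B)                      # Moore difference: [-4.0, 1.0]
    print(A.gh_sub(A))                # gH-difference of A with itself: [0.0, 0.0]
    print(A.gh_sub(B))                # [-2.0, -1.0]
    print(B.gh_sub(A).scale(-1))      # equals A ⊖_gH B, as stated in Note 1
```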

is a norm on I(R).

Note 3. (See [24]). The set of all closed and bounded intervals I(R), equipped with the norm ‖·‖_{I(R)}, is a normed quasilinear space with respect to the operations {⊕, ⊖_gH, ⊙}.

Lemma 2.2. For all A, B ∈ I(R),

‖A‖_{I(R)} − ‖B‖_{I(R)} ≤ ‖A ⊖_gH B‖_{I(R)}.   (2.2)

Proof. See Appendix B.

Lemma 2.3. For all A, B, C ∈ I(R),

‖A ⊖_gH B‖_{I(R)} ≤ ‖(A ⊖_gH C) ⊕ (C ⊖_gH B)‖_{I(R)}.   (2.3)

Proof. See Appendix C.

Remark 2.3.1. It is noteworthy that although ⊕ is associative in I(R), for two intervals A and C in I(R), the interval ((A ⊖_gH C) ⊕ C) is not always equal to A. For instance, consider A = [6, 9] and C = [3, 7]. Then, (A ⊖_gH C) ⊕ C = [2, 3] ⊕ [3, 7] = [5, 10] ≠ A. Thus, Lemma 2.3 is not an obvious property of ‖·‖_{I(R)} on the elements of I(R).

Corollary 2.1. For all A, B ∈ I(R),

‖A ⊖_gH B‖_{I(R)} ≤ ‖A‖_{I(R)} + ‖B‖_{I(R)}.   (2.4)

Proof. In Lemma 2.3, by taking C = 0, we get ‖A ⊖_gH B‖_{I(R)} ≤ ‖A ⊖_gH 0‖_{I(R)} + ‖0 ⊖_gH B‖_{I(R)} = ‖A‖_{I(R)} + ‖B‖_{I(R)}.
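As a quick numerical companion to Definition 2.2, Definition 2.3, Lemma 2.2 and Remark 2.3.1, the following self-contained sketch (our own illustration, not from the paper; intervals are represented as (lower, upper) pairs) checks dominance through endpoints and the norm inequality.

```python
# Illustrative check (not from the paper) of dominance (Definition 2.2),
# the norm of Definition 2.3, Lemma 2.2 and Remark 2.3.1.

def gh_sub(A, B):
    d1, d2 = A[0] - B[0], A[1] - B[1]
    return (min(d1, d2), max(d1, d2))

def add(A, B):
    return (A[0] + B[0], A[1] + B[1])

def norm(A):                       # ||A|| = max{|lower|, |upper|}
    return max(abs(A[0]), abs(A[1]))

def dominates(A, B):               # A ≼ B iff both endpoints of A are <= those of B
    return A[0] <= B[0] and A[1] <= B[1]

A, B = (1.0, 3.0), (2.0, 5.0)
# Lemma 2.1(i): A ≼ B  iff  A ⊖_gH B ≼ 0
print(dominates(A, B), dominates(gh_sub(A, B), (0.0, 0.0)))   # True True

# Lemma 2.2: ||A|| - ||B|| <= ||A ⊖_gH B||
print(norm(A) - norm(B) <= norm(gh_sub(A, B)))                # True

# Remark 2.3.1: (A ⊖_gH C) ⊕ C need not equal A
A2, C = (6.0, 9.0), (3.0, 7.0)
print(add(gh_sub(A2, C), C))                                  # (5.0, 10.0), not (6, 9)
```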

2.2. Interval-valued Functions

The function F_{C_k^v} : X → I(R), where X is a nonempty subset of Rⁿ, is known as an IVF that depends on k intervals in the interval vector C_k^v = (C1, C2, ..., Ck)ᵀ, where Cj = [c̲j, c̄j] ∈ I(R) for j = 1, 2, ..., k. Parametrically, the vector C_k^v is described by the set

{c(t) | c(t) = (c1(t1), c2(t2), ..., ck(tk))ᵀ, cj(tj) = c̲j + tj(c̄j − c̲j), t = (t1, t2, ..., tk)ᵀ, 0 ≤ tj ≤ 1, j = 1, 2, ..., k}.

Thus, the function F_{C_k^v} can be presented as a bunch of real-valued functions f_{c(t)}'s, i.e., for all x ∈ X,

F_{C_k^v}(x) = {f_{c(t)}(x) | f_{c(t)} : X → R, c(t) ∈ C_k^v, t ∈ [0, 1]^k}.

The function F_{C_k^v} can also be presented in another way. Let

f̲(x) = min_{t ∈ [0,1]^k} f_{c(t)}(x)  and  f̄(x) = max_{t ∈ [0,1]^k} f_{c(t)}(x).

Then, for each argument point x ∈ X, F_{C_k^v} can be presented by

F_{C_k^v}(x) = {f_λ(x) | f_λ = λ f̲ + (1 − λ) f̄, 0 ≤ λ ≤ 1}.

Or, simply, we can write

F_{C_k^v}(x) = [f̲(x), f̄(x)].
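For instance, the two-endpoint representation F(x) = [f̲(x), f̄(x)] can be computed directly from the interval coefficients; the short sketch below (our own illustration, not from the paper) evaluates the IVF F(x1, x2) = C1 ⊙ x1 ⊕ C2 ⊙ (x2 e^{x1}) of Example 3.1 through its endpoint functions, with C1 and C2 chosen arbitrarily.

```python
# A small sketch of evaluating an IVF through its endpoint functions
# f_lower(x) and f_upper(x), for F(x1, x2) = C1 ⊙ x1 ⊕ C2 ⊙ (x2 * e^{x1}).
import math

def scale(C, t):
    """t ⊙ C for a real t and an interval C = (lo, up)."""
    vals = (t * C[0], t * C[1])
    return (min(vals), max(vals))

def add(A, B):
    return (A[0] + B[0], A[1] + B[1])

def F(x1, x2, C1, C2):
    """Returns [f_lower(x), f_upper(x)] as a (lo, up) pair."""
    return add(scale(C1, x1), scale(C2, x2 * math.exp(x1)))

C1, C2 = (1.0, 2.0), (-1.0, 3.0)
print(F(1.0, 2.0, C1, C2))     # endpoints of F at (1, 2)
print(F(-1.0, 0.5, C1, C2))    # endpoints at another point
```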

Unless mentioned otherwise, in the rest of the article, we use F to denote F_{C_k^v} and ‖·‖ for the usual Euclidean norm on Rⁿ.

Definition 2.4. (Monotonic IVF). An IVF F(x) = [f̲(x), f̄(x)] from a nonempty subset X of Rⁿ to I(R) is said to be monotonically increasing if for all x1, x2 ∈ X, x1 ≤ x2 implies F(x1) ≼ F(x2). The function F is said to be monotonically decreasing if for all x1, x2 ∈ X, x1 ≤ x2 implies F(x2) ≼ F(x1).

Remark 2.4.1. It is easy to verify that an IVF F is monotonically increasing (or decreasing) on X if and only if both the real-valued functions f̲ and f̄ are monotonically increasing (or decreasing) on X.

Definition 2.5. (Bounded IVF). An IVF F from a nonempty subset X of Rⁿ to I(R) is said to be bounded below on X if there exists an interval A ∈ I(R) such that A ≼ F(x) for all x ∈ X. The function F is said to be bounded above on X if there exists an interval A′ ∈ I(R) such that F(x) ≼ A′ for all x ∈ X. The function F is said to be bounded on X if it is both bounded below and bounded above.

Remark 2.5.1. It is easy to check that the IVF F is bounded below (or bounded above) on X if and only if both the real-valued functions f̲ and f̄ are bounded below (or bounded above) on X.

Definition 2.6. (Interval-valued convex function [37]). Let X ⊆ Rⁿ be a convex set. An IVF F : X → I(R) is said to be convex on X if

F(λx1 + λ′x2) ≼ λ ⊙ F(x1) ⊕ λ′ ⊙ F(x2)

for all x1, x2 ∈ X and for all λ, λ′ ∈ [0, 1] with λ + λ′ = 1.

Lemma 2.4. (See [37]). Let F(x) = [f̲(x), f̄(x)] be an IVF on a nonempty subset X of Rⁿ. Then, lim_{x→x0} F(x) exists if and only if lim_{x→x0} f̲(x) and lim_{x→x0} f̄(x) exist, and

lim_{x→x0} F(x) = [lim_{x→x0} f̲(x), lim_{x→x0} f̄(x)].
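Returning to Definition 2.6, the convexity inequality can be tested numerically through the endpoint functions; below is a small sketch (our own illustration, not from the paper) that checks it on a grid for the IVF F(x) = [x^2 − 2x + 1, x^2 + 2] used later in Example 3.2, whose endpoint functions are both convex.

```python
# Numerical check (illustrative only) of the convexity inequality
# F(lam*x1 + (1-lam)*x2) ≼ lam ⊙ F(x1) ⊕ (1-lam) ⊙ F(x2) for
# F(x) = [x^2 - 2x + 1, x^2 + 2] on [-2.5, 1.5].

def F(x):
    return (x * x - 2 * x + 1, x * x + 2)

def dominated_by(A, B):            # A ≼ B, checked through endpoints
    return A[0] <= B[0] + 1e-12 and A[1] <= B[1] + 1e-12

def convex_combo(A, B, lam):       # lam ⊙ A ⊕ (1-lam) ⊙ B with lam in [0, 1]
    return (lam * A[0] + (1 - lam) * B[0], lam * A[1] + (1 - lam) * B[1])

xs = [-2.5 + 0.25 * i for i in range(17)]      # grid on [-2.5, 1.5]
lams = [0.1 * j for j in range(11)]
ok = all(
    dominated_by(F(lam * x1 + (1 - lam) * x2), convex_combo(F(x1), F(x2), lam))
    for x1 in xs for x2 in xs for lam in lams
)
print("convexity inequality holds on the grid:", ok)   # True
```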

Definition 2.7. (gH-continuity [13]). Let F be an IVF on a nonempty subset X of Rⁿ. The function F is said to be gH-continuous at x̄ ∈ X if for any h ∈ Rⁿ with x̄ + h ∈ X,

lim_{‖h‖→0} (F(x̄ + h) ⊖_gH F(x̄)) = 0.

Definition 2.8. (gH-Lipschitz continuous IVF). An IVF F : X → I(R) is said to be gH-Lipschitz continuous on X ⊆ Rⁿ if there exists M > 0 such that

‖F(x) ⊖_gH F(y)‖_{I(R)} ≤ M ‖x − y‖ for all x, y ∈ X.

The constant M is called a Lipschitz constant.

Definition 2.9. (Efficient point [13]). Let X be a nonempty subset of Rⁿ and F : Rⁿ → I(R) be an IVF. A point x̄ ∈ X is said to be an efficient point of the IOP

min_{x ∈ X} F(x)   (2.5)

if F(x) ⊀ F(x̄) for all x ∈ X.

3. gH-Directional Derivative of Interval-valued Functions

In this section, we define the gH-directional derivative of IVFs and prove its existence for a convex function. Further, we present an optimality condition for an efficient point of an IOP with IVFs.

Definition 3.1. (gH-directional derivative). Let F be an IVF on a nonempty subset X of Rⁿ. Let x̄ ∈ X and h ∈ Rⁿ. If the limit

lim_{λ→0+} (1/λ) ⊙ (F(x̄ + λh) ⊖_gH F(x̄))

exists, then the limit is said to be the gH-directional derivative of F at x̄ in the direction h, and it is denoted by F_D(x̄)(h).

Example 3.1. For instance, consider the function F(x1, x2) = C1 ⊙ x1 ⊕ C2 ⊙ (x2 e^{x1}), where C1, C2 ∈ I(R). We note that

lim_{λ→0+} (1/λ) ⊙ (F(λh1, λh2) ⊖_gH F(0, 0)) = lim_{λ→0+} (1/λ) ⊙ (λ ⊙ C1 ⊙ h1 ⊕ λ ⊙ C2 ⊙ (h2 e^{λh1})) = C1 ⊙ h1 ⊕ C2 ⊙ h2.

Thus, the gH-directional derivative of F at (0, 0) in the direction (h1, h2) exists and F_D(0, 0)(h) = C1 ⊙ h1 ⊕ C2 ⊙ h2.

Note 4. According to Bao [4], the gH-directional derivative of F at x̄ in the direction h exists if and only if both the limits

lim_{λ→0+} (1/λ) ⊙ (F(x̄ + λh) ⊖_gH F(x̄))  and  lim_{λ→0−} (1/λ) ⊙ (F(x̄) ⊖_gH F(x̄ − λh))

exist and are equal. However, we note that

lim_{λ→0+} (1/λ) ⊙ (F(x̄ + λh) ⊖_gH F(x̄)) = lim_{δ→0−} (1/(−δ)) ⊙ (F(x̄ + (−δ)h) ⊖_gH F(x̄)), where λ = −δ,
  = lim_{δ→0−} (1/δ) ⊙ (F(x̄) ⊖_gH F(x̄ − δh)), by Note 1.

Hence, the existence of the limit in Definition 3.1 suffices to check the existence of the gH-directional derivative.

Lemma 3.1. Let Y be a linear subspace of Rⁿ and F : Y → I(R) be a convex function on Y. Then for any x̄ and h in Y, the function Φ : R⁺\{0} → I(R), defined by

Φ(λ) = (1/λ) ⊙ (F(x̄ + λh) ⊖_gH F(x̄)) for all λ > 0,

is monotonically increasing.

Proof. As F is a convex function, for any 0 < s ≤ t, with λ1 = s/t and λ2 = (t − s)/t, we have

F(x̄ + sh) ⊖_gH F(x̄) = F(λ1(x̄ + th) + λ2x̄) ⊖_gH F(x̄)
  ≼ (λ1 ⊙ F(z) ⊕ λ2 ⊙ F(x̄)) ⊖_gH F(x̄), where z = x̄ + th
  = [min{λ1f̲(z) + λ2f̲(x̄) − f̲(x̄), λ1f̄(z) + λ2f̄(x̄) − f̄(x̄)}, max{λ1f̲(z) + λ2f̲(x̄) − f̲(x̄), λ1f̄(z) + λ2f̄(x̄) − f̄(x̄)}]
  = [min{λ1f̲(z) − λ1f̲(x̄), λ1f̄(z) − λ1f̄(x̄)}, max{λ1f̲(z) − λ1f̲(x̄), λ1f̄(z) − λ1f̄(x̄)}]
  = λ1 ⊙ (F(z) ⊖_gH F(x̄)).

Therefore,

(1/s) ⊙ (F(x̄ + sh) ⊖_gH F(x̄)) ≼ (1/t) ⊙ (F(x̄ + th) ⊖_gH F(x̄)).

Consequently, Φ(s) ≼ Φ(t). Thus, Φ is a monotonically increasing function.

Lemma 3.2. Let Y ⊆ Rⁿ be a linear subspace of Rⁿ and F : Y → I(R) be a convex function on Y. Then, for each x̄ and h in Y,

F(x̄) ⊖_gH F(x̄ − h) ≼ (1/λ) ⊙ (F(x̄ + λh) ⊖_gH F(x̄)) for all λ > 0.

Proof. Due to the convexity of F on Y, for any x̄, h in Y and λ > 0, we have

F(x̄) = F((1/(1 + λ))(x̄ + λh) + (λ/(1 + λ))(x̄ − h)) ≼ (1/(1 + λ)) ⊙ F(x̄ + λh) ⊕ (λ/(1 + λ)) ⊙ F(x̄ − h).

This implies

[(1 + λ)f̲(x̄), (1 + λ)f̄(x̄)] ≼ [f̲(z) + λf̲(y), f̄(z) + λf̄(y)], where z = x̄ + λh and y = x̄ − h.

Thus, we get

(1 + λ)f̲(x̄) ≤ f̲(z) + λf̲(y), or, f̲(x̄) − f̲(y) ≤ (1/λ)(f̲(z) − f̲(x̄)).

Similarly,

f̄(x̄) − f̄(y) ≤ (1/λ)(f̄(z) − f̄(x̄)).

Hence, in view of the last two inequalities, we obtain

[min{f̲(x̄) − f̲(y), f̄(x̄) − f̄(y)}, max{f̲(x̄) − f̲(y), f̄(x̄) − f̄(y)}] ≼ (1/λ) ⊙ [min{f̲(z) − f̲(x̄), f̄(z) − f̄(x̄)}, max{f̲(z) − f̲(x̄), f̄(z) − f̄(x̄)}].

Thus, we get

F(x̄) ⊖_gH F(y) ≼ (1/λ) ⊙ (F(z) ⊖_gH F(x̄)).

Hence,

F(x̄) ⊖_gH F(x̄ − h) ≼ (1/λ) ⊙ (F(x̄ + λh) ⊖_gH F(x̄)) for all λ > 0.
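The monotonicity asserted in Lemma 3.1 and the lower bound of Lemma 3.2 can be observed numerically; the following sketch (an illustration under our own naming, not from the paper) evaluates Φ(λ) for the convex IVF F(x) = [x^2 − 2x + 1, x^2 + 2] at a few shrinking values of λ.

```python
# Illustrative evaluation of the difference quotient
# Phi(lam) = (1/lam) ⊙ (F(xbar + lam*h) ⊖_gH F(xbar))
# for the convex IVF F(x) = [x^2 - 2x + 1, x^2 + 2].

def F(x):
    return (x * x - 2 * x + 1, x * x + 2)

def gh_sub(A, B):
    d1, d2 = A[0] - B[0], A[1] - B[1]
    return (min(d1, d2), max(d1, d2))

def scale(A, t):
    vals = (t * A[0], t * A[1])
    return (min(vals), max(vals))

def phi(xbar, h, lam):
    return scale(gh_sub(F(xbar + lam * h), F(xbar)), 1.0 / lam)

xbar, h = 0.5, 1.0
for lam in (1.0, 0.5, 0.1, 0.01):
    print(lam, phi(xbar, h, lam))
# As lam decreases, Phi(lam) decreases (it is monotonically increasing in lam,
# Lemma 3.1) and stays above F(xbar) ⊖_gH F(xbar - h) (Lemma 3.2); the limit is
# the gH-directional derivative F_D(xbar)(h) = [2h(xbar - 1), 2h*xbar] = [-1, 1] here.
```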

Theorem 3.1. Let Y be a real linear subspace of Rⁿ and F : Y → I(R) be a convex function on Y. Then, at any x̄ ∈ Y, the gH-directional derivative F_D(x̄)(h) exists for every direction h ∈ Y.

Proof. Let x̄ and h be two elements of Y. Define a function Φ : R⁺\{0} → I(R) by

Φ(λ) = [φ̲(λ), φ̄(λ)] = (1/λ) ⊙ (F(x̄ + λh) ⊖_gH F(x̄)).

Therefore, for all λ > 0, we obtain

[φ̲(λ), φ̄(λ)] = (1/λ) ⊙ [min{f̲(x̄ + λh) − f̲(x̄), f̄(x̄ + λh) − f̄(x̄)}, max{f̲(x̄ + λh) − f̲(x̄), f̄(x̄ + λh) − f̄(x̄)}].

Since F is convex on Y, by Lemma 3.2, we have

F(x̄) ⊖_gH F(x̄ − h) ≼ (1/λ) ⊙ (F(x̄ + λh) ⊖_gH F(x̄)) = Φ(λ) for all λ ∈ R⁺.

Hence, the IVF Φ is bounded below. Further, by Lemma 3.1, Φ is monotonically increasing. Thus, in view of Remarks 2.4.1 and 2.5.1, both the real-valued functions φ̲ and φ̄ are monotonically increasing and bounded below. Therefore, the limits lim_{λ→0+} φ̲(λ) and lim_{λ→0+} φ̄(λ) exist, and by Lemma 2.4 the limit lim_{λ→0+} Φ(λ) exists. Hence, the function F has a gH-directional derivative at x̄ ∈ Y in the direction h ∈ Y.

Theorem 3.2. (Characterization of efficient points). Let X be a nonempty subset of Rⁿ, F : X → I(R) be an IVF, and x̄ ∈ X be an efficient point of the IOP (2.5). If the function F has a gH-directional derivative at x̄ in the direction x − x̄ for any x ∈ X, then

F_D(x̄)(x − x̄) ⊀ 0 for all x ∈ X.   (3.1)

The converse is true when X is convex and F is convex on X.

Proof. Let x̄ ∈ X be an efficient point of the IVF F. For any point x ∈ X, the gH-directional derivative of F at x̄ in the direction x − x̄ is given by

F_D(x̄)(x − x̄) := lim_{λ→0+} (1/λ) ⊙ (F(x̄ + λ(x − x̄)) ⊖_gH F(x̄)).

Since the point x̄ is an efficient point of the function F, for any x ∈ X and λ > 0 with x̄ + λ(x − x̄) ∈ X, we get

F(x̄ + λ(x − x̄)) ⊀ F(x̄)
or, F(x̄ + λ(x − x̄)) ⊖_gH F(x̄) ⊀ 0, by Lemma 2.1
or, lim_{λ→0+} (1/λ) ⊙ (F(x̄ + λ(x − x̄)) ⊖_gH F(x̄)) ⊀ 0
or, F_D(x̄)(x − x̄) ⊀ 0.

To prove the latter part, we assume that X is convex and the function F is convex on X. Then, by Theorem 3.1, F has a gH-directional derivative at x̄ ∈ X in every direction x − x̄, where x ∈ X. Let F_D(x̄)(x − x̄) ⊀ 0 for all x ∈ X. If possible, let x̄ not be an efficient point of F. Then there exists at least one x′ ∈ X such that

F(x′) ≺ F(x̄).

Therefore, for any λ ∈ (0, 1] we have

λ ⊙ F(x′) ≺ λ ⊙ F(x̄)
or, λ ⊙ F(x′) ⊕ λ′ ⊙ F(x̄) ≺ λ ⊙ F(x̄) ⊕ λ′ ⊙ F(x̄), where λ′ = 1 − λ
or, λ ⊙ F(x′) ⊕ λ′ ⊙ F(x̄) ≺ (λ + λ′) ⊙ F(x̄) = F(x̄).

Due to the convexity of F on X, we have

F(x̄ + λ(x′ − x̄)) = F(λx′ + λ′x̄) ≼ λ ⊙ F(x′) ⊕ λ′ ⊙ F(x̄) ≺ F(x̄)
or, F(x̄ + λ(x′ − x̄)) ⊖_gH F(x̄) ≺ 0
or, lim_{λ→0+} (1/λ) ⊙ (F(x̄ + λ(x′ − x̄)) ⊖_gH F(x̄)) ≼ 0
or, F_D(x̄)(x′ − x̄) ≼ 0.

This clearly contradicts the assumption that F_D(x̄)(x − x̄) ⊀ 0 for all x ∈ X. Hence, x̄ is an efficient point of F.

Example 3.2. Consider the IOP

min_{x ∈ [−5/2, 3/2]} F(x) = [x^2 − 2x + 1, x^2 + 2].   (3.2)

The gH-directional derivative of F at a point x ∈ [−5/2, 3/2] in a direction h ∈ R is

F_D(x)(h) := lim_{λ→0+} (1/λ) ⊙ (F(x + λh) ⊖_gH F(x))
  = lim_{λ→0+} (1/λ) ⊙ ([(x + λh)^2 − 2(x + λh) + 1, (x + λh)^2 + 2] ⊖_gH [x^2 − 2x + 1, x^2 + 2])
  = [2h(x − 1), 2hx] if h ≥ 0, and [2hx, 2h(x − 1)] otherwise.

• Case 1. Let x ∈ [0, 1]. In this case, 2hx ≥ 0 for h ≥ 0 and 2h(x − 1) ≥ 0 for h ≤ 0. Hence, F_D(x)(h) ⊀ 0 for all h ∈ R.
• Case 2. Let x ∈ (1, 3/2]. Then, for h < 0 we notice that F_D(x)(h) ≺ 0.
• Case 3. Let x ∈ [−5/2, 0). Then, for h > 0 we see that F_D(x)(h) ≺ 0.

Thus, the relation (3.1) of Theorem 3.2 holds only for x ∈ [0, 1]. In Figure 1, the objective function F is depicted by the shaded region. From Figure 1, we also see that each x ∈ [0, 1] is an efficient point of the IOP (3.2).


Figure 1: The objective function F(x) of Example 3.2
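The case analysis of Example 3.2 can also be reproduced numerically: the sketch below (our own illustrative code, with hypothetical names, not part of the paper) approximates F_D(x)(h) by the difference quotient with a small λ and flags the points x for which the criterion of Example 3.2 (nonnegative upper endpoint) holds for the sampled directions, recovering the efficient interval [0, 1] of the IOP (3.2) in line with Theorem 3.2.

```python
# Illustrative numerical check of the efficiency condition of Theorem 3.2 for
# Example 3.2: F(x) = [x^2 - 2x + 1, x^2 + 2] on [-2.5, 1.5].

def F(x):
    return (x * x - 2 * x + 1, x * x + 2)

def gh_sub(A, B):
    d1, d2 = A[0] - B[0], A[1] - B[1]
    return (min(d1, d2), max(d1, d2))

def directional(x, h, lam=1e-6):
    """Difference-quotient approximation of F_D(x)(h)."""
    D = gh_sub(F(x + lam * h), F(x))
    return (D[0] / lam, D[1] / lam)

def upper_endpoint_nonnegative(A, tol=1e-9):
    """Criterion used in Example 3.2 for F_D(x)(h) ⊀ 0: upper endpoint >= 0."""
    return A[1] >= -tol

xs = [round(-2.5 + 0.1 * i, 2) for i in range(41)]       # grid on [-2.5, 1.5]
hs = [-1.0, -0.5, 0.5, 1.0]
candidates = [x for x in xs
              if all(upper_endpoint_nonnegative(directional(x, h)) for h in hs)]
print(candidates)        # approximately the grid points of the interval [0, 1]
```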

4. gH-Gâteaux Derivative of Interval-valued Functions

In this section, we define the linearity concept and the gH-Gâteaux derivative for IVFs.

Definition 4.1. (Linear IVF). Let Y be a linear subspace of Rⁿ. The function F : Y → I(R) is said to be linear if
(i) F(λx) = λ ⊙ F(x) for all x ∈ Y and for all λ ∈ R, and
(ii) for all x, y ∈ Y, either F(x) ⊕ F(y) = F(x + y) or none of F(x) ⊕ F(y) and F(x + y) dominates the other.

Note 5. Let F : Y → I(R) be a linear IVF. If F(x) = [f̲(x), f̄(x)], then f̲(λx) = λf̄(x) and f̄(λx) = λf̲(x) for λ < 0.

Lemma 4.1. If an IVF F : Y → I(R) on a linear subspace Y of Rⁿ is linear, then F(x) ⊀ 0 ⟹ 0 ⊀ F(−x).

Proof. Let F(x) ⊀ 0. Then,

[f̲(x), f̄(x)] ⊀ 0
or, f̄(x) > 0
or, −f̄(x) < 0
or, f̲(−x) < 0, by Note 5
or, 0 ⊀ [f̲(−x), f̄(−x)]
or, 0 ⊀ F(−x).
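Condition (ii) of Definition 4.1 can be illustrated numerically. The sketch below (our own illustrative code, not from the paper) takes F(x) = x ⊙ C with C = [2, 3], the function mentioned again in the concluding section, and shows that F(x) ⊕ F(y) either equals F(x + y) (same signs) or is not comparable with it (opposite signs), while condition (i) holds for every scalar.

```python
# Illustrative check (not from the paper) of Definition 4.1 for F(x) = x ⊙ C
# with C = [2, 3], cf. the remark on F(2 - 1) in the concluding section.

C = (2.0, 3.0)

def scale(A, t):                       # t ⊙ A
    vals = (t * A[0], t * A[1])
    return (min(vals), max(vals))

def add(A, B):                         # A ⊕ B
    return (A[0] + B[0], A[1] + B[1])

def F(x):
    return scale(C, x)

def comparable(A, B):
    """Whether A ≼ B or B ≼ A holds (dominance checked through endpoints)."""
    a_leq_b = A[0] <= B[0] and A[1] <= B[1]
    b_leq_a = B[0] <= A[0] and B[1] <= A[1]
    return a_leq_b or b_leq_a

# Condition (i): F(lambda * x) = lambda ⊙ F(x)
print(F(-2.0 * 1.5) == scale(F(1.5), -2.0))                 # True

# Condition (ii): same signs -> equality; opposite signs -> not comparable
print(F(1.0 + 2.0) == add(F(1.0), F(2.0)))                  # True
print(F(2.0 + (-1.0)), add(F(2.0), F(-1.0)))                # (2.0, 3.0) vs (1.0, 4.0)
print(comparable(F(2.0 + (-1.0)), add(F(2.0), F(-1.0))))    # False: neither dominates
```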


Definition 4.2. (Bounded linear operator). Let X be a normed linear space. A linear IVF L : X → I(R) is said to be a bounded linear operator if there exists K > 0 such that ‖L(h)‖_{I(R)} ≤ K‖h‖_X for all h ∈ X.

Lemma 4.2. Let X be a normed linear space. If the linear IVF L : X → I(R) is gH-continuous at the zero vector of X, then it is a bounded linear operator.

Proof. By the definition of gH-continuity of the IVF L : X → I(R) at the zero vector of X, there exists δ > 0 such that ‖L(h) ⊖_gH L(0)‖_{I(R)} ≤ 1 for all h ∈ X with ‖h‖_X < δ. By Definition 4.1, we note that L(0) = 0. Hence,

‖L(h)‖_{I(R)} ≤ 1 for all h ∈ X with ‖h‖_X < δ.   (4.1)

Thus, for all nonzero h ∈ X, we have

‖L(h)‖_{I(R)} = ‖L((‖h‖_X/δ) ⊙ (δ h/‖h‖_X))‖_{I(R)}
  = (‖h‖_X/δ) ‖L(δ h/‖h‖_X)‖_{I(R)}, by Definition 4.1
  ≤ (1/δ) ‖h‖_X, by inequality (4.1).

Hence, L is a bounded linear operator.

Lemma 4.3. Let the IVF F : R² → I(R) be defined by F(x1, x2) = x1 ⊙ [a̲, ā] ⊕ x2 ⊙ [b̲, b̄]. Then, F is a linear IVF.

Proof. See Appendix D.

Note 6. By a straightforward extension of the number of variables, the proof of Lemma 4.3 shows that the IVF F : Rⁿ → I(R) defined by F(x1, x2, ..., xn) = x1 ⊙ [a̲1, ā1] ⊕ x2 ⊙ [a̲2, ā2] ⊕ ... ⊕ xn ⊙ [a̲n, ān] is a linear IVF on Rⁿ.

Definition 4.3. (gH-Gâteaux derivative). Let X be a nonempty open subset of Rⁿ and F be an IVF on X. If at x̄ ∈ X, the limit

F_G(x̄)(h) := lim_{λ→0+} (1/λ) ⊙ (F(x̄ + λh) ⊖_gH F(x̄))

exists for all h ∈ Rⁿ and F_G(x̄) is a gH-continuous linear IVF from Rⁿ to I(R), then F_G(x̄) is said to be the gH-Gâteaux derivative of F at x̄. If F has a gH-Gâteaux derivative at x̄, then F is said to be gH-Gâteaux differentiable at x̄.

Example 4.1. Consider the function F(x1, x2) = C1 ⊙ x1 ⊕ C2 ⊙ (x2^2 e^{x1}) with C1, C2 ∈ I(R). For any (h1, h2) ∈ R²,

F_G(0, 0)(h1, h2) := lim_{λ→0+} (1/λ) ⊙ (F((0, 0) + λ(h1, h2)) ⊖_gH F(0, 0)) = lim_{λ→0+} (1/λ) ⊙ (λ ⊙ C1 ⊙ h1 ⊕ λ^2 ⊙ C2 ⊙ (h2^2 e^{λh1})) = C1 ⊙ h1.

Clearly, F_G(0, 0) is linear and gH-continuous on R². Therefore, F is gH-Gâteaux differentiable at (0, 0).

Remark 4.3.1. From Definitions 3.1 and 4.3 it is evident that if an IVF F on a nonempty open subset X of Rⁿ has a gH-Gâteaux derivative at x̄ ∈ X, then F has a gH-directional derivative at x̄ in every direction h ∈ Rⁿ. However, the converse is not true. For instance, consider the IVF

F(x1, x2) = C1 ⊙ x1 ⊕ C2 ⊙ (x2 e^{x1/x2}) if x2 ≠ 0, and 0 otherwise,

where C1, C2 ∈ I(R). The gH-directional derivative of F at (0, 0) ∈ R² in the direction (h1, h2) ∈ R² is

F_D(0, 0)(h1, h2) := lim_{λ→0+} (1/λ) ⊙ (F((0, 0) + λ(h1, h2)) ⊖_gH F(0, 0)) = lim_{λ→0+} (1/λ) ⊙ F(λh1, λh2) = C1 ⊙ h1 ⊕ C2 ⊙ (h2 e^{h1/h2}) if h2 ≠ 0, and 0 otherwise.

Hence, F has a gH-directional derivative at (0, 0) in every direction (h1, h2). But F_D(0, 0) is not gH-continuous, and hence F has no gH-Gâteaux derivative at (0, 0).

Theorem 4.1. Let X be a nonempty open convex subset of Rⁿ and let the function F : X → I(R) have a gH-Gâteaux derivative at every x̄ ∈ X. If the function F is convex on X, then F(y) ⊀ F_G(x)(y − x) ⊕ F(x) for all x, y ∈ X.

Proof. Let the function F be convex on X. Then, for any x, y ∈ X and λ, λ′ ∈ (0, 1] with λ + λ′ = 1, we have

F(x + λ(y − x)) = F(λy + λ′x) ≼ λ ⊙ F(y) ⊕ λ′ ⊙ F(x).

Consequently,

F(x + λ(y − x)) ⊖_gH F(x)
  ≼ (λ ⊙ F(y) ⊕ λ′ ⊙ F(x)) ⊖_gH F(x)
  = [λf̲(y) + λ′f̲(x), λf̄(y) + λ′f̄(x)] ⊖_gH [f̲(x), f̄(x)]
  = [min{λf̲(y) + λ′f̲(x) − f̲(x), λf̄(y) + λ′f̄(x) − f̄(x)}, max{λf̲(y) + λ′f̲(x) − f̲(x), λf̄(y) + λ′f̄(x) − f̄(x)}]
  = [min{λf̲(y) − λf̲(x), λf̄(y) − λf̄(x)}, max{λf̲(y) − λf̲(x), λf̄(y) − λf̄(x)}]
  = λ ⊙ [min{f̲(y) − f̲(x), f̄(y) − f̄(x)}, max{f̲(y) − f̲(x), f̄(y) − f̄(x)}]
  = λ ⊙ (F(y) ⊖_gH F(x)),

which implies

(1/λ) ⊙ (F(x + λ(y − x)) ⊖_gH F(x)) ≼ F(y) ⊖_gH F(x).

As λ → 0+, we obtain

F_G(x)(y − x) ≼ F(y) ⊖_gH F(x) for all x, y ∈ X.   (4.2)

If possible, let

F(y′) ≺ F(x′) ⊕ F_G(x′)(y′ − x′) for some x′, y′ ∈ X.

Therefore,

F(y′) ⊖_gH F(x′) ≺ F_G(x′)(y′ − x′),

which contradicts (4.2). Hence,

F(y) ⊀ F_G(x)(y − x) ⊕ F(x) for all x, y ∈ X.

Theorem 4.2. (Characterization of efficient points). Let F : X → I(R) be an IVF on a nonempty open subset X of Rⁿ and x̄ ∈ X be an efficient point of the IOP (2.5). If the function F has a gH-Gâteaux derivative at x̄ in every direction h ∈ Rⁿ, then 0 ∈ F_G(x̄)(h) for all h ∈ Rⁿ. The converse is true if X is convex and F is convex on X.

Proof. Let the IVF F have a gH-Gâteaux derivative at x̄. According to Theorem 3.2, we have

F_G(x̄)(x − x̄) ⊀ 0 for all x ∈ X.   (4.3)

Let h be an arbitrary point in X such that x = x̄ + h. Then, by equation (4.3), we get F_G(x̄)(h) ⊀ 0. Again, if we take x = x̄ − h, then by equation (4.3) and Lemma 4.1 we obtain 0 ⊀ F_G(x̄)(h). Hence, by the last two relations, for all h ∈ X, we have 0 ∈ F_G(x̄)(h).

Conversely, consider that the function F is convex on X and F has a gH-Gâteaux derivative at x̄ in every direction h ∈ X. Let 0 ∈ F_G(x̄)(h) for all h. Then, F_G(x̄)(h) ⊀ 0 and 0 ⊀ F_G(x̄)(h) for all h. Since F_G(x̄)(h) ⊀ 0 for all h, by taking h = x − x̄, we note that

F_G(x̄)(x − x̄) ⊀ 0 for all x ∈ X
or, F_D(x̄)(x − x̄) ⊀ 0 for all x ∈ X, by Definition 4.3 and Remark 4.3.1.

Hence, by Theorem 3.2, x̄ is an efficient point of the IOP (2.5).

Example 4.2. Let us consider the IOP of Example 3.2. Here,

F_G(x)(h) := lim_{λ→0+} (1/λ) ⊙ (F(x + λh) ⊖_gH F(x)) = [2h(x − 1), 2hx] if h ≥ 0, and [2hx, 2h(x − 1)] otherwise
  = 2h ⊙ [x − 1, x].

Clearly, F_G(x) is linear (see Lemma 4.3) and gH-continuous in h ∈ R for each x ∈ [−5/2, 3/2]. We see that at each x ∈ [0, 1], 0 ∈ F_G(x)(h) for all h ∈ R.

Example 4.3. In this example, we show that the condition '0 ∈ F_G(x̄)(h) for all h ∈ Rⁿ' in Theorem 4.2 is necessary for an efficient point but not sufficient for nonconvex IVFs. Consider F(x) = [1, 2] ⊙ (−x^2) = [−2x^2, −x^2] and the IOP

min_{x ∈ [−9, 0]} F(x).   (4.4)

By a sketch of the graph of F(x), it is clear that F is not a convex IVF. For x̄ = 0 and h ∈ R,

lim_{λ→0+} (1/λ) ⊙ (F(x̄ + λh) ⊖_gH F(x̄)) = lim_{λ→0+} (1/λ) ⊙ F(λh) = 0,

which is linear and gH-continuous. Therefore, F has a gH-Gâteaux derivative at x̄ and F_G(x̄)(h) = 0. Note that

0 ∈ F_G(x̄)(h) for all h ∈ R,

but x̄ is not an efficient point of the IOP (4.4) because F(x) ≺ 0 = F(x̄) for all x ∈ [−9, 0).

5. gH-Fréchet Derivatives of Interval-valued Functions

It is noteworthy that the existence of the gH-Gâteaux derivative does not imply the gH-continuity of an interval-valued function. For instance, consider the following example.

Example 5.1. Let C = [a, b] and F : R² → I(R) be defined by

F(x1, x2) = ((x1^8 x2^2)/(x1^16 + x2^4)) ⊙ C if (x1, x2) ≠ (0, 0), and 0 otherwise.

Then the gH-directional derivative of F at (0, 0) ∈ R² in the direction (h1, h2) ∈ R² is given by

F_D(0, 0)(h) = lim_{λ→0+} (1/λ) ⊙ (F((0, 0) + λ(h1, h2)) ⊖_gH F(0, 0)) = lim_{λ→0+} (1/λ) ⊙ F(λh) = 0.

Clearly, F_D(0, 0) is gH-continuous and linear in h. Therefore, F has a gH-Gâteaux derivative at (0, 0). But

lim_{‖h‖→0} (F(h1, h2) ⊖_gH F(0, 0)) ≠ 0.

Thus, F is not gH-continuous at (0, 0).

In this section, we present a stronger concept of a derivative for an IVF, from which gH-continuity follows. In the following definition, we use the fact that I(R) is a normed quasilinear space (see Note 3).

Definition 5.1. (gH-Fréchet derivative). Let X be a nonempty open subset of Rⁿ and F : X → I(R) be an IVF. For an x̄ ∈ X, if there exists a gH-continuous and linear mapping G : X → I(R) with the property

lim_{‖h‖→0} ‖F(x̄ + h) ⊖_gH F(x̄) ⊖_gH G(h)‖_{I(R)} / ‖h‖ = 0,

then F is said to have a gH-Fréchet derivative at x̄, and we write G = F_F(x̄).

Example 5.2. Consider the IVF F(x1, x2) = C1 ⊙ x1 ⊕ C2 ⊙ x2^2, where C1 and C2 are any two closed intervals. For (0, 0) ∈ R² and any h = (h1, h2) ∈ R², we have

F_G(0, 0)(h) := lim_{λ→0+} (1/λ) ⊙ (F((0, 0) + λ(h1, h2)) ⊖_gH F(0, 0)) = lim_{λ→0+} (1/λ) ⊙ (λ ⊙ C1 ⊙ h1 ⊕ λ^2 ⊙ C2 ⊙ h2^2) = C1 ⊙ h1.

We note that F_G(0, 0) is a gH-continuous and linear mapping (by Note 6) from R² to I(R). Taking G = F_G(0, 0), we see that

lim_{‖h‖→0} ‖F(h) ⊖_gH F(0, 0) ⊖_gH G(h)‖_{I(R)} / ‖h‖
  = lim_{‖h‖→0} ‖C1 ⊙ h1 ⊕ C2 ⊙ h2^2 ⊖_gH 0 ⊖_gH C1 ⊙ h1‖_{I(R)} / √(h1^2 + h2^2)
  = lim_{‖h‖→0} ‖C2 ⊙ h2^2‖_{I(R)} / √(h1^2 + h2^2)
  = lim_{‖h‖→0} h2^2 ‖C2‖_{I(R)} / √(h1^2 + h2^2)
  = lim_{‖h‖→0} k h2^2 / √(h1^2 + h2^2), where k = ‖C2‖_{I(R)}
  = 0.

Hence, the gH-Fréchet derivative of F at (0, 0) is F_F(0, 0)(h1, h2) = G(h1, h2) = C1 ⊙ h1.

Theorem 5.1. Let X be a normed linear subspace of Rⁿ. If the function F : X → I(R) has a gH-Fréchet derivative at x̄ ∈ X, then the function F is gH-continuous at x̄.

Proof. Let F_F(x̄) be the gH-Fréchet derivative of F at x̄. Evidently, F_F(x̄) is a gH-continuous and linear IVF. Hence, by Lemma 4.2, there exists an α > 0 such that ‖F_F(x̄)(h)‖_{I(R)} ≤ α‖h‖ for all h ∈ Rⁿ. As the function F : X → I(R) is gH-Fréchet differentiable at x̄ ∈ X, for ε > 0 and x̄ + h ∈ B(x̄, ε), by Definition 5.1, we have

‖F(x̄ + h) ⊖_gH F(x̄) ⊖_gH F_F(x̄)(h)‖_{I(R)} ≤ ε‖h‖.

Hence, with the help of Lemma 2.2, for all h with x̄ + h ∈ B(x̄, ε),

‖F(x̄ + h) ⊖_gH F(x̄)‖_{I(R)} = ‖F(x̄ + h) ⊖_gH F(x̄)‖_{I(R)} − ‖F_F(x̄)(h)‖_{I(R)} + ‖F_F(x̄)(h)‖_{I(R)}
  ≤ ‖F(x̄ + h) ⊖_gH F(x̄) ⊖_gH F_F(x̄)(h)‖_{I(R)} + ‖F_F(x̄)(h)‖_{I(R)}
  ≤ ε‖h‖ + α‖h‖
  = (ε + α)‖h‖.

Thus, lim_{‖h‖→0} (F(x̄ + h) ⊖_gH F(x̄)) = 0, and hence the function F is gH-continuous at x̄.

19

Proof. Let FF (¯ x) be the gH-Fr´echet derivative of F at x¯ ∈ X . Then, we have lim

kF(¯ x + λh) gH F(¯ x) gH FF (¯ x)(λh)kI(R) kλhk

λ→0+

= 0 for all h ∈ Rn \{0}

1 kF(¯ x + λh) gH F(¯ x) gH FF (¯ x)(λh)kI(R) = 0 for all h ∈ Rn \{0}. λ→0+ λ

or, lim

(5.1)

Since FF (¯ x) is linear, and thus FF (¯ x)(λ h) = λ FF (¯ x)(h), the equation (5.1) gives 1 (F(¯ x + λh) gH F(¯ x) gH λ FF (¯ x)(h)) = 0 for all h ∈ Rn \{0} λ→0+ λ 1 x + λh) gH F(¯ x)] = FF (¯ x)(h) for all h ∈ Rn \{0}. or, lim [F(¯ λ→0+ λ lim

Therefore, F is gH-Gˆateaux differentiable at x¯ and FG (¯ x) = FF (¯ x). Example 5.3. There are some IVFs which are gH-Gˆateaux differentiable but not gH-Fr´echet differentiable. For instance, for C = [a, b], consider the IVF F : R2 → I(R) which is defined by  ( x2 x p 2 2 1 2 if (x1 , x2 ) 6= (0, 0) x + x 4 2 1 2 C, x1 +x2 F(x1 , x2 ) = 0, otherwise. At (0, 0) ∈ R2 , for h = (h1 , h2 ) ∈ R2 , we note that 1 1 (F((0, 0) + λ(h1 , h2 )) gH F(0, 0)) = lim F(λh1 , λh2 ) = 0. λ→0+ λ λ→0+ λ lim

Since the function G(h) = 0 is gH-continuous and linear in h, F has gH-Gˆateaux derivative at (0, 0) and FG (0, 0)(h) = 0 for all h ∈ R2 . If F is Fr´echet differentiable at x¯, then by Theorem 5.2, FF (0, 0)(h) = FG (0, 0)(h) = 0. But we notice that lim

khk→0

= lim

kF(h) gH F(0, 0) gH FG (0, 0)(h)kI(R) khk

kF(h)kI(R)

khk p | h21 + h22 | kCkI(R) p = lim khk→0 (h41 + h22 ) h21 + h22 p | h21 h2 h21 + h22 | kCkI(R) p = lim khk→0 (h41 + h22 ) h21 + h22 | h21 h2 | k = lim , where k = kC2 kI(R) 2 khk→0 h4 1 + h2 khk→0

h21 h2

which does not exist. Thus, F is not gH-Fr´echet differentiable at (0, 0). 20

Theorem 5.3. (Chain rule). Let X and S be nonempty open subsets of Rn and R, respectively. If the function f : X → S is differentiable at x ∈ X and G : S → I(R) has a gH-Fr´echet derivative at f (x). Suppose there exists A ∈ I(R) such that for all k ∈ R, GF (y)(k) = k A. Then, the composite function G ◦ f : X → I(R) has a gH-Fr´echet derivative at x ∈ X , and (G ◦ f )F (x) = ∇f (x) GF (f (x)). Proof. Let F = G ◦ f and y = f (x). By the definitions of differentiability of f at x and gH-Fr´echet derivative of G at y = f (x), we have f (x + h) − f (x) = hT ∇f (x) + khkξ(h), where lim ξ(h) = 0 and

khk→0

G(y + k) gH G(y) = GF (y)(k) ⊕ k E(k), where lim E(k) = 0. k→0

Let φ(h) = hT ∇f (x) + khk ξ(h). To prove the theorem, we show that FF (x) = ∇f (x) GF (y). We note that F(x + h) gH F(x) = = = = = =

G(f (x + h)) gH G(f (x))  G f (x) + hT ∇f (x) + khk ξ(h) gH G(y) G(y + φ(h)) gH G(y) GF (y)(φ(h)) ⊕ φ(h) E(φ(h)) φ(h) A ⊕ φ(h) E(φ(h))  hT ∇f (x) + khk ξ(h) A ⊕ φ(h) E(φ(h))

= hT ∇f (x) A + khk ξ(h) A ⊕ φ(h) E(φ(h)). Therefore,

1

F(x + h) gH F(x) gH hT (∇f (x) A) khk→0 khk   φ(h) E(φ(h) = lim ξ(h) A ⊕ khk→0 khk   T   h ∇f (x) = lim ξ(h) A ⊕ + ξ(h) E(φ(h) khk→0 khk = 0, since lim φ(h) = 0. lim

khk→0

As the IVF hT (∇f (x) A) is linear in h and gH-continuous, the function F is gH-Fr´echet differentiable at x ∈ X and FF (x) = ∇f (x) A = ∇f (x) GF (f (x)). Example 5.4. Let f : R+ × R+ → R+ be defined by f (x1 , x2 ) = sin(x1 ) + ex2 and G : R+ → I(R) be the IVF G(x) = x2 [1, 2] ⊕ x [−1, 0]. Then, G ◦ f : R+ × R+ → I(R) has the expression

(G ◦ f )(x1 , x2 ) = (sin(x1 ) + ex2 )2 [1, 2] ⊕ (sin(x1 ) + ex2 )[−1, 0] 21

and ∇f (¯ x1 , x¯2 ) = (cos(¯ x1 ), ex¯2 ). Let x¯ = f (¯ x1 , x¯2 ) = sin(¯ x1 ) + ex¯2 . Then, for every h ∈ R+ , we have 1 λ→0 λ 1 = lim λ→0 λ 1 = lim λ→0 λ 1 = lim λ→0 λ lim

  G(¯ x + λh) gH G(¯ x)    2 2 (¯ x + λh) [1, 2] ⊕ (¯ x + λh) [−1, 0] gH x¯ [1, 2] ⊕ x¯ [−1, 0]    2  2 2 2 (¯ x + λh) − (¯ x + λh), 2(¯ x + λh) gH x¯ − x¯, 2¯ x

 min{(¯ x + λh)2 − (¯ x + λh) − x¯2 + x¯, 2(¯ x + λh)2 − 2¯ x2 },  max{(¯ x + λh)2 − (¯ x + λh) − x¯2 + x¯, 2(¯ x + λh)2 − 2¯ x2 }     = min lim λ1 (¯ x + λh)2 − (¯ x + λh) − x¯2 + x¯ , lim λ1 2(¯ x + λh)2 − 2¯ x2 ,  λ→0 1 λ→0 1   2 2 x + λh) − (¯ x + λh) − x¯ + x¯ , lim λ 2(¯ x + λh)2 − 2¯ x2 max lim λ (¯ λ→0

λ→0

= [min{2¯ xh − h, 4¯ xh}, max{2¯ xh − h, 4¯ xh}] = [2¯ xh − h, 4¯ xh] = [2¯ x − 1, 4¯ x] h,

which is a linear gH-continuous IVF (see Note 6). Thus, the gH-Gˆateaux derivative of G at x¯ ∈ R is GG (¯ x) := [2¯ x − 1, 4¯ x]. One can check that lim

|h|→0

kG(¯ x + h) gH G(¯ x) gH GG (¯ x)(h)kI(R) |h|

= 0.

Hence, GF (¯ x) = GG (¯ x) = [2¯ x − 1, 4¯ x] and thus GF (f (¯ x1 , x¯2 )) = [2 sin(¯ x1 ) + 2ex¯2 − 1, 4 sin(¯ x1 ) + 4ex¯2 ]. Again, for every (h1 , h2 ) ∈ R+ × R+ we have

 1  G ◦ f ((¯ x1 , x¯2 ) + λ(h1 , h2 )) gH G ◦ f (¯ x1 , x¯2 ) λ→0 λ 1  = lim (sin(¯ x1 + λh1 ) + ex¯2 +λh2 )2 [1, 2] λ→0 λ  ⊕ (sin(¯ x1 + λh1 ) + ex¯2 +λh2 ) [−1, 0] lim

 (sin(¯ x1 ) + e ) [1, 2] ⊕ (sin(¯ x1 ) + e ) [−1, 0] x ¯2 2

x ¯2

gH   1 = lim (sin(¯ x1 + λh1 ) + ex¯2 +λh2 )2 − (sin(¯ x1 + λh1 ) + ex¯2 +λh2 ), λ→0 λ  2(sin(¯ x1 + λh1 ) + ex¯2 +λh2 )2   x1 ) + ex2¯ ), 2(sin(¯ x1 ) + ex¯2 )2 gH (sin(¯ x1 ) + ex¯2 )2 − (sin(¯ 22

 1  x1 + λh1 ) + ex¯2 +λh2 ) min (sin(¯ x1 + λh1 ) + ex¯2 +λh2 )2 − (sin(¯ λ→0 λ x1 + ex¯2 ), − (sin(¯ x1 + ex¯2 )2 + (sin(¯ x1 + ex¯2 ) , 2(sin(¯ x1 + λh1 ) + ex¯2 +λh2 )2 − 2(sin(¯

= lim

x1 + λh1 ) + ex¯2 +λh2 ) max{(sin(¯ x1 + λh1 ) + ex¯2 +λh2 )2 − (sin(¯ x1 ) + ex¯2 ), − (sin(¯ x1 + ex¯2 )2 + (sin(¯  2(sin(¯ x1 + λh1 ) + ex¯2 +λh2 )2 − 2(sin(¯ x1 ) + ex¯2 )   = min 2h1 cos(¯ x1 )(sin(¯ x1 ) + ex¯2 ) + 2h2 ex¯2 (sin(x¯1 ) + ex¯2 ) − (h1 cos(¯ x1 + h2 ex¯2 ), x1 + ex¯2 ) , 4h1 cos(¯ x1 )(sin(¯ x1 ) + ex¯2 ) + 4h2 ex¯2 (sin(¯  max 2h1 cos(¯ x1 )(sin(¯ x1 ) + ex¯2 ) + 2h2 ex¯2 (sin(x¯1 ) + ex¯2 ) − (h1 cos(¯ x1 + h2 ex¯2 ),  4h1 cos(¯ x1 )(sin(¯ x1 ) + ex¯2 ) + 4h2 ex¯2 (sin(¯ x1 + ex¯2 )  = (h1 , h2 )(cos(x¯1 ), ex¯2 )(2 sin(x¯1 ) + 2ex¯2 − 1),  (h1 , h2 )(cos(x¯1 ), ex¯2 )(4 sin(x¯1 ) + 4ex¯2 ) = (h1 , h2 )(cos(x¯1 ), ex¯2 ) [2 sin(x¯1 ) + 2ex¯2 − 1, 4 sin(x¯1 ) + 4ex¯2 ]. Thus, the gH-Fr´echet derivative of G ◦ f at the point (¯ x1 , x¯2 ) is (G ◦ f )F (¯ x1 , x¯2 ) = (cos(x¯1 ), ex¯2 ) [2 sin(x¯1 ) + 2ex¯2 − 1, 4 sin(x¯1 ) + 4ex¯2 ] = GF (f (¯ x1 , x¯2 )) ∇f (¯ x1 , x¯2 ). Theorem 5.4. Let the IVF F : Rn → I(R) be gH-Lipschitz continuous on Rn and gH-Gˆateaux differentiable at x¯ ∈ Rn . Then, F is gH-Fr´echet differentiable at x¯. Proof. Let S be the closed unit sphere of Rn . Since S is totally bounded, for any given ε > 0, there exists a finite set A = {a1 , a2 , · · · , ak } ⊂ S n R such that S ⊂ ai ∈A B(ai , ε), where B(ai , ε) is the open ball of radius ε centered at ai .

Thus, for any s ∈ S there exists ai ∈ A such that ks − ai k < ε. Since F is gH-Lipschitz continuous, there exists M > 0 such that

kF(x) gH F(y)kI(R) ≤ M kx − yk for all x, y ∈ Rn . As F is gH-Gˆateaux differentiable at an x¯, for any i = 1, 2, . . . , k, we have a continuous linear mapping FG (¯ x) such that FG (¯ x)(ai ) := lim

λ→0+

1 (F(¯ x + λai ) gH F(¯ x)) λ

1 (F(¯ x + λai ) gH F(¯ x) gH λ FG (¯ x)(ai )) = 0. λ→0+ λ

i.e., lim

23

Thus, for the chosen ε > 0, for each i = 1, 2, . . . , k, there exists δi > 0 such that for |λ| < δi , kF(¯ x + λai ) gH F(¯ x) gH λ FG (¯ x)(ai )kI(R) < ε |λ|.

(5.2)

Hence, with the help of Lemma 2.3 and Corollary 2.1, for any s ∈ S and |λ| < δ = min{δ1 , δ2 , . . . , δk }, we get ≤ ≤ ≤ = Hence,

kF(¯ x + λs) gH F(¯ x) gH λ FG (¯ x)(s)kI(R) kF(¯ x + λs) gH F(¯ x + λai )kI(R) + kF(¯ x + λai ) gH F(¯ x) gH FG (¯ x)(s)kI(R) + kλ (FG (¯ x)(s) gH FG (¯ x)(ai ))kI(R)  M + M + B |λ|ε, for some B > 0, by (5.2) and Lemma 4.2  2M + B |λ|ε  2M + B kλ sk ε, as ksk = 1. kF(¯ x + λs) gH F(¯ x) gH λ FG (¯ x)(s)kI(R) = 0, λ→0 kλ sk lim

and so F is gH-Fr´echet differentiable at x¯. 6. Conclusion and Future Directions

In this article, majorly four concepts on interval-valued functions have been studied—gHdirectional derivative (Definition 3.1), gH-Gat´eaux derivative (Definition 4.3), gH-Fr´echet derivative (Definition 5.1) and linear IVFs (Definition 4.1). It has been shown that the Gat´eaux and Fr´echet derivative values are helpful to characterize convexity of an IVF (Theorem 4.1) and to characterize the efficient solutions of an IOP (Theorems 3.2 and 4.2) with interval-valued objective functions. One can trivially notice that in the degenerate case, each of the Definitions 3.1, 4.3 and 5.1 reduces to the respective conventional definition for the real-valued functions. For the Definition 4.1, it is noteworthy that the condition ‘none of F(x) ⊕ F(y) and F(x + y) dominates the other’ makes the definition a nontrivial generalization of real-valued linear functions. Exclusion of this condition from Definition 4.1 will make the definition very restrictive and will not allow the expression such as F(x) = x C, x ∈ R, for any C ∈ I(R), to be a linear IVF; for instance, note that F(2 − 1) = [2, 3] 6= [1, 4] = 2 [2, 3] ⊕ (−1) [2, 3]. Our future research will be focused on the notions of gH-subgradient as well as gHsubdifferential for IVFs. Also, we shall attempt to propose a subgradient technique to find out the complete set of efficient points of IOPs. Further, the notion of gH-directional derivative will be rigorously used in solving IOPs. The applications of the proposed gH-Gˆateaux and gH-Fr´echet derivatives in control systems and differential equations in a noisy or uncertain environment will be also shown shortly. Study of a control system or a differential equation in noisy environment eventually appear due to the incomplete information (e.g., demand for a product) or unpredictable changes (e.g., changes in the climate) in the system. The general control problem in a noisy or uncertain 24

environment that we shall consider to study is the following: min J(x, u, t) dx subject to = F(x, u, t), dt x(0) = x0 , where x : [0, ∞) → Rn and u : [0, ∞) → Rm are state and control variables, respectively, and J and F are two gH- Fr´echet differentiable interval-valued functions with respect to the the control variables u. In such a control problem, we shall show the applications of gH-Gˆateaux and gH-Fr´echet derivatives to find the optimal control of the system. Appendix A. Proof of the Lemma 2.1 Proof. As the part (ii) is clearly followed from part (i), we provide the proof only for the part (i). Let A = [a, a ¯] and B = [b, ¯b]. Here we recall the representation (2.1) and     A gH B = min a − b, a − b , max a − b, a − b .

Let A  B. Then, by Definition 2.2, we note that

AB a − a) = a(t) ≤ b(t) = b + t(¯b − b) for all t ∈ [0, 1] =⇒ a + t(¯ =⇒ a(0) ≤ b(0) and a ¯(1) ≤ ¯b(1) =⇒ a ≤ b and a ¯ ≤ ¯b =⇒ a − b ≤ 0 and a ¯ − ¯b ≤ 0 =⇒ A gH B  0. Conversely, let A gH B  0. Then, a − b ≤ 0 and a ¯ − ¯b ≤ 0, i.e., a ≤ b and a ¯ ≤ ¯b. Depending on b < a ¯ or a ¯ ≤ b, we break the analysis into two cases. • Case 1. Let b < a ¯. Then, a ≤ b < a ¯ ≤ ¯b. We prove that a(t) ≤ b(t) for all t ∈ [0, 1]. On contrary, let there exists t0 ∈ [0, 1], such that a(t0 ) > b(t0 ). Since a ≤ b and a ¯ ≤ ¯b, therefore t0 6= 0 and t0 6= 1. Thus, t10 > 1. Note that from a(t0 ) = a + t0 (¯ a − a), we have   1 1 a ¯ = t0 a(t0 ) − t0 − 1 a. 25

Similarly ¯b = As a(t0 ) > b(t0 ),

1 t0

b(t0 ) −

> 1 and a ≤ b, we see that

1 t0

a ¯ =

1 t0

a(t0 ) −



1 t0

 −1 a >



1 t0

1 t0

 − 1 b. b(t0 ) −



1 t0

 − 1 b = ¯b.

This is contradictory to a ¯ ≤ ¯b. Hence, for any t ∈ [0, 1], a(t) ≤ b(t). Thus, A  B. • Case 2. Let a ¯ ≤ b. Since a(t) and b(t) are increasing functions, for any t ∈ [0, 1] we have a(t) ≤ a(1) = a ¯ ≤ b = b(0) ≤ b(t). Hence, A  B and the proof is complete. Appendix B. Proof of the Lemma 2.2 Proof. If possible, let the inequality (2.2) be not true. Therefore, there exists a pair of intervals A = [a, a] and B = [b, b] for which kAkI(R) − kBkI(R) > kA gH BkI(R) . Then,

i.e.,

kAkI(R) > kA gH BkI(R) + kBkI(R) ≥ k(A gH B) ⊕ BkI(R)

kAkI(R) > k(A gH B) ⊕ BkI(R) .

(B.1)

According to the definition of gH-difference, we have

If (B.2) is true, then

either A gH B = [a − b, a − b]

(B.2)

or A gH B = [a − b, a − b].

(B.3)

(A gH B) ⊕ B = [a − b + b, a − b + b] = [a, a] = A i.e., kAkI(R) = k(A gH B) ⊕ BkI(R) , which contradicts (B.1). If (B.3) is true, then (A gH B) ⊕ B = [a − b + b, a − b + b].

We now consider the following two cases.

26

(B.4)

• Case 1. Let kAkI(R) = |a|. Since a ≤ a and |a| ≥ |a|, a must be nonpositive, i.e., a ≤ 0. In view of the relations (B.1) and (B.4), we have | a |> max{| a − b + b |, | a − b + b |}

i.e., | a |> | a − b + b |.

(B.5)

By (B.3), we have a − b ≤ a − b, or, a − b + b ≤ a ≤ 0. Therefore, | a |≤| a − b + b |, which contradicts the relation (B.5).

• Case 2. Let kAkI(R) = |a|. Then, a ≥ 0 and from (B.1) and (B.4) we obtain | a |> max{| a − b + b |, | a − b + b |}. Thus, | a |>| a − b + b | .

(B.6)

According to (B.3) we have a − b ≤ a − b, which implies 0 ≤ a ≤ a − b + b. Therefore, | a |≤| a − b + b |,

which contradicts the relation (B.6). Hence, (2.2) must be true for all A, B ∈ I(R). Appendix C. Proof of the Lemma 2.3

Proof. If possible, let the inequality (2.3) be not true. Therefore, there exist three intervals A = [a, a], B = [b, b] and C = [c, c] such that kA gH BkI(R) > k(A gH C) ⊕ (C gH B)kI(R) .

(C.1)

According to the definition of gH-difference of two intervals, either A gH B = [a − b, a − b] or A gH B = [a − b, a − b]. Similarly, and

either A gH C = [a − c, a − c] or A gH C = [a − c, a − c] C gH B = [c − b, c − b] or C gH B = [c − b, c − b].

Then, one of the following holds true:

27

(C.2)

(a) (A gH C) ⊕ (C gH B) = [a − b, a − b] (b) (A gH C) ⊕ (C gH B) = [a − c + c − b, a − c + c − b] (c) (A gH C) ⊕ (C gH B) = [a − c + c − b, a − c + c − b] (d) (A gH C) ⊕ (C gH B) = [a − b, a − b]. • Case 1. Let A gH B = [a − b, a − b] and kA gH BkI(R) = |a − b|. Then, a − b ≤ 0. (a) If (A gH C)⊕(C gH B) = [a−b, a−b], then k(A gH C) ⊕ (C gH B)kI(R) = |a − b| = kA gH BkI(R) , which is a contradiction to the inequality (C.1). (b) Let (A gH C) ⊕ (C gH B) = [a − c + c − b, a − c + c − b] which has came from the fact that A gH C = [a − c, a − c] and C gH B = [c − b, c − b]. Thus, a − c ≤ a − c and c − b ≤ c − b.

(C.3)

From the inequality (C.1), we obtain |a − b| > |a − c + c − b| and |a − b| > |a − c + c − b|.

(C.4)

Since a − b ≤ 0, irrespective of (a − c + c − b) is nonnegative or nonpositive, we get from the first inequality of (C.4) that a − b = −|a − b| < −|a − c + c − b| ≤ a − c + c − b. Hence, c − b < c − b, which is a contradiction to the inequality (C.3). (c) If (A gH C) ⊕ (C gH B) = [a − c + c − b, a − c + c − b], then proceeding similar to the Case 1b, we arrive at the contradicting inequality a − c < a − c.

(d) If (A gH C) ⊕ (C gH B) = [a − b, a − b], k(A gH C) ⊕ (C gH B)kI(R) = |a − b| = kA gH BkI(R) , which is a contradiction to the inequality (C.1). • Case 2. Let A gH B = [a − b, a − b] and kA gH BkI(R) = |a − b|. Then, a − b ≥ 0. (a) If (A gH C)⊕(C gH B) = [a−b, a−b], then k(A gH C) ⊕ (C gH B)kI(R) = |a − b| = kA gH BkI(R) , which is a contradiction to the inequality (C.1). (b) If (A gH C) ⊕ (C gH B) = [a − c + c − b, a − c + c − b], then proceeding similar to the Case 1b, we arrive at the contradicting inequality c − b > c − b.

28

(c) If (A gH C) ⊕ (C gH B) = [a − c + c − b, a − c + c − b], then then proceeding similar to the Case 1b, we arrive at the contradicting inequality a−c > a−c. (d) If (A gH C) ⊕ (C gH B) = [a − b, a − b], k(A gH C) ⊕ (C gH B)kI(R) = |a − b| = kA gH BkI(R) , which is a contradiction to the inequality (C.1). • Case 3. Let A gH B = [a − b, a − b] and kA gH BkI(R) = |a − b|. All the four subcases for this case are similar to the Case 2. • Case 4. Let A gH B = [a − b, a − b] and kA gH BkI(R) = |a − b|. All the four subcases for this case are similar to the Case 1. We notice that in all the possible subcases of the above four possible cases we arrive at a contradiction to the inequality (C.1). Therefore, for all A, B, C ∈ I(R), kA gH BkI(R) ≤ k(A gH C) ⊕ (C gH B)kI(R) .

Appendix D. Proof of the Lemma 4.3 Proof. First we show that F(λ(x1 , x2 )) = λ F(x1 , x2 ) for all λ ∈ R. • Case 1. Let λ < 0. For this case, there are following four subcases. (a) If x1 < 0 and x2 < 0, then λx1 > 0 and λx2 > 0. Therefore, F(λ(x1 , x2 )) = = = = = =

(λx1 ) [a, a] ⊕ (λx2 ) [b, b] [λx1 a + λx2 b, λx1 a + λx2 b] λ ([x1 a + x2 b, x1 a + x2 b]) λ ([x1 a, x1 a] ⊕ [x2 b, x2 b]) λ (x1 [a, a] ⊕ x2 [b, b]) λ F(x1 , x2 ).

(b) If x1 < 0 and x2 ≥ 0, then λx1 > 0 and λx2 ≤ 0. Thus, F(λ(x1 , x2 )) = = = = = = 29

(λx1 ) [a, a] ⊕ (λx2 ) [b, b] [λx1 a + λx2 b, λx1 a + λx2 b] λ ([x1 a + x2 b, x1 a + x2 b]) λ ([x1 a, x1 a] ⊕ [x2 b, x2 b]) λ (x1 [a, a] ⊕ x2 [b, b]) λ F(x1 , x2 ).

(c) If x1 ≥ 0 and x2 < 0, then λx1 ≤ 0 and λx2 > 0. Therefore, F(λ(x1 , x2 )) = = = = = =

(λx1 ) [a, a] ⊕ (λx2 ) [b, b] [λx1 a + λx2 b, λx1 a + λx2 b] λ ([x1 a + x2 b, x1 a + x2 b]) λ ([x1 a, x1 a] ⊕ [x2 b, x2 b]) λ (x1 [a, a] ⊕ x2 [b, b]) λ F(x1 , x2 ).

(d) If x1 ≥ 0 and x2 ≥ 0, then λx1 ≤ 0 and λx2 ≤ 0. So, F(λ(x1 , x2 )) = = = = = =

(λx1 ) [a, a] ⊕ (λx2 ) [b, b] [λx1 a + λx2 b, λx1 a + λx2 b] λ ([x1 a + x2 b, x1 a + x2 b]) λ ([x1 a, x1 a] ⊕ [x2 b, x2 b]) λ (x1 [a, a] ⊕ x2 [b, b]) λ F(x1 , x2 ).

From all the subcases of Case 1, we have F(λ(x1 , x2 )) = λ F(x1 , x2 ) for every λ < 0. • Case 2. Let λ ≥ 0. (a) If x1 ≥ 0 and x2 ≥ 0, then λx1 ≥ 0 and λx2 ≥ 0. Therefore, F(λ(x1 , x2 )) = = = = =

(λx1 ) [a, a] ⊕ (λx2 ) [b, b] [λx1 a + λx2 b, λx1 a + λx2 b] λ ([x1 a + x2 b, x1 a + x2 b]) λ (x1 [a, a] ⊕ x2 [b, b]) λ F(x1 , x2 ).

(b) If x1 ≥ 0 and x2 < 0, then λx1 ≥ 0 and λx2 ≤ 0. Hence, F(λ(x1 , x2 )) = = = = =

30

(λx1 ) [a, a] ⊕ (λx2 ) [b, b] [λx1 a + λx2 b, λx1 a + λx2 b] λ ([x1 a + x2 b, x1 a + x2 b]) λ (x1 [a, a] ⊕ x2 [b, b]) λ F(x1 , x2 ).

(D.1)

(c) If x1 < 0 and x2 ≥ 0, then λx1 ≤ 0 and λx2 ≥ 0. Thus, F(λ(x1 , x2 )) = = = = =

(λx1 ) [a, a] ⊕ (λx2 ) [b, b] [λx1 a + λx2 b, λx1 a + λx2 b] λ ([x1 a + x2 b, x1 a + x2 b]) λ (x1 [a, a] ⊕ x2 [b, b]) λ F(x1 , x2 ).

(d) If x1 < 0 and x2 < 0, then λx1 ≤ 0 and λx2 ≤ 0. Therefore, F(λ(x1 , x2 )) = = = = =

(λx1 ) [a, a] ⊕ (λx2 ) [b, b] [λx1 a + λx2 b, λx1 a + λx2 b] λ ([x1 a + x2 b, x1 a + x2 b]) λ (x1 [a, a] ⊕ x2 [b, b]) λ F(x1 , x2 ).

Hence, from all the subcases of Case 2, we have F(λ(x1 , x2 )) = λ F(x1 , x2 ) for every λ ≥ 0.

(D.2)

Next, we show that 1. when x1 and x2 have the same sign, and y1 and y2 have the same sign, F((x1 , y1 ) + (x2 , y2 )) = F(x1 , y1 ) ⊕ F(x2 , y2 ), 2. when x1 and x2 have different signs, and y1 and y2 have the same sign, F((x1 , y1 ) + (x2 , y2 )) and F(x1 , y1 ) ⊕ F(x2 , y2 ) are not comparable, 3. when x1 and x2 have the same sign, and y1 and y2 have different signs, F((x1 , y1 ) + (x2 , y2 )) and F(x1 , y1 ) ⊕ F(x2 , y2 ) are not comparable, and 4. when x1 and x2 have different signs, and y1 and y2 have different signs, then F((x1 , y1 ) + (x2 , y2 )) and F(x1 , y1 ) ⊕ F(x2 , y2 ) are not comparable. • Case 1. Let x1 and x2 have the same sign, and y1 and y2 have the same sign. A straightforward calculation proves that F((x1 , y1 )+(x2 , y2 )) = (x1 +x2 ) [a, a]⊕(y1 +y2 ) [b, b] = F(x1 , y1 )⊕F(x2 , y2 ).

31

• Case 2. Suppose that x1 and x2 have different signs, and y1 and y2 have the same sign. Since y1 and y2 have the same sign, evidently, (y1 + y2) [b, b] = y1 [b, b] ⊕ y2 [b, b]. Thus, to prove that F((x1, y1) + (x2, y2)) and F(x1, y1) ⊕ F(x2, y2) are not comparable, it is sufficient to prove that when x1 and x2 have different signs, (x1 + x2) [a, a] and x1 [a, a] ⊕ x2 [a, a] are not comparable (a numerical illustration is given after Case 4 below).

(a) For x1 > 0 and x2 < 0 with x1 + x2 < 0, we have
(x1 + x2) [a, a] = [(x1 + x2)a, (x1 + x2)a]
and
x1 [a, a] ⊕ x2 [a, a] = [x1 a, x1 a] ⊕ [x2 a, x2 a] = [x1 a + x2 a, x1 a + x2 a].
If possible, let (x1 + x2) [a, a] and x1 [a, a] ⊕ x2 [a, a] be comparable. Then,
either (x1 + x2)a > x1 a + x2 a and (x1 + x2)a > x1 a + x2 a,    (D.3)
or (x1 + x2)a < x1 a + x2 a and (x1 + x2)a < x1 a + x2 a.    (D.4)
If (x1 + x2)a > x1 a + x2 a, then
x1 a + x2 a > x1 a + x2 a, or, x1 a > x1 a, or, x1 a + x2 a > x1 a + x2 a, or, x1 a + x2 a > (x1 + x2)a,
which is a contradiction to the second inequality of (D.3). If (x1 + x2)a < x1 a + x2 a, then
x1 a < x1 a, or, x1 a + x2 a < x1 a + x2 a, or, x1 a + x2 a < (x1 + x2)a,
which is a contradiction to the second inequality of (D.4). Hence, neither (D.3) nor (D.4) is true. Thus, (x1 + x2) [a, a] and x1 [a, a] ⊕ x2 [a, a] are not comparable.

(b) For x2 > 0 and x1 < 0 with x1 + x2 < 0, the proof is similar to that of Case 2(a).

(c) For x1 < 0 and x2 > 0 with x1 + x2 > 0, we have
(x1 + x2) [a, a] = [(x1 + x2)a, (x1 + x2)a]
and
x1 [a, a] ⊕ x2 [a, a] = [x1 a, x1 a] ⊕ [x2 a, x2 a] = [x1 a + x2 a, x1 a + x2 a].
If possible, let (x1 + x2) [a, a] and x1 [a, a] ⊕ x2 [a, a] be comparable. Then,
either (x1 + x2)a > x1 a + x2 a and (x1 + x2)a > x1 a + x2 a,    (D.5)
or (x1 + x2)a < x1 a + x2 a and (x1 + x2)a < x1 a + x2 a.    (D.6)
If (x1 + x2)a > x1 a + x2 a, then
x1 a > x1 a, or, x1 a + x2 a > x1 a + x2 a, or, x1 a + x2 a > (x1 + x2)a,
which is a contradiction to the second inequality of (D.5). If (x1 + x2)a < x1 a + x2 a, then
x1 a < x1 a, or, x1 a + x2 a < x1 a + x2 a, or, x1 a + x2 a < (x1 + x2)a,
which is a contradiction to the second inequality of (D.6). Hence, neither (D.5) nor (D.6) is true, and thus (x1 + x2) [a, a] and x1 [a, a] ⊕ x2 [a, a] are not comparable.

(d) For x2 < 0 and x1 > 0 with x1 + x2 > 0, the proof is similar to that of Case 2(c).

• Case 3. Suppose that x1 and x2 have the same sign and y1 and y2 have different signs. By interchanging the roles of x1 and x2 with those of y1 and y2, we note that this case is identical to Case 2. Hence, F((x1, y1) + (x2, y2)) and F(x1, y1) ⊕ F(x2, y2) are not comparable.

• Case 4. Suppose that x1 and x2 have different signs, and y1 and y2 have different signs. For this case, we prove in only the following two subcases that F((x1, y1) + (x2, y2)) and F(x1, y1) ⊕ F(x2, y2) are not comparable; the same conclusion can be proved analogously for all other possible subcases.

(a) Let x1 > 0 and x2 < 0 with x1 + x2 > 0, and y1 < 0 and y2 > 0 with y1 + y2 < 0. Then, we have
(x1 + x2) [a, a] ⊕ (y1 + y2) [b, b] = [(x1 + x2)a + (y1 + y2)b, (x1 + x2)a + (y1 + y2)b]
and
x1 [a, a] ⊕ y1 [b, b] ⊕ x2 [a, a] ⊕ y2 [b, b] = [x1 a + y1 b + x2 a + y2 b, x1 a + y1 b + x2 a + y2 b].

If possible, let (x1 + x2) [a, a] ⊕ (y1 + y2) [b, b] and x1 [a, a] ⊕ y1 [b, b] ⊕ x2 [a, a] ⊕ y2 [b, b] be comparable. Then,
either (x1 + x2)a + (y1 + y2)b < x1 a + y1 b + x2 a + y2 b and (x1 + x2)a + (y1 + y2)b < x1 a + y1 b + x2 a + y2 b,    (D.7)
or (x1 + x2)a + (y1 + y2)b > x1 a + y1 b + x2 a + y2 b and (x1 + x2)a + (y1 + y2)b > x1 a + y1 b + x2 a + y2 b.    (D.8)
If the first inequality of (D.7) holds, i.e., (x1 + x2)a + (y1 + y2)b < x1 a + y1 b + x2 a + y2 b, then
x2 a + y2 b < y2 b + x2 a, or, x1 a + x2 a + y1 b + y2 b < (x1 + x2)a + (y1 + y2)b,
which is a contradiction to the second inequality of (D.7). If the second inequality of (D.8) holds, i.e., (x1 + x2)a + (y1 + y2)b > x1 a + y1 b + x2 a + y2 b, then
x2 a + y2 b > x2 a + y2 b, or, x1 a + y1 b + x2 a + y2 b > (x1 + x2)a + (y1 + y2)b,
which is a contradiction to the first inequality of (D.8). Thus, neither (D.7) nor (D.8) is true, and hence (x1 + x2) [a, a] ⊕ (y1 + y2) [b, b] and x1 [a, a] ⊕ y1 [b, b] ⊕ x2 [a, a] ⊕ y2 [b, b] are not comparable.

(b) Let x1 > 0 and x2 < 0 with x1 + x2 < 0, and y1 < 0 and y2 > 0 with y1 + y2 < 0. Then, we have
(x1 + x2) [a, a] ⊕ (y1 + y2) [b, b] = [(x1 + x2)a + (y1 + y2)b, (x1 + x2)a + (y1 + y2)b]
and
x1 [a, a] ⊕ y1 [b, b] ⊕ x2 [a, a] ⊕ y2 [b, b] = [x1 a + y1 b + x2 a + y2 b, x1 a + y1 b + x2 a + y2 b].

If possible, let (x1 + x2) [a, a] ⊕ (y1 + y2) [b, b] and x1 [a, a] ⊕ y1 [b, b] ⊕ x2 [a, a] ⊕ y2 [b, b] be comparable. Then,
either (x1 + x2)a + (y1 + y2)b < x1 a + y1 b + x2 a + y2 b and (x1 + x2)a + (y1 + y2)b < x1 a + y1 b + x2 a + y2 b,    (D.9)
or (x1 + x2)a + (y1 + y2)b > x1 a + y1 b + x2 a + y2 b and (x1 + x2)a + (y1 + y2)b > x1 a + y1 b + x2 a + y2 b.    (D.10)
If the first inequality of (D.9) holds, i.e., (x1 + x2)a + (y1 + y2)b < x1 a + y1 b + x2 a + y2 b, then
x1 a + y2 b < x1 a + y2 b, or, x1 a + y1 b + x2 a + y2 b < (x1 + x2)a + (y1 + y2)b,
which is a contradiction to the second inequality of (D.9). If the second inequality of (D.10) holds, i.e., (x1 + x2)a + (y1 + y2)b > x1 a + y1 b + x2 a + y2 b, then
x1 a + y2 b > x1 a + y2 b, or, x1 a + y1 b + x2 a + y2 b > (x1 + x2)a + (y1 + y2)b,
which is a contradiction to the first inequality of (D.10). Thus, neither (D.9) nor (D.10) is true, and hence (x1 + x2) [a, a] ⊕ (y1 + y2) [b, b] and x1 [a, a] ⊕ y1 [b, b] ⊕ x2 [a, a] ⊕ y2 [b, b] are not comparable.

From (D.1), (D.2) and the four cases after (D.2), we see that F is a linear IVF.
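As announced in Case 2, a numerical illustration of the non-comparability, together with a random probe of the criterion just established (for each pair of points, F of the sum either equals the ⊕-sum of the values or is not comparable with it), is sketched below. It uses the same assumed interval arithmetic and hypothetical intervals as the earlier sketches; the dominance test encodes the strict componentwise comparison used in Cases 2–4, and a small tolerance guards against floating-point noise.

```python
import random

# Illustrative probe of the case analysis above, not the paper's own computation.
# Assumed arithmetic: an interval is a pair (lo, hi); k*(lo, hi) = (k*lo, k*hi) for
# k >= 0 and (k*hi, k*lo) for k < 0; "(+)" is the endpoint-wise (Minkowski) sum.

TOL = 1e-9  # tolerance against floating-point noise

def scal(k, interval):
    lo, hi = interval
    return (k * lo, k * hi) if k >= 0 else (k * hi, k * lo)

def add(i, j):
    return (i[0] + j[0], i[1] + j[1])

def dominates(i, j):
    # i lies strictly below j in both endpoints -- the comparability used in Cases 2-4
    return i[0] < j[0] - TOL and i[1] < j[1] - TOL

def equal(i, j):
    return abs(i[0] - j[0]) <= TOL and abs(i[1] - j[1]) <= TOL

A, B = (1.0, 3.0), (-2.0, 5.0)   # hypothetical stand-ins for [a, a] and [b, b]

def F(x1, x2):
    return add(scal(x1, A), scal(x2, B))

# Case 2 illustration: x's of opposite sign, y's of the same sign.
(x1, y1), (x2, y2) = (2.0, 1.0), (-1.0, 3.0)
lhs = F(x1 + x2, y1 + y2)            # (-7.0, 23.0)
rhs = add(F(x1, y1), F(x2, y2))      # (-9.0, 25.0)
assert not dominates(lhs, rhs) and not dominates(rhs, lhs)
print(lhs, "and", rhs, "are not comparable")

# Random probe of the conclusion: equality or non-comparability always holds.
random.seed(0)
for _ in range(10_000):
    x1, y1, x2, y2 = (random.uniform(-5.0, 5.0) for _ in range(4))
    lhs = F(x1 + x2, y1 + y2)
    rhs = add(F(x1, y1), F(x2, y2))
    assert equal(lhs, rhs) or (not dominates(lhs, rhs) and not dominates(rhs, lhs))
print("linearity criterion verified on 10000 random samples")
```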

Acknowledgement

The authors sincerely thank the reviewers and editors for their valuable comments, which helped to improve the paper. The authors are also grateful to Prof. H.-C. Wu, Department of Mathematics, National Kaohsiung Normal University, Taiwan, for his comments and suggestions on an initial version of the manuscript. The first and fourth authors gratefully acknowledge the financial support received through the Early Career Research Award (ECR/2015/000467) of the Science & Engineering Research Board, Government of India. The second author is thankful for a research scholarship awarded by the University Grants Commission, Government of India. The third author is grateful for the support of grants APVV-17-0066 and GAČR 18-06915S.



Declaration of Interest Statement

We declare that our paper "Generalized Hukuhara Gâteaux and Fréchet Derivatives of Interval-valued Functions and their Application in Optimization with Interval-valued Functions", submitted to Information Sciences, is the result of our own research.

Radko Mesiar, Debdas Ghosh, Ram Surat Chauhan, Amit Kumar Debnath
