Optimal control for the evolution of deterministic multi-agent systems ✩

Mira Bivas, Marc Quincampoix ∗

Laboratoire de Mathématiques de Bretagne Atlantique (CNRS UMR 6205), Univ Brest, 6, Avenue Victor Le Gorgeu, 29200, Brest, France

Received 17 September 2019; revised 17 December 2019; accepted 27 January 2020

Abstract

We investigate an optimal control problem with a large number of agents (possibly infinitely many). Although the dynamical system (a controlled ordinary differential equation) is of the same type for every agent, each agent may have a different control. So, the multi-agent dynamical system has two levels: a microscopic one, which concerns the control system of each agent, and a macroscopic one, which describes the evolution of the crowd of all agents. The state variable of the macroscopic system is the set of positions of the agents. In the present paper we define and study the evolution of such a global dynamical system, whose solutions are called solution tubes. We also consider a minimization problem associated with the multi-agent system and we give a new characterization of the corresponding value function as the unique solution of a Hamilton-Jacobi-Bellman equation stated on the space of compact subsets of R^d.

© 2020 Elsevier Inc. All rights reserved.

MSC: 34H05; 49L20; 93B05; 34A60; 34G25; 49J52; 26E25; 58C06

Keywords: Control system; Set evolution equation; Differential inclusion; Optimal control; Hamilton-Jacobi equations



This work was supported by the Air Force Office of Scientific Research under award number FA9550-18-1-0254.

* Corresponding author.

E-mail addresses: [email protected] (M. Bivas), [email protected] (M. Quincampoix). https://doi.org/10.1016/j.jde.2020.01.034 0022-0396/© 2020 Elsevier Inc. All rights reserved.


1. Introduction

In classical optimal control, a controller acts on the differential equation

  ẋ(s) = f(x(s), u(s)),  x(0) = x0,   (1)

where f : R^d × U → R^d and U is a subset of a finite-dimensional space, by choosing a measurable function t ↦ u(t) ∈ U, called the control. This can also be modeled by the following differential inclusion

  ẋ(s) ∈ F(x(s)),  x(0) = x0,   (2)

where F(x) := f(x, U). In this paper we consider a system with a large number of controllers (possibly infinitely many), all having the same dynamics of type (1), and we are interested in the global evolution of the crowd of controllers. Such a multi-agent system appears, for example, in flocks of animals, schools of fish or crowds of people. The crowd is modeled by a set, each point of which is the position of an agent. So the state variable of the multi-agent dynamics is the set of all positions of the agents. It is also very natural to suppose that the dynamics of a specific agent depends not only on his/her own position but also on the position of the crowd. Hence, the evolution of the multi-agent system can be represented by two-level dynamics:

• microscopic dynamics – each agent's position at time s is given by x(s), which evolves according to the dynamical system

  ẋ(s) ∈ F(x(s), E(s)),  s ≥ 0,   (3)

where F is a set-valued mapping. It is worth pointing out that the dynamics of a particular agent depends on the state of the crowd at time s, which is E(s) – a subset of R^d.

• macroscopic dynamics – the position of the crowd at time s is given by a set E(s) ⊂ R^d which satisfies

  E(s) ⊂ {x(s) | x(·) satisfies (3)}   (4)

and E(0) = E0 is the position of the agents at the initial time t = 0. One can interpret E0 as follows: at any point x0 ∈ E0 there is an agent located at x0. After the initial moment, the crowd evolves according to (3). More precisely, controlling the crowd consists in selecting a kind of “subflow” of (4). This requires a suitable definition of tubes s ↦ E(s) (cf. Definition 3.1 below), which is one of the main outcomes of the paper. It is worth pointing out that this model allows us to cope with finitely or infinitely many agents.

An important class of controlled multi-agent dynamics concerns the case where each agent chooses his/her own control depending on the time t, on his/her position x ∈ R^d and on the position E ⊂ R^d of the crowd (the other agents). Therefore, we consider the control as a function (x, t, E) ↦ v(x, t, E) ∈ R^d.
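To fix ideas, the following minimal numerical sketch (not part of the paper's constructions) simulates the two-level dynamics (3)-(4) with finitely many agents: each sampled agent takes Euler steps with a velocity that depends on its own position and on the current crowd E(s). The choice F(x, E) = {pull·(barycenter(E) − x)} is a purely illustrative assumption.

```python
import numpy as np

def simulate_crowd(E0, T=1.0, n_steps=100, pull=1.0):
    """Euler simulation of finitely many agents whose velocity depends on
    their own position and on the current crowd E(s) (here only through its
    barycenter) -- a toy instance of the two-level dynamics (3)-(4)."""
    dt = T / n_steps
    E = np.array(E0, dtype=float)         # rows = agent positions in R^d
    history = [E.copy()]
    for _ in range(n_steps):
        center = E.mean(axis=0)           # crowd-dependent quantity
        E = E + dt * pull * (center - E)  # each agent picks its velocity in F(x, E)
        history.append(E.copy())
    return history

if __name__ == "__main__":
    tube = simulate_crowd([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
    print(tube[-1])                       # agent positions at time T
```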


This is a natural generalization of feedback control. Indeed, a classical feedback for (1) is a function (x, t) ↦ u(x, t) or, equivalently, the map v(x, t) := f(x, u(x, t)) ∈ F(x) for the differential inclusion model. So the generalized feedback control satisfies v(x, t, E) ∈ F(x, E) for all t ≥ 0 and for each pair of states (x, E).

In this paper we investigate the control problem for the evolution of the state of the multi-agent system (3). We consider the state variable to be the macroscopic evolution of the system t ↦ E(t), as opposed to the microscopic evolution t ↦ x(t) of a particular agent in the classical case. To this end, we introduce and study a rather general definition of solution tubes, which describe the evolution of the set-valued state variable t ↦ E(t). We associate to this control system an optimal cost of Mayer type

  min g(E(T)),

where g is a Lipschitz mapping and T > 0 is fixed. The above minimum is taken over the set of solution tubes starting from a given initial set E0. We consider the corresponding value function and we prove its Lipschitz continuity and a dynamic programming principle, using the compactness of the set of solution tubes and a Gronwall-Filippov type estimate. Then, we obtain results on “linearization” of solution tubes that enable us to show that the value function is the unique Lipschitz continuous solution of the corresponding Hamilton-Jacobi-Bellman equation stated in terms of generalized directional derivatives. This characterization is one of the two main results of the paper. Its proof is split into two propositions, characterizing the minimal and maximal solutions of the HJB equation.

Multi-agent systems have been investigated from the point of view of the continuity equation [1,10,15,16,14,22,20], which concerns the evolution of a probability measure representing the crowd of agents. In the present work, we do not have any probabilistic information about the density of the population; we are only interested in the set of positions of the agents. Our model is purely deterministic and concerns moving sets. The evolution of sets has been deeply investigated in a theoretical framework [5,21]. Another approach to crowd motion models is presented in [6] and [7]. There, a sweeping process is considered and the solution is unique; in fact, this solution is the solution of a viable classical differential inclusion. Here, we wish to select a specific controlled set evolution among all possible set evolutions in order to minimize a given cost. In the literature, controlled evolution of sets has also been studied in various contexts [9,24–27]. In [17], under suitable assumptions, the existence and uniqueness of a mapping s ↦ E(s) satisfying (4) with an equality instead of an inclusion is shown. The main specificity of our model consists in the previously described two-level dynamics, where the dynamics of each agent is influenced by the whole crowd.

Our main contributions consist, first, in introducing and studying a concept of solution tubes well-adapted to the model and, second, in proving that the corresponding value function for the Mayer problem is the unique Lipschitz continuous solution of a suitable Hamilton-Jacobi-Bellman equation, which is the main result of the paper. In order to obtain the main result, we first study the regularity properties of the value function and show that it is Lipschitz continuous in both variables. Moreover, a dynamic programming principle holds true. Next, we generalize the notion of directional derivatives, in terms of which the main theorem is stated.


The directions for the directional derivatives are taken to be set-valued mappings with bounded graphs. Since the Hamilton-Jacobi-Bellman equation essentially provides a differential characterization of the value function, we make the connection between set-valued mappings with bounded graphs and solution tubes by means of linear approximations of tubes. Finally, we characterize the value function using these results and give a particular example. We would like to point out that our considerations are new even in the case when the dynamics does not depend on the state of the whole crowd (i.e. when the set-valued map F appearing in (3) is independent of the variable E).

The paper is organized as follows: in Section 2 we introduce the basic notation and the background for differential inclusions. In Section 3 we state the main assumptions and the definition of solution tubes; we also show compactness and a Gronwall-Filippov type estimate for them. In Section 4 the regularity properties of the value function and a dynamic programming principle are proven. In Section 5 we generalize the notion of directional derivatives and obtain some results on linear approximations of tubes. These results are used in Section 6 in order to prove the main result of the paper, namely the characterization of the value function as the unique Lipschitz continuous solution of a suitable HJB equation. In Appendix A there is an existence result for solution tubes in a particular case of set-valued feedback.

2. Notation and preliminaries

We denote by Limsup, Liminf and Lim the upper limits, lower limits and limits of sets with respect to the Painlevé-Kuratowski convergence of sets. We refer the reader to Section 1.1 in [3] for the definition and properties of these notions. The classical upper limits, lower limits and limits of sequences are denoted by lim sup, lim inf and lim. The notation Pc(R^d) stands for the set of all nonempty compact subsets of R^d equipped with the Hausdorff distance dH. It is well known that this is a complete metric space (e.g. Theorem II-3 in [13]). The space of continuous functions C([0, T], Pc(R^d)) is equipped with the distance

  d(E(·), Ẽ(·)) := sup_{t∈[0,T]} dH(E(t), Ẽ(t)),  ∀ E(·), Ẽ(·) ∈ C([0, T], Pc(R^d)).
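For finite samples of compact sets, the Hausdorff distance and the metric d above can be evaluated directly. The sketch below is an illustration under the assumption that compact sets are approximated by finite point clouds; it is not a construction from the paper.

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite subsets A, B of R^d,
    given as arrays of shape (m, d) and (n, d)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise |a - b|
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def tube_distance(E1, E2):
    """Discrete analogue of d(E(.), E~(.)): the sup over a common time grid of
    the Hausdorff distances between the sampled sets."""
    return max(hausdorff(A, B) for A, B in zip(E1, E2))
```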

Recall that C([0, T], Pc(R^d)) is again a complete metric space (cf. e.g. Theorem 2.4.3 in [23]). Throughout the paper, we denote by B_X [respectively B̄_X] the open [respectively closed] unit ball centered at the origin of the normed space X. The index may be omitted if there is no ambiguity about the space. We denote by I the identity mapping in any space. A set-valued mapping F from the space X to the subsets of the space Y is denoted by F : X ⇒ Y, while a function is denoted by f : X → Y. We now recall some basic results for the following differential inclusion (cf. e.g. [2–4,18])

  ẋ(s) ∈ F(x(s)) for almost all s ∈ [0, T],  x(0) = x0.   (5)

If F : Rd ⇒ Rd is uniformly bounded, upper-semicontinuous and has compact convex values, then (5) has a solution. Moreover, the solution mapping SF : Rd ⇒ C([0, T ], Rd ) given by SF (x0 ) := {x(·) ∈ C([0, T ], Rd ) | x(·) is a solution of (5) such that x(0) = x0 } is upper-semicontinuous and has compact values. The reachable mapping RF : [0, T ] ×Rd ⇒ Rd given by


  R_F(t, x0) := {x(t) ∈ R^d | x(·) ∈ S_F(x0)}

has compact values. The mapping R_F(·, x0) is Lipschitz continuous and, if moreover F is Lipschitz continuous, then R_F(t, ·) is also Lipschitz continuous.

3. Solution tubes

3.1. Definition and motivations

Consider a set-valued mapping F : R^d × Pc(R^d) ⇒ R^d satisfying the following assumptions, which will be supposed to hold throughout the paper:

(A1) F is Lipschitz continuous with Lipschitz constant L, i.e. dH(F(x1, E1), F(x2, E2)) ≤ L(|x1 − x2| + dH(E1, E2));
(A2) F has compact convex nonempty values;
(A3) F is uniformly bounded by M > 0, i.e. |F(x, E)| := sup{|y| | y ∈ F(x, E)} ≤ M for all (x, E) ∈ R^d × Pc(R^d).

The solution of the macroscopic evolution of the multi-agent system is given by the following

Definition 3.1. Fix t0 ∈ [0, T) and E0 ∈ Pc(R^d). The set-valued mapping E : [t0, T] → Pc(R^d) is a solution tube for F (an F-tube in short) on [t0, T] starting from E0, if there exists a closed set A ⊂ C([t0, T], R^d) such that

(i) E(t0) = E0;
(ii) E(t) = {x(t) | x(·) ∈ A} for all t ∈ [t0, T];
(iii) for all x(·) ∈ A, ẋ(t) ∈ F(x(t), E(t)) for almost all t ∈ [t0, T].

We denote the set of all F-tubes starting from E0 at t0 by T_F^{[t0,T]}(E0).

We now discuss two models that motivate our definition. The first model consists in a tube associated with a controlled dynamics v which is Lipschitz in both the agent state and the system state variables.

Model 3.2. Consider a function v : R^d × [0, T] × Pc(R^d) → R^d that is a selection of F, namely v(x, t, E) ∈ F(x, E) for all (x, t, E) ∈ R^d × [0, T] × Pc(R^d), such that v(x, ·, E) is measurable and v(·, t, ·) is Lipschitz continuous uniformly in t ∈ [0, T]. Fix E0 ∈ Pc(R^d). We define Ē : [0, T] → Pc(R^d) such that for all t ∈ [0, T]


  Ē(t) = {x(t) | there exists x ∈ C([0, T], R^d) such that ẋ(s) = v(x(s), s, Ē(s)) for a.e. s ∈ [0, T], x(0) ∈ E0},

so Ē(·) can be viewed as the reachable mapping of the differential equation ẋ(s) = v(x(s), s, Ē(s)) starting from E0. This model generalizes the classical open-loop measurable control case (given in (1)) by taking v(x, t, E) = f(x, u(t)) and the regular feedback case by taking v(x, t, E) = f(x, u(x, t)). Note that in this model v is the controlled dynamics, not only a control.

The considerations of the paper [12] are closely related to the above model. There, a game described by a controlled differential equation with a randomly distributed initial state under a probability measure μ0 is studied. If we take the set E0 to be the support of the measure μ0, by Model 3.2 we can associate a solution tube to every choice of control (hence controlled dynamics) in the differential equation.

In fact, the existence of the mapping Ē(·) from Model 3.2 is not obvious from its definition. For the reader's convenience, we give a proof of the existence in Appendix A, even in the more general case when v can be set-valued.

In both cases (single- and set-valued v) we now show that the mapping Ē(·) is an F-tube. In order to prove this, we provide an appropriate closed set A ⊂ C([0, T], R^d) satisfying Definition 3.1. Setting G(x, t) := v(x, t, Ē(t)), we observe that G(·, t) is Lipschitz continuous and G(x, ·) is measurable. Then, the solution mapping S_G : E0 ⇒ C([0, T], R^d) of

  ẋ(s) ∈ G(x(s), s), s ∈ [0, T], x(0) ∈ E0

is upper-semicontinuous and compact-valued (cf. for instance Theorem 7.1 and the remark at p. 79 in [18]). Since E0 is compact, we have that A := S_G(E0) is compact. It is straightforward to verify that Ē(·) indeed fulfills the definition of an F-tube with A.

However, in optimal control the regularity of the feedback with respect to x is a very strong assumption which is not suitable for many problems. One way of coping with nonregular feedback controls is to consider their Filippov regularization,¹ which gives our second

Model 3.3. Fix a function v : R^d × [0, T] × Pc(R^d) → R^d which is a selection of F, such that v(·, t, E) and v(x, ·, E) are measurable and v(x, t, ·) is Lipschitz continuous uniformly with respect to (x, t) ∈ R^d × [0, T]. We define the mapping Ē(·) such that for all t ∈ [0, T], Ē(t) is the reachable set of

  ẋ(s) ∈ F_v(x(s), s, Ē(s)), s ∈ [0, T], x(0) ∈ E0

at time t, where F_v is the Filippov regularization of v in the first variable.

¹ We recall that the Filippov regularization of a measurable f : R^d → R^d is

  F_f(x) := ⋂_{δ>0} ⋂_N  co̅ f((x + δB) \ N),

where the set N ⊂ Rd has Lebesgue measure zero. See [19] and [11] for more information on Filippov’s regularization and its applications.
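As a simple illustration (not taken from the paper): in dimension one, for the discontinuous function f(x) = −sgn(x) the above formula gives F_f(x) = {−1} for x > 0, F_f(x) = {1} for x < 0 and F_f(0) = [−1, 1]; the value of f at the single point 0 is irrelevant, since the regularization discards sets N of measure zero.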


Analogously to Model 3.2, the mapping Ē(·) is an F-tube. We refer the reader to the paper [8] for a proof of existence and additional discussion on this example.

Additional motivation for introducing Definition 3.1 is the fact that the set of all F-tubes is compact (as we will see later on). Thus, our concept of solution tubes contains the limits of sequences of the reachable set mappings from Models 3.2 and 3.3 (this is a direct consequence of Theorem 3.7 below).

Examples of F-tubes are present in the literature also in the case when F depends on the state of the whole system. One of the main results in [17] is the existence and uniqueness of a set-valued mapping Ẽ(·) such that

  Ẽ(t) = {x(t) | there exists x ∈ C([0, T], R^d) such that ẋ(s) ∈ F(x(s), Ẽ(s)) for almost all s ∈ [0, t], x(0) ∈ E0}

for all t ∈ [0, T], which can be viewed as the largest F-tube (in the sense that all other F-tubes are contained in it). This is done under the assumptions that the set E0 is compact and the mapping F is Lipschitz continuous, has compact convex values and satisfies a linear growth condition. It is clear that Ẽ(·) is indeed an F-tube. To be more precise, we should note that in [17] the set-valued mapping F also depends on time in a measurable way.

Remark 3.4. There is a connection between solution tubes and the probability measure trajectories introduced in [22] (cf. also [14]). Indeed, let G : R^d ⇒ R^d be a Lipschitz continuous and uniformly bounded set-valued mapping. Let the family of time-dependent probability measures μ = {μt}_{t∈[t0,T]} on R^d be an admissible trajectory starting from the probability measure μ0 on R^d and driven by the family of time-dependent Borel vector-valued measures ν = {ν^t}_{t∈[t0,T]} on R^d, such that

  v_t(x) := (ν^t/μ_t)(x) ∈ G(x) for a.e. t ∈ [t0, T] and for all x ∈ R^d.

Then, there exists a solution tube E(·) ∈ T_G^{[t0,T]}(E0), where E0 is the support of μ0, such that E(t) is the support of μt for all t ∈ [t0, T].

Now we prove the existence of such an E(·) ∈ T_G^{[t0,T]}(E0). By Theorem 8.2.1 in [1], the family μ is an admissible trajectory if and only if there exists a probability measure η on R^d × C([t0, T], R^d) which is concentrated on the pairs (x0, x) ∈ R^d × C([t0, T], R^d) such that x is an absolutely continuous solution of

  ẋ(t) = v_t(x(t)) ∈ G(x(t)) for a.e. t ∈ [t0, T],  x(t0) = x0   (6)

and μt = et #η – the pushforward of η through the evaluation et , where et : (x0 , x) ∈ Rd × C([t0 , T ], Rd ) → x(t) ∈ Rd . Using the above characterization and the fact that the support of a measure is the smallest closed set on which the measure is concentrated, we obtain that the support of η is E0 × A, where A is


a closed subset of the solution set of (6), hence a closed subset of S_G(E0). Let the solution tube E(·) ∈ T_G^{[t0,T]}(E0) be associated to A. Then, due to the properties of the pushforward and of et, we have that for all t ∈ [t0, T]

  E(t) = {x(t) | x(·) ∈ A} = et(supp η) = supp(et#η) = supp μt,

which finishes the proof.

Remark 3.5. Our considerations are also valid for set-valued mappings F depending on time in a measurable way (see, e.g., Sections 5.3 and 6.2 in [18]). We prefer to skip this dependence for easier readability of the proofs.

Remark 3.6. Our approach is also valid for mappings F satisfying the following linear growth condition

  |F(x, E)| ≤ C(1 + |x| + sup{|e| | e ∈ E})

for some constant C > 0. Indeed, for such a mapping a standard application of Gronwall's inequality implies (A3). For the convenience of the reader, we prefer to write the proofs for a uniformly bounded F.

We end this section by noticing that an equivalent definition of solution tube for F could be given in terms of the limit of a suitably defined Euler scheme. Indeed, every tube E(·) ∈ T_F^{[t0,T]}(E0) is the limit of a sequence of the “piecewise affine” tubes En(·) we describe now. Let t0 =: t_0^n < t_1^n < . . . < t_{N_n}^n := T be a subdivision of [0, T]. We define En(·) iteratively by En(t_0^n) := E0 and, for all k ∈ {0, 1, . . . , N_n − 1}, there exists an upper semicontinuous set-valued map Φ_k : En(t_k^n) ⇒ R^d with Φ_k(x) ⊂ F(x, En(t_k^n)) for all x ∈ En(t_k^n), such that

  En(t) := (I + (t − t_k^n)Φ_k)(En(t_k^n)), ∀t ∈ (t_k^n, t_{k+1}^n].

Conversely, every limit of a sequence of such piecewise affine tubes is a solution tube for F. This equivalent definition is an immediate consequence of Propositions 5.3 and 5.4, stated and proved in Section 5. A numerical sketch of this construction is given below.
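The following hedged sketch (an illustration, not the paper's proof device) implements the piecewise affine construction above for a finite sample of E0 with a single-valued choice of Φ_k; the helper towards_center is an assumed example of a selection of F.

```python
import numpy as np

def euler_tube(E0, F_select, T=1.0, n_steps=50):
    """Piecewise affine (Euler) approximation of a solution tube.
    E0: array (m, d) sampling the initial set; F_select(x, E) returns one
    admissible velocity in F(x, E) (a single-valued choice of Phi_k)."""
    dt = T / n_steps
    E = np.asarray(E0, dtype=float)
    tube = [E.copy()]
    for _ in range(n_steps):
        V = np.array([F_select(x, E) for x in E])  # one velocity per sampled point
        E = E + dt * V                             # (I + dt * Phi_k)(E_n(t_k))
        tube.append(E.copy())
    return tube

def towards_center(x, E, M=1.0):
    """Example selection: steer towards the crowd barycenter, truncated to the
    uniform bound M of assumption (A3)."""
    v = E.mean(axis=0) - x
    norm = np.linalg.norm(v)
    return v if norm <= M else M * v / norm

# tube = euler_tube(np.random.rand(20, 2), towards_center)
```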

3.2. Properties of solution tubes

We now study several properties of solution tubes. They are gathered in the following

Theorem 3.7. Fix a compact subset E0 of R^d and suppose that (A1), (A2) and (A3) hold true. Then, T_F^{[t0,T]}(E0) is a nonempty and compact subset of C([t0, T], Pc(R^d)). Moreover, every E(·) in T_F^{[t0,T]}(E0) is Lipschitz continuous with Lipschitz constant M.

Proof. Without loss of generality we assume that t0 = 0. From (A3), every solution of (3) is M-Lipschitz continuous, hence every E(·) ∈ T_F^{[0,T]}(E0) is Lipschitz continuous with Lipschitz constant M.


The nonemptiness of T_F^{[t0,T]}(E0) is a direct consequence of Proposition A.1 in Appendix A. We now show the compactness of T_F^{[t0,T]}(E0) in C([t0, T], Pc(R^d)). Take a sequence {En(·)}_{n=1}^∞ ⊂ T_F^{[0,T]}(E0) and consider An ⊂ C([0, T], R^d) to be the sets associated with En(·) by Definition 3.1. The maps En(·) ∈ C([0, T], Pc(R^d)) are M-Lipschitz continuous and uniformly bounded by C := sup{|e| | e ∈ E0} + MT. Since Pc(E0 + MT B̄) is compact in Pc(R^d) (cf. for instance Theorem 4.18 in [28]), for all t ∈ [0, T] the family {En(t)}_{n=1}^∞ ⊂ E0 + MT B̄ is relatively compact in Pc(R^d). By the Arzelà-Ascoli theorem, the sequence {En(·)} is uniformly convergent (up to a subsequence, relabeled in the same way) to some M-Lipschitz continuous mapping E(·) ∈ C([0, T], Pc(R^d)). We will show that E(·) is an F-tube. Let us define

  A := Limsup_n An = {x(·) ∈ C([0, T], R^d) | there exist a subsequence {An_k}_k of {An}_n and functions xn_k(·) ∈ An_k for all k, such that x(·) = lim_k xn_k(·)}.

Since |ẋn(t)| ≤ M for all t and n, we deduce from the Arzelà-Ascoli theorem that the set A is nonempty. We will check that E(·) satisfies (ii) and (iii) from Definition 3.1. Let us fix t ∈ [0, T]. The uniform convergence of En(·) to E(·) in C([0, T], Pc(R^d)) implies the convergence of En(t) to E(t) with respect to the Hausdorff distance in R^d. Hence, we have that

  E(t) = Lim_n En(t) = Liminf_n En(t) = {x ∈ R^d | x = lim_n xn, xn ∈ En(t)} = {x ∈ R^d | x = lim_n xn(t), xn(·) ∈ An}.

Let y ∈ E(t) be arbitrary. Then, there exists a sequence {yn(·)}_{n=1}^∞, yn(·) ∈ An, such that y = lim_n yn(t). Since the functions yn(·) are absolutely continuous, uniformly bounded by C, and |ẏn(t)| ≤ M for all t ∈ [0, T] and n ∈ N, the sequence {yn(·)}_{n=1}^∞ has a convergent subsequence {yn_k(·)}_{k=1}^∞ by the Arzelà-Ascoli theorem. Its limit y(·) is in the set A, due to the very definition of A. So we have obtained that

  y = lim_n yn(t) = lim_k yn_k(t) = y(t),

therefore E(t) ⊂ {x(t) | x(·) ∈ A} for all t ∈ [0, T]. For the reverse inclusion, let z(·) ∈ A be arbitrary. There exist zn_k(·) ∈ An_k, k ∈ N, such that z(·) = lim_k zn_k(·). In particular, z(t) = lim_k zn_k(t) for all t ∈ [0, T]. Since zn_k(t) ∈ En_k(t), we have that for all t ∈ [0, T]

  z(t) ∈ Limsup_k En_k(t) = E(t).

Since z(·) ∈ A is arbitrary, this finishes the verification of (ii) from Definition 3.1.

Let us now prove (iii) from Definition 3.1. Fix x(·) ∈ A with x(·) = lim_k xn_k(·), xn_k(·) ∈ An_k, as in the definition of A. Take ε > 0. There exists k0 such that

  ‖x(·) − xn_k(·)‖_∞ + dH(E(t), En_k(t)) < ε for all k ≥ k0.


From (A1) we get

  ẋn_k(t) ∈ F(xn_k(t), En_k(t)) ⊂ F(x(t), E(t)) + εLB̄.

Since the set on the above right-hand side is bounded, closed and convex, one can find a subsequence of ẋn_k(·) (again denoted in the same way) converging weakly in L²[0, T] to some z(·) ∈ L²[0, T]. Mazur's lemma implies that z(t) ∈ F(x(t), E(t)) + εLB̄ for almost every t ∈ [0, T]. We have, for all k ∈ N and t ∈ [0, T],

  xn_k(t) = xn_k(0) + ∫₀^T ẋn_k(s) 1_[0,t](s) ds.

By passing to the limit as k → ∞ we get

  x(t) = x(0) + ∫₀^t z(s) ds,

so ẋ = z and thus z(t) ∈ F(x(t), E(t)) + εLB̄ for almost every t ∈ [0, T]. Since ε is arbitrary, we get

  ẋ(t) ∈ F(x(t), E(t)),

which ends the proof. □
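The M-Lipschitz continuity asserted in Theorem 3.7 can be checked numerically on discretized tubes; the sketch below (assumptions as in the toy simulations above, not a construction from the paper) tests dH(E(t_i), E(t_j)) ≤ M·|t_i − t_j| on a time grid.

```python
import numpy as np

def hausdorff(A, B):
    D = np.linalg.norm(np.asarray(A, float)[:, None, :] - np.asarray(B, float)[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def check_lipschitz(tube, dt, M, tol=1e-9):
    """tube: list of (m, d) arrays sampled at times k*dt.
    Returns True if dH(E(t_i), E(t_j)) <= M*|t_i - t_j| on the grid."""
    for i in range(len(tube)):
        for j in range(i + 1, len(tube)):
            if hausdorff(tube[i], tube[j]) > M * dt * (j - i) + tol:
                return False
    return True
```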

The next proposition gives useful estimates for solution tubes.

Proposition 3.8 (Gronwall-Filippov type estimate). Let E0, Ē0 ⊂ R^d be compact and let (A1), (A2) and (A3) hold true. Take E(·) ∈ T_F^{[0,T]}(E0). Then, there exists Ē(·) ∈ T_F^{[0,T]}(Ē0) such that

  dH(E(t), Ē(t)) ≤ e^{2Lt} dH(E0, Ē0) for all t ∈ [0, T].   (7)

Proof. Fix E0, Ē0 ⊂ R^d and E(·) as above. We will construct Ē(·) as the limit of the Cauchy sequence we define below. Denote δ := dH(E0, Ē0). Consider A ⊂ C([0, T], R^d) associated to E(·) by Definition 3.1. We set


  D1 := {z1(·) ∈ C([0, T], R^d) | z1(t) = z0 + ∫₀^t ẋ(s) ds for all t ∈ [0, T], where z0 ∈ Ē0, x ∈ A and x(0) ∈ (z0 + δB) ∩ E0}

and

  E1(t) := {z1(t) | z1(·) ∈ D1} for all t ∈ [0, T].

Observe that D1 is nonempty because (z0 + δB) ∩ E0 is nonempty, due to the compactness of E0 and since δ = dH(E0, Ē0). Clearly,

  dH(E1(t), E(t)) ≤ δ for all t ∈ [0, T].

For n ≥ 2, we set

  Dn := {zn(·) ∈ C([0, T], R^d) | zn(t) = zn−1(0) + ∫₀^t Proj_{F(zn−1(s), En−1(s))} żn−1(s) ds for all t ∈ [0, T], where zn−1(·) ∈ Dn−1}

and

  En(t) := {zn(t) | zn(·) ∈ Dn} for all t ∈ [0, T].

First, we will show that these sets are indeed properly defined. For n = 2, we have that all z1(·) ∈ D1 are absolutely continuous and their derivatives are measurable and uniformly bounded by M (the upper bound of F). Therefore, the mappings E1(·) : [0, T] ⇒ Pc(R^d) and F(z1(·), E1(·)) are Lipschitz continuous. Since F is convex-valued, the mapping s ↦ Proj_{F(z1(s), E1(s))} ż1(s) (which is defined for a.e. s ∈ [0, T]) is single-valued and measurable (cf. Theorem 2, p. 91 in [2]). So, D2 and E2(t), t ∈ [0, T], are well-defined. The case n > 2 is handled by induction, using analogous arguments for the induction step.

Next, we will show that for all n ≥ 2, if zn(·) ∈ Dn, then

  |zn(t) − zn−1(t)| ≤ (2Lt)^{n−1}/(n−1)! · δ for all t ∈ [0, T],   (8)

where zn−1(·) ∈ Dn−1 is the corresponding function to zn(·) from the definition of Dn. Moreover, we will also show that



  dH(En(t), En−1(t)) ≤ (2Lt)^{n−1}/(n−1)! · δ for all t ∈ [0, T].   (9)

We begin by checking these inequalities for n = 2. Let us fix arbitrary t ∈ [0, T ]. Let the function z2 (·) be arbitrary from D2 . Let z1 (·) ∈ D1 be its corresponding function from the definition of D2 . We estimate t |z2 (t) − z1 (t)| ≤

t |˙z2 (s) − z˙ 1 (s)| ds =

0

dist (˙z1 (s), F (z1 (s), E1 (s))) ds 0

From the definition of D1 we have that z˙ 1 (s) = P rojF (x(s),E(s)) x(s), ˙ where x(·) ∈ A is any corresponding function to z1 (·) from the definition of D1 . We also have that |z1 (s) − x(s)| ≤ δ, s ∈ [0, T ], hence t |z2 (t) − z1 (t)| ≤

dH (F (z1 (s), E1 (s)), F (x(s), E(s))) ds 0

t

t (|z1 (s) − x(s)| + dH (E1 (s), E(s))) ds ≤ L

≤L 0

2δ ds = 2Ltδ , 0

which implies dH (E2 (t), E1 (t)) ≤ 2Ltδ . Let us assume that (8) and (9) hold true until n. Fix an arbitrary t ∈ [0, T ]. zn+1 be an arbitrary point in En+1 (t). Then, there exists a function zn+1 (·) ∈ Dn+1 such that zn+1 := zn+1 (t). Let the function zn+1 (·) belong to Dn+1 . Let zn (·) ∈ Dn be its corresponding function from the definition of Dn+1 . We estimate t |zn+1 (t) − zn (t)| ≤

t |˙zn+1 (s) − z˙ n (s)| ds =

0

dist (˙zn (s), F (zn (s), En (s))) ds 0

From the definition of Dn we have that z˙ n (s) = P rojF (zn−1 (s),En−1 (s)) z˙ n−1 (s), where zn−1 (·) ∈ Dn−1 is the corresponding function to zn (·) from the definition of Dn . Hence, using the inductive hypothesis, we obtain that t |zn+1 (t) − zn (t)| ≤

dH (F (zn (s), En (s)), F (zn−1 (s), En−1 (s))) ds 0

t (|zn (s) − zn−1 (s)| + dH (En (s), En−1 (s))) ds

≤L 0


t ≤L

2 0


(2Ls)n−1 (2Lt)n δ ds = δ, (n − 1)! n!

which verifies (8). It implies (9) as well. The inequalities (9) imply that {En (·)}∞ n=1 ⊂ C([0, ¯ ∈ C([0, T ], Pc (Rd )). T ], Pc (Rd )) is a Cauchy sequence, therefore it is convergent to some E(·) Let us fix ε > 0. Then, there exists n0 ∈ N such that ¯ d(En (·), E(·)) < ε for all n ≥ n0 . For arbitrary n ≥ n0 , we estimate ¯ ¯ d(E(·),E(·)) ≤ d(E(·), En (·)) + d(En (·), E(·)) < d(E(·), E1 (·)) + d(E1 (·), E2 (·)) + · · · + d(En−1 (·), En (·)) + ε ≤ δ + 2LT δ + · · · +

(2LT )n−1 + ε ≤ e2LT δ + ε . (n − 1)!

Since ε is arbitrary, we have in fact obtained (7). In order to complete the proof, we need to show ¯ is indeed an F -tube on [0, T ] for E¯ 0 . We will prove it by showing that E(·) ¯ fulfills the that E(·) definition. Let us define the following set D := lim inf Dn = {z(·) ∈ C([0, T ], Rd ) | z(·) = lim zn (·), zn (·) ∈ Dn } . n

n

The set D is closed by definition. We will check that it is nonempty. Indeed, let the sequence xn (·) ∈ Dn be such that t xn (t) = xn−1 (0) +

P rojF (xn−1 (s),En−1 (s)) x˙n−1 (s)ds for all t ∈ [0, T ] 0

for n ≥ 2. The inequalities (8) imply that {xn (·)} is a Cauchy sequence, hence it is convergent to some x(·) ∈ D. Since |x˙n (t)| ≤ M for any xn (·) ∈ Dn , n ∈ N, we have that x(·) ∈ D is absolutely ¯ continuous. It is clear that E(0) = E¯ 0 . Moreover, we have that for all t ∈ [0, T ] ¯ = lim En (t) = lim inf En (t) = {x ∈ Rd | x = lim xn , xn ∈ En (t)} E(t) n

n

n

= {x ∈ R | x = lim xn (t), xn (·) ∈ Dn } d

n

= {x ∈ R | x = x(t), x(·) ∈ D} . d

The last thing we have to check is that for any x(·) ∈ D and for almost all t ∈ [0, T ] ¯ x(t) ˙ ∈ F (x(t), E(t)) . This can be done using the same argument as the one used in the end of the proof of Theorem 3.7. The proof is complete. 2



We now give two easy consequences of the previous result.

Corollary 3.9. Let E0, Ē0 ⊂ R^d be compact and let (A1), (A2) and (A3) hold true. Then,

  δH(T_F^{[0,T]}(E0), T_F^{[0,T]}(Ē0)) ≤ e^{2LT} dH(E0, Ē0),

where δH is the Hausdorff distance between subsets of the complete metric space C([0, T], Pc(R^d)).

Corollary 3.10. Suppose that (A1), (A2) and (A3) hold true. Then, the graph of the mapping T_F^{[0,T]} is compact in the space Pc(R^d) × C([0, T], Pc(R^d)).

4. The value function and its properties

We consider the following optimization problem

  inf{g(E(T)) | E(·) ∈ T_F^{[0,T]}(Ē0)},

where

(A4) the function g : Pc(R^d) → R is Lipschitz continuous with Lipschitz constant K and the set Ē0 ⊂ R^d is compact.

For example, the cost function g can be given by

  g(E) = max{φ(z) | z ∈ E},

where φ : R^d → R is Lipschitz continuous. It is straightforward to check that g is then Lipschitz continuous as well. Other examples of cost functions g are

• g(E) := (1/λ(E)) ∫_E x dλ(x), where λ is the Lebesgue measure;
• g(E) := ∫_E f(x) dλ(x), where f is essentially bounded.

These mappings are Lipschitz continuous if the so-called interior ball condition holds (see Definition 3.3 in [17]). This is not a limitation, since this property is preserved by differential inclusions – cf. Proposition 4.2 in [17] for a proof and further discussion. We now define the value function

  V(t0, E0) := inf{g(E(T)) | E(·) ∈ T_F^{[t0,T]}(E0)}.

Proposition 4.1. Suppose that (A1), (A2), (A3) and (A4) hold true. Then, for every (t0, E0) ∈ [0, T] × Pc(R^d) there exists an optimal F-tube E(·) ∈ T_F^{[t0,T]}(E0) such that V(t0, E0) = g(E(T)). Moreover, the value function V : [0, T] × Pc(R^d) → R is Lipschitz continuous.


Proof. The compactness of TF 0


(E¯ 0 ) implies the compactness of

¯ ) | E(·) ¯ ∈ T [t0 ,T ] (E¯ 0 )} . {E(T F Since the function g is continuous, the infimum in the definition of the value function is attained. We will show the Lipschitz continuity with respect to the second variable. Let us fix t0 ∈ [0, T ] and compact sets E1 , E2 ⊂ Rd . Due to the first part of the proposition there exists [t ,T ] E¯ 2 (·) ∈ TF 0 (E2 ) such that V (t0 , E2 ) = min{g(E2 (T )) | E2 (·) ∈ TF[t0 ,T ] (E2 )} = g(E¯ 2 (T )) . [t ,T ] By Proposition 3.8 we find E¯ 1 (·) ∈ TF 0 (E1 ) such that

d(E¯ 1 (·), E¯ 2 (·)) ≤ e2LT dH (E1 , E2 ) . We estimate V (t0 , E1 ) − V (t0 , E2 ) ≤ g(E¯ 1 (T )) − g(E¯ 2 (T )) ≤ KdH (E¯ 1 (T ), E¯ 2 (T )) ≤ Ke2LT dH (E1 , E2 ) . By interchanging E1 and E2 , we have obtained the Lipschitz continuity of V with respect to the second variable. Next, we will show the Lipschitz continuity with respect to the first variable. Let t1 and t2 be such that 0 < t1 < t2 < T and let the set E0 ⊂ Rd be compact. There exists E1 (·) ∈ TF[t1 ,T ] (E0 ) such that V (t1 , E0 ) = g(E1 (T )) . Since the restriction E1 |[t2 ,T ] (·) belongs to TF[t2 ,T ] (E1 (t2 )), by Proposition 3.8 we find E2 (·) ∈ TF[t2 ,T ] (E0 ) such that dH (E1 (T ), E2 (T )) ≤ e2LT dH (E1 (t2 ), E0 ) ≤ Me2LT (t2 − t1 ) , where the last inequality is due to the Lipschitz continuity of E1(·). We estimate V (t2 , E0 ) − V (t1 , E0 ) ≤ g(E2 (T )) − g(E1 (T )) ≤ KdH (E1 (T ), E2 (T )) ≤ KMe2LT (t2 − t1 ) . For the reverse inequality, take E  (·) ∈ TF[t2 ,T ] (E0 ) such that V (t2 , E0 ) = g(E  (T )). Fix E(·) ∈ ˜ ∈ T [t2 ,T ] (E(t2 )) such that TF[t1 ,T ] (E0 ). By Proposition 3.8, there exists E(·) F ˜ )) ≤ e2LT dH (E0 , E(t2 )) . dH (E  (T ), E(T ˜ 2 ) = E(t2 ), the mapping E  : [t1 , T ] → Pc (Rd ) defined by Since E(t

(10)



 

E (t) =

E(t) if t ∈ [t1 , t2 ) ˜ E(t) if t ∈ [t2 , T ]

is an F -tube on [t1 , T ] starting from E0 . Using (10) and the Lipschitz continuity of E(·) ∈ TF[t1 ,T ] (E0 ), we estimate V (t1 , E0 ) − V (t2 , E0 ) ≤ g(E  (T )) − g(E  (T )) ˜ ), E  (T )) ≤ KdH (E  (T ), E  (T )) = KdH (E(T ≤ Ke2LT dH (E0 , E(t2 )) ≤ KMe2LT (t2 − t1 ) . This completes the proof of the Lipschitz continuity of V .

2

Next, we state a dynamic programming principle result. Proposition 4.2 (Dynamic programming principle). Suppose that (A1), (A2), (A3) and (A4) hold [t ,T ] true. Consider (t0 , E0 ) ∈ [0, T ) × Pc (Rd ). Then, for every E(·) ∈ TF 0 (E0 ) V (t0 , E0 ) ≤ V (t0 + h, E(t0 + h)) for every h ∈ [0, T − t0 ) . ¯ ∈ T [t0 ,T ] (E0 ) such that Moreover, there exists E(·) F ¯ 0 + h)) for every h ∈ [0, T − t0 ) . V (t0 , E0 ) = V (t0 + h, E(t [t ,T ]

¯ ∈T 0 Proof. Due to Proposition 4.1 there exists E(·) F

(E0 ) such that

[t ,T ]

V (t0 , E0 ) = min{g(E(T )) | E(·) ∈ TF 0

¯ )) . (E0 )} = g(E(T

¯ [t0 +h,T ] (·) belongs to T [t0 +h,T ] (E(t ¯ 0 + h)), we have that Since the restriction E| F ¯ 0 + h)) = inf{g(E(T ˜ )) | E(·) ˜ ∈ T [t0 +h,T ] (E(t ¯ 0 + h))} V (t0 + h,E(t F ¯ )) = V (t0 , E0 ) . ≤ g(E(T Because [t ,T ]

inf{V (t0 + h, E(t0 + h)) | E(·) ∈ TF 0

¯ 0 + h)) , (E0 )} ≤ V (t0 + h, E(t

we obtain [t ,T ]

V (t0 , E0 ) ≥ inf{V (t0 + h, E(t0 + h)) | E(·) ∈ TF 0 [t ,T ]

For the reverse inequality, let E(·) ∈ TF 0 [t +h,T ] (E(t0 + h)) such that E1 (·) ∈ TF 0

(E0 )} .

(E0 ) be arbitrary. Due to Proposition 4.1 there exists

V (t0 + h, E(t0 + h)) = g(E1 (T )) .



Since E(t0 + h) = E1 (t0 + h), the mapping E  : [t0 , T ] → Pc (Rd ) defined by  E  (t) =

E(t) E1 (t)

if t ∈ [t0 , t0 + h) if t ∈ [t0 + h, T ]

is an F -tube on [t0 , T ] starting from E0 . Therefore, V (t0 , E0 ) ≤ g(E  (T )) = g(E1 (T )) = V (t0 + h, E(t0 + h)) [t ,T ]

for any E(·) ∈ TF 0

(E0 ). This implies that [t ,T ]

V (t0 , E0 ) ≥ inf{V (t0 + h, E(t0 + h)) | E(·) ∈ TF 0 which finishes the proof.

(E0 )} ,

2
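Before turning to directional derivatives, here is a small, fully discretized illustration of the Mayer cost and of the dynamic programming principle of Proposition 4.2 (all choices below, such as the velocity set, step size and target, are assumptions for the example, not constructions from the paper): at each step the whole sampled set moves with one of finitely many admissible velocities, and the value satisfies V(k, E) = min over the one-step successors of V(k+1, E').

```python
import numpy as np

VELOCITIES = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]), np.array([0.0, 1.0])]
DT, N_STEPS = 0.1, 5
TARGET = np.array([0.5, 0.5])

def g(E):
    """Mayer cost g(E) = max phi(z) over the sampled set, phi = distance to TARGET."""
    return max(np.linalg.norm(x - TARGET) for x in E)

def value(k, E):
    """Backward dynamic programming on the discretized problem:
    V(N_STEPS, E) = g(E), V(k, E) = min over admissible one-step evolutions."""
    E = np.asarray(E, float)
    if k == N_STEPS:
        return g(E)
    return min(value(k + 1, E + DT * v) for v in VELOCITIES)

# print(value(0, [[0.0, 0.0], [1.0, 1.0]]))
```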

5. Directional derivatives and linear approximations of tubes

In this section we define directional derivatives and we obtain some results on linear approximations of tubes. They will be used to provide a differential characterization of the value function in the next section.

Definition 5.1. Consider a Lipschitz continuous function W : Pc(R^d) → R, a set E ∈ Pc(R^d) and a set-valued map with compact graph Φ : E ⇒ R^d. The lower directional derivative of W at E in the direction of Φ is defined as

  D⁻W(E; Φ) := liminf_{h→0+} [W((I + hΦ)(E)) − W(E)] / h.

The upper directional derivative of W at E in the direction of Φ is defined as

  D⁺W(E; Φ) := limsup_{h→0+} [W((I + hΦ)(E)) − W(E)] / h.

Clearly, D⁺W(E; Φ) = −D⁻(−W)(E; Φ). The above definition of a lower directional derivative is a particular case of the definition of a lower Dini derivative in [27]. Due to the following lemma, we can extend the definition of directional derivatives to directions Φ with bounded graphs.

Lemma 5.2. Consider E0 ∈ Pc(R^d) and Φ̃ : E0 ⇒ R^d with a bounded graph. Define Φ : E0 ⇒ R^d as the closure of the graph of Φ̃. Then

  (I + hΦ)(E0) = cl((I + hΦ̃)(E0)),  ∀h > 0,

where cl denotes the closure.


M. Bivas, M. Quincampoix / J. Differential Equations ••• (••••) •••–•••

Proof. Let us consider an arbitrary z ∈ (I + h)(E0 ). There exist x0 ∈ E0 and y0 ∈ (x0 ) ˜ n ) and the such that z = x0 + hy0 . Since y0 ∈ (x0 ) we have that y0 = limn yn , where yn ∈ (x sequence {xn } belongs to E0 and goes to x0 , when n goes to infinity. The following representation ˜ n) z = lim(xn + hyn ), where xn ∈ E0 and yn ∈ (x n

finishes the proof that ˜ (I + h)(E0 ) ⊂ (I + h)(E 0) . ˜ For the reverse inclusion, let us consider an arbitrary z¯ ∈ (I + h)(E 0 ). There exists a sequence ˜ x¯n ) and x¯n ∈ E0 . Using the {¯zn } such that z¯ = limn z¯ n and z¯ n = x¯n + hy¯n for some y¯n ∈ ( compactness of E0 , we obtain that x¯n → x¯ ∈ E0 (up to a subsequence, labeled in the same way). ¯ (up to a Because the graph of  is compact and y¯n ∈ (x¯n ), we have that y¯n → y¯ ∈ (x) subsequence, relabeled in the same way). So z¯ = x¯ + hy¯ ∈ (I + h)(E0 ) . The proof of the lemma is complete. 2 We now give two results on linearization of tubes, which will be used in the next section. Proposition 5.3. Suppose that (A1), (A2) and (A3) hold true. Consider t0 ∈ [0, T ), E0 ∈ Pc (Rd ) and a set-valued mapping  : E0 ⇒ Rd such that (x0 ) ⊂ F (x0 , E0 ) for every x0 ∈ E0 . Then, there exists E(·) ∈ TF[t0 ,T ] (E0 ) and a positive constant C, such that dH (E(t0 + h), (I + h)(E0 )) ≤ Ch2 for all h ∈ [0, T − t0 ]. Proof. Without loss of generality we take t0 = 0. We will construct E(·) as a limit of the Cauchy sequence we will define below. We set A1 := {x1 (·) ∈ C([0, T ], Rd ) | x1 (t) = x0 + ty0 for all t ∈ [0, T ] where x0 ∈ E0 , and y0 ∈ (x0 )} E1 (t) := {x1 (t) | x1 (·) ∈ A1 } for all t ∈ [0, T ] . We continue the construction by induction. For n ≥ 2, we set An :={xn (·) ∈ C([0, T ], Rd ) | t xn (t) = xn−1 (0) +

P rojF (xn−1 (s),En−1 (s)) x˙n−1 (s)ds for all t ∈ [0, T ] 0

where xn−1 (·) ∈ An−1 }


19

and En (t) := {xn (t) | xn (·) ∈ An } for all t ∈ [0, T ] . First, we will show that these sets are indeed properly defined. By definition we have that all xn (·) ∈ An are absolutely continuous and their derivatives are measurable and uniformly bounded by M (the upper bound of F ). Therefore, the mappings En (·) : [0, T ] ⇒ Pc (Rd ) and F (zn (·), En (·)) are Lipschitz continuous. Since F is convex-valued, s → P rojF (zn (s),En (s)) z˙ n (s) is single-valued and measurable (cf. Theorem 2 in [2]). Next, we will show that for all n ≥ 2, if xn (·) ∈ An , then |xn (t) − xn−1 (t)| ≤ M

(2L)n−1 t n for all t ∈ [0, T ] , n!

(11)

where xn−1 ∈ Bn−1 is associated to xn (·) by the definition of An . Moreover, we will show that dH (En (t), En−1 (t)) ≤ M

(2L)n−1 t n for all t ∈ [0, T ] . n!

(12)

We begin by checking these inequalities for n = 2. Fix t ∈ [0, T ] and x2 (·) ∈ A2 . Let x1 (·) ∈ A1 be its corresponding function from the definition of A2 . We have t |x2 (t) − x1 (t)| ≤

t |x˙2 (s) − x˙1 (s)| ds =

0

dist (x˙1 (s), F (x1 (s), E1 (s))) ds 0

From the definition of A1 we have that x˙1 (s) ∈ (x1 (0)) ⊂ F (x1 (0), E0 ), hence t |x2 (t) − x1 (t)| ≤

dH (F (x1 (0), E0 ), F (x1 (s), E1 (s))) ds 0

t ≤L

t (|x1 (s) − x1 (0)| + dH (E1 (s), E0 )) ds ≤ L

0

2Ms ds = 2ML

t2 , 2

0

which implies dH (E2 (t), E1 (t)) ≤ 2ML

t2 . 2

Let us assume that (11) and (12) hold true for some n ≥ 2. We will verify them for n + 1. Fix t ∈ [0, T ] and xn+1 (·) ∈ An+1 . Let xn (·) ∈ An be its corresponding function from the definition of An+1 . We estimate


M. Bivas, M. Quincampoix / J. Differential Equations ••• (••••) •••–•••

t |xn+1 (t) − xn (t)| ≤

t |x˙n+1 (s) − x˙n (s)| ds =

0

dist (x˙n (s), F (xn (s), En (s))) ds 0

From the definition of An we have that x˙n (s) = P rojF (xn−1 (s),En−1 (s)) x˙n−1 (s), where xn−1 (·) ∈ An−1 is the corresponding function to xn (·) from the definition of An . Hence, using the inductive hypothesis, we obtain that t |xn+1 (t) − xn (t)| ≤

dH (F (xn (s), En (s)), F (xn−1 (s), En−1 (s))) ds 0

t ≤L

(|xn (s) − xn−1 (s)| + dH (En (s), En−1 (s))) ds 0

t ≤L

2M 0

(2L)n−1 s n (2L)n t (n+1) ds = M , n! (n + 1)!

which verifies (11). It implies (12) as well. The inequalities (12) imply that {En (·)}∞ n=1 ⊂ C([0, T ], Pc (Rd )) is a Cauchy sequence, therefore it is convergent to some E(·) ∈ C([0, T ], Pc (Rd )). Fix ε > 0. Then, there exists n0 ∈ N such that d(En (·), E(·)) < ε for all n ≥ n0 . For arbitrary n ≥ n0 , we estimate dH (E1 (t), E(t)) ≤ dH (E1 (t), En (t)) + dH (En (t), E(t)) < dH (E1 (t), E2 (t)) + · · · + dH (En−1 (t), En (t)) + ε   M (2Lt)2 (2Lt)n M 2Lt ≤ + ··· + +ε≤ (e − 1 − 2Lt) + ε 2L 2 n! 2L for any t ∈ [0, T ]. Since ε is arbitrary, we have in fact obtained that dH (E(t), E1 (t)) ≤

M 2Lt (e − 1 − 2Lt) 2L

for every t ∈ [0, T ]. So, we have that dH (E(t), (I + t)(E0 )) = dH (E(t), E1 (t) ≤

M 2Lt (e − 1 − 2Lt) ≤ Ct 2 2L

for some positive C and for every t ∈ [0, T ]. In order to complete the proof, we need to show that E(·) is indeed an F -tube on [0, T ] for E0 . We will prove it by showing that E(·) fulfills the definition. Define the following closed set


21

A := lim inf An = {x(·) ∈ C([0, T ], Rd ) | x(·) = lim xn (·), xn (·) ∈ An } . n

n

We first check that A is nonempty. Indeed, let the sequence {xn (·)}, xn (·) ∈ An be such that t xn (t) = xn−1 (0) +

P rojF (xn−1 (s),En−1 (s)) x˙n−1 (s)ds for all t ∈ [0, T ] 0

for n ≥ 2. The inequality (11) implies that {xn (·)} is a Cauchy sequence, hence it is convergent to some x(·) ∈ A. Since |x˙n (t)| ≤ M for any xn (·) ∈ An , n ∈ N, we have that every x(·) ∈ A is absolutely continuous. Moreover, for all t ∈ [0, T ] we have that E(t) = lim En (t) = lim inf En (t) = {x ∈ Rd | x = lim xn , xn ∈ En (t)} n

n

n

= {x ∈ R | x = lim xn (t), xn (·) ∈ An } = {x ∈ Rd | x = x(t), x(·) ∈ A} . d

n

The last thing we have to check is that E(·) verifies (iii) from Definition 3.1. This can be done using the same argument as the one used in the end of the proof of Theorem 3.7. The proof is complete. 2 Proposition 5.4. Let t0 ∈ [0, T ) and E0 ∈ Pc (Rd ) be fixed. Let (A1), (A2) and (A3) hold true. Let [t ,T ] E(·) ∈ TF 0 (E0 ) be arbitrary. Then, there exists an upper-semicontinuous set-valued mapping  : E0 ⇒ Rd such that (x0 ) ⊂ F (x0 , E0 ) for every x0 ∈ E0 and lim inf h→0+

dH (E(t0 + h), (I + h)(E0 )) = 0. h

(13)

Proof. For every n ∈ N and any x0 ∈ E0 we set t0 + n1



n (x0 ) := {n

x(s)ds ˙ | x(·) ∈ A, x(t0 ) = x0 } . t0

Since F is uniformly bounded and E0 is compact, the set-valued mappings n : E0 ⇒ Rd have uniformly bounded graphs. Hence, due to Theorem 5.36 in [28], they converge graphically to a mapping  : E0 ⇒ Rd (up to a subsequence, labeled in the same way). From the definition of graphical limit for every k ∈ N there exists nk ∈ N such that for every x0 ∈ E0 1 1 1 1 (x0 ) ⊂ nk ((x0 + B) ∩ E0 ) + B and nk (x0 ) ⊂ ((x0 + B) ∩ E0 ) + B . k k k k Using these inclusions and the observation that 1 1 E(t0 + ) = (I + n )(E0 ) for every n ∈ N , n n we obtain


22

E(t0 +

1 1 1 1 ) = (I + nk )(E0 ) ⊂ (I + )(E0 ) + B nk nk nk nk k

and 1 1 1 1 1 )(E0 ) ⊂ (I + nk )(E0 ) + B = E(t0 + ) + B. nk nk nk k nk nk k

(I + So

dH (E(t0 +

1 nk ), (I 1 nk

+

1 nk )(E0 ))



1 . k

Letting k go to infinity proves the desired estimate (13). In order to complete the proof, we have to check that (x0 ) ⊂ F (x0 , E0 ) for every x0 ∈ E0 .

(14)

By the definition of n and the Lipschitz continuity and uniform boundedness of F , we have for all n ∈ N and x0 ∈ E0 t0 + n1





n (x0 ) :=

x(·)∈A,x(t0 )=x0

t0

t0 + n1



n

F (x(s), E(s))ds t0

t0 + n1







x(s)ds ˙ ⊂

n

x(·)∈A,x(t0 )=x0



(F (x0 , E0 ) + L(|x(s) − x0 | + dH (E(s), E0 ))B)ds

n

x(·)∈A,x(t0 )=x0

t0

t0 + n1



⊂n

1 (F (x0 , E0 ) + 2MLsB)ds = F (x0 , E0 ) + ML B . n

t0

Fix ε > 0. From the definition of graphical limit there exists n1 such that (x0 ) ⊂ n1 ((x0 + εB) ∩ E0 ) + εB . Without loss of generality, we can assume that

1 n1

≤ ε and estimate

(x0 ) ⊂ F ((x0 + εB) ∩ E0 , E0 ) + (ML + 1)εB . By letting ε go to 0 and using that F is upper-semicontinuous and compact-valued, we establish (14). The proof is complete. 2


23

6. Characterization of the value function

The following theorem is the main result of the paper. It characterizes the value function as the unique Lipschitz continuous solution of the Hamilton-Jacobi-Bellman equation stated in (a), (b) and (c) below ((a) can be viewed as a suitable viscosity supersolution condition, (b) as a viscosity subsolution condition and (c) as the boundary condition).

Theorem 6.1. Suppose that (A1), (A2), (A3) and (A4) hold true. Then, the value function V is the unique bounded Lipschitz continuous function W : [0, T] × Pc(R^d) → R such that

(a) for every (t, E) ∈ [0, T) × Pc(R^d) there exists Φ : E ⇒ R^d with Φ(·) ⊂ F(·, E) on E, such that D⁻W(t, E; (1, Φ)) ≤ 0;
(b) for every (t, E) ∈ [0, T) × Pc(R^d) and every Φ : E ⇒ R^d with Φ(·) ⊂ F(·, E) on E, it holds true that D⁺W(t, E; (1, Φ)) ≥ 0;
(c) W(T, ·) = g(·).

The proof of the theorem will be split into two propositions, characterizing the solutions of (a) and (b) for which (c) holds true.

Proposition 6.2. Let the function W : [0, T] × Pc(R^d) → R be Lipschitz continuous. Let (A1), (A2) and (A3) hold true. Then, W satisfies (a) if and only if

(a′) for every (t0, E0) ∈ [0, T] × Pc(R^d) there exists E(·) ∈ T_F^{[t0,T]}(E0) such that W(t, E(t)) ≤ W(t0, E0) for all t ∈ [t0, T].

Proposition 6.3. Let the function W : [0, T] × Pc(R^d) → R be Lipschitz continuous. Let (A1), (A2) and (A3) hold true. Then, W satisfies (b) if and only if

(b′) for every (t0, E0) ∈ [0, T] × Pc(R^d) and every E(·) ∈ T_F^{[t0,T]}(E0), it holds true that W(t, E(t)) ≥ W(t0, E0) for all t ∈ [t0, T].

Proof of Theorem 6.1. From the dynamic programming principle (Proposition 4.2), we know that the value function V satisfies relations (a′) and (b′). Consequently, V satisfies (a) and (b) in view of Propositions 6.2 and 6.3. Clearly, V also satisfies (c). Conversely, consider a Lipschitz function W satisfying (a), (b) and (c). Due to Proposition 6.2, W also satisfies (a′). Hence, there exists E(·) ∈ T_F^{[t0,T]}(E0) such that

  W(t0, E0) ≥ W(T, E(T)) = g(E(T)) ≥ V(t0, E0)   (15)

JID:YJDEQ AID:10241 /FLA

24

[m1+; v1.304; Prn:3/02/2020; 12:10] P.24 (1-36)

M. Bivas, M. Quincampoix / J. Differential Equations ••• (••••) •••–•••

by (c) and from the very definition of V . By Proposition 6.3, W also satisfies (b ). Passing to the infimum in (b ) over E(·) ∈ TF[t0 ,T ] (E0 ) and using (c), we obtain W (t0 , E0 ) ≤

W (T , E(T )) =

inf

[t ,T ]

E(·)∈TF 0

(E0 )

g(E(T )) = V (t0 , E0 ) .

inf

[t ,T ]

E(·)∈TF 0

(16)

(E0 )

In view of (15) and (16), the proof is complete. 2 Before proving Proposition 6.2, we state and prove the following useful Lemma 6.4. Consider W : R × Pc (Rd ) → R – a Lipschitz continuous function. Suppose that (A1), (A2) and (A3) hold true. Suppose that for every (t, E) ∈ R × Pc (Rd ) there exists t,E : Rd ⇒ Rd with closed graph such that t,E (x) ⊂ F (x, E) for all x ∈ Rd and D − W (t, E; (1, t,E )) ≤ 0 . Then, for every h ∈ (0, 1) there exist θhmin and θhmax in (0, h) such that for every (t, E) ∈ [0, T ] × ¯ it holds true that Pc (E0 + MT B), W ((I + θh,t,E (1, t,E ))(t, E)) − W (t, E) θh,t,E

≤ (Lip(W )L + 1)h

for some θh,t,E ∈ [θhmin , θhmax ] and t,E : Rd ⇒ Rd satisfying t,E (·) ⊂ F (·, E) on Rd . Proof. Let fix an arbitrary h ∈ (0, 1). From the definition of lower directional derivative, for ¯ there exists θh,t,E ∈ (0, h) such that every (t, E) ∈ [0, T ] × Pc (E0 + MT B) W ((I + θh,t,E (1, t,E ))(t, E)) − W (t, E) < h. θh,t,E ¯ We set for every (t, E) ∈ [0, T ] × Pc (E0 + MT B) Ah (t, E) := {(t  , E  ) ∈R × Pc (Rd ) | |t − t  | < h, dH (E, E  ) < h and

W ((I + θh,t,E (1, t,E ))(t  , E  )) − W (t  , E  ) < h} . θh,t,E

Clearly, (t, E) ∈ Ah (t, E). Moreover, since W is continuous, every Ah (t, E) is open in R × ¯ is compact in Pc (Rd ) (cf. e.g. Theorem 4.18 in [28]), the set Pc (Rd ). Since Pc (E0 + MT B) ¯ [0, T ] ×Pc (E0 +MT B) ⊂ R ×Pc (Rd ) is compact. Then, one can extract from the open covering 

¯ , Ah (t, E) ⊃ [0, T ] × Pc (E0 + MT B)

¯ (t,E)∈[0,T ]×Pc (E0 +MT B)

¯ Fix (t, E) ∈ a finite subcovering ki=1 Ah (ti , Ei ), where (ti , Ei ) ∈ [0, T ] × Pc (E0 + MT B). ¯ [0, T ] × Pc (E0 + MT B). There exists i ∈ {1, . . . , k} such that (t, E) ∈ Ah (ti , Ei ). We define the following map t,E : Rd ⇒ Rd


t,E (x) :=

25

 (ti ,Ei (x) + LhB) ∩ F (x, E), if x ∈ E F (x, E), if x ∈ Rd \ E .

The set t,E (x) is nonempty for any x ∈ E, because for any yi ∈ ti ,Ei (x) we have that yi ∈ F (x, Ei ) ⊂ F (x, E) + LhB, so there exists y ∈ F (x, E) such that |yi − y| ≤ Lh. We have obtained that y ∈ (ti ,Ei (x) + LhB) ∩ F (x, E). We also notice that dH ((I + θh,ti ,Ei ti ,Ei )(E), (I + θh,ti ,Ei t,E )(E)) ≤ Lθh,ti ,Ei h . Using this inequality and the fact that (t, E) ∈ Ah (ti , Ei ), we estimate W ((I + θh,ti ,Ei (1, t,E ))(t, E)) − W (t, E) θh,ti ,Ei ≤

W ((I + θh,ti ,Ei (1, t,E ))(t, E)) − W ((I + θh,ti ,Ei (1, ti ,Ei ))(t, E)) θh,ti ,Ei +



W ((I + θh,ti ,Ei (1, ti ,Ei ))(t, E)) − W (t, E) θh,ti ,Ei

Lip(W )dH ((I + θh,ti ,Ei ti ,Ei )(E), (I + θh,ti ,Ei t,E )(E)) θh,ti ,Ei

+h

≤ (Lip(W )L + 1)h . By setting θhmin := min{θh,ti ,Ei | i = 1, . . . , k} and θhmax := max{θh,ti ,Ei | i = 1, . . . , k}, we complete the proof of the lemma. 2 Proof of Proposition 6.2. Let the function W be a solution of (a). Fix an arbitrary (t0 , E0 ) ∈ 1 . Without loss of generality we assume that t = T . By [0, T ] × Pc (Rd ) and n ∈ N, n ≥ T −t 0

min and θ max in (0, 1 ) such that for every (t, E) ∈ 0 Lemma 6.4 for h = T −t n n , there exist θn n ¯ there exist θn,t,E ∈ [θnmin , θnmax ] and  : Rd ⇒ Rd satisfying [0, T ] × Pc (E0 + MT B) t,E t,E (·) ⊂ F (·, E) on Rd such that

W ((I + θn,t,E (1, t,E ))(t, E)) − W (t, E) θn,t,E

1 ≤ (Lip(W )L + 1) . n

(17)

We are going to construct a tube En (·) ∈ TF[t0 ,T ] (E0 ) such that W (T , En (T )) − W (t0 , E0 ) ≤

C˜ , n

(18)

where C˜ is a positive constant. The construction is done by induction. For the first step, we set t1 := t0 + θn,t0 ,E0 . On the interval [t0 , t1 ] we define En (·) as the tube constructed in Proposition 5.3 starting from E0 at t0 for the mapping t0 ,E0 . We estimate using the Lipschitz continuity of W , inequality (17) and Proposition 5.3


26

W (t1 , E(t1 )) − W (t0 , E0 ) = W (t1 , E(t1 )) − W ((I + θn,t0 ,E0 (1, t0 ,E0 ))(t0 , E0 )) + W ((I + θn,t0 ,E0 (1, t0 ,E0 ))(t0 , E0 )) − W (t0 , E0 ) 1 2 + (Lip(W )L + 1)θn,t0 ,E0 ≤ Lip(W )Cθn,t 0 ,E0 n   1 , ≤ (t1 − t0 ) C1 θnmax + C2 n where C1 and C2 are positive constants. Next, let us assume that we have constructed En (·) on [t0 , tj ], j ≥ 1. We consider two cases: Case 1. tj + θn,tj ,En (tj ) ≤ T We set tj +1 := tj + θn,tj ,En (tj ) . On the interval [tj , tj +1 ] we define En (·) as the tube constructed in Proposition 5.3 starting from En (tj ) at tj for the mapping tj ,En (tj ) . We estimate using the Lipschitz continuity of W , inequality (17) and Proposition 5.3 W (tj +1 , En (tj +1 )) − W (tj , En (tj )) = W (tj +1 , E(tj +1 )) − W ((I + θn,tj ,En (tj ) (1, tj ,En (tj ) ))(tj , En (tj ))) + W ((I + θn,tj ,En (tj ) (1, tj ,En (tj ) ))(tj , En (tj ))) − W (tj , En (tj )) 1 2 ≤ Lip(W )Cθn,t + (Lip(W )L + 1)θn,tj ,En (tj ) j ,En (tj ) n   1 . ≤ (tj +1 −tj ) C1 θnmax + C2 n Case 2. tj + θn,tj ,En (tj ) > T On the interval [tj , T ] we define En (·) as an arbitrary tube starting from En (tj ) at tj . We estimate using the Lipschitz continuity of W and En (·) 1 W (T , En (T )) − W (tj , En (tj )) ≤ Lip(W )(1 + M)(T − tj ) ≤ Lip(W )(1 + M) . n [t ,T ]

The construction of En (·) ∈ TF 0 (E0 ) is finished. In it we have that Nn – the maximal number T of points {tj }, j ≥ 0 before reaching Case 2. is not greater than θ min . We estimate n

W (T ,En (T )) − W (t0 , E0 ) =

N

n −1

(W (tj +1 , En (tj +1 )) − W (tj , En (tj ))) + W (T , En (T )) − W (tNn , En (tNn ))

j =0



≤ (T − t0 )

C1 θnmax

 1 1 C˜ + C2 + Lip(W )M ≤ , n n n

which verifies (18). By repeating this construction for all n ∈ N, n ≥ {En (·)} in the set

TF[0,T ] (E0 ),

1 T −t0

we obtain a sequence

which is compact due to Proposition 3.7. Therefore the sequence


27

{En(·)} converges to some E(·) ∈ T_F^{[t0,T]}(E0) up to a subsequence. Due to the continuity of W, by using (18) for the limit E(·), we obtain

    W(T, E(T)) ≤ W(t0, E0) ,

which finishes the proof of the necessity part of the proposition.

For the converse, let (t0, E0) ∈ [0, T] × Pc(R^d) be arbitrary. Due to the assumptions, it holds true that

    W(t0, E0) ≥ W(t, Ē(t))    (19)

for some Ē(·) ∈ T_F^{[t0,T]}(E0). Using Proposition 5.4 we find an upper-semicontinuous compact-valued mapping Φ : E0 ⇒ R^d such that Φ(x0) ⊂ F(x0, E0) for every x0 ∈ E0 and

    lim inf_{h→0+} dH(Ē(t0 + h), (I + hΦ)(E0)) / h = 0 .

The Lipschitz continuity of W, (19) and the above inequality imply that

    D⁻W(t0, E0; (1, Φ)) = lim inf_{h→0+} [W((I + h(1, Φ))(t0, E0)) − W(t0, E0)] / h
        ≤ lim inf_{h→0+} [W((I + h(1, Φ))(t0, E0)) − W(t0 + h, Ē(t0 + h))] / h
        ≤ lim inf_{h→0+} Lip(W) dH((I + hΦ)(E0), Ē(t0 + h)) / h = 0 ,

which finishes the proof. □
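The proof above is constructive: the tube En(·) is a concatenation of Euler-type steps, each of which pushes the current crowd forward by a selection Φ_{t,E}(·) ⊂ F(·, E). The following informal Python sketch (all names, the uniform step θ replacing the steps provided by Lemma 6.4, and the toy crowd-dependent dynamics are ours) mimics this construction on a finite point cloud:

import numpy as np

def push_forward(E, Phi, theta):
    # (I + theta*Phi)(E) = {x + theta*v : x in E, v in Phi(x)} on a finite cloud
    out = [x + theta * np.asarray(v) for x in E for v in Phi(x)]
    return np.unique(np.round(np.array(out), 12), axis=0)

def build_tube(E0, Phi_of, t0, T, theta):
    # Piecewise Euler construction of a discrete tube t -> E(t): on each step
    # the crowd E is pushed forward by a selection contained in F(., E),
    # supplied here by the (hypothetical) callable Phi_of(t, E).
    t, E = t0, np.asarray(E0, dtype=float)
    times, sets = [t], [E]
    while t + theta <= T + 1e-12:
        E = push_forward(E, Phi_of(t, E), theta)
        t += theta
        times.append(t)
        sets.append(E)
    return times, sets

# toy crowd-dependent dynamics: each agent may move with speed -1 or +1,
# slightly corrected toward the barycenter of the crowd
def Phi_of(t, E):
    bary = E.mean(axis=0)
    return lambda x: [np.array([-1.0]) + 0.1 * (bary - x),
                      np.array([ 1.0]) + 0.1 * (bary - x)]

times, sets = build_tube([[0.0]], Phi_of, t0=0.0, T=1.0, theta=0.25)
print(sets[-1].ravel())    # approximate crowd positions at time T = 1

Each step multiplies the number of stored points by the number of selected velocities, so a practical implementation would prune the cloud; this is irrelevant for the illustration.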

Proof of Proposition 6.3. Consider W satisfying (b). Let us fix arbitrary (t0, E0) ∈ [0, T] × Pc(R^d) and E(·) ∈ T_F^{[t0,T]}(E0). Without loss of generality we assume that t = T. Let A ⊂ C([0, T], R^d) be the set from the definition of E(·) as an F-tube. Let n ∈ N be arbitrary. Since A is compact, there exist finitely many x1(·), . . . , x_{mn}(·) such that

    A ⊂ ∪_{i=1}^{mn} ( xi(·) + (1/n) B_{C([0,T],R^d)} ) .    (20)

Let us denote, for every t ∈ [t0, T],

    En(t) := {xi(t) | i ∈ {1, 2, . . . , mn}} .

Since ẋ1(·), . . . , ẋ_{mn}(·) are finitely many, there exists a set N ⊂ [0, T] of Lebesgue measure zero such that every t ∈ [t0, T] \ N is a Lebesgue point for ẋ1(·), . . . , ẋ_{mn}(·) and ẋi(t) ∈ F(xi(t), E(t)) for all i ∈ {1, 2, . . . , mn} and all t ∈ [0, T] \ N. Fix t ∈ [t0, T] \ N. It holds true that


    ẋi(t) = lim_{h→0+} (1/h) ∫_t^{t+h} ẋi(s) ds for all i ∈ {1, 2, . . . , mn} .

This implies that, for a fixed ε > 0, there exists δ > 0 such that

    | ẋi(t) − (1/h) ∫_t^{t+h} ẋi(s) ds | < ε for all h < δ and all i ∈ {1, 2, . . . , mn} .

Let us denote Θn(x) := {ẋi(t) | x = xi(t), i ∈ {1, 2, . . . , mn}} for x ∈ En(t). It is straightforward to check that, for h < δ,

    En(t + h) ⊂ (I + hΘn)(En(t)) + εhB̄ and (I + hΘn)(En(t)) ⊂ En(t + h) + εhB̄ ,

which imply that

    lim_{h→0+} dH(En(t + h), (I + hΘn)(En(t))) / h = 0 .    (21)
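As an informal numerical check of the type of limit appearing in (21), the sketch below (the trajectory family, with derivatives in [−1, 1], and the helper names are ours) evaluates the ratio dH(En(t + h), (I + hΘn)(En(t)))/h for shrinking h:

import numpy as np

def hausdorff(A, B):
    # Hausdorff distance between two finite subsets of R^d given as (k, d) arrays
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# a concrete finite family of trajectories with derivatives in [-1, 1]
omegas = np.array([0.5, 1.0, 1.5, 2.0])
x  = lambda t: np.sin(omegas * t) / omegas      # x_i(t)
dx = lambda t: np.cos(omegas * t)               # derivative of x_i at t

t = 0.7
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    E_next  = x(t + h)[:, None]                 # E_n(t + h)
    E_euler = (x(t) + h * dx(t))[:, None]       # (I + h*Theta_n)(E_n(t))
    print(h, hausdorff(E_next, E_euler) / h)    # the ratio in (21) tends to 0

The printed ratios decay linearly in h, reflecting the second-order Taylor remainder at a point where the derivatives exist.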

We define a mapping Φn : En(t) ⇒ R^d for x ∈ En(t) by

    Φn(x) := (Θn(x) + (L/n) B̄) ∩ F(x, En(t)) .

The set Φn(x) is nonempty for any x ∈ En(t). Indeed, due to (20), we have dH(E(t), En(t)) ≤ 1/n and

    Θn(x) ⊂ F(x, E(t)) ⊂ F(x, En(t)) + (L/n) B̄

for all x ∈ En(t). Then, for any y ∈ Θn(x), there exists y′ ∈ F(x, En(t)) such that |y − y′| ≤ L/n, hence y′ ∈ (Θn(x) + (L/n) B̄) ∩ F(x, En(t)). We also notice that

    dH((I + hΘn)(En(t)), (I + hΦn)(En(t))) ≤ L h / n

for all h > 0. Due to the assumptions of the proposition, we have that

    D⁺W(t, En(t); (1, Φn)) ≥ 0 .

Therefore, there exists a sequence {hk} of positive numbers tending to 0 such that

    [W((I + hk(1, Φn))(t, En(t))) − W(t, En(t))] / hk ≥ −1/n .    (22)


Next, using the Lipschitz continuity of W, (22) and the above inequality, we estimate

    [W(t + hk, En(t + hk)) − W(t, En(t))] / hk
        = [W(t + hk, En(t + hk)) − W((I + hk(1, Θn))(t, En(t)))] / hk
          + [W((I + hk(1, Θn))(t, En(t))) − W((I + hk(1, Φn))(t, En(t)))] / hk
          + [W((I + hk(1, Φn))(t, En(t))) − W(t, En(t))] / hk
        ≥ −Lip(W) ( dH(En(t + hk), (I + hk Θn)(En(t))) / hk + L/n ) − 1/n .

Using (21), we obtain that

    (d/dt) W(t, En(t)) = lim_k [W(t + hk, En(t + hk)) − W(t, En(t))] / hk ≥ −(Lip(W)L + 1)/n .

Since this estimate is valid for almost all t ∈ [0, T] and the function W(·, En(·)) is Lipschitz, hence absolutely continuous on [0, T], we have that

    W(T, En(T)) = W(t0, En(t0)) + ∫_{t0}^{T} (d/dt) W(t, En(t)) dt ≥ W(t0, En(t0)) − (T − t0)(Lip(W)L + 1)/n .

By letting n go to infinity and using the continuity of W and the fact that d(En(·), E(·)) ≤ 1/n, we obtain that

    W(T, E(T)) ≥ W(t0, E(t0)) ,

which finishes the proof of the necessity part of the proposition.

For the converse, let (t0, E0) ∈ [0, T] × Pc(R^d) be arbitrary. Due to the assumptions, it holds true that

    W(t0, E0) ≤ W(t, Ē(t))    (23)

for all Ē(·) ∈ T_F^{[t0,T]}(E0). Let us fix an upper-semicontinuous compact-valued mapping Φ : E0 ⇒ R^d such that Φ(x0) ⊂ F(x0, E0) for every x0 ∈ E0. Proposition 5.3 implies the existence of a mapping Ẽ(·) ∈ T_F^{[t0,T]}(E0) such that

    lim_{h→0+} dH(Ẽ(t0 + h), (I + hΦ)(E0)) / h = 0 .    (24)

Using (23), (24) and the Lipschitz continuity of W, we estimate


    0 ≤ lim sup_{h→0+} [W(t0 + h, Ẽ(t0 + h)) − W(t0, E0)] / h
        ≤ lim sup_{h→0+} [W(t0 + h, Ẽ(t0 + h)) − W((I + h(1, Φ))(t0, E0))] / h
          + lim sup_{h→0+} [W((I + h(1, Φ))(t0, E0)) − W(t0, E0)] / h
        ≤ lim sup_{h→0+} Lip(W) dH(Ẽ(t0 + h), (I + hΦ)(E0)) / h + D⁺W(t0, E0; (1, Φ))
        = D⁺W(t0, E0; (1, Φ)) .

This is the condition (b). The proof is complete. □

From the above proofs we can deduce two interesting consequences.

Corollary 6.5. Suppose that (A1), (A2), (A3) and (A4) hold true. Then,
• the value function V is the unique minimal Lipschitz continuous solution of (a) for which (c) holds true;
• the value function V is the unique maximal Lipschitz continuous solution of (b) for which (c) holds true.

We finish this paper by showing what our results say about the following particular example.

Example 6.6 (cf. Example 2.10 in [22]). Consider the following one-dimensional differential inclusion

    ẋ(t) ∈ F(x), x(0) = 0 ,

with F(x) := [−1, 1] for all x ∈ R. Fix T = 1. Associated to the above dynamics, we consider the following Mayer optimal control problem

    inf{ g(E(1)) | E(·) ∈ T_F^{[0,1]}({0}) } ,    (25)

where g(E) := dH(E, {−1} ∪ {1}) for all E ∈ Pc(R). The sets to which the solution tubes T_F^{[0,1]}({0}) are associated are subsets of the set

    {x ∈ C([0, 1], R) | x(0) = 0 and ẋ(t) ∈ [−1, 1] a.e. in [0, 1]} .

The optimal value of (25) is 0 and it is achieved by the solution tube Ē(t) = {−t} ∪ {t}. Indeed, we can verify by the dynamic programming principle that

    V(0, {0}) = V(h, Ē(h)) for every h ∈ [0, 1] ,

where V : [0, 1] × Pc(R) → R is the corresponding value function for (25).
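As an informal numerical companion to this example (the helper names are ours), the sketch below evaluates the Mayer cost g(E) = dH(E, {−1} ∪ {1}) at the terminal sets of the optimal tube Ē(·) and of two suboptimal tubes:

import numpy as np

def hausdorff(A, B):
    # Hausdorff distance between two finite subsets of R (given as 1-d sequences)
    A = np.asarray(A, float).reshape(-1, 1)
    B = np.asarray(B, float).reshape(-1, 1)
    D = np.abs(A - B.T)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

g = lambda E: hausdorff(E, [-1.0, 1.0])   # the Mayer cost of (25)

print(g([-1.0, 1.0]))   # terminal set of the optimal tube, {-1, 1}  -> 0.0
print(g([0.0]))         # tube E(t) = {0} (no motion)                -> 1.0
print(g([1.0]))         # tube E(t) = {t} (all agents move right)    -> 2.0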


Theorem 6.1 says that the value function V is the unique bounded Lipschitz continuous function W : [0, 1] × Pc(R) → R such that

(a) for every (t, E) ∈ [0, 1) × Pc(R) there exists Φ : E ⇒ [−1, 1] such that D⁻W(t, E; (1, Φ)) ≤ 0;
(b) for every (t, E) ∈ [0, 1) × Pc(R) and every Φ : E ⇒ [−1, 1], it holds true that D⁺W(t, E; (1, Φ)) ≥ 0;
(c) W(1, ·) = g(·).

Condition (a) for the value function V along the optimal tube Ē(·) is satisfied by the mapping Φ̄(x) = {−1} ∪ {1} for all x ∈ R. We also have that

    W(t, Ē(t)) ≤ W(0, {0}) for all t ∈ [0, 1]

due to Proposition 6.2. Condition (b) implies that for every E(·) ∈ T_F^{[0,1]}({0}), it holds true that

    W(t, E(t)) ≥ W(0, {0}) for all t ∈ [0, 1]

due to Proposition 6.3.

Acknowledgment

We are grateful to the unknown referees for their careful reading of the manuscript and for the numerous suggestions and remarks that helped us improve it.

Appendix A

Proposition A.1. Let E0 be a compact subset of R^d. Consider a set-valued mapping F : R^d × Pc(R^d) ⇒ R^d satisfying (A1), (A2) and (A3) and a set-valued mapping V : R^d × [0, T] × Pc(R^d) ⇒ R^d with compact convex values such that

    V(x, t, E) ⊂ F(x, E) for all (x, t, E) ∈ R^d × [0, T] × Pc(R^d) ,

V(x, ·, E) is measurable for all (x, E) ∈ R^d × Pc(R^d), and V(·, t, ·) is L1-Lipschitz for all t ∈ [0, T]:

    dH(V(x1, t, E1), V(x2, t, E2)) ≤ L1 (|x1 − x2| + dH(E1, E2)) for all t ∈ [0, T] .

Then, there exists a mapping E : [0, T] → Pc(R^d) such that for all t ∈ [0, T], E(t) is the reachable set of

    ẋ(s) ∈ V(x(s), s, E(s)), s ∈ [0, T], x(0) ∈ E0

at time t.


Proof. We are going to construct such a mapping E : [0, T] → Pc(R^d) as a limit of a Cauchy sequence of mappings {En(·)} ⊂ C([0, T], Pc(R^d)), which we define next. Let E1(t) be the reachable set at time t of the inclusion

    ẋ(s) ∈ V(x(s), s, E0) for almost all s ∈ [0, t], x(0) ∈ E0 .

Clearly, t → E1(t) is compact-valued and Lipschitz continuous with Lipschitz constant M. For n ≥ 2, let En(t) be the reachable set at time t of the inclusion

    ẋ(s) ∈ V(x(s), s, En−1(s)) for almost all s ∈ [0, t], x(0) ∈ E0 .    (26)

It is straightforward to check by induction that for all n ∈ N the mapping t → En(t) is compact-valued and Lipschitz continuous with Lipschitz constant M, the mapping x → V(x, s, En(s)) =: Gn(x, s) is Lipschitz continuous with Lipschitz constant L1, and the mapping s → Gn(x, s) is measurable.

Next, we will show that the sequence {En(·)} is indeed a Cauchy sequence in C([0, T], Pc(R^d)). Let m ∈ N be arbitrary. We will obtain that

    dH(En+m(t), En(t)) ≤ (L1 e^{L1 t})^n M t^{n+1} / (n + 1)! , ∀t ∈ [0, T] ,    (27)

by induction on n ≥ 0. It is straightforward to obtain that for any m ∈ N

    dH(Em(t), E0) ≤ Mt .

Suppose that

    dH(En+m−1(t), En−1(t)) ≤ (L1 e^{L1 t})^{n−1} M t^n / n! , ∀t ∈ [0, T] ,

for n ∈ N. In order to prove the induction step, we proceed by fixing t ∈ [0, T] and showing that for every y_{n+m} ∈ En+m(t) there exists z_n ∈ En(t) such that

    |y_{n+m} − z_n| ≤ L1 e^{L1 t} ∫_0^t dH(En+m−1(s), En−1(s)) ds .    (28)

Indeed, for every y_{n+m} ∈ En+m(t) there exist a point y0^{n+m} ∈ E0 and an absolutely continuous y_{n+m}(·) – solution of

    ẏ(s) ∈ Gn+m(y(s), s) for almost all s ∈ [0, t], y(0) = y0^{n+m} –

such that y_{n+m} = y_{n+m}(t). We will apply Filippov's theorem (cf. Theorem 10.4.1, p. 401 in [3]) for y_{n+m}(·), the mapping Gn and δ = 0. The assumptions of the theorem are indeed true:


We have that Gn(x, ·) is measurable for every x ∈ R^d; Gn(·, s) is Lipschitz with constant L1 for every s ∈ [0, T]; and we can estimate

    γ(s) := dist(ẏ_{n+m}(s), Gn(y_{n+m}(s), s)) ≤ dH(Gn+m(y_{n+m}(s), s), Gn(y_{n+m}(s), s))
        = dH(V(y_{n+m}(s), s, En+m−1(s)), V(y_{n+m}(s), s, En−1(s))) ≤ L1 dH(En+m−1(s), En−1(s)) ,

so γ(·) is integrable. Then, Filippov's theorem yields the existence of z(·) – solution of

    ż(s) ∈ Gn(z(s), s) for almost all s ∈ [0, t], z(0) = y0^{n+m} –

such that

    |z(s) − y_{n+m}(s)| ≤ e^{L1 s} ∫_0^s γ(τ) dτ ≤ L1 e^{L1 s} ∫_0^s dH(En+m−1(τ), En−1(τ)) dτ

for all s ∈ [0, t]. Therefore, setting z_n := z(t) ∈ En(t),

    |y_{n+m} − z_n| ≤ L1 e^{L1 t} ∫_0^t dH(En+m−1(s), En−1(s)) ds ,

which completes the proof of (28). Analogously, we can show that for every y_n ∈ En(t) there exists z_{n+m} ∈ En+m(t) such that

    |y_n − z_{n+m}| ≤ L1 e^{L1 t} ∫_0^t dH(En+m−1(s), En−1(s)) ds ,

which, together with (28), implies that

    dH(En+m(t), En(t)) ≤ L1 e^{L1 t} ∫_0^t dH(En+m−1(s), En−1(s)) ds

for an arbitrary t ∈ [0, T]. Using the induction hypothesis, we obtain (27). The proof that {En(·)} is a Cauchy sequence is complete by noting that

    d(En+m(·), En(·)) = sup_{t∈[0,T]} dH(En+m(t), En(t)) ≤ MT (L1 T e^{L1 T})^n / (n + 1)! −→ 0 as n → ∞ .
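As an informal numerical illustration of this successive-approximation scheme (the discretization and all names below are ours; V is taken single-valued, so each reachable set reduces to the Euler flow of the initial points on a uniform time grid), the sketch prints the uniform distance between consecutive iterates, which decays rapidly, in line with (27):

import numpy as np

def hausdorff(A, B):
    # Hausdorff distance between two finite subsets of R (1-d arrays)
    D = np.abs(np.asarray(A, float).reshape(-1, 1) - np.asarray(B, float).reshape(1, -1))
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# toy single-valued dynamics V(x, t, E) = {-x + mean(E)}: Lipschitz in x and E
def V(x, t, E):
    return -x + E.mean()

T, K = 1.0, 200                       # horizon and number of Euler steps
ts = np.linspace(0.0, T, K + 1)
dt = T / K
E0 = np.array([-1.0, 0.0, 2.0])       # initial crowd (a finite set in R)

E_prev = np.tile(E0, (K + 1, 1))      # first guess: E_0(t) := E0 for all t

for n in range(1, 6):
    E_new = np.empty_like(E_prev)
    E_new[0] = E0
    for k in range(K):                # Euler scheme for x' = V(x, t, E_prev(t))
        E_new[k + 1] = E_new[k] + dt * V(E_new[k], ts[k], E_prev[k])
    gap = max(hausdorff(E_new[k], E_prev[k]) for k in range(K + 1))
    print(f"iteration {n}: sup_t dH between consecutive iterates = {gap:.2e}")
    E_prev = E_new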


Since C([0, T], Pc(R^d)) is a complete metric space, the sequence {En(·)} converges to a mapping E(·) ∈ C([0, T], Pc(R^d)). Since the mappings En(·) are equi-Lipschitz with Lipschitz constant M, we have that E(·) is Lipschitz with constant M.

Let us consider the mapping G : R^d × [0, T] ⇒ R^d defined by G(x, t) := V(x, t, E(t)). It is compact- and convex-valued by definition. It is also clear that G(·, t) is Lipschitz continuous and G(x, ·) is measurable. Hence, the inclusion

    ẋ(t) ∈ V(x(t), t, E(t)) for almost all t ∈ [0, T], x(0) ∈ E0 ,

has a solution. Let us fix t ∈ [0, T]. Since E(·) is the limit of En(·) as n → ∞, for every ε > 0 there exists n0 ∈ N such that

    dH(E(s), En(s)) < ε, ∀s ∈ [0, t], ∀n ≥ n0 .

The last thing we need to check is that for all t ∈ [0, T] the reachable set R(t) of

    ẋ(s) ∈ V(x(s), s, E(s)) for almost all s ∈ [0, t], x(0) ∈ E0    (29)

at the moment t is E(t). The first part of proving this is to show that R(t) ⊂ E(t). Let x ∈ R(t) be arbitrary. We will show that x ∈ E(t). Indeed, there exist a point x0 ∈ E0 and an absolutely continuous x(·) – solution of (29) with x(0) = x0 – such that x = x(t). Let us fix ε > 0. We will apply Filippov's theorem for x(·), the mapping Gn, n > n0, and δ = 0. The assumptions of the theorem are indeed true, since we have

    γ(s) := dist(ẋ(s), Gn(x(s), s)) ≤ dH(V(x(s), s, E(s)), V(x(s), s, En−1(s))) ≤ L1 dH(E(s), En−1(s)) < L1 ε .

Filippov's theorem yields the existence of y(·) – solution of

    ẏ(s) ∈ Gn(y(s), s) for almost all s ∈ [0, t], y(0) = x0 –

such that

    |x(s) − y(s)| ≤ e^{L1 s} ∫_0^s γ(τ) dτ < ε L1 s e^{L1 s}

for all s ∈ [0, t]. Since y(t) ∈ En(t), we estimate

    dist(x, E(t)) ≤ |x − y(t)| + dist(y(t), E(t)) < ε L1 t e^{L1 t} + dH(En(t), E(t)) < ε(L1 t e^{L1 t} + 1) .

By letting ε go to 0, we obtain that x ∈ E(t).

For the reverse inclusion E(t) ⊂ R(t), let us consider an arbitrary y ∈ E(t). Let us fix ε > 0. Then, there exists z ∈ En(t), n > n0, such that |y − z| < ε. Since z ∈ En(t), there exist a point z0 ∈ E0 and an absolutely continuous z(·) – solution of


    ż(s) ∈ Gn(z(s), s) for almost all s ∈ [0, t], z(0) = z0 –

such that z = z(t). We will apply Filippov's theorem for z(·), the mapping G and δ = 0. The assumptions of the theorem are indeed true, since we have

    γ(s) := dist(ż(s), G(z(s), s)) ≤ dH(V(z(s), s, En−1(s)), V(z(s), s, E(s))) ≤ L1 dH(En−1(s), E(s)) < L1 ε .

Filippov's theorem yields the existence of x(·) – solution of

    ẋ(s) ∈ G(x(s), s) for almost all s ∈ [0, t], x(0) = z0 –

such that

    |z(s) − x(s)| ≤ e^{L1 s} ∫_0^s γ(τ) dτ < ε L1 s e^{L1 s}

for all s ∈ [0, t]. We estimate

    |y − x(t)| ≤ |y − z(t)| + |z(t) − x(t)| < ε(1 + L1 t e^{L1 t}) .

By letting ε go to 0, we obtain that y ∈ R(t). The proof is complete. □

References

[1] L. Ambrosio, N. Gigli, G. Savaré, Gradient Flows in Metric Spaces and in the Space of Probability Measures, 2nd ed., Birkhäuser Verlag, 2008.
[2] J.-P. Aubin, A. Cellina, Differential Inclusions, Grundlehren Math. Wiss., vol. 264, Springer-Verlag, Berlin, 1984.
[3] J.-P. Aubin, H. Frankowska, Set-Valued Analysis, Birkhäuser, 1990.
[4] J.-P. Aubin, Viability Theory, Birkhäuser, Boston, 1991.
[5] J.-P. Aubin, Mutational and Morphological Analysis, Birkhäuser, Boston, 1999.
[6] F. Bernicot, J. Venel, Differential inclusions with proximal normal cones in Banach spaces, J. Convex Anal. 17 (2010) 451–484.
[7] F. Bernicot, J. Venel, A discrete contact model for crowd motion, ESAIM: M2AN 45 (1) (2011) 145–168.
[8] M. Bivas, M. Quincampoix, Feedback control for multi-agent dynamic, in preparation.
[9] A. Bressan, Differential inclusions and the control of forest fires, J. Differ. Equ. 243 (2007) 179–207.
[10] A. Bressan, D. Zhang, Control problems for a class of set valued evolutions, Set-Valued Var. Anal. 20 (4) (2012) 581–601.
[11] R. Buckdahn, Y. Ouknine, M. Quincampoix, On limiting values of stochastic differential equations with small noise intensity tending to zero, Bull. Sci. Math. 133 (3) (2009) 229–237.
[12] P. Cardaliaguet, M. Quincampoix, Deterministic differential games under probability knowledge of initial condition, Int. Game Theory Rev. 10 (1) (2008) 1–16.
[13] C. Castaing, M. Valadier, Convex Analysis and Measurable Multifunctions, Lecture Notes in Mathematics, Springer, 1977.
[14] G. Cavagnari, A. Marigonda, K.T. Nguyen, F.S. Priuli, Generalized control systems in the space of probability measures, Set-Valued Var. Anal. 26 (3) (2018) 663–691.
[15] R.M. Colombo, M. Lécureux-Mercier, An analytical framework to describe the interactions between individuals and a continuum, J. Nonlinear Sci. 22 (1) (2012) 39–61.


[16] R.M. Colombo, N. Pogodaev, On the control of moving sets: positive and negative confinement results, SIAM J. Control Optim. 51 (1) (2013) 380–401.
[17] R.M. Colombo, T. Lorenz, N.I. Pogodaev, On the modeling of moving populations through set evolution equations, Discrete Contin. Dyn. Syst. 35 (1) (2015) 73–98.
[18] K. Deimling, Multivalued Differential Equations, Walter de Gruyter, Berlin-New York, 1992.
[19] A.F. Filippov, Differential Equations With Discontinuous Right-Hand Sides, Math. Appl. Soviet Ser., vol. 18, Kluwer Academic Publishers, Dordrecht, 1988.
[20] C. Jimenez, A. Marigonda, M. Quincampoix, Optimal control of multiagent system on Wasserstein space, preprint, 2019.
[21] T. Lorenz, Mutational Analysis, Lecture Notes in Mathematics, vol. 1996, Springer-Verlag, Berlin, 2010.
[22] A. Marigonda, M. Quincampoix, Mayer control problem with probabilistic uncertainty on initial positions, J. Differ. Equ. 264 (5) (2018) 3212–3252.
[23] S.M. Srivastava, A Course on Borel Sets, Graduate Texts in Mathematics, Springer-Verlag, New York, 1998.
[24] M. Quincampoix, V. Veliov, Viability under uncertain initial state, Set-Valued Anal. 7 (1) (1999) 55–87.
[25] M. Quincampoix, V. Veliov, Optimal control in presence of unobservable uncertainties, Proc. Bulg. Acad. Sci. 55 (8) (2002) 11–16.
[26] M. Quincampoix, V. Veliov, Solution tubes for differential inclusions within a collection of sets, Control Cybern. 31 (3) (2002) 849–862.
[27] M. Quincampoix, V. Veliov, Optimal control of uncertain systems with incomplete information for the disturbance, SIAM J. Control Optim. 43 (4) (2005) 1373–1399.
[28] R.T. Rockafellar, R.J.-B. Wets, Variational Analysis, Grundlehren der Mathematischen Wissenschaften, vol. 317, Springer, 1998.