Automatica 96 (2018) 359–367


Brief paper

Robust control barrier functions for constrained stabilization of nonlinear systems✩ Mrdjan Jankovic Ford Research & Advanced Engineering, 2101 Village Rd, MD 2036 RIC, Dearborn, MI 48121-2053, USA

Article info

Article history: Received 25 June 2017; Received in revised form 5 March 2018; Accepted 9 June 2018

Keywords: Nonlinear control systems Control Lyapunov functions Control barrier functions Quadratic programming

Abstract: Quadratic Programming (QP) has been used to combine Control Lyapunov and Control Barrier Functions (CLF and CBF) to design controllers for nonlinear systems with constraints. It has been successfully applied to robotic and automotive systems. The approach could be considered an extension of the CLF-based point-wise minimum norm controller. In this paper we modify the original QP problem in a way that guarantees V̇ < 0 if the barrier constraint is inactive, as well as local asymptotic stability under the standard (minimal) assumptions on the CLF and CBF. We also remove the assumption that the CBF has uniform relative degree one. The two design parameters of the new QP setup allow us to control how aggressive the resulting control law is when trying to satisfy the two control objectives. The paper presents the controller in closed form, making it unnecessary to solve the QP problem online and facilitating the analysis. Next, we introduce the concept of Robust-CBF that, when combined with existing ISS-CLFs, produces controllers for constrained nonlinear systems with disturbances. In an example, a nonlinear system is used to illustrate the ease with which the proposed design method handles nonconvex constraints and disturbances and to illuminate some tradeoffs. © 2018 Elsevier Ltd. All rights reserved.

1. Introduction

Control Lyapunov functions (CLF), introduced by Artstein (1983), have been used for theoretical development and robust control design for nonlinear systems. If a CLF V is known, universal formulas, such as the Sontag formula (Sontag, 1989a) and the pointwise minimum norm (PMN) formula (Freeman & Kokotovic, 1996), explicitly compute stabilizing control laws. Both formulas provide infinite gain margin and, for multi-input systems, a stronger "half-space" robustness property. That is, any control input u that satisfies uᵀu_f ≥ u_fᵀu_f, with u_f the control produced by either formula, also achieves global asymptotic stability. This half-space robustness becomes important when a CLF is paired with a control barrier function (CBF) and a Quadratic Programming (QP) setup is trying to satisfy both. The idea behind barrier functions is to assure constraint adherence by making their time derivative negative (or non-positive) close to the boundary of an admissible set. Extending the concept to systems with inputs, CBFs, first introduced in Wieland and Allgower (2007), can help design controllers that guarantee adherence to constraints. If a CBF B is available, the universal formulas developed

✩ The material in this paper was presented at the 2017 American Control Conference, May 24–26, 2017. This paper was recommended for publication in revised form by Associate Editor Zongli Lin under the direction of Editor Daniel Liberzon. E-mail address: [email protected]. https://doi.org/10.1016/j.automatica.2018.07.004

for CLFs apply and achieve Ḃ ≤ 0 on the boundary of the admissible set C, rendering it forward invariant — that is, every trajectory that starts in C stays in C. The half-space robustness properties of the universal formulas carry over to the CBF case. Beyond this, however, the less intrusive the barrier constraint is, the weaker the robustness that can be claimed. In the original CBF paper (Wieland & Allgower, 2007), the authors proposed to combine a stabilizing control law, obtained independently of the constraint, with a CBF-derived one using a state-dependent weighted average. Similarly, Romdlony and Jayawardhana (2014) used a combined CLF–CBF function (CLBF) obtained as a weighted average of the two. Both papers apply the Sontag formula based on the CBF and the CLBF, respectively. An advantage of the weighted average approach is its simplicity, while a drawback is that, in some proximity to the admissible set boundary, the controller uses all inputs to make Ḃ < 0, ignoring the CLF part even when it is possible to satisfy both Ḃ < 0 and V̇ < 0 simultaneously. To overcome this problem, Ames, Grizzle, and Tabuada (2014), Ames, Xu, Grizzle, and Tabuada (2017), and Xu, Tabuada, Grizzle, and Ames (2015) combine a CLF and a CBF using a QP approach. In order to assure that the QP problem is feasible (i.e. a solution exists), a scalar relaxation variable δ has been added. While δ allows the QP to be solved, it does not actually impact the system — it is a fictitious control input. A high penalty on δ, denoted by m here, in the QP cost produces a control law that approximates the CLF-based PMN formula when the barrier constraint is inactive. However, for no



value of m can one claim stability or boundedness of the closed loop system. Moreover, for multi-input systems, large values of m may result in a large magnitude of the control input and of its derivative (or, rather, its Lipschitz constant), producing a jumpy controller (see Section 3). In this paper we further develop this QP method. First, we simply remove the assumption that the CBF B has uniform relative degree one. Second, the relaxation variable δ becomes a vector and is multiplied by (∂V/∂x) g(x) (the same vector that multiplies the control u in the expression for V̇) in the CLF constraint. This aligns the vectors u and δ and allows one to recover the PMN controller while maintaining feasibility and Lipschitz continuity of the solution. These modifications allow us to claim local asymptotic stability independently of the selection of the CLF V and the CBF B, as long as they satisfy the standard (minimal) assumptions including the small control property for the CLF V (Sontag, 1989a). All the results of the paper are stated for "reciprocal" barrier functions, but would remain unchanged if "zeroing" barrier functions were used (see Ames et al. (2017) and Xu et al. (2015) for definitions). Next, we analyze how the two parameters of the modified QP problem impact the controller performance in terms of gain margin and its dual objective in multi-input systems. It is shown that they can be used to trade off the ability to meet both control objectives against "twitchiness" of the controller, that is, how far the control vector is allowed to swing for small changes in the (∂V/∂x) g(x) or (∂B/∂x) g(x) vectors. Because we want the CBF to be minimally intrusive, there is a limited robustness guarantee that can be claimed beyond the half-space robustness mentioned above. Here – in an extension relative to the conference paper version (Jankovic, 2017) – we consider the effects of external disturbances and introduce the concept of robust Control Barrier Functions (R-CBF).
Combined with the ISS-CLF (see, for example, Krstic and Deng (1998)), it allows us to replicate the QP setup from the no-disturbance case and claim equivalent properties for the controller: feasibility, Lipschitz continuity, adherence to constraints, and, if the barrier constraint is inactive, input-to-state stability from the disturbance input. The definition of the R-CBF and the results claimed stay the same whether the disturbance is known, partially known, or unknown. However, as illustrated by an example, there is a potential difference in system performance between the cases. The differences from the conference version (Jankovic, 2017) include the addition of Section 4 with the definition of the R-CBF and the results for systems with disturbances, including the connection with ISS-CLF, as well as known, partially known, and unknown disturbances. The example is changed from a linear oscillator to a nonlinear oscillator and extended to include the case of a "nonmatching" external disturbance. The paper is structured as follows. In Section 2, the modified QP problem is presented and the main result proved. Section 3 discusses the impact of the two QP parameters on the controller properties. Section 4 introduces R-CBFs and proves the robust version of the results of Section 2. An example in Section 5 is used to illustrate the results by simulations.

Notation: For a differentiable function V(x) and a vector f(x), the notation Lf V(x) stands for (∂V/∂x) f(x). A function α : R+ → R+ belongs to class K if it is continuous, zero at zero, and strictly increasing. A function f(x) is said to be Lipschitz continuous at x0 if there exist L and ε > 0 (that may depend on x0) such that ∥f(x) − f(y)∥ ≤ L∥x − y∥ whenever ∥x − x0∥ ≤ ε and ∥y − x0∥ ≤ ε. For a set C ⊂ Rⁿ we denote by int C its interior and by ∂C its boundary.

2. QP design for asymptotic stability

Consider a nonlinear system affine in the control

ẋ = f(x) + g(x)u

(1)

with the state x ∈ Rⁿ and the control u ∈ Rᵖ. We assume that the vector functions f(x) and g(x) are Lipschitz continuous and f(0) = 0. For the system (1) we also assume that we know a CLF that governs closed loop system performance (stabilization or regulation) and a CBF that governs adherence to constraints — in this case, staying inside the interior of an admissible set C. The following definitions are standard.

Definition 1 (Control Lyapunov Function (CLF)). A positive definite, radially unbounded, differentiable function V(x) is a CLF if there exists a class K function α such that for all x ≠ 0

Lg V(x) = 0 ⇒ Lf V(x) + α(∥x∥) < 0

(2)

In this paper we assume that α is Lipschitz continuous.

Definition 2 (Control Barrier Function (CBF)). A function B(x) is a CBF with respect to the admissible set C if B is differentiable in int C, B(x) → ∞ as x → ∂C, and for x ∈ int C

Lg B(x) = 0 ⇒ Lf B(x) − αB(1/B(x)) < 0

(3)

where αB belongs to class K and is Lipschitz continuous. We assume that ∂V(x)/∂x and ∂B(x)/∂x are also Lipschitz continuous, so that Lf V, Lf B, etc. are too. Note that the strict inequality signs in (2) and (3) are essential to show that the universal (Sontag, PMN) formulas based on such a CLF or CBF are continuous, and this is independent of the non-strict constraint inequalities in the QP formulation (see below). The objective is to find a controller that keeps the admissible set C invariant, that is, B(x) > 0, while, if possible, achieving V̇(x) < −α(∥x∥). One effective method, developed in Ames et al. (2014, 2017) and Xu et al. (2015), is based on Quadratic Programming:

QP1 Problem — the baseline: Find the control u and the relaxation variable δ that satisfy

min (1/2)(∥u∥² + mδ²) subject to
Lf V(x) + α(∥x∥) + Lg V(x)u + δ ≤ 0   (4)
Lf B(x) − αB(1/B(x)) + Lg B(x)u ≤ 0

where m ≥ 1 is a design parameter intended to make δ in the solution as small as possible. This QP1 problem is feasible because the control u can take care of the second, barrier constraint in (4) – it was assumed that Lg B ≠ 0 in C – while the relaxation variable δ can be used to satisfy the CLF constraint if needed. Because the constraints on u, δ in (4) are linearly independent, the control is Lipschitz continuous. The drawback of this setup is that, even when the barrier constraint in (4) is inactive, the control law still does not guarantee V̇(x) < 0, because the variable δ that helps satisfy the constraint in (4) is fictitious. Namely, in this case, for Lf V(x) + α(∥x∥) > 0,

u = − [(Lf V(x) + α(∥x∥)) / (∥Lg V(x)∥² + 1/m)] Lg Vᵀ(x)
V̇(x) = (Lf V(x)/m − α(∥x∥)∥Lg V(x)∥²) / (∥Lg V(x)∥² + 1/m)   (5)

It is clear from the first expression in (5) that, if m is very large, the control law approximates the PMN controller of Freeman and Kokotovic (1996), which is asymptotically stabilizing. Still, for no value of m is the controller in (5) guaranteed to achieve global boundedness or local asymptotic stability. To avoid these issues, we modify the QP1 problem by changing the CLF constraint (modifications shown in Eq. (6)). In addition,
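The limitation is easy to probe numerically. The sketch below is our own (the helper name is assumed, not from the paper): it evaluates the closed-form QP1 control and the resulting V̇ in (5) and shows that, for any fixed penalty m, a small enough ∥Lg V∥ leaves V̇ positive because the fictitious δ, not u, absorbs the CLF constraint.

```python
import numpy as np

def qp1_control_and_vdot(LfV, alpha_x, b1, m):
    """Closed-form QP1 solution when only the CLF constraint is active
    (barrier inactive, a1 = LfV + alpha_x > 0); returns u and V-dot, Eq. (5).
    Hypothetical helper; the formulas themselves are Eq. (5) of the text."""
    n1 = float(b1 @ b1)                    # ||LgV||^2
    a1 = LfV + alpha_x
    u = -(a1 / (n1 + 1.0 / m)) * b1        # first expression in (5)
    vdot = LfV + float(b1 @ u)             # equals (LfV/m - alpha_x*n1)/(n1 + 1/m)
    return u, vdot

# Even with a very large penalty m, a small ||LgV|| gives V-dot > 0:
_, vd = qp1_control_and_vdot(LfV=1.0, alpha_x=0.1, b1=np.array([1e-3]), m=1e4)
print(vd)   # about 0.989 > 0: no guaranteed decrease of V
```

Increasing m only pushes the problem to smaller ∥Lg V∥; no fixed m makes V̇ negative everywhere, which is exactly the defect the γm modification below removes.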


we remove the Lg B(x) ≠ 0 assumption used in Ames et al. (2014, 2017) and Xu et al. (2015), and just keep the assumption that B is a CBF.

QP Problem — γm version: Find the control u and the relaxation variable δ that satisfy

min (1/2)(uᵀu + m δᵀδ) subject to
F1 := γf(Lf V(x) + α(∥x∥)) + Lg V(x)u + Lg V(x)δ ≤ 0   (6)
F2 := Lf B(x) − αB(1/B(x)) + Lg B(x)u ≤ 0

where the Lipschitz continuous function γf is defined by

γf(s) = γs if s ≥ 0;  γf(s) = s if s < 0   (7)

We need γ ≥ 1 to overcome the impact of δ when Lf V(x) + α(∥x∥) is positive, and to have no impact when it is negative. The Lagrangian L(x, u, δ, λ1, λ2) for the γm-QP problem is given by

L = (1/2)(∥u∥² + m∥δ∥²) + λ1[ā1 + b1(u + δ)] + λ2(a2 + b2 u)   (8)

where ā1(x) = γf(a1), a1 = Lf V(x) + α(∥x∥) and a2(x) = Lf B(x) − αB(1/B(x)) are scalars, b1(x) = Lg V(x) and b2(x) = Lg B(x) are 1 × p row vectors, and λi, i = 1, 2, are scalar Lagrange multipliers. According to the Karush–Kuhn–Tucker (KKT) conditions, the solution is optimal if (in this case, if and only if, because the cost is convex and the constraints are affine (Boyd & Vandenberghe, 2004)) it results in the gradient of the Lagrangian L with respect to u and δ equal to 0, λi ≥ 0, Fi ≤ 0 and λi Fi = 0, i = 1, 2:

∂L/∂u = uᵀ + λ1 b1 + λ2 b2 = 0
∂L/∂δ = m δᵀ + λ1 b1 = 0   (9)

λ1 F1 = λ1[ā1 + b1(u + δ)] = 0
λ2 F2 = λ2(a2 + b2 u) = 0

At this point we need to distinguish four cases, depending on which constraints are active.

Case A (F1 < 0 or x = 0, F2 < 0, λ1 = 0, λ2 = 0): In this case the solution to the first two equations in (9) is trivial:

u = 0, δ = 0   (10)

with λ1 = 0, λ2 = 0, F1 = ā1(x) < 0 and F2 = a2(x) < 0. Hence, the region in which this solution applies is given by

Ω1 = {x ∈ Rⁿ : a1(x) < 0, a2(x) < 0}   (11)

Note at this point that any x where b1(x) = b2(x) = 0 is in the interior of Ω1 by definition of the CLF and CBF.

Case B (F1 = 0, x ≠ 0, F2 < 0, λ1 ≥ 0, λ2 = 0): In this case, the barrier constraint is inactive. The solution to Eqs. (9) is

u = −(m/(m+1)) (ā1/∥b1∥²) b1ᵀ
δ = −(1/(m+1)) (ā1/∥b1∥²) b1ᵀ   (12)
λ1 = (m/(m+1)) ā1/∥b1∥²

Because in this case F2 < 0 while u is given by (12), we have that a2 < (m/(m+1)) (b2 b1ᵀ/∥b1∥²) ā1. Together with the condition λ1 ≥ 0 ⇒ ā1 ≥ 0, we obtain that the region in Rⁿ where this solution applies is given by

Ω2 = {x ∈ Rⁿ : a1 ≥ 0, a2 < (m/(m+1)) (b2 b1ᵀ/∥b1∥²) ā1}   (13)

The control law in (12) is well defined because b1 cannot be zero in any subset of Ω2 not containing the origin. The origin is handled separately with the small control property.

Case C (F1 < 0, F2 = 0, λ1 = 0, λ2 ≥ 0): In this case the CLF constraint is inactive and the solution is given by

u = −(a2/∥b2∥²) b2ᵀ, δ = 0, λ2 = a2/∥b2∥²   (14)

With F1 < 0 and λ2 ≥ 0, the set where the solution (14) applies is given by

Ω3 = {x ∈ Rⁿ : a2 ≥ 0, ā1 < (b1 b2ᵀ/∥b2∥²) a2}   (15)

Again, the control law is well defined because b2 is bounded away from 0 in any bounded subset of Ω3.

Case D (F1 = 0, F2 = 0, λ1 ≥ 0, λ2 ≥ 0): Here, both constraints are active. The solution to the resulting four equations is

u = [−∥b2∥² ā1 b1ᵀ + (b1 b2ᵀ)(a2 b1ᵀ + ā1 b2ᵀ) − (1 + 1/m)∥b1∥² a2 b2ᵀ] / [(1 + 1/m)∥b1∥²∥b2∥² − (b1 b2ᵀ)²]
δ = [−∥b2∥² ā1 + (b1 b2ᵀ) a2] b1ᵀ / [(m + 1)∥b1∥²∥b2∥² − m(b1 b2ᵀ)²]   (16)
λ1 = [∥b2∥² ā1 − (b1 b2ᵀ) a2] / [(1 + 1/m)∥b1∥²∥b2∥² − (b1 b2ᵀ)²]
λ2 = [−(b1 b2ᵀ) ā1 + (1 + 1/m)∥b1∥² a2] / [(1 + 1/m)∥b1∥²∥b2∥² − (b1 b2ᵀ)²]

Here, b1 and b2 must be different from 0 because, otherwise, one or both Fi constraints become inactive. Directly from the last two equations in (16), λ1 ≥ 0 and λ2 ≥ 0, and the solution (16) holds in

Ω4 = {x ∈ A : ā1 ≥ (b1 b2ᵀ/∥b2∥²) a2, a2 ≥ (m/(m+1)) (b1 b2ᵀ/∥b1∥²) ā1}   (17)

where A = Rⁿ − {x ∈ Rⁿ : a1(x) < 0, a2(x) < 0}.

In the derivation above, we have given special consideration to the case x = 0. F1(0, u, δ) = 0 (∂V/∂x(0) = 0 because x = 0 is a minimum of V, and α(0) = 0), but the constraint is not active because it is 0 independently of the choice of u and δ. Thus, we have chosen to add it to the F1-inactive Case A. This consideration is not needed for Cases C and D if one assumes the standard "small control property". In fact, it turns out that the small control property is instrumental in proving local asymptotic stability (see part 4 of Theorem 1). Without it, the barrier constraint might be active around the origin, potentially interfering with achieving V̇ < 0. Even for the unconstrained stabilization problem, the standard CLF conditions are not enough to guarantee that the universal (Sontag or PMN) control law is continuous at the origin. Thus, the small control property – the existence of a small input that makes V̇ < 0 around the origin – has been used to guarantee that the respective universal formula is continuous (cf. Freeman and Kokotovic (1996) and Sontag (1989a)). Here we present the version based on Krstic and Deng (1998).

Definition 3 (Small Control Property (SCP)). The CLF V is said to have the small control property if, in a neighborhood of the origin, there exists a continuous control law uc(x) such that uc(0) = 0 and

V̇ = Lf V + Lg V uc(x) ≤ −α(∥x∥)   (18)
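The four-case solution above condenses into a few lines of code. The sketch below is our own construction (the function name and the default parameters γ = 2.25, m = 8 — the values worked out in Section 3 — are assumptions, not part of the paper): it returns the γm-QP control by testing the KKT regions Ω1–Ω4 in order.

```python
import numpy as np

def gamma_m_qp_control(a1, a2, b1, b2, gamma=2.25, m=8.0):
    """Closed-form gamma-m QP controller, Eqs. (10)-(17) (a sketch).
    a1 = LfV + alpha(||x||), a2 = LfB - alphaB(1/B),
    b1 = LgV, b2 = LgB as 1-D arrays; returns the control u."""
    ab = gamma * a1 if a1 >= 0.0 else a1           # a1_bar = gamma_f(a1), Eq. (7)
    n1, n2, c = float(b1 @ b1), float(b2 @ b2), float(b1 @ b2)
    # Case A: both constraints inactive, Eqs. (10)-(11)
    if ab <= 0.0 and a2 < 0.0:
        return np.zeros_like(b1)
    # Case B: only the CLF constraint active, Eqs. (12)-(13)
    if ab >= 0.0 and n1 > 0.0:
        uB = -(m / (m + 1.0)) * (ab / n1) * b1
        if a2 + float(b2 @ uB) < 0.0:              # barrier stays inactive
            return uB
    # Case C: only the barrier constraint active, Eqs. (14)-(15)
    if a2 >= 0.0 and n2 > 0.0:
        uC = -(a2 / n2) * b2
        if ab + float(b1 @ uC) < 0.0:              # CLF constraint stays inactive
            return uC
    # Case D: both constraints active, Eq. (16), via the multipliers
    den = (1.0 + 1.0 / m) * n1 * n2 - c * c
    lam1 = (n2 * ab - c * a2) / den
    lam2 = ((1.0 + 1.0 / m) * n1 * a2 - c * ab) / den
    return -lam1 * b1 - lam2 * b2
```

With the defaults, γm/(m+1) = 2, so an active CLF constraint alone yields u = −2 a1 b1ᵀ/∥b1∥², i.e. a scaled PMN control, consistent with Case B.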



With the solution of the γm-QP problem presented by Eqs. (10) to (17), we can state the following result.

Theorem 1. If V(x) is a CLF and B(x) is a CBF for the system (1), then

(1) The γm-QP problem (6) is feasible and the resulting control law given by Eqs. (10) to (17) is Lipschitz continuous in every subset of int C not containing the origin.
(2) Ḃ(x) ≤ αB(1/B) for all x ∈ int C and the set int C is forward invariant.
(3) If the barrier constraint is inactive (F2 < 0) and if we select γm/(m+1) = 1, the control law is identical to the PMN formula of Freeman and Kokotovic (1996) and achieves V̇(x) ≤ −α(∥x∥).
(4) If 0 ∈ int C and the CLF V has the small control property, then the barrier constraint is inactive around the origin, i.e. 0 ∈ int(Ω1 ∪ Ω2), the control law is continuous at the origin, and the closed loop system is locally asymptotically stable.

Proof. Part 1: Because V is a CLF and B is a CBF, the γm-QP problem has a unique solution given by Eqs. (10) to (17). To prove Lipschitz continuity we use Theorem 3.1 of Hager (1979). In that paper, M denotes a matrix multiplying the controls (in our case u, δ) in the active constraints. Hence, it changes with the problem parameters. In our case,

M(x) = void for x ∈ Ω1;  [b1(x)  b1(x)] for x ∈ Ω2;  [b2(x)  0] for x ∈ Ω3;  [b1(x)  b1(x); b2(x)  0] for x ∈ Ω4

Select a convex and compact set D ⊂ int C − {0}. To apply Theorem 3.1 of Hager (1979) we need to satisfy three conditions, which are modified slightly to fit the problem and the notation of this paper. First, there is a unique solution in D (condition A1 of Hager (1979)). Because D is compact and b1, b2 are assumed Lipschitz continuous, there exists Γ > 0 such that ∥Mᵀ(x)∥ < Γ for all x ∈ D (condition A2). Next, ∥b1(x)∥ ≥ ε > 0 on the compact set Ω̄2 ∩ D (Ω̄2 denotes the closure of Ω2) because a1 ≥ 0 for x ∈ Ω2 and V is a CLF. Similarly, b2(x) is bounded away from 0 in Ω̄3 ∩ D, and both b1(x) and b2(x) are bounded away from 0 on the compact set Ω̄4 ∩ D. Hence, in all the cases Mᵀ is a full-rank "tall" matrix and we conclude that there exists β > 0 such that ∥Mᵀ(x)µ∥ ≥ β∥µ∥ for all µ of appropriate dimensions (condition A3). Thus, the conditions of Theorem 3.1 of Hager (1979) are satisfied and the control law derived from the γm-QP is Lipschitz continuous in D, provided that the QP problem data ā1, a2, b1, b2 are themselves Lipschitz continuous. This proves that the control law is Lipschitz continuous for every x ∈ int C except maybe at x = 0.

Part 2: Because the constraint F2 is satisfied (the γm-QP problem is feasible), Ḃ = Lf B + Lg B u ≤ αB(1/B). Following the proof of Theorem 1 in Ames et al. (2017), one obtains that the interior of the set C is forward invariant.

Part 3: If the barrier constraint is inactive, F2 < 0, the control law is obtained by combining Cases A and B. With ā1 = γf(Lf V + α(∥x∥)), γf defined by (7), and γm/(m+1) = 1, we have

u1 = −(a1/∥b1(x)∥²) b1(x)ᵀ if a1 ≥ 0, and u1 = 0 otherwise   (19)

where, recall, a1 = Lf V + α(∥x∥). This control law coincides with the point-wise minimum norm formula of Freeman and Kokotovic (1996) and achieves V̇(x) ≤ −α(∥x∥).

Part 4: To show local asymptotic stability to 0 ∈ int C, we note that a2(0) = −αB(1/B(0)) < 0 (f(0) = 0 ⇒ Lf B(0) = 0). In a neighborhood of the origin, for a1(x) ≤ 0, the PMN control law u1 = 0 and, for a1(x) > 0, by the small control property,

|a1(x)| ≤ ∥b1(x)∥ ∥uc(x)∥

with uc being the continuous control law from the definition of SCP. Thus, uc(x) → 0 implies that ∥u1∥ = |a1(x)|/∥b1(x)∥ → 0 as x → 0. Because a2 is negative in a neighborhood of the origin and the u needed to satisfy the F1 constraint converges to 0, we conclude that F2 < 0 around the origin. That is, the γm-QP controller around the origin is given by the point-wise minimum norm formula (19), which is continuous at the origin and achieves local asymptotic stability according to Freeman and Kokotovic (1996). △

Remark 1. In the single input case, the control laws u from (14) in Ω3 (Case C) and (16) in Ω4 (Case D) are identical and given by u2 = −(a2/∥b2∥²) b2ᵀ — the point-wise minimum norm (PMN) controller for the CBF when a2(x) ≥ 0.

Remark 2. If B(x) = 1/h(x) is the reciprocal barrier function with the boundary of the admissible set ∂C defined by h(x) = 0, it is easy to show that the γm-QP control law remains bounded on ∂C because the term 1/h²(x) in a2 and b2, arising from ∂B/∂x, cancels out. As a consequence, the γm-QP control law is not very aggressive close to the boundary of the admissible set. In contrast, Sontag's formula applied to a CBF would go to infinity on the boundary, unless Lg B = 0 on ∂C.

3. Selecting QP parameters γ and m

The modified QP problem considered in Section 2 introduces the parameter γ to achieve stability when the barrier constraint is inactive. Together with m, it can also be used to adjust the behavior of the resulting controller. In this section we offer guidance on how to select the two parameters to find a good trade off between performance (speed of response), robustness, and aggressiveness of the controller. The last property refers to a situation where selecting parameters such that the resulting control u meets both the stability and barrier constraint objectives leads to the magnitude of u going towards infinity as the angle between the vectors Lg V = b1 and Lg B = b2 approaches 180°. We first look at the robustness of the control law to uncertainties at the system input and consider Case B, where the barrier constraint is inactive. It has been established in the literature (e.g. Freeman and Kokotovic (1996)) that the PMN control law (19) is inverse optimal and has a [1, ∞) gain margin. More precisely, the closed loop system remains stable if a sector nonlinearity ϕi(·) that satisfies s² ≤ sϕi(s) < ∞, i = 1, ..., p, is inserted at each input (see Sepulchre, Jankovic, & Kokotovic, 1997). In the γm-QP method, a stronger gain/sector margin can be achieved by increasing γ. For γm/(m+1) = 2 the gain margin is [1/2, ∞) — the same as that of optimal regulators. In Case C, when the CLF constraint is inactive and the control law is given by the PMN formula for the barrier function (14), there is also a [1, ∞) gain (sector) margin guarantee for constraint adherence. Introducing γf2 in the F2 constraint and following the same approach as for the CLF constraint achieves a stronger margin. Note, however, that the goal to achieve V̇ strictly negative and Ḃ not too positive produces lesser robustness for the latter. We discuss the robustness to external disturbance in the next section.
In the case of multi-input systems, if we replace the PMN controller ui = −(ai/∥bi∥²) biᵀ, i = 1, 2, with any controller u that, at every time instant, satisfies

uiᵀ u ≥ uiᵀ ui   (20)



Fig. 1. The half-space constraints, the ideal PMN controls that satisfy each, and the minimal control umin that satisfies both.
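The geometry in Fig. 1 can be reproduced numerically. The short script below is our own illustration (the vectors and angle are arbitrary picks): it builds the two half-space constraints from u1 and u2 at an obtuse angle, computes the minimal-norm control satisfying both, and checks the ∥u1∥ + ∥u2∥ bounds that appear later in (24).

```python
import numpy as np

# Two half-space constraints u1^T u >= u1^T u1 and u2^T u >= u2^T u2 at an
# obtuse angle theta; both are active at the minimal-norm control u_min.
theta = np.radians(150.0)
u1 = np.array([1.0, 0.0])                               # ||u1|| = 1
u2 = 0.5 * np.array([np.cos(theta), np.sin(theta)])     # ||u2|| = 0.5
G = np.vstack([u1, u2])
rhs = np.array([u1 @ u1, u2 @ u2])
umin = G.T @ np.linalg.solve(G @ G.T, rhs)    # min-norm u with G u = rhs
n = np.linalg.norm(umin)
lo = (1.0 + 0.5) / (np.sqrt(2.0) * np.sin(theta))
hi = (1.0 + 0.5) / np.sin(theta)
print(lo <= n <= hi)   # -> True; n grows without bound as theta -> 180 deg
```

Re-running with theta closer to 180° shows ∥u_min∥ blowing up as 1/sin(θ), which is the trade-off the parameters γ and m are meant to manage.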

the corresponding constraint will be satisfied: V̇ ≤ −α(∥x∥) or Ḃ ≤ αB(1/B). We refer to this property as the half-space robustness because, for any fixed ui, (20) defines a half space for u in Rᵖ. Consider Case D, where both constraints are active. Because the γm-QP problem is feasible, b2 uD ≤ −a2 (here, uD is the control law given by (16)) and, thus, Ḃ ≤ αB(1/B). The situation is depicted in Fig. 1, showing the two PMN control inputs u1 and u2 and the control law umin — the minimal magnitude control that meets both control objectives. The control input uD must lie to the right of the shaded region. The question is under what conditions V̇ ≤ −α(∥x∥) will hold too. It is obvious from Fig. 1 that, as the angle θ between u1 and u2 (i.e. the angle between b1 and b2) approaches 180°, ∥umin∥ → ∞. However, when the two vectors are exactly opposite, one can show that uD = u2. So the selection of γ and m determines how far uD is allowed to go towards meeting both objectives before retreating back towards u2 as θ approaches 180°. Now we replace b1 b2ᵀ with ∥b1∥∥b2∥ cos(θ) and divide the numerator and the denominator of b1 uD by ∥b1∥²∥b2∥² to obtain

b1 uD = −(m sin²(θ)/(m sin²(θ) + 1)) ā1 − (cos(θ)/(m sin²(θ) + 1)) (∥b1∥/∥b2∥) a2   (21)
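Because (16) and (21) were reassembled from scattered text, a numerical cross-check is worthwhile. The script below (our own verification, with arbitrary seeded data) computes b1·uD from the Case D multipliers and compares it with the closed form in (21):

```python
import numpy as np

# Cross-check: b1 . uD from the Case D solution (16) must equal Eq. (21).
rng = np.random.default_rng(0)
m, a1bar, a2 = 8.0, 1.3, 0.7
b1, b2 = rng.normal(size=3), rng.normal(size=3)
n1, n2, c = b1 @ b1, b2 @ b2, b1 @ b2
den = (1.0 + 1.0 / m) * n1 * n2 - c**2
lam1 = (n2 * a1bar - c * a2) / den                     # multipliers of Eq. (16)
lam2 = ((1.0 + 1.0 / m) * n1 * a2 - c * a1bar) / den
uD = -lam1 * b1 - lam2 * b2

cos_t = c / np.sqrt(n1 * n2)                           # cos(theta)
sin2 = 1.0 - cos_t**2                                  # sin^2(theta)
rhs = (-(m * sin2 / (m * sin2 + 1.0)) * a1bar
       - (cos_t / (m * sin2 + 1.0)) * np.sqrt(n1 / n2) * a2)   # Eq. (21)
print(np.isclose(b1 @ uD, rhs))   # -> True for any non-parallel b1, b2
```

The identity holds for arbitrary ā1, a2 and any non-parallel b1, b2, which is a useful sanity check on the reconstructed algebra.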

In the case b1 b2ᵀ ≥ 0 (i.e. θ ≤ 90°) we use the second inequality that defines the set Ω4 (17), rewritten here as (∥b1∥/∥b2∥) a2 ≥ (m/(m+1)) cos(θ) ā1. We can make the right hand side of (21) larger if we make the second term smaller, and thus

b1 uD ≤ −(m sin²(θ)/(m sin²(θ) + 1)) ā1 − (m cos²(θ)/((m + 1)(m sin²(θ) + 1))) ā1 = −(m/(m+1)) ā1 ≤ −a1 = −(Lf V + α(∥x∥))   (22)

resulting in V̇ ≤ −α(∥x∥). The last inequality follows from the definition of ā1, provided that γm/(m+1) ≥ 1. It partially explains the selection of γf in (7): if the multiplier γ were applied when a1 < 0, the second inequality in (22) would not hold. For b1 b2ᵀ < 0 (i.e. 90° < θ ≤ 180°) the condition b1 uD ≤ −a1 (i.e. V̇ ≤ −α(∥x∥)) will be satisfied if the angle θ satisfies

(−ρ sin²(θ) + 1) (a1/∥b1∥) − cos(θ) (a2/∥b2∥) ≤ 0   (23)

where ρ = m(γ − 1) if a1 > 0. When a1 < 0, then γ = 1 ⇒ ρ = 0 and (23) is independent of γ and m. The values of θ that meet (23) as an equality for a1 > 0 and a2 > 0 are shown in Fig. 2. Note that the choice of γ and m in Theorem 1 leads to ρ = 1 and θmax = 90° — the least aggressive choice. By increasing ρ we can meet both control objectives for a wider range of angles between the two ideal PMN control laws. However, as is obvious from Fig. 1, this will result in ever larger values of the control law

Fig. 2. The maximal angle θmax for which V˙ ≤ −α for different values of ρ = m(γ − 1).

because, for 90° ≤ θ < 180°,

(∥u1∥ + ∥u2∥)/(√2 sin(θ)) ≤ ∥umin∥ ≤ (∥u1∥ + ∥u2∥)/sin(θ)   (24)

Thus, any control law that satisfies both control objectives must also satisfy the left hand side inequality in (24). Through the selection of the parameter ρ we can find the trade off between the size of the region where both objectives are satisfied and the amplitude of the resulting control vector uD. One can show that uD is limited by ∥uD∥² ≤ ρ² sin²(θ)∥u1∥² + ∥u2∥², which shows that uD remains bounded when θ = 180°. However, when θ ≠ 180°, the bound is of order ρ, which is proportional to m. For large values of m, a small change in x around a value where the angle θ equals 180° may result in a large change in uD. That is, even though the control law is Lipschitz continuous, the Lipschitz gain is of order m. In view of this, the parameter selection might proceed as follows:

(1) Based on (24), select the amplification factor 1/sin(θmax) we want ∥uD∥ to have relative to ∥u1∥ + ∥u2∥ and compute the corresponding θmax.
(2) The worst amplification is for ratios ∥u2∥/∥u1∥ close to 0, for which ρ ≈ 1/sin²(θmax).
(3) Independently, pick the desired gain margin [1/κ, ∞) with 1 < κ < ρ.
(4) Solve for m and γ using m = (ρ − κ)/(κ − 1), γ = ρ/m + 1.

For example, if we select the gain margin κ = 2 and ρ = 10, for which the amplification factor is less than 3 – that is, ∥uD∥ ≤ 3(∥u1∥ + ∥u2∥) – we obtain the QP parameters γ = 2.25, m = 8.

4. Robust CLF/CBF controller

In Remark 2 we have argued that the γm-QP controller applies a minimal control effort to prevent closed loop trajectories from crossing the boundary of the admissible set C. On the other hand, there is little margin of error to cushion against uncertainties and external disturbances. It was shown in Xu et al. (2015) that, with a zeroing barrier function, a small perturbation of the system dynamics results in a small violation of the constraint. The result was established by considering Lyapunov stability of the admissible set for trajectories outside of it. Here, we propose a different approach with a more structured and not necessarily



small disturbance/uncertainty. Thus, we consider a system with an external disturbance w(t) ∈ Rᵛ:

ẋ = f(x) + g(x)u + p(x)w   (25)

Note that this could also model parametric uncertainties in the system dynamics, such as f̃(x) = f(x) + p(x)θ. Here, instead of the disturbance w(t), we have a vector of unknown parameters θ with nominal value 0. For systems with disturbances, the concept of CLF has been extended to the robust CLF in Freeman and Kokotovic (1996) and the input-to-state stability (ISS)¹ CLF in Krstic and Deng (1998). We follow the latter (i.e. ISS-CLF) approach, but note that, if the barrier constraint is inactive, the resulting control law of the design proposed below is the PMN formula similar to the one in Section 4.2.2 of Freeman and Kokotovic (1996). That setup, however, includes an additional assumption that V(x) > cV, which is not easily incorporated in the QP problem posed below. The definition of ISS-CLF used here is obtained by combining Definition 2.3 and Lemma 2.4 from Krstic and Deng (1998).

Definition 4 (ISS-CLF). A positive definite, radially unbounded function V is an ISS-CLF if there exist class K∞ functions α, η such that, for ∥x∥ ≥ η(∥w∥),

Lg V(x) = 0 ⇒ Lf V(x) + ∥Lp V(x)∥ η⁻¹(∥x∥) < −α(∥x∥)   (26)

As before, we assume that all the functions that appear in these inequalities – i.e. Lf V, Lg V, Lp V, α and η⁻¹ – are Lipschitz continuous. To guarantee continuity at the origin of the control law derived from an ISS-CLF, we need to modify the small control property.

Definition 5 (Small Control Property for ISS-CLF). The ISS-CLF V is said to have the small control property if, in a neighborhood of the origin, there exists a continuous control law uc(x) such that uc(0) = 0 and

V̇ = Lf V + ∥Lp V(x)∥ η⁻¹(∥x∥) + Lg V uc(x) ≤ −α(∥x∥)   (27)

The existence of an ISS-CLF leads to an input-to-state stabilizing control law (in Krstic and Deng (1998), given by the Sontag-type formula) that forces the state to enter a set that depends on the magnitude of the disturbance. A larger disturbance may result in a larger residual set to which the state of the system converges. It is clear from this description that the concept of ISS-CLF is not readily transferable to CBFs. For the latter, we need a definition that guarantees that, under the worst case disturbance, trajectories will stay in the admissible set. In particular, unless the barrier has a higher relative degree relative to the disturbance, we have to assume that the disturbance is bounded by a known constant w̄:

∥w(t)∥ ≤ w̄, ∀t ≥ 0   (28)

Definition 6 (Robust-CBF). A function B(x) is an R-CBF with respect to the admissible set C if B(x) is positive for x ∈ int C, B(x) → ∞ as x → ∂C, and there exists a class K function αB such that

Lg B(x) = 0 ⇒ Lf B(x) + ∥Lp B(x)∥ w̄ < αB(1/B)    (29)

With this setup, we can repeat the process from Section 2 and consider the following constrained stabilization problem for the system (25) impacted by a disturbance.

Robust QP Problem: Find the control u and the relaxation variable δ that satisfy

min_{u,δ} (1/2)(u^T u + m δ^T δ)
subject to
F1 := ā1(x) + Lg V(x) u + Lg V(x) δ ≤ 0
F2 := a2(x) + Lg B(x) u ≤ 0    (30)

where now

ā1(x) = γf ( Lf V(x) + ∥Lp V(x)∥ η^−1(∥x∥) + α(∥x∥) )
a2(x) = Lf B(x) + ∥Lp B(x)∥ w̄ − αB(1/B(x))    (31)

and the Lipschitz continuous function γf is defined by (7). The only difference from the case with no disturbance is the definition of ā1 and a2, but, because they match the definitions of ISS-CLF and R-CBF, it should be no surprise that the results are similar to those of Theorem 1.

Theorem 2. If V(x) is an ISS-CLF with the small control property and B(x) is an R-CBF for the system (25), then

(1) The Robust QP problem (30) is feasible and the resulting control law, given by Eqs. (10) to (17) with ā1 and a2 redefined by (31), is Lipschitz continuous in every subset of int C not containing the origin.
(2) Ḃ(x) ≤ αB(1/B) for all x ∈ int C and the set int C is forward invariant.
(3) If the barrier constraint is inactive (F2 < 0) and if γm/(m + 1) ≥ 1, the Case B control law is the scaled PMN formula that achieves V̇(x) ≤ −α(∥x∥) − ∥Lp V(x)∥(η^−1(∥x∥) − ∥w∥). As a result, if the barrier constraint is inactive for t greater than some t*, the closed loop system is input-to-state stable with respect to the disturbance input w.

Proof. Part 1 is identical to the corresponding one in the proof of Theorem 1. In Part 2, because the barrier constraint is satisfied, Ḃ ≤ αB(1/B) + Lp B w − ∥Lp B∥ w̄ ≤ αB(1/B). This proves that int C is forward invariant. For Part 3, it is straightforward to show that the PMN formula achieves V̇(x) ≤ −α(∥x∥) − ∥Lp V(x)∥(η^−1(∥x∥) − ∥w∥) and, thus, V̇ ≤ −α(∥x∥) if ∥x∥ ≥ η(∥w∥). The ISS property of the closed loop system follows from Theorem 2.2 in Krstic and Deng (1998). □

Note that the presence of the ∥Lp B∥ w̄ term in the barrier constraint prevents us from claiming local asymptotic stability. That is, if the disturbance bound is large enough relative to αB(1/B(0)), the barrier constraint might be active around the origin, potentially preventing V̇ < 0. In this case, the controller for a multi-input system might not be continuous at the origin (it is always continuous in the single input case).
Of course, if the barrier constraint is inactive at x = 0 and the disturbance is 0, the controller is continuous at the origin and achieves local asymptotic stability.

Known Disturbance: Thus far, we have worked with an unknown disturbance. If the disturbance is known, say from a measurement or an unknown input observer, we could make the barrier constraint less conservative and increase the chances of achieving asymptotic stabilization. In such a case, we leave the definition of the R-CBF the same – we still have to deal with the worst case disturbance when Lg B = 0 – but use the actual measured/estimated disturbance ŵ(t) in the formulation of the Robust QP problem. That is, instead of a2 defined by (31), we use

â2(x, ŵ) = Lf B(x) + Lp B(x) ŵ − αB(1/B(x))    (32)


This reformulation guarantees that the R-QP control law achieves Ḃ ≤ αB(1/B) provided that w(t) = ŵ(t) for all t.

Partially Known Disturbance: If a bound on the estimation error is available, say ∥w(t) − ŵ(t)∥ ≤ w̄_Δ, ∀t ≥ 0, then instead of a2 in (30) we propose to use

a2^Δ(x, ŵ) = Lf B + Lp B ŵ + ∥Lp B∥ w̄_Δ − αB(1/B)    (33)

Here, when Lg B ≠ 0, the Robust-QP controller always satisfies the barrier constraint and Ḃ ≤ αB(1/B) + Lp B(x)(w − ŵ) − ∥Lp B∥ w̄_Δ ≤ αB(1/B). When Lg B = 0, however, we have to rely on the CBF condition (29) to guarantee feasibility. That is, when Lg B = 0, we need a2^Δ < 0, which can be accomplished by modifying the CBF condition (29) with w̄ + w̄_Δ replacing w̄. The results of Theorem 2 apply to the cases of known or partially known disturbances with no changes (except for the just mentioned change in (29)). Moreover, all the considerations of Section 3 apply to the case of Robust QP control. Note that this does not mean that the presence of the disturbance and the availability of a disturbance estimate have no impact on system performance, as illustrated by an example in the next section.
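The closed-form solution of the Robust QP is given by Eqs. (10)–(17), which are not reproduced in this excerpt; as a sanity check, the pointwise problem (30) can also be solved numerically at a given state. The sketch below does this for scalar u and δ with SciPy; the coefficient values in the usage are illustrative numbers, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def robust_qp(a1_bar, LgV, a2, LgB, m=8.0):
    """Numerically solve the Robust QP (30) at one state, for scalar
    u and relaxation delta:
        min 0.5*(u^2 + m*delta^2)
        s.t. a1_bar + LgV*(u + delta) <= 0
             a2 + LgB*u <= 0
    """
    cost = lambda z: 0.5*(z[0]**2 + m*z[1]**2)
    cons = ({'type': 'ineq', 'fun': lambda z: -(a1_bar + LgV*(z[0] + z[1]))},
            {'type': 'ineq', 'fun': lambda z: -(a2 + LgB*z[0])})
    res = minimize(cost, x0=np.zeros(2), method='SLSQP', constraints=cons)
    return res.x  # (u, delta)
```

When only the CLF constraint is active, the KKT conditions of this convex QP give u = −(m/(m+1)) ā1 Lg V / ∥Lg V∥^2 and δ = u/m, which the numeric solve reproduces (e.g., ā1 = 1, Lg V = 1, m = 8 yields u = −8/9, δ = −1/9).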

Fig. 3. System responses: open loop (black-dot), CLF closed loop (green-dash), and γm-QP (blue-solid), against the constraint with q = −1 (red).

5. An example

We illustrate the ease with which the control method presented above facilitates the design of a stabilizing controller while handling convex and non-convex constraints. First we consider the case without a disturbance:

ẋ1 = −d x1 − x2
ẋ2 = x1^3 + x2 u    (34)

where d > 0. The system (34) is nonlinear and not feedback linearizable, but it is open loop (u ≡ 0) asymptotically stable, as one can verify by considering the Lyapunov function V = (1/4)x1^4 + (1/2)x2^2. V is also a CLF because, with α(∥x∥) = (d/2)∥x∥^4, Lg V = x2^2 = 0 ⇒ a1 = Lf V + α = −(d/2)x1^4 < 0 for x ≠ 0. Finally, it satisfies the small control property with uc(x) = −d x2^2. We want to control the system to the origin while staying within the admissible set defined by C = {x ∈ R^2 : h(x) = q x2^2 − x1 + 1 > 0}, where q ≠ 0. To accomplish this, we define a barrier function

B(x) = 1/h(x) = 1/(q x2^2 − x1 + 1)    (35)

Note that B(x) > 0 for x ∈ C and B → ∞ as x → ∂C = {x : h(x) = 0}. The function B is a CBF with

αB(s) = d|s| because

Lf B = (−d x1 − x2 − 2q x1^3 x2) / (q x2^2 − x1 + 1)^2,
Lg B = −2q x2^2 / (q x2^2 − x1 + 1)^2,

and

Lg B = 0 ⇒ Lf B − αB(1/B) = −d x1/(x1 − 1)^2 − d|x1 − 1| < 0.

For the γm-QP controller given in Section 2 we use m = 8, γ = 2.25, as suggested in Section 3. This provides a gain margin of 2, while the other considerations of Section 3 do not apply because we have a single input. The resulting controller is given by

u =
  { 0,                                                         x ∈ Ω1
  { (−2d x1^4 − d ∥x∥^4) / (2 x2^2),                           x ∈ Ω2       (36)
  { (−d x1 − x2 − 2q x1^3 x2 − d |h(x)|^3) / (2q x2^2),        x ∈ Ω3 ∪ Ω4
Here, division by x2^2 is not a problem because, when x2 = 0, Lg V and Lg B are both zero, guaranteeing that we are in the set Ω1 and u = 0. In fact, all the conditions of Theorem 1 are satisfied: the control law (36) is Lipschitz continuous everywhere except maybe at the origin, where it is at least continuous. The set C is forward invariant and the origin is locally asymptotically stable.
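The closed loop behavior can be reproduced with a simple forward-Euler simulation. In the sketch below the exact region definitions Ω1–Ω4 (Eqs. (10)–(17), not in this excerpt) are replaced by a simplified stand-in selector: the CBF branch of (36) is applied when the barrier constraint would otherwise be violated, the CLF branch when only the CLF constraint would be, and u = 0 otherwise. The initial condition and the function name are our assumptions, chosen only for illustration.

```python
def simulate_clf_cbf(x0, d=0.6, q=-1.0, dt=1e-3, T=20.0):
    """Forward-Euler run of (34) under a simplified version of (36)."""
    x1, x2 = x0
    min_h = q*x2**2 - x1 + 1.0
    for _ in range(int(round(T/dt))):
        h = q*x2**2 - x1 + 1.0
        min_h = min(min_h, h)
        nx4 = (x1*x1 + x2*x2)**2                             # ||x||^4
        a1 = -d*x1**4 + 0.5*d*nx4                            # Lf V + alpha
        a2 = (-d*x1 - x2 - 2.0*q*x1**3*x2)/h**2 - d*abs(h)   # Lf B - alphaB(1/B)
        if abs(x2) < 1e-12 or (a1 <= 0.0 and a2 <= 0.0):
            u = 0.0                                          # Omega_1 row
        elif a2 > 0.0:                                       # barrier branch
            u = (-d*x1 - x2 - 2.0*q*x1**3*x2 - d*abs(h)**3)/(2.0*q*x2**2)
        else:                                                # CLF branch
            u = (-2.0*d*x1**4 - d*nx4)/(2.0*x2**2)
        dx1 = -d*x1 - x2
        dx2 = x1**3 + x2*u
        x1, x2 = x1 + dt*dx1, x2 + dt*dx2
    return (x1, x2), min_h
```

Running this from a state inside C, one can check that h stays positive along the whole trajectory (forward invariance of int C) and that V decreases over the run.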

We have run simulations for q = −1 and q = 1. The two cases are of interest because, if q = −1, Lg V and Lg B always have the same sign – an action that helps reduce V also helps avoid the constraint – while with q = 1 they have opposite signs. Fig. 3 shows the results with q = −1, d = 0.6. The goal is to stabilize the system to 0 while keeping it inside (i.e., to the left of) the constraint (red) curve h(x) = 0. The responses of the open loop system (black-dot curve) and the CLF-only closed loop system (green-dash curve) violate the constraint. The response of the CLF/CBF controller (36) adheres to the constraint while stabilizing the system (blue-solid curve).

For q = 1, the constraint becomes non-convex, as shown in Fig. 4, and Lg V and Lg B have opposite signs. The controller (36) still manages to avoid violating it while stabilizing the system to the origin — the blue solid curve marked CLF/CBF. For comparison, the open loop system (black-dot curve) and the CLF-only closed loop system (green-dash curve) trajectories both violate the constraint. From Figs. 3 and 4 one might notice that the CLF/CBF closed loop response stays very close to the CLF-only response until the trajectory comes fairly close to the constraint. In other words, over a large region of the state space the CLF-only and CLF/CBF controllers coincide.

The dash–dot line shown in Fig. 4 divides the plane into the regions where ẋ1 > 0 and ẋ1 < 0. The line is independent of the control input applied. Only in the latter region, that is, above the line, can a trajectory move to the left towards the origin. The controller successfully manages to guide the trajectory through the small gap between the ẋ1 = 0 line and the parabolic constraint. However, if we were to select d ≤ 0.5, the line would intersect the constraint h(x) = 0, and no trajectory could be driven to the origin if it started below the constraint with the initial value x1(0) larger than the x1 coordinate of the intersection point. This is because all points on the vertical half line starting at the intersection point have ẋ1 ≥ 0 – basically, the way back towards the origin is blocked by the constraint. This consideration applies to any control law, not just the one proposed in this paper. Turning to the time domain, the traces in Fig. 5 correspond to the CLF/CBF trajectory in the phase plane plot of Fig. 4. The top plot shows the traces of the states x1 and x2 versus time. The control u (blue-solid) and the active region – mode i for x ∈ Ωi – (green-dash) are shown in the bottom plot.
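The geometric argument about the gap reduces to one line of algebra: substituting the ẋ1 = 0 line, x2 = −d x1, into h(x) = q x2^2 − x1 + 1 = 0 with q = 1 gives d^2 x1^2 − x1 + 1 = 0, whose discriminant 1 − 4d^2 is negative exactly when d > 0.5, i.e., no intersection and the gap exists. A minimal check (the function name is ours):

```python
def gap_discriminant(d, q=1.0):
    """Discriminant of q*d^2*x1^2 - x1 + 1 = 0, obtained by substituting
    the xdot1 = 0 line x2 = -d*x1 into the constraint h(x) = 0."""
    return 1.0 - 4.0*q*d*d
```

For d = 0.6 the discriminant is negative (gap open), for d = 0.5 the line is tangent to the parabola, and for d < 0.5 the line crosses the constraint twice.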


Fig. 4. System responses: open loop (black-dot), CLF closed loop (green-dash), and γm-QP (blue-solid), against the constraint with q = 1 (red).

Next we consider the same system with an added external disturbance w:

ẋ1 = −d x1 − x2 + w
ẋ2 = x1^3 + x2 u    (37)

Our ISS-CLF candidate is the same function, V(x) = (1/4)x1^4 + (1/2)x2^2, that was used in the non-disturbance case. With Lf V = −d x1^4, Lp V = x1^3, and Lg V = x2^2, we can select α(∥x∥) = (d/4)∥x∥^4 and η^−1(∥x∥) = (d/2)∥x∥ to obtain

Lg V = x2^2 = 0 ⇒ Lf V + ∥Lp V∥ η^−1 < −α    (38)

This shows that V is indeed an ISS-CLF. It is not difficult to show that V has the ISS-CLF small control property. For example, uc = −2d x1^2 − 3d x2^2 achieves Lf V + ∥Lp V∥ η^−1 + Lg V uc ≤ −α(∥x∥).

We consider the same barrier function B = 1/(q x2^2 − x1 + 1). In this case, Lf B = −(1/h^2)(d x1 + x2 + 2q x2 x1^3), Lp B = 1/h^2, Lg B = −(1/h^2) 2q x2^2, and αB(s) = µ|s|, with µ an adjustable parameter. When x2 = 0 (that is, Lg B = 0), on the boundary of the admissible set as x1 → 1, αB(1/B) → 0 and h^2 (Lf B + ∥Lp B∥ w̄) → −d + w̄. Thus, one can find µ such that B is an R-CBF if w̄ < d. The closer w̄ is to d, the larger µ must be selected. If the disturbance is not known, the controller that handles the worst-case disturbance is given by

ū_w =
  { 0,                                                             x ∈ Ω1
  { (−2d x1^4 − (d/2)|x1|^3 ∥x∥ − (d/4)∥x∥^4) / (2 x2^2),          x ∈ Ω2      (39)
  { (−d x1 − x2 − 2q x1^3 x2 + w̄ − µ |h(x)|^3) / (2q x2^2),        x ∈ Ω34

where Ω34 = Ω3 ∪ Ω4. If the disturbance is known, ŵ(t) = w(t), the controller is given by

û_w =
  { the same as ū_w,                                               x ∈ Ω1 ∪ Ω2
  { (−d x1 − x2 − 2q x1^3 x2 + ŵ − µ |h(x)|^3) / (2q x2^2),        x ∈ Ω34      (40)
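As in the nominal case, the disturbed closed loop can be checked with a short forward-Euler simulation of (37) under a simplified version of the worst-case controller (39). The Ω-region tests are again a stand-in for Eqs. (10)–(17); the parameter values d = 0.6, w̄ = 0.5, µ = 10 match the simulation setup described below, while the square wave's phase, the initial condition, and the function name are our assumptions.

```python
import math

def simulate_robust(x0, d=0.6, q=1.0, wbar=0.5, mu=10.0, dt=1e-3, T=20.0):
    """Forward-Euler run of (37) under a simplified version of (39),
    with a square-wave disturbance (period 5 s, amplitude wbar)."""
    x1, x2 = x0
    min_h = q*x2**2 - x1 + 1.0
    for k in range(int(round(T/dt))):
        t = k*dt
        w = wbar if math.sin(2.0*math.pi*t/5.0) >= 0.0 else -wbar
        h = q*x2**2 - x1 + 1.0
        min_h = min(min_h, h)
        nx = math.hypot(x1, x2)
        # a1_bar ~ Lf V + ||Lp V||*eta^{-1} + alpha (un-scaled)
        a1 = -d*x1**4 + 0.5*d*abs(x1)**3*nx + 0.25*d*nx**4
        # a2 = Lf B + ||Lp B||*wbar - alphaB(1/B)
        a2 = (-d*x1 - x2 - 2.0*q*x1**3*x2 + wbar)/h**2 - mu*abs(h)
        if abs(x2) < 1e-12 or (a1 <= 0.0 and a2 <= 0.0):
            u = 0.0                                            # Omega_1 row
        elif a2 > 0.0:                                         # Omega_34 row of (39)
            u = (-d*x1 - x2 - 2.0*q*x1**3*x2 + wbar - mu*abs(h)**3)/(2.0*q*x2**2)
        else:                                                  # Omega_2 row of (39)
            u = (-2.0*d*x1**4 - 0.5*d*abs(x1)**3*nx - 0.25*d*nx**4)/(2.0*x2**2)
        dx1 = -d*x1 - x2 + w
        dx2 = x1**3 + x2*u
        x1, x2 = x1 + dt*dx1, x2 + dt*dx2
    return (x1, x2), min_h
```

Even under the worst-case (unknown) disturbance treatment, h remains positive along the trajectory, consistent with Theorem 2, part 2; convergence through the gap is slow, as discussed next.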

Finally, for the simulations we use d = 0.6, as in the non-disturbance case, w̄ = 0.5 (so that w̄ < d), and µ = 10, which guarantees that the R-CBF condition is satisfied. Simulation results are shown in Fig. 6, with w(t) a square wave with period 5 s and amplitude 0.5. If we apply the control law (36) that does not account for the disturbance (green-dash curve), the trajectory violates the constraint. With the worst case (unknown) disturbance, the control law (39) prevents violation of the constraint and eventually clears the gap between the constraint and the ẋ1 = 0 line (brown-dot curve). Finally, if the disturbance is known, the control law (40) easily squeezes through the gap (blue-solid curve) in less than 10 s, while the worst case controller takes about 80 s. The difference is that, when the barrier constraint is active, Ḃ = αB(1/B) in the known disturbance case and Ḃ = αB(1/B) + Lp B w − ∥Lp B∥ w̄ in the unknown disturbance case. It is clear that, in the latter case, trajectories spend more time further away from the constraint, which, in this example, means closer to or above the

Fig. 5. Top plot — state trajectory; bottom plot — the control action and the active region (mode) i corresponding to Ωi in (36).

ẋ1 = 0 line. The case with a partially known disturbance (not shown) falls somewhere in between the two, depending on the size of the uncertainty.

Fig. 6. State trajectories: the base γm-QP controller (green-dash), Robust QP controller with unknown disturbance w (brown-dots), and Robust QP controller with known w (blue-solid).

6. Conclusion

The control objectives of assuring a CLF decrease and a controlled increase of the CBF can be combined as constraints in a QP problem. In this paper we refine previous results by modifying the QP problem such that feasibility and Lipschitz continuity of the control law are retained while achieving a decreasing CLF and local asymptotic stability. The paper shows how to select the two design parameters to reach an appropriate trade-off between the competing objectives of decreasing the CLF and control ''twitchiness''. Next, for systems with external disturbances, the concept of robust CBF is defined and combined with ISS-CLFs in essentially the same QP formulation. The cases handled include unknown, partially known, and known disturbances. The difference is in the level of conservativeness the controller applies to the barrier constraint to keep the trajectories in the feasible set. The claimed results for systems with disturbance are very close to the base non-disturbance case. The results are illustrated on a nonlinear example with a parabolic constraint.

Acknowledgments

The author would like to thank Prof. Jessy Grizzle and Dr. Xiangru Xu for fruitful discussions.

References

Ames, A. D., Grizzle, J. W., & Tabuada, P. (2014). Control barrier functions based quadratic programming with application to adaptive cruise control. In IEEE CDC, Los Angeles, CA.
Ames, A. D., Xu, X., Grizzle, J. W., & Tabuada, P. (2017). Control barrier function based quadratic programs for safety critical systems. IEEE Transactions on Automatic Control, 62(8), 3861–3876.
Artstein, Z. (1983). Stabilization with relaxed control. Nonlinear Analysis, 7, 1163–1173.
Boyd, S. P., & Vandenberghe, L. (2004). Convex optimization. Cambridge University Press.
Freeman, R., & Kokotovic, P. (1996). Robust nonlinear control design. Boston: Birkhauser.
Hager, W. W. (1979). Lipschitz continuity for constraint processes. SIAM Journal on Control and Optimization, 17(3), 321–338.
Jankovic, M. (2017). Combining control Lyapunov and barrier functions for constrained stabilization of nonlinear systems. In American control conference, Seattle, WA.
Krstic, M., & Deng, H. (1998). Stabilization of nonlinear uncertain systems. London: Springer-Verlag.
Romdlony, M. Z., & Jayawardhana, B. (2014). Uniting control Lyapunov and control barrier functions. In IEEE CDC, Los Angeles, CA.
Sepulchre, R., Jankovic, M., & Kokotovic, P. (1997). Constructive nonlinear control. London: Springer-Verlag.
Sontag, E. D. (1989a). A universal construction of Artstein's theorem on nonlinear stabilization. Systems & Control Letters, 13, 117–123.
Sontag, E. D. (1989b). Smooth stabilization implies coprime factorization. IEEE Transactions on Automatic Control, 34, 435–443.
Wieland, P., & Allgower, F. (2007). Constructive safety using control barrier functions. In Proceedings of IFAC symposium on nonlinear control systems.
Xu, X., Tabuada, P., Grizzle, J. W., & Ames, A. D. (2015). Robustness of control barrier functions for safety critical control. In IFAC conference on analysis and design of hybrid systems.

Mrdjan Jankovic received a B.S. (1986) degree from the University of Belgrade, Yugoslavia, and M.S. (1989) and Ph.D. (1992) degrees from Washington University, St. Louis, MO. He held postdoctoral teaching and research positions with Washington University and UC Santa Barbara. He joined Ford Research Laboratory, Dearborn, MI, in 1995, where he is currently a Senior Technical Leader in the Controls Engineering Department. He has coauthored one book, four book chapters, and more than 100 external technical publications. He is a co-inventor on more than 70 U.S. patents, with many implemented in Ford products worldwide. His research interests include automotive control, nonlinear control, and time-delay systems. He received two Ford Research Technical Achievement Awards, the IEEE Control System Technology Award, the AACC Control Engineering Practice Award, and best paper awards from IEEE, AVEC, and SAE for three automotive control papers. He served as an Associate Editor of the IEEE Transactions on Control Systems Technology and as the chair of several IEEE and SAE committees. He is a Fellow of the IEEE.