Discrete-time certainty equivalence output feedback: allowing discontinuous control laws including those from model predictive control

Automatica 41 (2005) 617–628, www.elsevier.com/locate/automatica

Michael J. Messina, Sezai E. Tuna, Andrew R. Teel (Center for Control Engineering and Computation, Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106-9560, USA). Received 30 July 2004; received in revised form 5 October 2004; accepted 8 November 2004. Available online 20 January 2005.

Abstract: We present certainty equivalence output feedback results for discrete-time nonlinear systems that employ possibly discontinuous control laws in the feedback loop. Coupling assumptions of nominal robustness with uniform observability or detectability assumptions, we assert nominally robust stability for output feedback closed loops. We further show that model predictive control (MPC) can be used to generate a feedback control law that is robustly globally asymptotically stabilizing when used in a certainty equivalence output feedback closed loop. Allowing for discontinuous feedback control laws is important for systems employing MPC, since the method can, and sometimes necessarily does, result in discontinuous control laws. © 2004 Elsevier Ltd. All rights reserved.

Keywords: Discrete-time nonlinear systems; Output feedback; Discontinuous feedback; Model predictive control

1. Introduction

1.1. Background

Certainty equivalence output feedback control is a generalization of feedback control where the state feedback control law takes as its argument an estimate of the state, which is generated by an observer using the output and input of the

This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor J.W. Grizzle under the direction of Editor H.K. Khalil. This work was supported in part by NIH under grant R21 AI057071, AFOSR under grant F4962003-1-0203, and NSF under grant ECS-0324679.
∗ Corresponding author. Tel.: +1 805 893 3616; fax: +1 805 893 3262. E-mail addresses: [email protected] (M.J. Messina), [email protected] (S.E. Tuna), [email protected] (A.R. Teel).
¹ Part of this research was carried out while the last author was visiting the Mittag-Leffler Institute, Sweden in March 2003, during its emphasis on Mathematical Control and Systems Theory.

0005-1098/$ - see front matter © 2004 Elsevier Ltd. All rights reserved. doi:10.1016/j.automatica.2004.11.015

system, in order to generate the control input for the closed-loop system. The benefit of this approach is that the feedback control law can be designed using the full state; however, an observer must also be designed. In linear systems, the well-known separation principle states that the design of the observer is wholly separate from the design of the controller. In nonlinear systems, no such principle exists in general. However, Tsinias (1991) shows for continuous-time nonlinear systems that if the system is locally detectable (see Definition 7) and a continuous locally stabilizing feedback control law can be found, it can be used with an appropriate observer in a certainty equivalence output feedback configuration to achieve local stabilization. Kazakos and Tsinias (1993, 1994) extend this result to discrete-time systems and obtain corresponding global results by assuming an ISS-like property of the closed loop and requiring the feedback to be continuous on Rn\{0}. The requirement of continuity of the feedback control law in order to employ output feedback is limiting for three reasons. First, some systems cannot be stabilized with a


continuous feedback control law. Second, even if a continuous stabilizing feedback control law is known to exist, it may be difficult to find, particularly in the presence of state and input constraints. Finally, for certain control design methods, namely model predictive control (MPC, see Section 5), with which we will be particularly interested here, it may be difficult to ensure that the resulting control law is continuous. For these reasons, in this paper we give output feedback results that do not require the stabilizing control law to be continuous. The lack of a continuous control law requires us to instead assume that the stability of the full-state feedback closed loop is nominally robust (see Section 2). This property is guaranteed if the feedback control law is required to be continuous on Rn\{0}, but not if it is allowed to be discontinuous elsewhere. We make this robustness assumption and assume in general that our observers are sufficiently fast (Assumptions 1 and 2). However, we make an assumption analogous to the ISS-like condition of Kazakos and Tsinias (1994) in order to give an extension of the corresponding result (see Assumption 4 and Proposition 3). Generating a robustly stabilizing controller for general nonlinear systems is difficult, but Grimm, Messina, Teel, and Tuna (2003b) give such robust stability results employing MPC; here, we consider a special case of these results in order to show that they are applicable to the output feedback problem. MPC output feedback results include those of Scokaert, Rawlings, and Meadows (1997), who show that discrete-time MPC is stabilizing in the presence of (decaying) perturbations, such as those coming from an observer, when the control law is Lipschitz.
Magni, De Nicolao, and Scattolini (1998) give more explicit results for the stability of the interconnection of a weak detector with an MPC-stabilized closed-loop system when the feedback and observer are Lipschitz.2 Here again, we wish to present results that make no continuity assumption on the feedback control law. The purpose of this paper is therefore twofold. First, we wish to discuss the output feedback problem for cases involving discontinuous feedback control laws for discretetime nonlinear systems. Second, we wish to show the applicability of model predictive control to this problem. For ease of reading, the proofs of the main results are given in the appendices.

1.2. Problem specification and notation

We consider the problem of using dynamic output feedback to stabilize the origin of a nonlinear control system

x+ = f(x, u),  y = h(x, u),  (1)

where x ∈ Rn is the state, u ∈ Rm is the control input, and y ∈ Rp is the output. The functions f and h are continuous. We will also be interested in looking at perturbations to (1) of the form

x+ = f(x, u) + d,  y = h(x, u) + e,

where d ∈ Rn and e ∈ Rp are, respectively, additive disturbance and measurement error (we will sometimes call them outer and inner perturbations, respectively), both of which we assume to be sufficiently small.

We use class-K, class-K∞, and class-KL functions to characterize aspects of asymptotic stability. A function γ: R≥0 → R≥0 is said to belong to class-K (γ ∈ K) if it is continuous, zero at zero, and strictly increasing, and to belong to class-K∞ (γ ∈ K∞) if it belongs to class-K and is unbounded. A function β: R≥0 × R≥0 → R≥0 is said to belong to class-KL (β ∈ KL) if it is nondecreasing in its first argument, nonincreasing in its second argument, and lim_{s→0} β(s, t) = lim_{t→∞} β(s, t) = 0.

For a given compact set A ⊂ Rn and vector x ∈ Rn, we define |x|_A := inf_{z∈A} |x − z|. We note that when A = {0}, we have |x|_A = |x|, so the reader who is distracted by the use of a set A may substitute |x|_A by |x| in all of the subsequent expressions. In everything that follows we consider global asymptotic stability of A, but results for local asymptotic stability are similar.

We use the notation w_j^i := [w(i)^T, …, w(i + j)^T]^T for j ≥ 0, and use the definitions w_j := w_j^0, w := w_∞^0, and ‖w‖ := sup_k |w(k)|. For the system x+ = f(x, κ(x + e)) + d we denote the solution k steps into the future (k ∈ Z≥0) under the influence of inner and outer perturbation sequences e and d by φ(k, x, e, d). Notice that φ(0, x, e, d) = x. The notation φ(k, x, d) will be for the case when e = 0. We sometimes use the notation x(k) for a solution when the context regarding the input is clear and we want to distinguish subsystems. The initial condition is x(0) in this case. Unless otherwise noted, all constants (for example, δ, ε) will be assumed to be real numbers. Finally, we use B to denote the closed unit ball, that is, B = {s ∈ Rn : |s| ≤ 1}.

² See corresponding continuous-time results given by Michalska and Mayne (1995) and more recently by Adetola and Guay (2003), Findeisen, Imsland, Allgöwer, and Foss (2003a), and De Oliveira Kothare and Morari (2000); Findeisen, Imsland, Allgöwer, and Foss (2003b) give an overview.

2. Preliminaries: nominal robustness

In this section we consider the closed-loop system formed with a feedback control law κ: Rn → Rm,

x+ = f(x, κ(x)),  (2)

for which a compact set A is asymptotically stable. We begin with a standard definition.

Definition 0. For system (2), the set A is globally asymptotically stable (GAS) with continuous Lyapunov function if there exist a continuous function V: Rn → R≥0, α1, α2 ∈ K∞, and a continuous, positive definite function ρ such that α1(|x|_A) ≤ V(x) ≤ α2(|x|_A) and V(f(x, κ(x))) ≤ V(x) − ρ(|x|_A).
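As a purely illustrative aside, the decrease condition of Definition 0 can be checked numerically on a toy closed loop; the plant f(x, u) = 0.5x + u, the feedback κ ≡ 0, and the choices V(x) = x², α1(s) = α2(s) = s², and ρ(s) = 0.75s² are all assumptions made for this sketch, not data from the paper.

```python
# Toy check of Definition 0 (illustrative assumptions, not the paper's system):
# plant x+ = f(x, u) = 0.5*x + u, feedback kappa(x) = 0, A = {0},
# V(x) = x^2 with alpha1(s) = alpha2(s) = s^2 and rho(s) = 0.75*s^2.

def f(x, u):
    return 0.5 * x + u

def kappa(x):
    return 0.0

def V(x):
    return x * x

def rho(s):
    return 0.75 * s * s

# V(f(x, kappa(x))) = 0.25*x^2 = V(x) - 0.75*x^2, so the decrease
# condition V(f(x, kappa(x))) <= V(x) - rho(|x|_A) holds with equality.
grid = [i / 100.0 for i in range(-1000, 1001)]
ok = all(V(f(x, kappa(x))) <= V(x) - rho(abs(x)) + 1e-9 for x in grid)
print(ok)
```

Here the decrease holds with equality, so the check is tight up to the numerical slack.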


Note that global asymptotic stability and the existence of a continuous Lyapunov function are not necessarily equivalent when κ(·) is discontinuous; Kellett and Teel (2003) make this clear and illuminate the intimate relationship between nominal robustness and the existence of a continuous (in fact, smooth) Lyapunov function.

There are various ways to characterize nominal robustness. Roughly speaking, the Lyapunov function should decrease along solutions in the presence of sufficiently small measurement errors and additive disturbances; that is, f(x, κ(x)) is replaced with f(x, κ(x + e)) + d, with |e| and |d| small. It is reasonable to expect this when a continuous Lyapunov function exists and κ(·) is locally bounded, since we can use continuity of f(·, u), V(·), and ρ(|·|_A) to make the following approximations:

V(f(x, κ(x + e)) + d) − V(x) ≈ V(f(x + e, κ(x + e)) + d) − V(x) ≈ V(f(x + e, κ(x + e))) − V(x + e) ≤ −ρ(|x + e|_A) ≈ −ρ(|x|_A).

We now give several definitions in order to make the notion of nominal robustness more precise. We note below that when f is continuous and κ is locally bounded, all of these properties are equivalent to one another and to the existence of a continuous Lyapunov function.

Definition 1. For system (2), the set A is said to be robustly globally asymptotically stable (RGAS) if there exist β ∈ KL and a continuous positive definite function ρ such that for the system x+ = f(x, κ(x + e)) + d under the constraint max{|e(k)|, |d(k)|} ≤ ρ(|φ(k, x, e, d)|_A) for all k ≥ 0, the solutions satisfy, for all k ≥ 0, |φ(k, x, e, d)|_A ≤ β(|x|_A, k).

Definition 2. For system (2), the set A is said to be semiglobally practically asymptotically stable (SPAS) in the worst case size of inner and outer perturbations if there exists β ∈ KL and for each pair Δ, δ > 0 there exists ε > 0 such that for the system x+ = f(x, κ(x + e)) + d under the constraints |x|_A ≤ Δ and max{‖e‖, ‖d‖} ≤ ε, the solutions satisfy, for all k ≥ 0, |φ(k, x, e, d)|_A ≤ max{β(|x|_A, k), δ}.

Definition 3. For system (2), the set A is said to be semiglobally practically asymptotically stable (SPAS) in the worst case size of outer perturbations if there exists γ ∈ K∞ and for each pair Δ, δ > 0 there exist ε > 0 and integer k̄ > 0 such that for the system x+ = f(x, κ(x)) + d under the constraints |x|_A ≤ Δ and ‖d‖ ≤ ε, the solutions satisfy, for all k ≥ 0, |φ(k, x, d)|_A ≤ max{γ(|x|_A), δ} and, for all k ≥ k̄, |φ(k, x, d)|_A ≤ δ.


Definition 4. For system (2), the set A is said to be attenuated input-to-state stable (AISS) if there exist a continuous nonincreasing function H: R≥0 → R>0 that is identically equal to one on a neighborhood of the origin, β ∈ KL, and γ ∈ K such that for the system x+ = f(x, κ(x + H(|x|_A)e)) + H(|x|_A)d, the solutions satisfy, for all k ≥ 0, |φ(k, x, e, d)|_A ≤ max{β(|x|_A, k), γ(‖e‖), γ(‖d‖)}.

Definition 5. For system (2), the set A is said to be integral input-to-state stable (IISS) if there exist β ∈ KL and γe, γd ∈ K such that for the system x+ = f(x, κ(x + e)) + d, the solutions satisfy, for all k ≥ 0,

|φ(k, x, e, d)|_A ≤ max{ β(|x|_A, k), Σ_{i=0}^{k−1} γe(|e(i)|), Σ_{i=0}^{k−1} γd(|d(i)|) }.
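To make the role of the perturbation gains concrete, the sketch below numerically checks an AISS-type bound in the spirit of Definition 4 on an assumed toy loop x+ = f(x, κ(x + e)) + d with f(x, u) = 0.5x + u and κ ≡ 0, so that H ≡ 1, the inner perturbation drops out, and x+ = 0.5x + d. The candidate gains β(s, k) = 2s·0.5^k and γ(s) = 4s are assumptions verified only on random samples, not the paper's construction.

```python
import random

# Illustrative check of an AISS-type bound (Definition 4) for the assumed
# loop x+ = 0.5*x + d (kappa = 0, H = 1). Candidate gains:
# beta(s, k) = 2*s*0.5**k and gamma(s) = 4*s.

def phi(k, x, d):
    # solution k steps ahead; since kappa = 0, only d enters
    for i in range(k):
        x = 0.5 * x + d[i]
    return x

random.seed(0)
ok = True
for trial in range(100):
    x0 = random.uniform(-5.0, 5.0)
    d = [random.uniform(-0.1, 0.1) for _ in range(30)]
    nd = max(abs(v) for v in d)                 # ||d||
    for k in range(31):
        bound = max(2.0 * abs(x0) * 0.5 ** k, 4.0 * nd)
        ok = ok and abs(phi(k, x0, d)) <= bound + 1e-12
print(ok)
```

The bound holds here because |φ(k, x, d)| ≤ 0.5^k|x(0)| + 2‖d‖, and each term is dominated by the corresponding branch of the max.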

Proposition 1. Suppose f(·, ·) is continuous and κ(·) is locally bounded. Then the following statements are equivalent: for system (2), the compact set A is

S0. globally asymptotically stable with a continuous Lyapunov function;
S1. robustly globally asymptotically stable;
S2. semiglobally practically asymptotically stable in the worst case size of inner and outer perturbations;
S3. semiglobally practically asymptotically stable in the worst case size of outer perturbations;
S4. attenuated input-to-state stable;
S5. integral input-to-state stable.

3. Main results: certainty equivalence output feedback

In this section, we make assumptions on closed-loop systems formed by implementing a certainty equivalence control law that uses an observer output in place of the state and give robust stability results. We first make assumptions on the interconnection between a full-state feedback closed-loop system (for some κ) and an observer. Before we get to these assumptions, we first give the form of the dynamic output feedback structure. Our dynamic output feedback will have the general form

ξ+ = g(ξ, u, y),  x̂ = Υ(ξ),  u = κ(x̂),  (3)

where κ(·) is a certainty equivalence control law coming from a state feedback algorithm. Using u, we can write the observer in the general form ξ+ = G(ξ, y); x̂ = Υ(ξ) and the interconnection as

x+ = f(x, κ(Υ(ξ))),  ξ+ = G(ξ, h(x, κ(Υ(ξ))))  (4)

and, with perturbations, as

x+ = f(x, κ(Υ(ξ))) + d1,  ξ+ = G(ξ, h(x, κ(Υ(ξ))) + e) + d2.  (5)
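The interconnection (5) is straightforward to step in code. The sketch below is illustrative only: the scalar plant, the one-step-delay observer playing the roles of G and Υ, and the feedback gain in κ are assumptions made here, not constructions from the paper.

```python
# Sketch of the perturbed certainty equivalence loop (5):
# x+ = f(x, kappa(Upsilon(xi))) + d1, xi+ = G(xi, h(x, u) + e) + d2.
# All concrete functions below are illustrative assumptions.

def step(x, xi, f, h, G, Upsilon, kappa, d1=0.0, d2=0.0, e=0.0):
    u = kappa(Upsilon(xi))             # certainty equivalence control
    x_next = f(x, u) + d1              # plant with outer perturbation d1
    xi_next = G(xi, h(x, u) + e) + d2  # observer driven by the noisy output
    return x_next, xi_next

f = lambda x, u: 0.5 * x + u           # assumed scalar plant
h = lambda x, u: x                     # full-state output, for illustration
G = lambda xi, y: y                    # one-step-delay "observer": xi+ = y
Upsilon = lambda xi: xi
kappa = lambda xhat: -0.25 * xhat      # assumed stabilizing feedback

x, xi = 4.0, 0.0
for _ in range(50):
    x, xi = step(x, xi, f, h, G, Upsilon, kappa)
print(abs(x) < 1e-3)
```

With these choices the composite dynamics are linear with spectral radius 0.5, so the unperturbed loop contracts geometrically toward the origin.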


We use X := [x^T ξ^T]^T for the composite state and d = [d1^T d2^T]^T for the composite additive disturbance. For the feedback interconnection, we make one of the following two assumptions. Even though these are closed-loop assumptions, they usually can be verified from the properties of the observer; see Section 4.

Assumption 1. For the closed-loop system (5) there exist an integer r ≥ 1, γ, σ ∈ K∞, β ∈ KL, and for each pair Δ, δ > 0 there exists ε > 0 such that if ‖X‖ ≤ Δ and max{‖e‖, ‖d‖} ≤ ε, then, for all k ≥ 0,

|x(k) − x̂(k)| ≤ max{γ(|X(0)|)·max{0, r − k}, δ},
|ξ(k)| ≤ max{β(|X(0)|, k), σ(max{‖x‖, ‖x − x̂‖}), δ}.

Assumption 2. For the closed-loop system (5) there exist σ ∈ K∞, β1, β2 ∈ KL, and for each pair Δ, δ > 0 there exists ε > 0 such that if ‖X‖ ≤ Δ and max{‖e‖, ‖d‖} ≤ ε, then, for all k ≥ 0,

|x(k) − x̂(k)| ≤ max{β1(|X(0)|, k), δ},
|ξ(k)| ≤ max{β2(|X(0)|, k), σ(max{‖x‖, ‖x − x̂‖}), δ}.

In Section 4, we construct observers for control systems that are uniformly completely observable (see Section 4.1) and that are detectable (see Section 4.2); these will correspond to Assumptions 1 and 2, respectively.

Remark 1. The observers in the case of detectability have a slightly stronger property, that ξ(k) = x̂(k) and |X(0)| can be replaced by |x(0) − x̂(0)|, but these properties are not needed for the statements below.

In order to state the robust stability results, we must make the following assumption on the control law and the corresponding closed-loop system. Here we do not presuppose the feedback law κ comes from an MPC formulation; in Section 5, we discuss this case.

Assumption 3. The function κ: Rn → Rm is locally bounded and the origin of x+ = f(x, κ(x)) is RGAS.

Proposition 2. Under Assumptions 2 and 3, if the function γe from the equivalence between RGAS and IISS (for some such choice) and the function β1 coming from Assumption 2 are such that γe(β1(s, ·)) is summable for all s, then the origin of the closed-loop system (4) is RGAS.

The next result follows directly from Proposition 2 by observing that, with the definition β1(s, k) := γ(s)·max{0, r − k}, the function γe(β1(s, ·)) is summable for all s since β1(s, ·) has finite support.

Corollary 1. Under Assumptions 1 and 3, the origin of the closed-loop system (4) is RGAS.

Finally, we make the following assumption in order to assert a robust stability result that is an extension of results given by Kazakos and Tsinias (1994).

Assumption 4. For the system x+ = f(x, κ(x + e)), there exist γ ∈ K∞ and δ, c > 0 such that if ‖e‖ ≤ δ, then |φ(k, x, e)| ≤ max{γ(|x|), c}.

Remark 2. This assumption is implied by the condition, called ISSC, used to assert Theorem 2.3 of Kazakos and Tsinias (1994), which is similar to the next proposition, but it assumes that κ(·) is continuous on Rn\{0}.

Proposition 3. Under Assumptions 2–4, the origin of the closed-loop system (4) is RGAS.

4. Observers

In this section, we discuss dynamic output feedback systems satisfying certain detectability or observability properties that, when used with the given observers, satisfy either Assumption 1 or 2.

4.1. Uniform complete observability

For system (1) we use φ(k, x, u_{k−1}^j) (a slight deviation from above) to denote the solution at time k starting at x produced by the input sequence {u(j), …, u(j + k − 1)}. Also, we define the vector

h_s^j(x, u_s^j) := [h(x, u(j))^T, h(φ(1, x, u_0^j), u(j + 1))^T, …, h(φ(s, x, u_{s−1}^j), u(j + s))^T]^T.

The following definition is similar to concepts explored by Grizzle and Moraal (1990) and Nijmeijer (1982) (see Gauthier & Bornard (1981) and Sontag (1984) for related concepts in continuous time):

Definition 6. System (1) is said to be uniformly completely observable (UCO) if there exist an integer r ≥ 1 and a continuous mapping Ψ: R^{pr} × R^{mr} → Rn such that x = Ψ(h_{r−1}(x, u_{r−1}), u_{r−1}) for all x and all input sequences u_{r−1}.

The existence of a mapping Ψ satisfying Definition 6 has been discussed by Glad (1983) (in the context of continuous-time nonlinear systems with discrete measurements). Now, suppose a system in the form of (1) is UCO. Consider the


system

θ1+ = θ2, …, θ_{r−1}+ = θ_r,  θ_r+ = y,
η1+ = η2, …, η_{r−1}+ = η_r,  η_r+ = u  (6)

with dynamic output feedback controller

x̂ = φ(r, Ψ(θ, η), η);  u = κ(x̂).  (7)

We then have the following result.

Proposition 4. Suppose system (1) is UCO with integer r. Define ξ := [ξ1^T ξ2^T]^T = [θ^T η^T]^T,

g(ξ, u, y) := [ [0_{(rp−p)×p}  I_{rp−p}; 0_{p×rp}] ξ1 + [0_{(rp−p)×1}; y];  [0_{(rm−m)×m}  I_{rm−m}; 0_{m×rm}] ξ2 + [0_{(rm−m)×1}; u] ],

Υ(ξ) := φ(r, Ψ(ξ1, ξ2), ξ2), and G(ξ, y) := g(ξ, κ(Υ(ξ)), y), where I_i is an i × i identity matrix and 0_{i×j} is an i × j zero matrix. If h(0, 0) = Ψ(0, 0) = 0, κ is locally bounded, and lim sup_{x→0} |κ(x)| = 0, then interconnection (5) of the controller (7) and system (1) satisfies Assumption 1.

4.2. Lyapunov-type detectability condition

Here we discuss a general notion of detectability expressed in terms of the existence of a Lyapunov function (see the weakly detectable definition of Vidyasagar (1993); it was used by Kazakos & Tsinias (1994)).

Definition 7. System (1) is said to be detectable if there exist a continuous function φ̄: Rn × Rm × Rp → Rn such that φ̄(0, 0, 0) = 0, a continuous function W: Rn × Rn → R≥0, α1, α2 ∈ K∞, and a continuous, positive definite function α3 such that when x̂+ := φ̄(x̂, u, h(x, u)), for all (x, x̂, u) ∈ Rn × Rn × Rm, α1(|x − x̂|) ≤ W(x, x̂) ≤ α2(|x − x̂|) and W(x+, x̂+) − W(x, x̂) ≤ −α3(|x − x̂|).

The corresponding dynamic output feedback controller,

x̂+ = φ̄(x̂, u, h(x, u));  u = κ(x̂),  (8)

has the structure of (3) if we choose ξ = x̂ and Υ(ξ) = ξ. We then have the following result.

Proposition 5. Suppose system (1) is detectable. Define Υ(ξ) := ξ, ξ := x̂, and G(ξ, y) := φ̄(ξ, κ(ξ), y). If κ is locally bounded, then interconnection (5) of controller (8) and system (1) satisfies Assumption 2.

Proof. The result follows directly from discrete-time comparison lemmas, the absence of finite-escape time in discrete-time systems, and the continuity of φ̄ and W. □

5. Robust global asymptotic stabilization via model predictive control

In order to apply Propositions 2 or 3 or Corollary 1, we must show that Assumption 3 holds. First, we recall results (for example, from Kellett & Teel, 2004) on the existence of state feedback laws that guarantee Assumption 3 holds for systems that can be driven to the origin asymptotically using open-loop controls. If the open-loop controls vanish as the trajectories approach the origin, then κ(·) can be taken to satisfy lim sup_{x→0} |κ(x)| = 0. The feedback laws are constructed from solutions to an appropriate infinite horizon optimal control problem. Next, we give some conditions on finite horizon model predictive control problems that generate feedback laws such that both Assumption 3 holds and lim sup_{x→0} |κ(x)| = 0.

5.1. Infinite horizon model predictive control

We first give a definition of controllability that parallels Definition 7 of Kellett and Teel (2004):

Definition 8. The system x+ = f(x, u) is said to be asymptotically controllable to the origin with locally bounded controls if there exist β ∈ KL, σ ∈ K, c ≥ 0, and for each x ∈ Rn a sequence u such that for each k ≥ 0, |φ(k, x, u)| ≤ β(|x|, k) and |u(k)| ≤ σ(|φ(k, x, u)|) + c. The system is said to be asymptotically controllable to the origin with vanishing controls when c = 0.

The following proposition is based on Theorem 15 of Kellett and Teel (2004), which is established using an appropriate infinite horizon optimal control problem.

Proposition 6. If the system x+ = f(x, u) is asymptotically controllable to the origin with vanishing controls, then there exists a feedback function κ that satisfies Assumption 3 and lim sup_{x→0} |κ(x)| = 0.

Proof. Using the continuity of f and σ, Theorem 15 of Kellett and Teel (2004) is applicable and there exists a smooth, positive definite, radially unbounded function V: Rn → R≥0 such that, for each x ∈ Rn,

min_{u ∈ σ(|x|)B} V(f(x, u)) ≤ V(x)e^{−1}.  (9)

We let λ ∈ [e^{−1}, 1) and then for each x ∈ Rn we let κ(x) ∈ σ(|x|)B satisfy V(f(x, κ(x))) ≤ λV(x). With this feedback function, Assumption 3 is satisfied and lim sup_{x→0} |κ(x)| = 0. □

The following corollary comes from combining Proposition 6 with Proposition 4 (compare with Shim & Teel, 2003; Sontag, 1981; Teel & Praly, 1994).


Corollary 2. Suppose system (1) is asymptotically controllable to the origin with vanishing controls and is UCO with h(0, 0) = 0 and Ψ(0, 0) = 0. Then system (1) can be made RGAS by dynamic output feedback.

5.2. Finite horizon model predictive control

The solution to an infinite horizon optimization problem is typically computationally intractable; consequently, finite horizon optimization algorithms, such as finite horizon MPC, are used instead. However, not every finite horizon optimization algorithm yields a robustly stabilizing control even if it is strengthened with so-called "stability constraints" such as terminal set constraints (see, for example, Grimm, Messina, Teel, & Tuna (2004) for examples of nonrobustness). For an overview of relevant MPC concepts and typically employed MPC methods, see Mayne, Rawlings, Rao, and Scokaert (2000) and Allgöwer and Zheng (2000) (for a more complete discussion of the methods used here, see Grimm, Messina, Teel, & Tuna, 2003a).

In this section we study a class of MPC algorithms that renders the origin of the closed loop RGAS if the optimization horizon is chosen long enough. We emphasize that RGAS is not necessarily implied by global asymptotic stability if the feedback law is discontinuous (see Grimm et al., 2004 for MPC examples). Our assumptions are meant to assert RGAS when the stabilizing feedback law may be discontinuous either because it is desirable from system considerations or required for stabilization (for example, the system x+ = x + [u, u³]^T from Meadows, Henson, Eaton, & Rawlings (1995) or a discretized version of the nonholonomic integrator ẋ1 = u1, ẋ2 = u2, ẋ3 = x1u2 − x2u1 from Brockett (1983), both discussed by Grimm et al., 2003a).

We use the cost function (parameterized by horizon N)

J_N(x, u_{N−1}) := g(φ(N, x, u_{N−1})) + Σ_{k=0}^{N−1} ℓ(φ(k, x, u_{N−1}), u(k))  (10)

constructed from the terminal cost g: Rn → R≥0 and the stage cost ℓ: Rn × Rm → R≥0. We do not require g to be a local control Lyapunov function; unlike the typical MPC setting, our stability results do not depend on g explicitly. We consider the optimization problem

V_N(x) := inf_{u_{N−1}} J_N(x, u_{N−1})
subject to u(k) ∈ U, k ∈ {0, 1, …, N − 1}, and φ(N, x, u_{N−1}) ∈ X,  (11)

where V_N(x) is the value function, U ⊆ Rm is the control input set, and X ⊆ Rn is the terminal constraint set. The terminal constraint set can be, for example, the origin X = {0}, the whole space X = Rn (that is, no terminal constraint), a sublevel set of a Lyapunov function X = {z ∈ Rn : V(z) ≤ c}, or a hyperplane containing the origin. We do not consider explicit state constraints on the problem for

simplicity and because we are interested in global results, but see Grimm et al. (2003b) for constrained nominal robustness results.

When an admissible control input sequence (a sequence for which all constraints are satisfied) achieves the infimum, the MPC feedback law κ_N: Rn → U returns the first element of the sequence given the current state, that is, κ_N(x) = u(0, x), where J_N(x, u_{N−1}(x)) = V_N(x); we abuse notation slightly to emphasize the x dependence of the sequence. To guarantee the existence of such a sequence, we make the following two assumptions (see Keerthi & Gilbert, 1985), which are typical for MPC.

Assumption 5. The functions g and ℓ are continuous.

Assumption 6. Either the control input set U is compact or for any x ∈ Rn, sup_{0≤i≤N−1} |u(i)| → ∞ implies J_N(x, u_{N−1}) → ∞. Also, for any x ∈ Rn, there exists at least one admissible sequence u_{N−1}.

We assume here that the optimization problem that yields the MPC feedback law is globally feasible, that is, a solution exists for all x ∈ Rn, for a long enough horizon length. This allows us to focus on global results.

Assumption 7. There exists M ≥ 1 such that for N ≥ M, optimization (11) has a solution for all x ∈ Rn.

Finally, for our results here, we make the following additional two assumptions. The first is not required, but reduces the complexity of our results; it is essentially an observability requirement. The second is essentially a controllability requirement and is satisfied by systems for which one can find a feasible control law whose costs are independent of the horizon length. An example would be a system that is controllable to the origin in a finite number of steps and for which the control is not explicitly penalized (see Section 6). For the assumptions, we consider a continuous proper indicator function ω: Rn → R≥0, that is, there exist α_ω, ᾱ_ω ∈ K∞ such that α_ω(|x|) ≤ ω(x) ≤ ᾱ_ω(|x|) for all x ∈ Rn.

Assumption 8. For all u ∈ U and x ∈ Rn, ℓ(x, u) ≥ ω(x).

Assumption 9. There exists a ≥ 1 such that for all N ≥ M (from Assumption 7), V_N(x) ≤ aω(x) for all x ∈ Rn.

Under these assumptions, we can establish the following result, a similar version of which is given by Grimm et al. (2003b) along with more general results. It is also a special case of the result of Grimm et al. (2003a) if there is no terminal constraint, that is, X = Rn.

Proposition 7. Consider the system x+ = f(x, u) under Assumptions 5–9. Then, for all horizons N > a² + M − 1, κ_N is locally bounded and the origin of the MPC closed loop x+ = f(x, κ_N(x)) is RGAS. Furthermore, if, in


addition to the above assumptions, there exists a continuous, positive definite function ℓ̄: Rn × Rm → R≥0 such that ℓ(x, u) + ℓ(f(x, u), v) ≥ ℓ̄(x, u) for all v ∈ U, then lim sup_{x→0} |κ_N(x)| = 0.

We can now assert results for the use of MPC-generated feedback control laws in output feedback configurations. The first follows immediately from Propositions 4 and 7 and Corollary 1, while the second follows immediately from Propositions 2, 5, and 7.

Corollary 3. Suppose system (1) is UCO, h(0, 0) = 0, Ψ(0, 0) = 0, and that Assumptions 5–9 are satisfied. Further assume that there exists a continuous positive definite function ℓ̄: Rn × Rm → R≥0 such that ℓ(x, u) + ℓ(f(x, u), v) ≥ ℓ̄(x, u) for all v ∈ U. Then for all horizons N > a² + M − 1, the interconnection (5) of controller (7) and system (1) is RGAS.

Corollary 4. Suppose system (1) is detectable and Assumptions 5–9 are satisfied. Then for all horizons N > a² + M − 1, if the function γe, from the equivalence between RGAS (from Proposition 7) and IISS, and the function β1, from Proposition 2, are such that γe(β1(s, ·)) is summable for all s, interconnection (5) of controller (8) and system (1) is RGAS.

6. Example

Consider the following system:

x+ = [x1 + x2³, x2 + u³]^T =: f(x, u),  y = x1³ =: h(x).  (12)

Note that the linearization is neither detectable nor stabilizable; however, the nonlinear system is stabilizable. The system is UCO with r = 2; therefore, we can use (6) as an observer with the continuous output map x̂ = φ(2, Ψ(θ, η), η); Ψ(θ, η) := [θ1^{1/3}, (θ2^{1/3} − θ1^{1/3})^{1/3}]^T to stabilize (12). Let X = {0}, ℓ(x, u) = ω(x) := |x1| + |x2|³, and g(x) = 0. Since ℓ and g are continuous, Assumption 5 holds. The state of the system can be driven to the origin from any initial condition in two steps with the open-loop input sequence u = {−(x2 + (x1 + x2³)^{1/3})^{1/3}, −x2(1)^{1/3}}, where x2(1) = x2 + u(0)³ denotes the second state component after one step. This input sequence is admissible; hence Assumption 7 holds with M = 2. Observe that sup_i |u(i)| → ∞ implies J_N(x, u_{N−1}) → ∞. Therefore, Assumption 6 is satisfied. This gives us the existence of an optimal control sequence u*_{N−1}(x); we emphasize that we do not need to check that this sequence is continuous. Since ℓ(x, u) = ω(x), Assumption 8 holds. Since the deadbeat control law above cannot be better than the optimal control law, we can use it to obtain the upper bound V_N(x) ≤ 3ω(x) for all horizons N ≥ 2. Then, Assumption 9 holds with a = 3. Finally, define ℓ̄(x, u) := ℓ(x, u) + ℓ(f(x, u), v) = |x1| + |x2|³ + |x1 + x2³| + |x2 + u³|³, which is positive definite. Since any initial condition can be


driven to the origin in two steps, Proposition 7 can be applied with M = 2 and we conclude that for all N > 3² + 2 − 1 = 10, Assumption 3 holds and lim sup_{x→0} |κ_N(x)| = 0. Finally, since h(0) = 0 and Ψ(0, 0) = 0, by Corollary 3 we conclude that the origin of the interconnection of the MPC-generated controller and the observer above is RGAS for all N > 10.
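The example's closed loop can be simulated directly. The sketch below is illustrative: it implements the shift-register observer (6) with r = 2 and the map Ψ from the example, but, as an assumption made here so that no optimizer is needed, it replaces the MPC law κ_N by the first element of the example's two-step dead-beat sequence applied in receding-horizon fashion.

```python
import math

def cbrt(z):
    # real cube root, valid for negative arguments
    return math.copysign(abs(z) ** (1.0 / 3.0), z)

def f(x, u):                          # system (12)
    return [x[0] + x[1] ** 3, x[1] + u ** 3]

def h(x):                             # output map y = x1^3
    return x[0] ** 3

def Psi(theta):                       # UCO inverse: state two steps in the past
    return [cbrt(theta[0]), cbrt(cbrt(theta[1]) - cbrt(theta[0]))]

def phi(k, x, useq):                  # solution of (12) after k steps
    for u in useq[:k]:
        x = f(x, u)
    return x

def kappa(x):
    # first element of the example's two-step dead-beat sequence,
    # applied in receding-horizon fashion (a stand-in for kappa_N)
    return -cbrt(x[1] + cbrt(x[0] + x[1] ** 3))

x = [2.0, -1.0]
theta, eta = [0.0, 0.0], [0.0, 0.0]   # registers of observer (6), r = 2
for k in range(6):
    xhat = phi(2, Psi(theta), eta)    # certainty equivalence estimate (7)
    u = kappa(xhat)
    theta = [theta[1], h(x)]          # theta_1+ = theta_2, theta_2+ = y
    eta = [eta[1], u]                 # eta_1+ = eta_2, eta_2+ = u
    x = f(x, u)
print(abs(x[0]) + abs(x[1]) < 1e-2)   # state driven near the origin
```

After the registers fill (two steps), the estimate matches the true state and the dead-beat law then zeros the state in two more steps; floating-point roundoff in the cube roots keeps the final state in a small neighborhood of the origin rather than exactly at it.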

7. Conclusions

We have shown that discrete-time nonlinear feedback systems that employ possibly discontinuous feedback control laws can be used for certainty equivalence output feedback provided the control laws are robustly asymptotically stabilizing. We have given an MPC formulation along with two observer structures that together satisfy the conditions required to make the origin of the systems in question RGAS, and have applied these ideas to an example.

Appendix A. Sketch of the proof of Proposition 1

Condition S0 ⇐⇒ S1 is Kellett and Teel (2003, Theorem 3). Condition S1 ⇒ S4 follows from Angeli, Sontag, and Wang (2000, Lemma IV.1) and Jiang and Wang (2001, Lemma 3.5, Remark 3.3) (see similar results by Sontag (1990) in continuous time). Condition S4 ⇒ S1 follows from the discrete-time nonlinear small gain theorem; see Jiang and Wang (2001, Section 4). Condition S0 ⇒ S5 follows as in the result of Angeli (1999). Condition S2 ⇒ S3 follows by taking e(k) = 0 for all k.

Condition S2 ⇒ S1: For the given β ∈ KL in Definition 2, let σ ∈ K∞ satisfy β(s, 0) ≤ σ(s) for all s ≥ 0. Note that σ(s) ≥ s for all s ≥ 0 since β(s, 0) ≥ s. Also let k* : R≥0 → Z>0 satisfy β(s, k*(s)) ≤ ½s. Without loss of generality, take ε in Definition 2 to be continuous and a class-KL function of (Δ, δ). Then, for the purposes of establishing S1, define ρ(s) := ε(½σ⁻¹(s), 4s), which is continuous and positive definite. Suppose, using ρ, that the constraint max{|e(k)|, |d(k)|} ≤ ρ(|φ(k, x, e, d)|_A) holds for all k. We claim that for each initial condition x there exists ℓ ∈ {0, ..., k*(|x|_A)} such that |φ(ℓ, x, e, d)|_A ≤ ½|x|_A and, for all j ∈ {0, ..., ℓ}, |φ(j, x, e, d)|_A ≤ σ(|x|_A). This claim would establish RGAS; it can be proven by contradiction.

Condition S3 ⇒ S2: Consider the system x⁺ = f(x, κ(x + e)) + d and define z := x + e. Define d̃ := d + e⁺ + f(z − e, κ(z)) − f(z, κ(z)). Then z⁺ = f(z, κ(z)) + d̃. From the continuity of f, the local boundedness of κ, and the compactness of A, there exists a function ε₁ : R>0 × R>0 → R>0 such that for each pair (Δ, δ), if |z|_A ≤ Δ and max{‖e‖, ‖d‖} ≤ ε₁(Δ, δ), then ‖d̃‖ ≤ δ. Then, by assumption, there exists γ ∈ K∞ and for each pair (Δ, δ) there exist functions ε₂ : R>0 × R>0 → R>0 and k̄ : R>0 × R>0 → Z>0 such that if |z|_A ≤ Δ and ‖d̃‖ ≤ ε₂(Δ, δ), then |φ(k, z, d̃)|_A ≤ max{γ(|z|_A), δ} for all k ≥ 0, and |φ(k, z, d̃)|_A ≤ δ for all k ≥ k̄(Δ, δ). Using these bounds and the ideas of Lin, Sontag, and Wang (1996), modified to account for the disturbances, we can construct a class-KL estimate for the z and then x systems to finish the proof.

Condition S5 ⇒ S2 follows the proof technique used to establish the results of Teel, Moreau, and Nesic (2003): First, define γ(s) := max{γ_e(s), γ_d(s)}. Let the pair (Δ, δ) be given, let k* be such that β(Δ, k*) ≤ δ, and let ε be such that Σ_{i=0}^{2k*−1} γ(2ε) ≤ δ. Assume |x|_A ≤ Δ and max{‖e‖, ‖d‖} ≤ ε. Then, for all k ∈ {0, ..., 2k*}, |φ(k, x, e, d)|_A ≤ max{β(|x|_A, k), δ}, and for all k ∈ {k*, ..., 2k*}, |φ(k, x, e, d)|_A ≤ δ. Using time invariance and restarting at k = k* we have, for all k ∈ {k*, ..., 3k*}, |φ(k, x, e, d)|_A ≤ δ. Continuing this yields |φ(k, x, e, d)|_A ≤ max{β(|x|_A, k), δ} for all k ≥ 0.

The other implications follow from the ones above.
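The restarting argument in S5 ⇒ S2 can be illustrated on a toy scalar system (an illustration only: the dynamics x⁺ = x/2 + e, the KL bound, and the gains below are assumptions chosen for the sketch, not objects from the proof):

```python
# Toy illustration: a geometric KL bound survives small perturbations.
# For x+ = x/2 + e with |e| <= eps, induction gives
#   |x(k)| <= 0.5**k * |x(0)| + 2*eps <= max(2*beta(|x(0)|, k), 4*eps),
# mirroring the max{beta, delta} form obtained by restarting at k*.
def beta(s, k):
    return s * 0.5**k

Delta, eps = 4.0, 0.125          # initial bound and perturbation budget
x, traj = Delta, []
for k in range(40):
    traj.append(abs(x))
    x = 0.5 * x + eps            # worst-case perturbed step

ok = all(v <= max(2 * beta(Delta, k), 4 * eps) for k, v in enumerate(traj))
print(ok)  # True
```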

Appendix B. Proof of Proposition 2

GAS without disturbances: By assumption, there exist β₁, β₂ ∈ KL and γ₂ ∈ K such that, for all k ≥ 0,

|x(k) − x̂(k)| ≤ β₁(|X(0)|, k),
|ζ(k)| ≤ max{β₂(|X(0)|, k), γ₂(max{‖x‖, ‖x − x̂‖})}. (B.1)

Using the equivalence of RGAS and IISS, there exist β ∈ KL and γ_e ∈ K such that, for all k ≥ 0,

|x(k)| ≤ max{β(|x(0)|, k), Σ_{i=0}^{k−1} γ_e(|x(i) − x̂(i)|)}. (B.2)

Moreover, by assumption, β₁ and γ_e can be chosen so that there exists γ₁ ∈ K∞ satisfying

Σ_{i=0}^{∞} γ_e(β₁(s, i)) ≤ γ₁(s + 1). (B.3)
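For concreteness, the summability requirement (B.3) can be checked numerically for one admissible choice of the functions involved (the specific β₁, γ_e, γ₁ below are assumptions for illustration, not the ones constructed in the proof):

```python
# One choice satisfying (B.3): beta1(s,k) = s * 2**-k (class KL),
# gamma_e(s) = s (class K), gamma1(s) = 2s (class K-infinity).
# Then sum_i gamma_e(beta1(s, i)) = 2s <= gamma1(s + 1) for all s >= 0.
def beta1(s, k):
    return s * 2.0**(-k)

def gamma_e(s):
    return s

def gamma1(s):
    return 2.0 * s

for s in (0.0, 0.3, 1.0, 7.5):
    series = sum(gamma_e(beta1(s, i)) for i in range(200))
    assert series <= gamma1(s + 1)
print("(B.3) holds for this choice")
```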

Combining (B.1)–(B.3), we have

‖x‖ ≤ max{β(|X(0)| + 1, 0), γ₁(|X(0)| + 1)} =: Δ₂(|X(0)| + 1). (B.4)

Using the equivalence of RGAS and AISS, there exists a continuous, nonincreasing function H : R≥0 → R>0 such that if x̂ − x is replaced by H(‖x‖)e, the system x⁺ = f(x, κ(x + H(‖x‖)e)) is input-to-state stable with respect to e; that is, there exist β₃ ∈ KL and γ₃ ∈ K such that

|x(k)| ≤ max{β₃(|x(0)|, k), γ₃(‖e‖)} ∀k ≥ 0. (B.5)

Since we obtain the original equation for x by letting e = (1/H(‖x‖))(x̂ − x) and s ↦ 1/H(s) is nondecreasing, we have, using (B.4), for all k ≥ 0,

|x(k)| ≤ max{β₃(|x(0)|, k), γ₃(‖x − x̂‖ / H(Δ₂(|X(0)| + 1)))}. (B.6)

GAS follows from the same arguments used in the nonlinear small gain theorem (Jiang & Wang, 2001). More explicitly, it follows from (B.1) and (B.6) that

‖x‖ ≤ max{β₃(|X(0)|, 0), γ₃(β₁(|X(0)|, 0) / H(Δ₂(|X(0)| + 1)))} =: Δ₄(|X(0)|),
‖ζ‖ ≤ max{β₂(|X(0)|, 0), γ₂(max{Δ₄(|X(0)|), β₁(|X(0)|, 0)})} =: Δ₅(|X(0)|),

which establishes global stability. For future use, define Δ₆(s) := max{Δ₄(s), Δ₅(s)}. To show uniform convergence, let Δ, δ > 0 be given. Let δ₁ > 0 satisfy max{δ₁, γ₂(δ₁)} ≤ δ and δ₂ > 0 satisfy max{δ₂, γ₃((1/H(Δ₂(Δ₆(Δ) + 1)))δ₂)} ≤ δ₁. Let k₁*, k₂*, and k₃* satisfy

β₁(Δ, k) ≤ δ₂ ∀k ≥ k₁*, (B.7)
β₃(Δ₆(Δ), k) ≤ δ₁ ∀k ≥ k₂*, (B.8)
β₂(Δ₆(Δ), k) ≤ δ ∀k ≥ k₃*. (B.9)

Let k* = k₁* + k₂* + k₃* and |X(0)| ≤ Δ. From (B.1) and (B.7), we have that when k ≥ k₁*, |x(k) − x̂(k)| ≤ δ₂ ≤ δ₁. Then using time-invariance, (B.6), and (B.8) we have that |x(k)| ≤ δ₁ ≤ δ for k ≥ k₁* + k₂*. Then using time-invariance, (B.1), and (B.9), we have that |ζ(k)| ≤ δ for k ≥ k*. Letting |X| = max{|x|, |ζ|} we can conclude that for all k ≥ k*, |X(0)| ≤ Δ implies that |X(k)| ≤ δ; thus GAS is established.

RGAS: We establish RGAS by exploiting the equivalence between RGAS and SPAS in the worst-case size of outer perturbations. We will make use of some of the previous calculations. In this case, (B.1) becomes: for each pair Δ, δ > 0 there exists ε̃ > 0 such that if ‖X‖ ≤ Δ and ‖d‖ ≤ ε̃, then

|x(k) − x̂(k)| ≤ max{β₁(|X(0)|, k), δ},
|ζ(k)| ≤ max{β₂(|X(0)|, k), γ₂(max{‖x‖, ‖x − x̂‖}), δ}. (B.10)

Without loss of generality, assume γ₂(s) ≥ s for all s ≥ 0. Then, condition (B.2) becomes

|x(k)| ≤ max{β(|x(0)|, k), Σ_{i=0}^{k−1} γ_e(|x(i) − x̂(i)|), Σ_{i=0}^{k−1} γ_d(|d(i)|)} (B.11)

for some γ_e satisfying (B.3) and some γ_d, and condition (B.6) becomes

|x(k)| ≤ max{β₃(|x(0)|, k), γ₃(‖x − x̂‖ / H(‖x‖)), γ₄(‖d‖ / H(‖x‖))}. (B.12)

Using Proposition 1, it is enough to show SPAS in the worst-case size of outer perturbations; that is, there exists σ ∈ K∞ such that for each pair Δ, δ > 0 there exist ε > 0 and an integer k̄ > 0 such that if |X(0)| ≤ Δ and ‖d‖ ≤ ε, then

|X(k)| ≤ max{σ(|X(0)|), δ} ∀k ≥ 0, (B.13)
|X(k)| ≤ δ ∀k ≥ k̄.

Without loss of generality, we only consider δ ∈ (0, Δ₂(1)] and Δ ∈ [Δ₂(1), ∞), where Δ₂ was defined in (B.4); it follows that δ ≤ Δ and max{Δ₂(s + 1), δ} = Δ₂(s + 1). Given Δ, δ > 0, let k* be defined as above, that is, k* := k₁* + k₂* + k₃*, where k₁*, k₂*, and k₃* satisfy (B.7)–(B.9) with Δ̃ := Δ₆(Δ). Next, let δ̃ > 0 satisfy max{δ̃, γ₂(Σ_{i=0}^{2k*−1} max{γ_e(δ̃), γ_d(δ̃)}), γ₄((1/H(Δ₂(Δ₆(Δ) + 1)))δ̃)} ≤ δ₂, where δ₂ is chosen as above. Let the pair (Δ, δ̃) generate ε̃ > 0 so that (B.10) is satisfied. Finally, let ε > 0 be defined as ε := min{ε̃, δ̃}. It follows from causality, the GAS derivations, the fact that δ ≤ Δ₂(1), (B.10), and (B.12) that if |X(0)| ≤ Δ and ‖d‖ ≤ ε, then for all k ∈ {0, 1, ..., 2k*},

|X(k)| ≤ max{Δ₆(|X(0)|), δ} (B.14)

and for all k ∈ {k*, k* + 1, ..., 2k*}, |X(k)| ≤ δ. Iterating now from k = k* and using δ ≤ Δ, we get that for all k ∈ {k*, k* + 1, ...}, |X(k)| ≤ δ. Then the system satisfies (B.13) with σ := Δ₆ and k̄ := k*. From Proposition 1 we can assert that system (4) is RGAS. □

Appendix C. Proof of Proposition 3

Preamble: We first claim that, under Assumptions 3 and 4, the system x⁺ = f(x, κ(x + e)) is globally small input-to-state stable; that is, there exist δ_a > 0 and σ_a, γ_a ∈ K∞ such that if ‖e‖ ≤ δ_a, then

|φ(k, x, e)| ≤ max{σ_a(|x|), γ_a(‖e‖)}. (C.1)

This is established as follows: According to Assumption 3, Proposition 1, and Definition 4, there exist a continuous, nonincreasing function ρ_b : R≥0 → R>0 and σ_b, γ_b ∈ K∞ such that if ‖e‖ ≤ ρ_b(|x|), then

|φ(k, x, e)| ≤ max{σ_b(|x|), γ_b(‖e‖)}. (C.2)

Let δ̄, c > 0 and σ ∈ K∞ come from Assumption 4. Define δ_a := min{δ̄, ρ_b ∘ σ⁻¹(c)}, σ_a(s) := max{σ(s), σ_b(s)}, and γ_a(s) := γ_b(s). Then (C.1) can be verified by checking the case |x| ≤ σ⁻¹(c) using Assumption 4, and checking the case |x| ≥ σ⁻¹(c) using (C.2) and the fact that ρ_b is nonincreasing.

Now consider the system x⁺ = f(x, κ(x + e)) + d. Under Assumptions 3 and 4, for each pair Δ, δ > 0 and each positive integer k̄, there exists ε such that if, for all k ∈ {0, ..., k̄},

|φ(k, x, e, d)| ≤ Δ, ‖e(k)‖ ≤ δ_a/2, ‖d(k)‖ ≤ ε, (C.3)

then, for all k ∈ {0, ..., k̄},

|φ(k, x, e, d)| ≤ max{2σ_a(|x|), 2γ_a(‖e‖), δ}. (C.4)

The first step in verifying this relationship is to compare the solution of x⁺ = f(x, κ(x + e)) + d to the solution of z⁺ = f(z, κ(x + e)), with z(0) = x(0), and to use the local boundedness of κ and the continuity of f to get the existence of ε > 0 such that |x(k) − z(k)| ≤ min{δ/2, δ_a/2} for all k ∈ {0, ..., k̄} as long as (C.3) holds. The second step is to recognize that, with the definition ẽ := e + x − z, we have z⁺ = f(z, κ(z + ẽ)) and |ẽ(k)| ≤ δ_a for all k ∈ {0, ..., k̄}. The third step is to use (C.1) for this system and make use of causality.

GAS without disturbances: We follow the proof of Proposition 2, entering after the inequality (B.2). We let k̃ : R≥0 → Z≥0 be nondecreasing and such that β₁(|X(0)|, k̃(|X(0)|)) ≤ δ_a. Then it is straightforward to see from (B.2) that, for all k ∈ {0, ..., k̃(|X(0)|)}, |x(k)| ≤ max{β(|X(0)|, 0), k̃(|X(0)|) γ_e(β₁(|X(0)|, 0))} =: ω(|X(0)|) and, using time-invariance and (C.1), for all k ≥ k̃(|X(0)|),

|x(k)| ≤ max{σ_a ∘ ω(|X(0)|), γ_a(β₁(|X(0)|, 0))}. (C.5)

We then let Δ₂ ∈ K∞ be such that

max{ω(s), σ_a ∘ ω(s), γ_a(β₁(s, 0))} ≤ Δ₂(s + 1). (C.6)

Then (B.4) holds and the result follows from the proof of Proposition 2.

RGAS: The proof here is analogous to the corresponding part of the proof of Proposition 2. The use of (B.3) is avoided, much as it was in the argument above for GAS in the absence of disturbances, but this time using the second claim of the preamble of this proof. □

Appendix D. Proof of Proposition 4

Due to the assumptions on κ and h, there exists γ ∈ K∞ such that max{|κ(x̂)|, |h(x, κ(x̂))|} ≤ γ(max{|x|, |x − x̂|}). Since the observer is a stable linear system driven by κ(x̂), h(x, κ(x̂)) + e, and additive disturbances, the bound on |ζ(k)| in Assumption 1 holds without a requirement on ‖X‖. For a given input sequence u_i and initial state x, define the output sequence y(i) := h(φ(i, x, u_{i−1}), u(i)) for i ≥ 0. The dependence of y(i) on x and u_i is left implicit. We also use ξ(·) and η(·) to denote the solutions of (6), and we use the shorthand notation x(k) = φ(k, x, u_{k−1}).

To bound |x − x̂|, we first define, for j ∈ {1, ..., r}, y(j − r − 1) := ξ_j(0) and u(j − r − 1) := η_j(0) so that y_{r−1}^{k−r} and u_{r−1}^{k−r} are well defined for all k ≥ 0. Without perturbations, for all k ≥ 0, |ξ(k) − y_{r−1}^{k−r}| ≡ 0 and |η(k) − u_{r−1}^{k−r}| ≡ 0; for k ≥ r, since y_{r−1}^{k−r} = h_{r−1}(x, u_{r−1}^{k−r}), we have that x̂(k) = x(k). With perturbations, due to the structure of the ξ and η subsystems, for any ε̃ > 0 there exists ε > 0 such that if max{‖e‖, ‖d‖} ≤ ε we have |ξ(k) − y_{r−1}^{k−r}| ≤ ε̃ and |η(k) − u_{r−1}^{k−r}| ≤ ε̃. Using this fact and the continuity of the reconstruction map, we can assert that for each pair Δ, δ > 0 there exists ε > 0 such that if ‖X‖ ≤ Δ and max{‖e‖, ‖d‖} ≤ ε then |x(k) − x̂(k)| ≤ δ for all k ≥ r. So, to establish Assumption 1 it just remains to establish the existence of γ ∈ K∞ such that for each pair Δ, δ > 0 there exists ε > 0 such that if ‖X‖ ≤ Δ and max{‖e‖, ‖d‖} ≤ ε then |x(k) − x̂(k)| ≤ max{γ(|X(0)|), δ} for all k ∈ {0, ..., r − 1}. For this, it is enough to notice that the origin is an equilibrium point of (4) and that its right-hand side converges to zero as the state converges to zero, due to the assumptions on h(0, 0), f(0, 0), and lim sup_{x→0} |κ(x)|. The result follows. □
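The dead-beat mechanism behind Proposition 4 — exact state reconstruction from the last r input/output pairs once k ≥ r — can be sketched on a toy observable linear system (the matrices and r = 2 below are assumptions for the sketch, not the paper's example):

```python
# Toy dead-beat reconstruction: x+ = A x + B u, y = C x with
# A = [[1, 1], [0, 1]], B = [0, 1]^T, C = [1, 0]  (observability index r = 2).
def step(x, u):
    return [x[0] + x[1], x[1] + u]

def output(x):
    return x[0]

def reconstruct(y_prev, y_now, u_prev):
    # y_prev = x1(k-1) and y_now = x1(k) = x1(k-1) + x2(k-1),
    # so the state at time k-1 is recovered exactly, then propagated.
    x_prev = [y_prev, y_now - y_prev]
    return step(x_prev, u_prev)

x = [1.0, -3.0]
u_seq = [2.0, 0.5]
ys, xs = [], []
for u in u_seq:
    ys.append(output(x)); xs.append(x)
    x = step(x, u)

x_hat = reconstruct(ys[0], ys[1], u_seq[0])
print(x_hat == xs[1])  # True: exact recovery after r = 2 measurements
```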

Appendix E. Proof of Proposition 7

Assumptions 6 and 9 imply that κ_N is locally bounded. Since the stage cost is nonnegative and Assumption 8 holds, σ(x) ≤ ℓ(x, κ_N(x)) ≤ V_N(x). An upper bound on V_N(x) comes from Assumption 9. Consequently,

σ(x) ≤ V_N(x) ≤ a σ(x). (E.1)
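The sandwich (E.1) can be observed numerically on a toy scalar linear-quadratic problem, where V_N admits a closed-form Riccati recursion (the system x⁺ = 2x + u and the cost ℓ(x, u) = x² + u² are assumptions for this sketch, not from the paper):

```python
# For x+ = 2x + u, l(x,u) = x^2 + u^2, zero terminal cost, the horizon-N
# value function is V_N(x) = p_N * x^2 with p_0 = 0 and the scalar
# Riccati recursion p+ = 1 + 4p/(1 + p).
def value_coeff(N):
    p = 0.0
    for _ in range(N):
        p = 1.0 + 4.0 * p / (1.0 + p)
    return p

a = 2 + 5**0.5   # limit of p_N, so sigma(x) = x^2 gives sigma <= V_N <= a*sigma
for N in range(1, 50):
    assert 1.0 <= value_coeff(N) <= a + 1e-9   # (E.1) with this sigma and a
print(round(value_coeff(50), 4))  # ~4.2361, approaching a = 2 + sqrt(5)
```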

Let u_{N−1}(x) be such that V_N(x) = J_N(x, u_{N−1}(x)) and κ_N(x) = u(0, x). If the additional assumption holds with ℓ̃, then since N ≥ 2 (since a ≥ 1, M ≥ 1, and N is an integer), V_N(x) ≥ ℓ(x, κ_N(x)) + ℓ(f(x, κ_N(x)), u(1, x)). From Assumption 9, ℓ̃(x, κ_N(x)) ≤ ℓ(x, κ_N(x)) + ℓ(f(x, κ_N(x)), u(1, x)) ≤ V_N(x) ≤ a σ(x). Then, since lim sup_{x→0} σ(x) = 0 and ℓ̃ is continuous and positive definite, lim sup_{x→0} |κ_N(x)| = 0.

Given any ε > 0 and any z satisfying z ∈ f(x, κ_N(x)) + εB =: F_N(x, ε), we define ζ_k := φ(k, z, u_1^{N−M−1}(x)) for k ∈ {0, 1, ..., N − M} and use the notations ξ_k = φ(k, x, u_{N−1}(x)) and u_k = u(k, x) for all k that apply. For L ≥ M, define

a_N(x, L, ε) := sup_{z ∈ F_N(x, ε)} Σ_{k=1}^{L−M} {ℓ(ζ_{k−1}, u_k) − ℓ(ξ_k, u_k)},
b_N(x, L, ε) := sup_{z ∈ F_N(x, ε), k ∈ {M, ..., L}} {a σ(ζ_{k−M}) − a σ(ξ_{k−M+1})}.

Note that a_N(·, ·, 0) = b_N(·, ·, 0) = 0. Moreover, given L ≥ M, for each pair Δ, δ > 0 there exists ε̄ > 0 such that |x| ≤ Δ and ε ≤ ε̄ imply a_N(x, L, ε) + b_N(x, L, ε) ≤ δ. This follows from the assumed continuity of the functions f, ℓ, and σ and the local boundedness of κ_N. For z ∈ F_N(x, ε) and for each j ∈ {0, ..., N − M},

V_N(z) − V_N(x) ≤ −Σ_{k=0}^{N−1} ℓ(ξ_k, u_k) + Σ_{k=0}^{N−M−j−1} ℓ(ζ_k, u_{k+1}) + V_{M+j}(ζ_{N−M−j})
  ≤ −ℓ(x, κ_N(x)) + V_{M+j}(ζ_{N−M−j}) + a_N(x, N − j, ε)
  ≤ −σ(x) + a σ(ζ_{N−M−j}) + a_N(x, N − j, ε)
  ≤ −σ(x) + a σ(ξ_{N−M−j+1}) + Δ_N(x, N − j, ε), (E.2)

where Δ_N(s₁, s₂, s₃) := a_N(s₁, s₂, s₃) + b_N(s₁, s₂, s₃). Using Assumptions 8 and 9, the definition of ξ_k, and (10)–(11), we have Σ_{k=1}^{N−M+1} σ(ξ_k) ≤ Σ_{k=1}^{N−M+1} ℓ(ξ_k, u_k) ≤ V_{N−M+2}(x) ≤ a σ(x), which implies that for at least one index k̃ ∈ {1, ..., N − M + 1}, σ(ξ_{k̃}) ≤ a σ(x)/(N − M + 1). Then choosing j = N − M + 1 − k̃, we have that

σ(ξ_{N−M−j+1}) ≤ a σ(x)/(N − M + 1). (E.3)

Combining (E.2) and (E.3) yields

V_N(z) − V_N(x) ≤ −(1 − a²/(N − M + 1)) σ(x) + Δ_N(x, N − j, ε). (E.4)

Let N > a² + M − 1, define c := 1 − (1 − a²/(N − M + 1))a⁻¹, and note that 0 < c < 1. From (E.1) and (E.4),

V_N(z) ≤ c V_N(x) + Δ_N(x, N − j, ε). (E.5)

Since σ is a proper indicator function, there exist σ̲, σ̄ ∈ K∞ such that σ̲(|x|) ≤ σ(x) ≤ σ̄(|x|). Define α₁(s) := σ̲(s) and α₂(s) := a σ̄(s). Hence, from (E.1), α₁(|x|) ≤ V_N(x) ≤ α₂(|x|). To proceed, define a class-KL function β(s, k) := α₁⁻¹(2 c^k α₂(s)). Given the pair Δ, δ > 0, define V̄_N := sup_{|x| ≤ Δ} V_N(x), Δ̃ := sup_{V_N(x) ≤ V̄_N} |x|, and let δ̃ > 0 satisfy δ̃ ≤ min{(1 − c)α₁(δ)/2, (1 − c)V̄_N}. Note that δ ≤ Δ̃. Let ε̄ > 0 be such that |x| ≤ Δ̃ and ε ≤ ε̄ imply max_{j ∈ [0, N−M]} Δ_N(x, N − j, ε) ≤ δ̃. Now, consider the system x⁺ = f(x, κ_N(x)) + d. Since δ ≤ Δ̃, if |x| ≤ Δ̃ and ‖d‖ ≤ ε̄, then, from (E.5),

V_N(x⁺) ≤ c V_N(x) + δ̃ ≤ V̄_N. (E.6)

Then for all |x| ≤ Δ, the solutions do not leave the set {x ∈ Rⁿ | V_N(x) ≤ V̄_N} ⊆ {x ∈ Rⁿ | |x| ≤ Δ̃} and thus (E.6) holds for all k; therefore

V_N(φ(k, x, d)) ≤ c^k V_N(x) + δ̃ Σ_{i=0}^{k−1} c^i ≤ c^k V_N(x) + δ̃/(1 − c) ≤ max{2 c^k V_N(x), 2δ̃/(1 − c)}. (E.7)
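Two numeric sanity checks of the contraction step above (the concrete values of a, M, N, c, and δ̃ below are arbitrary choices for the sketch; a = 3, M = 2, N = 11 match the example):

```python
# (i) The contraction factor c = 1 - (1 - a^2/(N-M+1))/a lies in (0,1)
# whenever N > a^2 + M - 1.
a_, M_, N_ = 3.0, 2, 11
c2 = 1 - (1 - a_**2 / (N_ - M_ + 1)) / a_
assert 0 < c2 < 1

# (ii) Iterating V(k+1) <= c*V(k) + dtil gives the (E.7)-type bound
#   V(k) <= c**k * V(0) + dtil/(1-c) <= max(2*c**k*V(0), 2*dtil/(1-c)),
# since p + q <= 2*max(p, q) for p, q >= 0.
c, dtil, V0 = 0.7, 0.05, 10.0
V = V0
for k in range(100):
    assert V <= max(2 * c**k * V0, 2 * dtil / (1 - c)) + 1e-12
    V = c * V + dtil   # worst-case contraction step
print("bounds verified")
```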

Combining (E.7) with the bounds on V_N gives

|φ(k, x, d)| ≤ max{α₁⁻¹(2 c^k α₂(|x|)), α₁⁻¹(2δ̃/(1 − c))}

≤ max{β(|x|, k), δ}. The result follows from Definition 3 and Proposition 1. □

References

Adetola, V., & Guay, M. (2003). Nonlinear output feedback receding horizon control of sampled data systems. In Proceedings of the American control conference, Vol. 6 (pp. 4914–4919). Denver, CO.
Allgöwer, F., & Zheng, A. (Eds.) (2000). Nonlinear model predictive control. Boston: Birkhäuser.
Angeli, D. (1999). Intrinsic robustness of global asymptotic stability. Systems and Control Letters, 38(4–5), 297–307.
Angeli, D., Sontag, E. D., & Wang, Y. (2000). A characterization of integral input-to-state stability. IEEE Transactions on Automatic Control, 45(6), 1082–1097.
Brockett, R. W. (1983). Asymptotic stability and feedback stabilization. In R. W. Brockett, R. S. Millman, H. J. Sussmann (Eds.), Differential geometric control theory. Boston: Birkhäuser.
De Oliveira Kothare, S. L., & Morari, M. (2000). Contractive model predictive control for constrained nonlinear systems. IEEE Transactions on Automatic Control, 45(6), 1053–1071.
Findeisen, R., Imsland, L., Allgöwer, F., & Foss, B. A. (2003a). Output feedback stabilization of constrained systems with nonlinear predictive control. International Journal of Robust and Nonlinear Control, 13(3–4), 211–227.
Findeisen, R., Imsland, L., Allgöwer, F., & Foss, B. A. (2003b). State and output feedback nonlinear model predictive control: an overview. European Journal of Control, 9(2–3), 190–207.
Gauthier, J. P., & Bornard, G. (1981). Observability for any u(t) of a class of nonlinear systems. IEEE Transactions on Automatic Control, 26(4), 922–926.
Glad, S. T. (1983). Observability and nonlinear dead beat observers. In Proceedings of the 22nd IEEE conference on decision and control, Vol. 2 (pp. 800–802). San Antonio, TX.
Grizzle, J. W., & Moraal, P. E. (1990). Newton, observers and nonlinear discrete-time control. In Proceedings of the 29th IEEE conference on decision and control, Vol. 2 (pp. 760–767). Honolulu, HI.
Grimm, G., Messina, M. J., Teel, A. R., & Tuna, S. (2003a). Model predictive control when a local control Lyapunov function is not available. In Proceedings of the American control conference, Vol. 5 (pp. 4125–4130).
Denver, CO (Full version submitted to IEEE Transactions on Automatic Control).
Grimm, G., Messina, M. J., Teel, A. R., & Tuna, S. E. (2003b). Nominally robust model predictive control with state constraints. In Proceedings of the 42nd IEEE conference on decision and control, Vol. 2 (pp. 1413–1418). Maui, Hawaii, USA.
Grimm, G., Messina, M. J., Teel, A. R., & Tuna, S. E. (2004). Examples when nonlinear model predictive control is nonrobust. Automatica, 40(10), 1729–1738.
Jiang, Z.-P., & Wang, Y. (2001). Input-to-state stability for discrete-time nonlinear systems. Automatica, 37(6), 857–869.


Kazakos, D., & Tsinias, J. (1993). Stabilization of nonlinear discrete-time systems using state detection. IEEE Transactions on Automatic Control, 38(9), 1398–1400.
Kazakos, D., & Tsinias, J. (1994). The input to state stability condition and global stabilization of discrete-time systems. IEEE Transactions on Automatic Control, 39(10), 2111–2113.
Keerthi, S. S., & Gilbert, E. G. (1985). An existence theorem for discrete-time infinite-horizon optimal control problems. IEEE Transactions on Automatic Control, 30(9), 907–909.
Kellett, C. M., & Teel, A. R. (2003). Results on discrete-time control-Lyapunov functions. In Proceedings of the 42nd IEEE conference on decision and control, Vol. 6 (pp. 5961–5966). Maui, Hawaii, USA.
Kellett, C. M., & Teel, A. R. (2004). Discrete-time asymptotic controllability implies smooth control-Lyapunov function. Systems and Control Letters, 52(5), 349–359.
Lin, Y., Sontag, E. D., & Wang, Y. (1996). A smooth converse Lyapunov theorem for robust stability. SIAM Journal on Control and Optimization, 34(1), 124–160.
Magni, L., De Nicolao, G., & Scattolini, R. (1998). Output feedback receding-horizon control of discrete-time nonlinear systems. In Preprints of the 4th IFAC NOLCOS, Vol. 2 (pp. 422–427). Oxford, UK.
Mayne, D. Q., Rawlings, J. B., Rao, C. V., & Scokaert, P. O. M. (2000). Constrained model predictive control: stability and optimality. Automatica, 36(6), 789–814.
Meadows, E. S., Henson, M. A., Eaton, J. W., & Rawlings, J. B. (1995). Receding horizon control and discontinuous state feedback stabilization. International Journal of Control, 62(5), 1217–1229.
Michalska, H., & Mayne, D. Q. (1995). Moving horizon observers and observer-based control. IEEE Transactions on Automatic Control, 40(6), 995–1006.
Nijmeijer, H. (1982). Observability of autonomous discrete time nonlinear systems: a geometric approach. International Journal of Control, 36(5), 867–874.
Scokaert, P. O. M., Rawlings, J. B., & Meadows, E. S. (1997).
Discrete-time stability with perturbations: application to model predictive control. Automatica, 33(3), 463–470.
Shim, H., & Teel, A. R. (2003). Asymptotic controllability and observability imply semiglobal practical asymptotic stabilizability by sampled-data output feedback. Automatica, 39(3), 441–454.
Sontag, E. D. (1981). Conditions for abstract nonlinear regulation. Information and Control, 51(2), 105–127.
Sontag, E. D. (1984). A concept of local observability. Systems and Control Letters, 5(1), 41–47.
Sontag, E. D. (1990). Further facts about input to state stabilization. IEEE Transactions on Automatic Control, 35(4), 473–476.
Teel, A., & Praly, L. (1994). Global stabilizability and observability imply semiglobal stabilizability by output feedback. Systems and Control Letters, 22(5), 313–325.
Teel, A. R., Moreau, L., & Nesic, D. (2003). A unified framework for input-to-state stability in systems with two time scales. IEEE Transactions on Automatic Control, 48(9), 1526–1544.
Tsinias, J. (1991). A generalization of Vidyasagar's theorem on stabilizability using state detection. Systems and Control Letters, 17(1), 37–42.
Vidyasagar, M. (1993). Nonlinear systems analysis (2nd ed.). Englewood Cliffs: Prentice-Hall.

Michael J. Messina received his B.S. degree in Engineering in 2001 from Harvey Mudd College and his M.S. degree in Electrical and Computer Engineering in 2002 from the University of California, Santa Barbara, where he is currently a Ph.D. student.


Sezai E. Tuna received a B.S. degree in electrical and electronics engineering from Orta Dogu Teknik Universitesi, Ankara, in 2000. He has since been a Ph.D. student in electrical and computer engineering at the University of California, Santa Barbara.

Andrew R. Teel received his Ph.D. degree in Electrical Engineering from the University of California, Berkeley, 1992. He is currently a professor in the Electrical and Computer Engineering Department at the University of California, Santa Barbara where he is also director of the Center for Control Engineering and Computation. He is a Fellow of the IEEE.