Randomized algorithms for quadratic stability of quantized sampled-data systems





Automatica 40 (2004) 839 – 846


Brief paper

Hideaki Ishii (a,*), Tamer Başar (a), Roberto Tempo (b)

(a) Coordinated Science Laboratory, University of Illinois, 1308 West Main Street, Urbana, IL 61801, USA
(b) IEIIT-CNR, Politecnico di Torino, Corso Duca degli Abruzzi 24, Torino 10129, Italy

Received 23 September 2002; received in revised form 21 July 2003; accepted 16 December 2003

Abstract

In this paper, we present a novel development of randomized algorithms for quadratic stability analysis of sampled-data systems with memoryless quantizers. The specific randomized algorithm employed generates a quadratic Lyapunov function and leads to realistic bounds on the performance of such systems. A key feature of this method is that the characteristics of quantizers are exploited to make the algorithm computationally efficient. © 2003 Elsevier Ltd. All rights reserved.

Keywords: Quadratic stability; Quantizers; Randomized algorithms; Sampled-data systems

This paper was not presented at any IFAC meeting. It was recommended for publication in revised form by Associate Editor Andrew R. Teel under the direction of Editor H. Khalil. This work was supported in part by NSF Grant CCR 00-85917 ITR.

* Corresponding author. Tel.: +1-217-265-5544; fax: +1-217-244-1653. E-mail addresses: [email protected] (H. Ishii), [email protected] (T. Başar), [email protected] (R. Tempo).

doi:10.1016/j.automatica.2003.12.011

1. Introduction

Quantization in control systems has recently become an active research topic. The need for quantization inevitably arises when digital networks are part of the feedback loop, and it is of interest to reduce the data rate necessary for the transmission of control signals. In particular, the minimum data rates for certain stabilization problems are found explicitly in Nair and Evans (2000) and Tatikonda (2000). These results, however, rely on the use of time-varying quantizers with memory to achieve the minimum rates. Other related works include Brockett and Liberzon (2000) and Elia and Mitter (2001). In Hou, Michel, and Ye (1997), Wong and Brockett (1999), Ishii and Başar (2002), and Ishii and Francis (2002a,b), on the other hand, the effect of memoryless quantizers in the context of sampled-data systems is studied.

Certainly, if a quantized discrete-time signal takes only a finite number of fixed values, then trajectories may go close to an equilibrium, but not converge, i.e., asymptotic stability cannot be achieved. A question that naturally arises is then, how close can the trajectories get to the equilibrium point? When the sampling period is large, another question is, how close do the trajectories stay to the equilibrium point between sampling instants? These questions have been addressed in the cited references, and bounds on trajectories have been found analytically, albeit at the expense of crude conservatism.

In this paper, we follow a probabilistic approach to analyze the stability of quantized sampled-data systems. The method we introduce systematically finds more realistic bounds on the trajectories as well as on the sampling period. In particular, we consider the setup in Ishii and Francis (2002a), where stabilization of linear sampled-data systems with memoryless quantizers is studied; the stability criterion is quadratic in the continuous-time domain, with a fixed decay rate, thus assuring a satisfactory intersample behavior. We propose a gradient-based algorithm that sequentially finds a quadratic Lyapunov function, if one exists, in a finite number of iterations with probability one.

The motivation for taking the probabilistic approach is as follows. To overcome the degree of conservatism in the original method of Ishii and Francis (2002a), it is mandatory to check the behavior of individual trajectories


of the closed-loop system. However, the number of such trajectories would certainly contribute to the complexity of any deterministic algorithm to perform this task. In contrast, probabilistic algorithms generally have low complexity at the expense of a small risk, expressed in probability; for an overview of this subject, we refer to Vidyasagar (1998) and Tempo, Calafiore, and Dabbene (2004). Specifically, we employ the randomized algorithms proposed in Polyak and Tempo (2001) for robust linear quadratic regulators and subsequently used for other problems in, e.g., Calafiore and Polyak (2001), Fujisaki, Dabbene, and Tempo (2003), Liberzon and Tempo (2003), and Oishi and Kimura (2003). We emphasize that these works are mainly concerned with robust control problems, while the use of such algorithms for stability analysis of nonlinear and quantized systems has not been explored yet.

In this respect, the contribution of this paper is twofold. First, we rigorously extend the approach to the analysis of sampled-data nonlinear systems. As a consequence, the proposed algorithm has the unique feature that it involves randomization in the two spaces of state and time. Second, to further reduce the complexity, the structure of the quantized system is exploited and incorporated into the algorithm. We develop a sufficient condition for the quadratic stability problem, which is tuned for the purpose of reducing the number of trajectories.

The paper is organized as follows. In Section 2, we define the stability criterion for quantized systems and formulate the problem. In Section 3, a sufficient condition for this stability is given. In Section 4, the randomized algorithm is developed and the main result on its convergence is presented. We give numerical results in Section 5 and finally some concluding remarks in Section 6.

Notation: Given a set X ⊂ R^n, X^c is its complement, cl(X) is its closure, and ∂X is its boundary. For x ∈ R^n, ‖x‖ denotes its Euclidean norm. We also denote by B_x(r) the closed ball in R^n with center x and radius r. The space of symmetric matrices in R^{n×n} is equipped with the inner product ⟨X, Y⟩ = tr XY and the Frobenius norm ‖X‖ = (∑_{i,k=1}^n x_{i,k}²)^{1/2}, where x_{i,k} is the (i, k)th entry of the matrix X. Let C ⊂ R^{n×n} be the cone of symmetric nonnegative-definite matrices. We define the projection of a symmetric n × n matrix X on C by X^+ := arg min_{Y∈C} ‖X − Y‖.
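As a concrete illustration of the last definition, the projection X^+ onto the cone C can be computed from an eigendecomposition by clipping the negative eigenvalues at zero. The following minimal NumPy sketch is ours, not part of the paper; the function name proj_psd is arbitrary.

```python
import numpy as np

def proj_psd(X):
    """Projection X+ = argmin_{Y in C} ||X - Y|| of a symmetric matrix X onto
    the cone C of nonnegative-definite matrices (Frobenius norm): keep the
    eigenvectors and clip the negative eigenvalues to zero."""
    w, U = np.linalg.eigh(X)                 # X = U diag(w) U'
    return U @ np.diag(np.maximum(w, 0.0)) @ U.T
```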

Fig. 1. Quantized sampled-data system.

2. Problem formulation

In this section, we present the setup of quantized control systems and formulate the stability analysis problem. Consider the continuous-time system depicted in Fig. 1. The linear time-invariant plant (A, B) is given by

    ẋ(t) = Ax(t) + Bu(t),    (1)

where x(t) ∈ R^n is the state and u(t) ∈ R^m is the control input. We assume that (A, B) is stabilizable and that A is not Hurwitz, i.e., it has at least one eigenvalue with a nonnegative real part. (We consider only the case when A is not Hurwitz, because otherwise the problem to be posed becomes trivial, as we will see shortly.) The sampler S_T is given by S_T : x ↦ x_d, x_d(k) = x(kT), k ∈ Z_+, where T > 0 is the sampling period, and the zeroth-order hold H_T by H_T : u_d ↦ u, u(t) = u_d(k), t ∈ [kT, (k+1)T), k ∈ Z_+.

We now give the definition of a quantizer Q. We say that a partition {Q_j}_{j∈S} of R^n with an index set S is finite if for every r > 0 there is a finite subset S̄ of S such that B_0(r) ⊂ ∪_{j∈S̄} Q_j. Then, given a finite partition {Q_j}_{j∈S} of R^n and a set {u_j}_{j∈S} ⊂ R^m of control inputs, a quantizer Q : R^n → {u_j}_{j∈S} is defined by

    Q(x) = u_j  if x ∈ Q_j,  j ∈ S.    (2)

Clearly, such a quantizer is memoryless. Here, each Q_j in the partition is called a cell. For technical reasons, we assume that the cells Q_j, j ∈ S, are polytopes, i.e., intersections of half spaces, with nonempty interior. Now, in Fig. 1, the control input u(t) is expressed as

    u(t) = Q(x(kT)),  t ∈ [kT, (k+1)T),  k ∈ Z_+.    (3)

For a positive-definite P ∈ R^{n×n}, define the quadratic Lyapunov function V_P(x) := x'Px. Its time derivative V̇_P : R^n × R^m → R along trajectories of system (1) is given by V̇_P(x, u) := (Ax + Bu)'Px + x'P(Ax + Bu). We now give the stability definition used in this paper.

Definition 2.1. Given a scalar R_0 > 0, positive-definite matrices P, J ∈ R^{n×n}, and a fixed quantizer Q, for the closed-loop system in Fig. 1, the ball B_0(r_0) with r_0 > 0 is quadratically attractive from B_0(R_0) with respect to (P, J) if every trajectory x(·) of the system (1) under the control input (3) with x(0) ∈ B_0(R_0) satisfies

    V̇_P(x(t), u(t)) ≤ −x(t)'Jx(t)  or  x(t) ∈ E^P_{r_0}    (4)

for t ≥ 0, where E^P_{r_0} is the largest level set of V_P contained in B_0(r_0), given by

    E^P_{r_0} := {x ∈ R^n : V_P(x) ≤ r_0² λ_min(P)}.    (5)

We note that this definition of stability for sampled-data systems is in the continuous-time domain. Hence, the Lyapunov function V_P(x(t)) decreases at a certain rate even


between sampling instants, and the ellipsoid E^P_{r_0} is an invariant set. We also remark that r_0 is usually smaller than R_0. On the other hand, we observe in (4) that if the matrix A is Hurwitz then, by setting u(·) ≡ 0, there always exists a P for attractiveness for the given J (with r_0 = 0); hence the problem becomes trivial.

The quadratic stability problem is now defined as follows: Given a scalar R_0 > 0, a quantizer Q, a sampling period T > 0, and a positive-definite matrix J ∈ R^{n×n}, find a positive-definite matrix P ∈ R^{n×n} and a scalar r_0 > 0 such that for the closed-loop system in Fig. 1, the ball B_0(r_0) is quadratically attractive from the ball B_0(R_0) with respect to (P, J).

In the rest of this section, we provide some notation. First, we denote by φ(x_0, u, t) the state of system (1) at time t with the initial condition x_0 ∈ R^n (at t = 0) and the constant control input u ∈ R^m, that is, φ(x_0, u, t) := e^{At}x_0 + ∫_0^t e^{Aτ}B dτ u. For P = P' > 0 and u ∈ R^m, let

    X_P(u) := {x ∈ R^n : V̇_P(x, u) ≤ −x'Jx}.    (6)

This is the set of states at which the Lyapunov function decreases when u is applied.

Next, we introduce two sets that will be used throughout the paper. Let D_0, D_1 ⊂ R^n be ellipsoids with D_1 ⊂ B_0(R_0) ⊂ D_0. These sets represent estimates of the invariant sets we are looking for, as will be seen shortly. We assume that cl(Q_j) ∩ ∂D_i does not contain isolated points for i = 0, 1, j ∈ S̄. We impose this technical assumption, required for the convergence of Algorithm 4.4 in Section 4, to avoid the generation of isolated points, which is an event with probability zero. To check whether this assumption is satisfied, we can follow a two-step procedure: first, check if a hyperplane containing one side of the polytope Q_j is tangent to the ellipsoid D_i; if so, then find whether the tangential point is in cl(Q_j). By the definition of finite partitions, there is a finite subset of {Q_j}_{j∈S} that covers D_0; denote by S̄ the index set of the cells in this subset, with N elements. Similarly to (5), we define the ellipsoid Ē^P_{R_0} := {x ∈ R^n : V_P(x) ≤ R_0² λ_max(P)}. Note that the bounding eigenvalue in this set is the maximum one, and hence this ellipsoid is the smallest level set of V_P containing the ball B_0(R_0).
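As a concrete aid for what follows, the constant-input transition φ(x_0, u, t) and the membership test for X_P(u) in (6) can be evaluated numerically as sketched below. This is our own illustration (the function names phi and in_X_P are not from the paper), using the standard augmented-matrix form of the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def phi(A, B, x0, u, t):
    """phi(x0, u, t) = e^{At} x0 + (int_0^t e^{A tau} d tau) B u, the state of
    dx/dt = A x + B u at time t under the constant input u."""
    n, m = B.shape
    M = np.block([[A, B], [np.zeros((m, n)), np.zeros((m, m))]])
    E = expm(M * t)                 # top blocks: [e^{At}, int_0^t e^{A s} ds B]
    return E[:n, :n] @ x0 + E[:n, n:] @ u

def in_X_P(P, J, A, B, x, u):
    """Test x in X_P(u), i.e. Vdot_P(x, u) <= -x' J x, cf. (6)."""
    Vdot = (A @ x + B @ u) @ (P @ x) + x @ (P @ (A @ x + B @ u))
    return Vdot <= -x @ (J @ x)
```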

3. A sufficient condition for quadratic stability

In this section, we give a sufficient condition for the quadratic stability problem. It basically says that the behavior of trajectories starting in D_0 must be checked for t ∈ [0, T]. Special attention is paid to keeping the number of such trajectories low. In particular, we need to check only the trajectories starting on the boundaries of D_0, D_1, and the cells Q_j. Further, for those starting in ∂Q_j, the test can stop as soon as they enter Q_j \ D_1 or leave D_0.


Proposition 3.1. Suppose P = P' > 0 and r_0 > 0 satisfy the following conditions:

(i) D_1 ⊂ E^P_{r_0} ⊂ Ē^P_{R_0} ⊂ D_0, where E^P_{r_0} is strictly inside Ē^P_{R_0};
(ii) for every x_0 ∈ ∂D_0 ∪ ∂D_1, x_0 ∈ X_P(Q(x_0));
(iii) for every j ∈ S̄, x_0 ∈ ∂Q_j ∩ D_0, and t ∈ [0, T], if there does not exist t_1 ∈ (0, t] such that φ(x_0, u_j, t_1) ∈ cl(Q_j ∪ D_0^c \ D_1), then

    φ(x_0, u_j, t) ∈ X_P(u_j) ∪ D_1.    (7)

Then, for the closed-loop system in Fig. 1, the ball B_0(r_0) is quadratically attractive from B_0(R_0) with respect to (P, J).

We first provide a lemma on the complement of X_P(u) and then a proof for the proposition.

Lemma 3.2. For u ∈ R^m and P = P' > 0, X_P^c(u) is nonempty, and each of its connected components is unbounded.

Proof. Let M := PA + A'P + J and m := PBu. We have from (6) that X_P^c(u) = {x : x'Mx + 2x'm > 0}. Note that since A is not stable and J > 0, M has at least one positive eigenvalue. Denote by y the corresponding eigenvector. Clearly, either y or −y is in X_P^c(u), so the set is nonempty. Now fix x_0 ∈ X_P^c(u). Let z = [y'(Mx_0 + m)]y if y'(Mx_0 + m) ≠ 0 and z = y otherwise. Then, x_0 + τz ∈ X_P^c(u) for all τ > 0 because

    (x_0 + τz)'M(x_0 + τz) + 2(x_0 + τz)'m > τ²z'Mz + 2τz'(Mx_0 + m) ≥ 0.

Thus, the connected component of X_P^c(u) containing x_0 is unbounded.

Proof of Proposition 3.1. For j ∈ S̄, let Q̂_j := Q_j ∩ Ē^P_{R_0}. We first claim that

    cl(Q̂_j) ⊂ X_P(u_j) ∪ D_1.    (8)

In (iii), for every j and x_0 ∈ ∂Q_j ∩ D_0, (7) must hold for t = 0. Thus,

    ∂Q_j ∩ D_0 ⊂ X_P(u_j) ∪ D_1.    (9)

On the other hand, because X_P(u_j) is a closed set, we have from (ii)

    cl(Q_j) ∩ (∂D_0 ∪ ∂D_1) ⊂ X_P(u_j).    (10)

Now (9) and (10) imply

    (∂Q_j ∩ D_0 \ D_1) ∪ [cl(Q_j) ∩ (∂D_0 ∪ ∂D_1)] = ∂(Q_j ∩ D_0 \ D_1) ⊂ X_P(u_j).

By Lemma 3.2, the boundary of the bounded set Q_j ∩ D_0 \ D_1 being in X_P(u_j) implies its interior being in X_P(u_j) as well.


Thus, cl(Q_j ∩ D_0 \ D_1) ⊂ X_P(u_j). Finally, by (i), Q̂_j ⊂ Q_j ∩ D_0. Hence we have cl(Q̂_j \ D_1) ⊂ X_P(u_j) and obtain (8).

Now we must show in the following that, for every j ∈ S̄, x_0 ∈ Q̂_j, and t ∈ [0, T],

    φ(x_0, u_j, t) ∈ X_P(u_j) ∪ D_1.    (11)

Then, it follows that every trajectory x(·) with x(0) ∈ B_0(R_0) satisfies x(t) ∈ X_P(Q(x(kT))) ∪ D_1 for t ∈ [kT, (k+1)T) and k ∈ Z_+. By the definition of X_P(u) and (i), at all t ≥ 0, either V_P(x(t)) is decreasing or x(t) ∈ E^P_{r_0}. This implies quadratic attractiveness of the closed-loop system.

Fix j ∈ S̄ and x_0 ∈ Q̂_j. We write x(t) = φ(x_0, u_j, t) for simplicity. Because Q̂_j is an intersection of the polytope Q_j and the ellipsoid Ē^P_{R_0}, the trajectory x(t), t ∈ (0, T), intersects ∂Q̂_j at only a finite number of times, say, at {t_i}_{i=1}^l with 0 < t_1 < ··· < t_l < T. In addition, let t_0 = 0 and t_{l+1} = T. For t ∈ (t_i, t_{i+1}), i = 0, ..., l, either (a) x(t) ∈ Q̂_j or (b) x(t) ∉ Q̂_j. In both cases, we claim that x(t) ∈ X_P(u_j) ∪ D_1, t ∈ (t_i, t_{i+1}). For (a), this follows directly from (8). For (b), it follows from (iii); here we do not consider the case when x(t_i) is in ∂Ē^P_{R_0}, which is not covered by (iii), because by (8) the trajectory cannot leave Q̂_j through ∂Ē^P_{R_0}. Finally, because X_P(u_j) ∪ D_1 is a closed set, we have x(t_i) ∈ X_P(u_j) ∪ D_1, i = 0, 1, ..., l+1. Thus, (11) is shown.

The key idea in Proposition 3.1 is to reduce the number of trajectories as much as possible. This is evident when we compare it with a simpler, equivalent condition as follows: suppose that P and r_0 satisfy condition (i) in the proposition and also that each trajectory starting in the entire set D_0 satisfies (7) for t ∈ [0, T]. The volume of the set of initial states here is clearly positive, whereas in the proposition it is zero. We notice that in the proposition the ellipsoids Ē^P_{R_0} and E^P_{r_0} are invariant sets. Moreover, the region where the Lyapunov function V_P(x(t)) increases is contained in D_1, and thus all trajectories enter E^P_{r_0}. We note that the sets D_0 and D_1 do not depend on P. This is an important aspect in developing an algorithm that searches for P.
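The fact that only boundaries need to be checked can be exercised with a simple spot-check of condition (ii) of Proposition 3.1; the deterministic check mentioned in Section 4 is of this type. The sketch below is ours: it assumes ellipsoids D_i = {x : x'E_i x ≤ 1} given by matrices E0, E1 and a quantizer map Q returning the input vector u_j of the cell containing x; condition (iii) additionally requires the cell geometry and a trajectory test and is omitted here.

```python
import numpy as np

def boundary_points(E, num, rng):
    """Points on {x : x'Ex = 1}: push unit-sphere samples through E^{-1/2}."""
    w, U = np.linalg.eigh(E)
    E_inv_sqrt = U @ np.diag(w ** -0.5) @ U.T
    S = rng.standard_normal((num, E.shape[0]))
    S /= np.linalg.norm(S, axis=1, keepdims=True)
    return S @ E_inv_sqrt.T

def check_condition_ii(P, J, A, B, Q, E0, E1, num=1000, seed=0):
    """Spot-check condition (ii): x in X_P(Q(x)) for sampled x on the
    boundaries of D0 and D1.  Returns the list of violating points."""
    rng = np.random.default_rng(seed)
    violations = []
    for E in (E0, E1):
        for x in boundary_points(E, num, rng):
            u = Q(x)                       # quantized input for the cell of x
            Vdot = (A @ x + B @ u) @ (P @ x) + x @ (P @ (A @ x + B @ u))
            if not (Vdot <= -x @ (J @ x)):
                violations.append(x)
    return violations
```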

4. Randomized algorithm for quantized systems

In this section, we first state some technical results and then present the main results of this paper: the randomized algorithm given as Algorithm 4.4 for the quadratic stability problem and Theorem 4.5, which shows its probabilistic convergence.

First, we discuss the structure of the solution set of the problem. Define by 𝒫 the set of matrices P satisfying (ii) and (iii) in Proposition 3.1. The set 𝒫 will be referred to as the set of feasible matrices. As shown below, this set has the property that if P ∈ 𝒫 then for every λ > 1 there is a neighborhood of λP in 𝒫.

Lemma 4.1. Suppose 𝒫 ≠ ∅. If P ∈ 𝒫 and λ > 1, then λP + ΔP ∈ 𝒫 for all ΔP = ΔP' ∈ R^{n×n} satisfying

    ‖ΔP‖ ≤ (λ − 1) min_{x∈∂D_1} x'Jx / [ 2 max_{x∈∂D_0} ‖x‖ ( max_{x∈∂D_0} ‖Ax‖ + max_{j∈S̄} ‖Bu_j‖ ) ].

The proof is similar to those of stability results for perturbed systems, as in Khalil (1996, Example 5.1), and is thus omitted.

Remark 4.2. The lemma says that for any r > 0 there is a ball of radius r contained in 𝒫 and hence is not generic. This is a fairly strong property of the set of feasible matrices.

Next, we introduce an operator G̃ on symmetric matrices with the property G̃(P) ≥ ε_0 I for a fixed ε_0 > 0. It is used in the proposed algorithm to ensure that P is positive definite. In particular, it replaces the projection P^+ ≥ 0 used in the algorithm in Polyak and Tempo (2001). Given X = X' ∈ R^{n×n}, let its eigenvalue decomposition be X = UΛU', where U is orthogonal and Λ = diag(λ_1, λ_2, ..., λ_n). Now, for ε > 0, let Λ_ε := diag(max(λ_1, ε), ..., max(λ_n, ε)). First we define G_ε(X) := UΛ_εU'. In particular, it is known that the projection of X on C is given by X^+ = G_0(X) (Polyak, 1964). Now, given r > 0 and r_1 ∈ (0, r), define the operator G̃ by

    G̃(X) := G_{ε_0}(X),  where ε_0 := [(r² − r_1²)/n]^{1/2}.    (12)
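A minimal sketch of the operator in (12), with our own function names: G_eps implements G_ε by clipping the eigenvalues from below, so G_eps(X, 0) recovers the projection X^+, and G_tilde fixes the clipping level at ε_0.

```python
import numpy as np

def G_eps(X, eps):
    """G_eps(X) = U Lambda_eps U' with Lambda_eps = diag(max(lambda_i, eps))."""
    w, U = np.linalg.eigh(X)
    return U @ np.diag(np.maximum(w, eps)) @ U.T

def G_tilde(X, r, r1):
    """G~(X) := G_{eps0}(X) with eps0 = sqrt((r^2 - r1^2)/n), cf. (12)."""
    eps0 = np.sqrt((r ** 2 - r1 ** 2) / X.shape[0])
    return G_eps(X, eps0)
```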

Clearly, the smallest eigenvalue ε_0 of G̃(X) can be chosen arbitrarily large. We also note that the parameter r represents the radius of a ball contained in the set 𝒫; as discussed in Remark 4.2, such a ball exists for any r > 0. The following lemma exhibits a certain relation of this operator with r and r_1. We skip the proof, which is based on basic properties of the projection [·]^+.

Lemma 4.3. For symmetric matrices P_1, P_2 ∈ R^{n×n} with P_2 ≥ 0, ‖G̃(P_1) − P_2‖² ≤ ‖P_1 − P_2‖² + r² − r_1².

Now, we introduce probability density functions in state and time, since the main algorithm generates random samples in the two spaces of x and t. To generate a state, Proposition 3.1 suggests that it must be accompanied by an index because, in (iii), boundary points of adjacent cells must be distinguished. Thus, we take the density function f_{x,j} for x and j as

    f_{x,j}(x, j) > 0  if (x, j) ∈ F,    (13)

where F is the set of all pairs (x, j) that appear in Proposition 3.1 and is given by F := {(x, j) : x ∈ [Q_j ∩ (∂D_0 ∪ ∂D_1)] ∪ (∂Q_j ∩ D_0), j ∈ S̄}. On the other hand, the generation of t is required when a state x corresponding to (iii) in the proposition is generated. The density function f_t for t takes


the form

    f_t(t) > 0  if t ∈ [0, T].    (14)
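For illustration, densities satisfying (13) and (14) can be realized as sketched below (our construction, not the paper's): a state on ∂D_0 ∪ ∂D_1 is drawn by pushing a unit-sphere sample through the shape matrix of one of the two ellipsoids, its index j is obtained from an assumed callback cell_index giving the quantizer cell of a point, and times are drawn uniformly on [0, T]. Sampling the remaining part of F, namely ∂Q_j ∩ D_0, requires the cell geometry and is omitted; note that (13) and (14) only require positive densities, not uniform ones.

```python
import numpy as np

def sample_state_and_index(E0, E1, cell_index, rng):
    """Draw x on the boundary of D0 = {x'E0x <= 1} or D1 = {x'E1x <= 1}
    (chosen with equal probability) together with its cell index j."""
    E = E0 if rng.random() < 0.5 else E1
    w, U = np.linalg.eigh(E)
    s = rng.standard_normal(E.shape[0])
    s /= np.linalg.norm(s)
    x = U @ ((w ** -0.5) * (U.T @ s))        # satisfies x'Ex = 1
    return x, cell_index(x)

def sample_times(T, l, rng):
    """l time instants in [0, T], sorted increasingly as required in step (2c)(i)."""
    return np.sort(rng.uniform(0.0, T, size=l))
```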

In the algorithm, due to the randomization in two spaces, we use notation with double indices. At the kth iteration, the algorithm generates a pair (x[k], j[k]) ∈ F of state and index according to f_{x,j} and then, if necessary, it also generates a set {t[k,i]}_{i=0}^{l−1} ⊂ [0, T] of time instants according to f_t. Other parameters updated iteratively take such double indices as well, e.g., P[k,i].

Finally, to simplify notation, we define a scalar function v(P, x, j, t) := V̇_P(φ(x, u_j, t), u_j) + φ(x, u_j, t)'Jφ(x, u_j, t) for P = P' > 0, (x, j) ∈ F, and t ≥ 0. This function has the property

    v(P, x, j, t) ≤ 0  ⇔  φ(x, u_j, t) ∈ X_P(u_j).    (15)
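The scalar function v and its gradient in P, which drive the algorithm below, can be coded directly from the definitions. In this sketch of ours, the already-propagated state xt = φ(x, u_j, t) is passed in, so the functions are independent of how the trajectory is computed.

```python
import numpy as np

def v_value(P, J, A, B, xt, uj):
    """v(P, x, j, t) evaluated at xt = phi(x, u_j, t):
    Vdot_P(xt, u_j) + xt' J xt, cf. (15)."""
    f = A @ xt + B @ uj
    return f @ (P @ xt) + xt @ (P @ f) + xt @ (J @ xt)

def v_grad(A, B, xt, uj):
    """Gradient of v with respect to P (v is linear in P):
    (A xt + B uj) xt' + xt (A xt + B uj)'."""
    f = A @ xt + B @ uj
    return np.outer(f, xt) + np.outer(xt, f)
```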

The main algorithm is based on the gradient of v in P. Since v is linear in P, its gradient ∇_P v is given by ∇_P v(P, x, j, t) = (Aφ(x, u_j, t) + Bu_j)φ(x, u_j, t)' + φ(x, u_j, t)(Aφ(x, u_j, t) + Bu_j)'. This is then used in the iterative randomized algorithm for the quadratic stability problem, presented next.

Algorithm 4.4.
(1) Set an initial P[0,0] > 0, and set r > 0 and r_1 ∈ (0, r) for G̃ in (12).
(2a) Generate (x[k], j[k]) ∈ F according to the density function f_{x,j} given in (13).
(2b) If x[k] ∈ ∂D_0 ∪ ∂D_1, then set

    P[k+1,0] = G̃(P[k,0] − γ[k,0] ∇v[k,0])  if x[k] ∉ X_{P[k,0]}(u_{j[k]}),
    P[k+1,0] = P[k,0]  otherwise,    (16)

where ∇v[k,0] := ∇_P v(P[k,0], x[k], j[k], 0) and γ[k,0] is the step size given by

    γ[k,0] := ( v(P[k,0], x[k], j[k], 0) + r‖∇v[k,0]‖ ) / ‖∇v[k,0]‖².

(2c) If x[k] ∈ ∂Q_{j[k]} ∩ D_0, then
  (i) generate {t[k,i]}_{i=0}^{l−1} ⊂ [0, T] according to f_t in (14), with the indices in increasing order: 0 ≤ t[k,0] < ··· < t[k,l−1] ≤ T;
  (ii) if t[k,i] ≠ 0 and φ(x[k], u_{j[k]}, t[k,i]) ∈ cl(Q_{j[k]} ∪ D_0^c \ D_1), then set P[k+1,0] = P[k,i]; otherwise, set

    P[k,i+1] = G̃(P[k,i] − γ[k,i] ∇v[k,i])  if φ(x[k], u_{j[k]}, t[k,i]) ∉ X_{P[k,0]}(u_{j[k]}) ∪ D_1,
    P[k,i+1] = P[k,i]  otherwise,    (17)

where ∇v[k,i] := ∇_P v(P[k,0], x[k], j[k], t[k,i]) and γ[k,i] is the step size given by

    γ[k,i] := ( v(P[k,0], x[k], j[k], t[k,i]) + r‖∇v[k,i]‖ ) / ‖∇v[k,i]‖²;

  (iii) set P[k+1,0] = P[k,l].
(3) Check Ē^{P[k,0]}_{R_0} ⊂ D_0 and find r_0 > 0, if it exists, such that D_1 ⊂ E^{P[k,0]}_{r_0} ⊂ Ē^{P[k,0]}_{R_0}.

We say that an update in P occurs when a new P is generated in either (16) or (17). Here, we emphasize that in the algorithm the choices of the initial P and of r > 0 are arbitrary. The following is the main theorem of the paper.

Theorem 4.5. Suppose 𝒫 ≠ ∅. Then, Algorithm 4.4 converges in a finite number of steps with probability one.
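To fix ideas, one pass of step (2) of Algorithm 4.4 can be sketched as follows. This is our illustrative rendering, built on the helper sketches given earlier (phi, v_value, v_grad, G_tilde); the geometric predicates in_D1 and in_stop_set, which encode membership in D_1 and in cl(Q_j ∪ D_0^c \ D_1), are assumed to be supplied by the caller.

```python
import numpy as np

def gradient_step(P, v, grad, r, r1):
    """P <- G~(P - gamma*grad) with gamma = (v + r*||grad||)/||grad||^2, cf. (16)-(17)."""
    g = np.linalg.norm(grad)                     # Frobenius norm of the gradient
    return G_tilde(P - ((v + r * g) / g ** 2) * grad, r, r1)

def algorithm_step(P, A, B, J, uj, x, on_boundary_D, times, in_D1, in_stop_set, r, r1):
    """One pass of step (2) for a sampled pair (x, j) with input uj = u_{j[k]}."""
    if on_boundary_D:                            # step (2b): x on dD0 or dD1, t = 0 only
        v0 = v_value(P, J, A, B, x, uj)
        if v0 > 0:                               # x not in X_P(u_j), cf. (15) and (16)
            P = gradient_step(P, v0, v_grad(A, B, x, uj), r, r1)
        return P
    P0 = P                                       # step (2c): v, gradient and test use P[k,0]
    for t in times:
        xt = phi(A, B, x, uj, t)
        if t != 0 and in_stop_set(xt):           # (2c)(ii): stop checking this trajectory
            break
        vt = v_value(P0, J, A, B, xt, uj)
        if vt > 0 and not in_D1(xt):             # phi not in X_{P[k,0]}(u_j) or D1, cf. (17)
            P = gradient_step(P, vt, v_grad(A, B, xt, uj), r, r1)
    return P
```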


To prove this result, we first state a technical lemma. It shows that in the algorithm, if P[k,i] ∉ 𝒫, then the probability of generating x[k], j[k], and t[k,i] that force an update is positive. We define several sets for P ∉ 𝒫 and j ∈ S̄. One is the set of states that cause updates in (16),

    Z_{1j}(P) := {x ∈ Q_j ∩ (∂D_0 ∪ ∂D_1) : x ∉ X_P(u_j)}.

The next two are related to the updates in (17). For the trajectory starting at a given initial state x ∈ R^n under a constant control u_j, j ∈ S̄, let the set of times which force the updates be

    T_{j,x}(P) := {t ∈ [0, T] : there is no t_1 ∈ (0, t] such that φ(x, u_j, t_1) ∈ cl(Q_j ∪ D_0^c \ D_1), and φ(x, u_j, t) ∉ X_P(u_j) ∪ D_1},

and let the corresponding set of states be

    Z_{2j}(P) := {x ∈ ∂Q_j ∩ D_0 : T_{j,x}(P) ≠ ∅}.

Lemma 4.6. In the algorithm, if P[k,i] ∉ 𝒫, then
(i) Prob({x[k] ∈ ∪_{p=1,2} ∪_{j∈S̄} Z_{pj}(P[k,i])}) > 0, and
(ii) Prob({t[k,i] ∈ T_{j,x[k]}(P[k,i])}) > 0 for x[k] ∈ Z_{2j}(P[k,i]) and j ∈ S̄.

Proof. We prove only (i), since (ii) can be shown similarly. Because P[k,i] ∉ 𝒫, by the definition of 𝒫, we have ∪_{p,j} Z_{pj} ≠ ∅. Suppose Z_{1j} ≠ ∅ for some j. Then it follows that Prob({x[k] ∈ Z_{1j}}) > 0. This is because (a) the set Q_j ∩ (∂D_0 ∪ ∂D_1) is a union of intersections of ellipsoidal surfaces and half spaces and does not contain isolated points, by the technical assumption on Q_j, D_0, and D_1 (in Section 2), (b) X^c_{P[k,i]}(u_j) is nonempty and open, and (c) f_{x,j}(x, j) > 0 for all x ∈ Z_{1j}. Otherwise, we have Z_{2j} ≠ ∅ for some j. Let x* ∈ Z_{2j} and t* ∈ T_{j,x*}. It is straightforward to find a neighborhood N of x* such that, for every x_0 ∈ N, t* ∈ T_{j,x_0}. Here, Prob({x[k] ∈ N ∩ (∂Q_j ∩ D_0)}) > 0 and N ∩ (∂Q_j ∩ D_0) ⊂ Z_{2j}. Thus, Prob({x[k] ∈ Z_{2j}}) > 0.

Proof of Theorem 4.5. For ease of notation, we rename the sequence of matrices {P[k,i] : k ∈ Z_+, i ∈ {0, 1, ..., l}} as {P[k] : k ∈ Z_+} with a single index, in the order they appear in the algorithm. We also do the same for {x[k]} and {t[k,i]} and obtain {x[k]} and {t[k]}. (This means x[k] = ··· = x[k+l−1] for some k.) For a fixed k, we assume that P[k] ∉ 𝒫 and that an update occurs at this step with the state and index pair (x[k], j[k]) ∈ F and the time t[k]. Let P* be a feasible solution such that a ball of radius r centered at P* is in 𝒫. By Lemma 4.1, such a P* always exists. To prove the claim, we show in the following that

    ‖P[k+1] − P*‖² < ‖P[k] − P*‖² − r_1².    (18)

It is then clear that the number of updates cannot be more than ‖P[0] − P*‖²/r_1². On the other hand, by Lemma 4.6, x[k] and t[k] that induce an update can be generated with nonzero probability. Thus, the algorithm cannot stop updating at an infeasible P.

Observe in the algorithm that if the matrix is updated according to (16), then x[k] and t[k] satisfy (i) x[k] ∈ ∂D_0 ∪ ∂D_1, x[k] ∉ X_{P[k]}(u_{j[k]}), and t[k] = 0; otherwise, the update is in the form (17), and (ii) x[k] ∈ ∂Q_{j[k]} ∩ D_0 and φ(x[k], u_{j[k]}, t[k]) ∉ X_{P[k]}(u_{j[k]}) ∪ D_1 ∪ D_0^c. It follows that, in both cases (i) and (ii), we have by (15)

    v[k] := v(P[k], x[k], j[k], t[k]) > 0.    (19)

Let ∇v[k] := ∇_P v(P[k], x[k], j[k], t[k]), and let P̂ := P* + r∇v[k]/‖∇v[k]‖. Notice that P̂ ∈ 𝒫 by the definition of P*. This implies

    v(P̂, x[k], j[k], t[k]) ≤ 0.    (20)

We are now in a position to show (18). From the algorithm, P[k+1] takes the form P[k+1] = G̃(P[k] − γ[k]∇v[k]), where

    γ[k] := ( v[k] + r‖∇v[k]‖ ) / ‖∇v[k]‖².    (21)

Now, by Lemma 4.3,

    ‖P[k+1] − P*‖² ≤ ‖P[k] − γ[k]∇v[k] − P*‖² + r² − r_1²
                  = ‖P[k] − P*‖² + (γ[k])²‖∇v[k]‖² − 2γ[k]⟨∇v[k], P[k] − P*⟩ + r² − r_1².    (22)

Here, the third term can be written as follows, because ∇v[k] is a gradient and by the definition of P̂:

    ⟨∇v[k], P[k] − P*⟩ = ⟨∇v[k], P[k] − P̂⟩ + ⟨∇v[k], P̂ − P*⟩ = v[k] − v(P̂, x[k], j[k], t[k]) + r‖∇v[k]‖.

Further, by (20),

    ⟨∇v[k], P[k] − P*⟩ ≥ v[k] + r‖∇v[k]‖.    (23)

Substituting (21) and (23) into (22), we have

    ‖P[k+1] − P*‖² ≤ ‖P[k] − P*‖² − (1/‖∇v[k]‖²)( v[k] + r‖∇v[k]‖ )² + r² − r_1².

Finally, by (19), we obtain (18).

The proof of Theorem 4.5 is based on the assumption that a feasible matrix exists. In practice, we may start the analysis with a system designed using the method in Ishii and Francis (2002a). There, under a similar setup, an analytic but conservative solution is obtained for the problem of designing Q, T, and r_0 for quadratic attractiveness. We can run the analyses in a sequence using smaller r_0, larger T, and/or coarser quantizers, which make the problem more difficult. Since the original values are always solutions for the problem, by varying the values, one should obtain more realistic estimates. We take this approach in the example in the next section.

In general, it is difficult to estimate the number of steps needed before obtaining a feasible P. One can always run the algorithm until no update occurs for many random samples. To confirm whether the obtained P is actually feasible or not, one can check the conditions in Proposition 3.1 deterministically for a given number of samples.

The rate of convergence of the algorithm is determined by several parameters. One is the density function f_{x,j}, whose choice is arbitrary under (13). However, clearly, the convergence is faster if the density is higher in regions where updates occur more frequently. Other parameters are r > 0 and r_1 ∈ (0, r). While a large r implies that the solution P* in the proof is far from the origin, a small r makes the minimum eigenvalue ε_0 of G̃(P) in (12) small, which may be problematic in achieving Ē^P_{R_0} ⊂ D_0. On the other hand, though a large r_1 means faster convergence, as we see in (18), it also determines ε_0. Hence, it is generally not easy to determine these parameters.
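To make the usage just described concrete, a possible outer loop is sketched below (our illustration). It relies on the sampling and update-step sketches given earlier; draw_sample is an assumed callback packaging a sampled state with its input and the geometric predicates, and the "no updates over a run" criterion mirrors the procedure reported in the next section.

```python
import numpy as np

def run_analysis(P_init, A, B, J, T, r, r1, draw_sample, num_samples=10000, l=4, seed=0):
    """One run: num_samples random states, each checked at l random times.
    Returns the resulting P and the number of updates that occurred."""
    rng = np.random.default_rng(seed)
    P, updates = P_init.copy(), 0
    for _ in range(num_samples):
        x, uj, on_boundary_D, in_D1, in_stop_set = draw_sample(rng)
        times = np.sort(rng.uniform(0.0, T, size=l))
        P_new = algorithm_step(P, A, B, J, uj, x, on_boundary_D, times,
                               in_D1, in_stop_set, r, r1)
        if not np.allclose(P_new, P):
            updates += 1
        P = P_new
    return P, updates
```

Runs can then be repeated with D_0 and D_1 redefined around the current P, shrinking r_0 whenever no updates occur, as in the experiment of the next section.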

5. A numerical example

In this section, we consider the magnetic ball levitation system which was studied in Ishii and Francis (2002a, Section 4.7). We used the proposed algorithm to obtain performance estimates of the system designed there.

In this system, a steel ball of mass M is levitated by an electromagnet. The position y of the ball is kept at an equilibrium by controlling the voltage v. In the coil, the current is i, and the resistance and inductance are R and L, respectively. The system is linearized to (1) around an equilibrium [y_0 ẏ_0 i_0]' for the nominal voltage v_0 = 10 V. The state is x = [y − y_0  ẏ − ẏ_0  i − i_0]' and the system matrices A and B are given by

    A = [ 0                        1    0
          2(Rg/v_0)√(Mg/K_1)       0   −2Rg/v_0
          0                        0   −R/L ],

    B = [ 0
          0
          1/L ],

where M = 0.068 kg, R = 10 Ω, L = 0.41 H, K_1 = 3.3 × 10⁻⁵ N m²/A², and g = 9.8 m/s². The controller designed in the reference above is based on an LQR optimal state feedback K obtained from the Riccati equation solution

    P_0 = [  7.80 × 10⁵    1.48 × 10⁴   −3.75 × 10³
             1.48 × 10⁴    279          −70.9
            −3.75 × 10³   −70.9          18.0 ].

A logarithmic quantizer Q and a sampling period T are designed so that the ball B_0(r_0) is quadratically attractive from B_0(R_0) with respect to (P_0, μJ_0), where r_0 = 32, R_0 = 10, μ = 0.10, and J_0 = −(A + BK)'P_0 − P_0(A + BK). The index set is S = Z, and the partition cells are given by Q_0 = {x : Kx ∈ (−α_0, α_0)} and Q_j = {x : Kx ∈ [sgn(j)α_0ρ^{|j|−1}, sgn(j)α_0ρ^{|j|})} for j ≠ 0, where α_0 = 0.451 and ρ = 1.78. The control inputs are u_j = sgn(j)β_0ρ^{|j|−1}, j ∈ S, where β_0 = 0.652. The designed sampling period is T_design = 3.07 × 10⁻³.
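The logarithmic quantizer just described can be written down directly. The sketch below is ours; it uses the parameter values α_0 = 0.451, ρ = 1.78, and β_0 = 0.652 quoted above, while the LQR gain K (whose numerical value is not reproduced in this excerpt) is treated as an input.

```python
import numpy as np

def make_log_quantizer(K, alpha0=0.451, rho=1.78, beta0=0.652):
    """Logarithmic quantizer of the example: the cell of x is determined by the
    scalar Kx, and the quantizer output is u_j = sgn(j)*beta0*rho^(|j|-1),
    with u_0 = 0 on the deadzone |Kx| < alpha0."""
    K = np.asarray(K, dtype=float).reshape(-1)

    def cell_index(x):
        s = float(K @ x)
        if abs(s) < alpha0:
            return 0
        j = int(np.floor(np.log(abs(s) / alpha0) / np.log(rho))) + 1
        return j if s > 0 else -j

    def Q(x):
        j = cell_index(x)
        return 0.0 if j == 0 else float(np.sign(j)) * beta0 * rho ** (abs(j) - 1)

    return cell_index, Q
```

Here cell_index can also serve as the cell-index callback assumed in the sampling sketch of Section 4.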

The conservatism of this controller is in the large radius r_0 = 32 used in the design; in simulation, trajectories normally entered a ball of radius 0.02, even when the size of T was doubled.

We ran Algorithm 4.4 several times consecutively, starting each run with the P resulting from the previous run. For each run, we used 10,000 samples in state and, for each sampled state, 4 samples in time, i.e., l = 4. When there was no update for two runs in a row, we modified r_0; this seemed enough for checking when we used a nonuniform density function f_{x,j}, which was obtained through trial and error. We used T = 1.5 × T_design.

In the first run, we set P[0,0] = P_0 and took D_0 and D_1 to be level sets of P_0 so that B_0(R_0) ⊂ D_0 and D_1 ⊂ B_0(r_0) with r_0 = 6.0. There were two updates. In the next run, we started with the P from the last run and the sets D_0 and D_1 redefined by this new P. There was no update in this run and the next one. We modified r_0 to 0.50 and continued in this manner. The parameters in the runs are shown in Table 1. After 19 runs with 28 updates, we arrived at r_0 = 0.053. The resulting P is

    P_1 = [  7.80 × 10⁵    1.48 × 10⁴   −3.75 × 10³
             1.48 × 10⁴    290          −70.3
            −3.75 × 10³   −70.3          21.2 ].

Table 1. The runs for T = 1.5 × T_design

    Runs     r_0           # of updates   Indices at updates
    1        6.0           2              0, −10
    2, 3     6.0           none           none
    4        0.50          15             0, 1, −14
    5        0.50          9              −6, −11, 13
    6        0.50          1              12
    7        0.50          1              13
    8, 9     0.50          none           none
    10–19    0.25–0.053    none           none

A time response was calculated for the initial state x(0) = [0.70 × 10⁻³  0  0]'. The plot of ‖x(t)‖ is depicted in Fig. 2. We confirm that the trajectory goes under the line indicating r_0 = 0.053.

Fig. 2. The plot of ‖x(t)‖.

For this trajectory, we plotted V_P(x(t)) and v(P, x(t), j(t), t) in Fig. 3; the solid lines are for P = P_1 and the dashed lines for P = P_0. In the top plot, the horizontal line corresponds to the size of the level set E^{P_1}_{r_0}, which all trajectories should enter. A similar line for the level set with P_0 is too close to 0 and is not visible. For quadratic attractiveness, v in the bottom plot must be smaller than 0 whenever V is above the horizontal line in the top plot. The solid lines satisfy this, but not the dashed lines.

Fig. 3. V_P(x(t)) (top) and v(P, x(t), j(t), t) (bottom) for P = P_0 in solid lines and P = P_1 in dashed lines.

6. Conclusion

For sampled-data systems with quantizers, we have developed a quadratic stability analysis method using a randomized algorithm. It provides a computationally efficient way to verify quadratic attractiveness in a systematic manner. This approach is not limited to linear systems and applies equally to nonlinear plants, at least at the conceptual level. Details, however, are different and more involved, and this is a subject for future research.

Acknowledgements

This work was carried out while Roberto Tempo was visiting the Coordinated Science Laboratory at the University of


Illinois. The support by the CSL Visiting Research Professor Program is gratefully acknowledged.

References

Brockett, R. W., & Liberzon, D. (2000). Quantized feedback stabilization of linear systems. IEEE Transactions on Automatic Control, 45, 1279–1289.
Calafiore, G., & Polyak, B. T. (2001). Stochastic algorithms for exact and approximate feasibility of robust LMIs. IEEE Transactions on Automatic Control, 46, 1755–1759.
Elia, N., & Mitter, S. K. (2001). Stabilization of linear systems with limited information. IEEE Transactions on Automatic Control, 46, 1384–1400.
Fujisaki, Y., Dabbene, F., & Tempo, R. (2003). Probabilistic design of LPV control systems. Automatica, 39, 1323–1337.
Hou, L., Michel, A. N., & Ye, H. (1997). Some qualitative properties of sampled-data control systems. IEEE Transactions on Automatic Control, 42, 1721–1725.
Ishii, H., & Başar, T. (2002). Remote control of LTI systems over networks with state quantization. In Proceedings of the 41st Conference on Decision and Control, Las Vegas, Nevada (pp. 830–835).
Ishii, H., & Francis, B. A. (2002a). Limited data rate in control systems with networks. Lecture Notes in Control and Information Sciences, Vol. 275. Berlin: Springer.
Ishii, H., & Francis, B. A. (2002b). Stabilizing a linear system by switching control with dwell time. IEEE Transactions on Automatic Control, 47, 1962–1973.
Khalil, H. K. (1996). Nonlinear systems (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.
Liberzon, D., & Tempo, R. (2003). Gradient algorithms for finding common Lyapunov functions. In Proceedings of the 42nd Conference on Decision and Control, Maui, Hawaii (pp. 4782–4787).
Nair, G. N., & Evans, R. J. (2000). Stabilization with data-rate-limited feedback: Tightest attainable bounds. Systems and Control Letters, 41, 49–56.
Oishi, Y., & Kimura, H. (2003). Computational complexity of randomized algorithms for solving parameter-dependent linear matrix inequalities. Automatica, 39, 2149–2156.
Polyak, B. T. (1964). Gradient methods for solving equations and inequalities. USSR Computational Mathematics and Mathematical Physics, 4(6), 17–32.
Polyak, B. T., & Tempo, R. (2001). Probabilistic robust design with linear quadratic regulators. Systems and Control Letters, 43, 343–353.
Tatikonda, S. (2000). Control under communication constraints. Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA.
Tempo, R., Calafiore, G., & Dabbene, F. (2004). Randomized algorithms for analysis and control of uncertain systems. Springer, to appear.
Vidyasagar, M. (1998). Statistical learning theory and randomized algorithms for control. IEEE Control Systems Magazine, 18(6), 69–85.
Wong, W. S., & Brockett, R. W. (1999). Systems with finite communication bandwidth constraints II: Stabilization with limited information feedback. IEEE Transactions on Automatic Control, 44, 1049–1053.

Hideaki Ishii was born in 1972 in Tokyo, Japan. He received the B.Eng. degree in engineering systems from the University of Tsukuba in 1996, the M.Eng. degree in applied systems science from Kyoto University in 1998, and the Ph.D. degree in electrical engineering from the University of Toronto in 2002. Since 2001, he has been a Postdoctoral Research Associate of the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign. His research interests are in networked control systems, probabilistic algorithms, and hybrid systems.

Tamer Başar received the B.S.E.E. degree from Robert College, Istanbul, and the M.S., M.Phil., and Ph.D. degrees in engineering and applied science from Yale University. After stints at Harvard University and Marmara Research Institute (Gebze, Turkey), he joined the University of Illinois at Urbana-Champaign in 1981, where he is currently the Fredric G. and Elizabeth H. Nearing Professor of Electrical and Computer Engineering. He has published extensively in systems, control, communications, and dynamic games, and has current interests in robust nonlinear and adaptive control, modeling and control of communication networks, control over wireless links, resource management and pricing in networks, risk-sensitive estimation and control, and robust identification. Dr. Başar is the Editor-in-Chief of Automatica, Editor of the Birkhäuser Series on Systems & Control, Managing Editor of the Annals of the International Society of Dynamic Games (ISDG), and a member of editorial and advisory boards of several international journals. He has received several awards and recognitions over the years, among which are the Medal of Science of Turkey (1993), and the Distinguished Member Award (1993), Axelby Outstanding Paper Award (1995) and Bode Lecture Prize (2004) of the IEEE Control Systems Society (CSS). He is a member of the National Academy of Engineering, a member of the European Academy of Sciences, a Fellow of the IEEE, a past president of CSS, and a past president of ISDG.

Roberto Tempo was born in Cuorgnè, Italy, in 1956. In 1980 he graduated in Electrical Engineering at Politecnico di Torino, Italy. From 1981 to 1983 he was with the Dipartimento di Automatica e Informatica, Politecnico di Torino. In 1984 he joined the National Research Council of Italy (CNR) at the research institute IEIIT, Torino, where he has been a Director of Research of Systems and Computer Engineering since 1991. He has held visiting and research positions at the University of Illinois at Urbana-Champaign, the German Aerospace Research Organization in Oberpfaffenhofen, and Columbia University in New York. Dr. Tempo's research activities are mainly focused on robustness analysis and control of uncertain systems and identification of complex systems in the presence of bounded errors. On these topics he has given plenary and semi-plenary lectures at various conferences and workshops, including the Robust Control Workshop, Newcastle, Australia, 2000, the Benelux Meeting on Systems and Control, Mierlo, The Netherlands, 2000, and the Mathematical Theory of Networks and Systems (MTNS), Padova, Italy, 1998. Dr. Tempo is author or co-author of more than one hundred research papers published in international journals, books, and conferences. In 1993, he received the "Outstanding Paper Prize Award" from the International Federation of Automatic Control (IFAC) for a paper published in Automatica. Dr. Tempo has been an Associate Editor of IEEE Transactions on Automatic Control and Systems and Control Letters, and he is currently an Editor of Automatica. He has served as a member of the program committee of several IEEE, IEE, IFAC and EUCA (European Union of Control Associations) conferences and as program Vice-Chair for short papers of the IEEE Conference on Decision and Control, 1999. He is Vice-President for Conference Activities of the IEEE Control Systems Society for the period 2002–2003 and a Fellow of the IEEE for "Contributions to Robust Identification and Control of Uncertain Systems." He is also a member of the EUCA Council.