A characterization of optimal scaling for structured singular value computation


Systems & Control Letters 15 (1990) 105-109 North-Holland


Nasser M. Khraishi * and Abbas Emami-Naeini *

Systems Control Technology, Inc., 2300 Geng Road, Palo Alto, CA 94303, USA

Received 9 December 1989
Revised 31 March 1990

Abstract. In this note we discuss the problem of finding the optimal diagonal scaling matrices used for computing an upper bound on the structured singular value of a complex matrix. In particular, we propose a new characterization of the optimal elements of the diagonal scaling matrix. A non-differentiable convex programming approach to computing the optimal diagonal scaling which utilizes this new characterization is also proposed.

Keywords: Structured singular value; mu-analysis; mu; robustness; structured perturbations.

1. Introduction

The structured singular value, as introduced by Doyle [2], serves as a tool for analyzing the robustness characteristics of systems in the presence of structured perturbations. The problem of computing the structured singular value is a non-trivial one. Different approaches for computing or approximating the structured singular value of a matrix have been developed over the past few years; see [2,3,7,8]. Simple bounds on the structured singular value are usually obtained either by performing Frobenius-norm scaling using Osborne's method [5] for matrix scaling, or by using the Perron-Frobenius theorem as was suggested by Safonov [7]. As for actually attempting to compute the structured singular value, the problem is either handled as a non-convex optimization problem [3], or as a non-differentiable convex one as was originally suggested by Doyle [2]. In the following, we propose a simple non-differentiable convex optimization method for calculating the structured singular value using diagonal scaling. The method is based on a rather interesting observation regarding the characterization of the optimal scaling.

* The authors are Research Engineer, respectively Senior Research Engineer, with the Advanced Technology Division. Telephone: (415) 494-2233. Research was supported by SCT's Internal Research and Development funds, Project #8553890.

2. The structured singular value problem

Given a certain complex matrix A ∈ C^{n×n}, and a structure 𝒦 as the m-tuple

$$\mathcal{K} \triangleq (k_1, k_2, \ldots, k_m), \tag{1}$$

where

$$\sum_{i=1}^{m} k_i = n, \tag{2}$$

define 𝒳_δ for some δ ≥ 0 as

$$\mathcal{X}_\delta \triangleq \left\{ \operatorname{diag}(\Delta_1, \ldots, \Delta_m) \;\middle|\; \Delta_i \in \mathbb{C}^{k_i \times k_i} \text{ and } \bar{\sigma}(\Delta_i) \le \delta, \text{ for } i = 1, \ldots, m \right\}. \tag{3}$$

If there is no δ such that

$$\det(I + \Delta A) = 0 \tag{4}$$

for some Δ ∈ 𝒳_δ, then the structured singular value μ(A) [2] is defined as

$$\mu(A) \triangleq 0; \tag{5}$$

otherwise, define γ(A) as

$$\gamma(A) \triangleq \inf \left\{ \delta \;\middle|\; \det(I + \Delta A) = 0, \; \Delta \in \mathcal{X}_\delta \right\}, \tag{6}$$

and the structured singular value μ(A) as

$$\mu(A) \triangleq \frac{1}{\gamma(A)}. \tag{7}$$

That is, for any δ such that μ(A)δ < 1, we are assured that det(I + ΔA) ≠ 0 for all Δ ∈ 𝒳_δ.

Computing μ(A) for a matrix A with an arbitrary structure 𝒦 is a non-convex optimization problem. Yet, an approximate solution that serves as an upper bound on the structured singular value may be obtained using a convex optimization approach. To see how this may be done, define 𝒟 as

$$\mathcal{D} \triangleq \left\{ \operatorname{diag}(d_1 I_{k_1}, \ldots, d_m I_{k_m}) \;\middle|\; d_i > 0, \text{ for } i = 1, \ldots, m \right\}, \tag{8}$$

where I_{k_i} is the identity matrix of size k_i. It is well known that

$$\mu(A) \le \inf_{D \in \mathcal{D}} \bar{\sigma}(DAD^{-1}), \tag{9}$$

where equality holds if m ≤ 3 [2]. The optimal diagonal scaling problem of concern is that of finding a D ∈ 𝒟 such that σ̄(DAD⁻¹) best approximates μ(A). That is, we are looking for a D ∈ 𝒟 which solves the following optimization problem:

$$\inf_{D \in \mathcal{D}} \bar{\sigma}(DAD^{-1}). \tag{10}$$

For the solution to be bounded, it is typically required that the matrix A be irreducible [5]. That is, there exists no permutation matrix Φ such that

$$\Phi^{\mathrm{T}} A \Phi = \begin{bmatrix} A_1 & A_2 \\ 0 & A_3 \end{bmatrix}. \tag{11}$$

The above minimization problem is equivalent (because of the positivity condition on the d_i's) to that of [8]:

$$\inf_{S \in \mathcal{S}} \bar{\sigma}\!\left(\mathrm{e}^{S} A\, \mathrm{e}^{-S}\right), \tag{12}$$

where 𝒮 is the set of all matrices of the form

$$S = \operatorname{diag}(s_1 I_{k_1}, \ldots, s_m I_{k_m}), \tag{13}$$

where s_i ∈ R for i = 1, ..., m. The advantage of this transformation is that the optimization problem in Equation (12) is a convex optimization problem. Convexity was proven in [8,9].
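As a concrete illustration of the objectives in Eqs. (10) and (12), the following minimal sketch (assuming Python with numpy; the variable names are ours, not the paper's) evaluates σ̄(DAD⁻¹) for one structured D ∈ 𝒟 and confirms numerically that the choice D = e^S with S = diag(s_1 I_{k_1}, ..., s_m I_{k_m}) yields the same value.

```python
import numpy as np

def sigma_bar(M):
    """Largest (maximum) singular value of M."""
    return np.linalg.norm(M, 2)

rng = np.random.default_rng(0)
blocks = [2, 2]                         # structure K = (2, 2), so n = 4
n = sum(blocks)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# An element of the set in Eq. (8): positive d_i repeated over each block.
d = np.repeat([2.0, 0.5], blocks)
print(sigma_bar(np.diag(d) @ A @ np.diag(1.0 / d)))   # Eq. (10) objective

# The same value via Eq. (12) with S = diag(log d), i.e. D = e^S;
# (e^S A e^-S)_{ij} = e^{s_i} A_{ij} e^{-s_j}, done here by broadcasting.
s_diag = np.log(d)
print(sigma_bar(np.exp(s_diag)[:, None] * A * np.exp(-s_diag)[None, :]))
```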

3. The subgradient algorithm

Before proceeding further, we need to introduce the notion of a subgradient or generalized gradient. This notion generalizes the concept of a gradient for differentiable functions to convex non-differentiable functions. A subgradient g of a convex function f at a point x_0 ∈ R^n is a vector in R^n such that

$$f(x) - f(x_0) \ge \langle g, x - x_0 \rangle \tag{14}$$

for any x in the domain of f, where ⟨g, x − x_0⟩ corresponds to the inner (dot) product between g and x − x_0. If the function f is differentiable at the point x_0, then g = ∇_x f|_{x_0}. Unlike the gradient, a subgradient need not be unique. However, regardless of the differentiability of the function, the set of subgradients of a convex function f at a point x, call it G_f(x), is convex, closed and non-empty; see for instance Shor [10]. The significance of the notion of a subgradient may be seen from the following theorem.

Theorem 1. A vector x_0 is the optimal solution for the problem of minimizing a convex function f, if and only if 0 ∈ G_f(x_0), the set of subgradients of f at x_0.

The proof of this theorem may be found in [6] or [10]. The above theorem forms a basis for characterizing the optimal solution for a convex program of the form

minimize f(x),

where f: Ω → R is a convex (possibly non-differentiable) function on some convex set Ω. To confirm that a solution x_0 is a global optimal solution, it is enough to check whether 0 ∈ G_f(x_0).
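For example (our illustration, not from the paper), for the scalar convex function f(x) = |x| the subgradient set is G_f(x) = {1} for x > 0, G_f(x) = {−1} for x < 0, and G_f(0) = [−1, 1]; since 0 ∈ G_f(0), the theorem certifies x_0 = 0 as the global minimizer even though f is not differentiable there.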


Furthermore, utilizing the concept of a subgradient, there are exceedingly simple algorithms for minimizing a convex non-differentiable optimization problem. For example, consider the following algorithm [10]:

Subgradient Algorithm. Consider the convex program described by

minimize f(x),

where x ∈ R^m, and assume that the solution set is non-empty; then:
1. set k = 0; choose an initial guess, x_0;
2. choose g to be a subgradient of f at x_k;
3. set x_{k+1} = x_k − λ_k g for some λ_k > 0;
4. set k = k + 1 and go to step 2.

The interesting result is that if Σ λ_k = ∞ and lim_{k→∞} λ_k = 0, a subsequence of the above sequence of x_k's converges to the optimal solution. One such suitable choice of λ_k's is λ_k = 1/k. More elaborate implementations of the above simple algorithm with guaranteed rates of convergence can be found in [10].
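A minimal transcription of this iteration in Python (assuming numpy; the ℓ₁ example and the function names are ours, not the paper's):

```python
import numpy as np

def subgradient_method(f, subgrad, x0, iters=1000):
    """Steps 1-4 above with lambda_k = 1/k; returns the best iterate found."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(1, iters + 1):
        g = subgrad(x)                       # step 2: any subgradient at x_k
        x = x - (1.0 / k) * g                # step 3: x_{k+1} = x_k - lambda_k g
        if f(x) < best_f:                    # the raw sequence need not decrease
            best_x, best_f = x.copy(), f(x)  # monotonically, so track the best
    return best_x, best_f

# Example on the non-differentiable convex function f(x) = |x_1| + |x_2|;
# np.sign gives a valid subgradient of the l1 norm at every point.
print(subgradient_method(lambda x: np.abs(x).sum(), np.sign, [3.0, -2.0]))
```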

4. Characterizing the optimal diagonal scaling

Consider the matrix S in Eq. (13) and its diagonal entries. That is, for any S, consider the vector s ∈ R^m such that

$$s = [s_1, \ldots, s_m]^{\mathrm{T}},$$

where the relation between s and S is given by the simple linear transformation

$$S = \operatorname{diag}(Ps), \qquad P = \begin{bmatrix} \mathbf{1}_{k_1} & & & 0 \\ & \mathbf{1}_{k_2} & & \\ & & \ddots & \\ 0 & & & \mathbf{1}_{k_m} \end{bmatrix},$$

and 1_{k_i} is a vector of k_i ones. The optimization problem in Eq. (12) can be re-stated as

$$\inf_{s \in \mathbb{R}^m} \bar{\sigma}\!\left(\mathrm{e}^{\operatorname{diag}(Ps)} A\, \mathrm{e}^{\operatorname{diag}(-Ps)}\right). \tag{15}$$

Since the relation between s and S is linear, convexity is preserved. That is, the optimization problem in Eq. (15) is still a convex optimization problem. Furthermore, note that this new optimization problem is an unconstrained one with no structure or positivity constraints. Given any vector s ∈ R^m, define the function

$$f(s) \triangleq \bar{\sigma}\!\left(\mathrm{e}^{\operatorname{diag}(Ps)} A\, \mathrm{e}^{\operatorname{diag}(-Ps)}\right). \tag{16}$$

Since f is convex, it is differentiable almost everywhere. In particular, it is differentiable at points where the maximum singular value is isolated (i.e. of multiplicity 1). For such points the subgradient is equivalent to the gradient; that is, g_f(s) = ∇_s f. Next we attempt to evaluate the subgradient of f for a given s even if the maximum singular value is not isolated. Assuming u and v are any left and right singular vector pair corresponding to the maximum singular value in the expression for f(s), we have that

$$f(s) = u^* \mathrm{e}^{\operatorname{diag}(Ps)} A\, \mathrm{e}^{\operatorname{diag}(-Ps)} v. \tag{17}$$

Careful examination of this expression reveals that a possible subgradient is given by

$$g_f(s) = f(s) \begin{bmatrix} \|u_1\|^2 - \|v_1\|^2 \\ \vdots \\ \|u_m\|^2 - \|v_m\|^2 \end{bmatrix}, \tag{18}$$

where v_i and u_i are the subvectors of v and u, respectively, corresponding to the i-th block.

Theorem 2 (Subgradient Calculation). For any s ∈ R^m, g_f(s) is a subgradient of the function f(s).

Proof. See Appendix A. □
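A direct numerical transcription of Eq. (18) might look as follows (a sketch assuming numpy; subgradient_f and the block layout follow our earlier illustrative helpers, not code from the paper):

```python
import numpy as np

def subgradient_f(s, A, blocks):
    """g_f(s) of Eq. (18) from one maximum singular pair of the scaled matrix."""
    d = np.repeat(s, blocks)                    # the n-vector Ps
    M = np.exp(d)[:, None] * A * np.exp(-d)[None, :]
    U, sig, Vh = np.linalg.svd(M)
    u, v = U[:, 0], Vh[0, :].conj()             # left/right pair for sigma_bar
    g = np.empty(len(blocks))
    row = 0
    for i, k in enumerate(blocks):
        # ||u_i||^2 - ||v_i||^2 over the i-th block of the structure
        g[i] = (np.abs(u[row:row + k]) ** 2).sum() - (np.abs(v[row:row + k]) ** 2).sum()
        row += k
    return sig[0] * g                           # the f(s) factor in Eq. (18)
```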

We note that Eq. (18) gives a simple expression for computing the subgradient of the complicated function in Eq. (16). This computed subgradient may be utilized in any non-differentiable convex optimization algorithm. Most such algorithms require the computation of a single subgradient. Furthermore, since the singular vectors corresponding to the maximum singular value are the only ones needed for computing the subgradient, an algorithm such as the one proposed in [4] may


be used for computing such vectors rather than resorting to a conventional singular value decomposition. Many subgradient-type algorithms for non-differentiable convex optimization may be found in [10].

If we define G_f(s) as the set of all convex combinations (convex hull) of the subgradients of f corresponding to the different singular vector pairs, the following theorem gives a characterization of the optimal diagonal scaling. It should be noted here that similar results were obtained in [1].¹

¹ The authors wish to thank the anonymous reviewer for bringing this reference to their attention and for other helpful suggestions.

Theorem 3 (Main Result). If for some s_0 ∈ R^m we have that

$$0 \in G_f(s_0),$$

then s_0 solves the optimization problem in Eq. (15), and D = e^{S_0} solves the optimization problem in Eq. (10). Furthermore, if the maximum singular value is isolated, then

$$\mu(A) = f(s_0),$$

the structured singular value of A, regardless of the number of blocks m.

Proof. The first part follows directly from the characterization of the solution of a convex program as was discussed in the previous section, and from the fact that there is a one-to-one correspondence between elements of R^m and elements of 𝒟.

As for the second statement, we note that if the maximum singular value is isolated, then f is differentiable and the singular vectors u and v are unique. Consequently, G_f(s_0) = {0}. That is,

$$\|u_i\| = \|v_i\| \quad \text{for } i = 1, \ldots, m.$$

But Fan and Tits [3, Theorem 2.1] showed that

$$\mu(A) = \max_{\|x\|=1} \left\{ \|Ax\| \;\middle|\; \|x_i\|\,\|Ax\| = \|(Ax)_i\| \right\}, \tag{19}$$

where x_i and (Ax)_i are the subvectors of x and Ax, respectively, corresponding to the i-th block.

To see that v solves this problem, we first note that ‖v‖ = 1. Define

$$\bar{A} \triangleq \mathrm{e}^{\operatorname{diag}(Ps_0)} A\, \mathrm{e}^{\operatorname{diag}(-Ps_0)}. \tag{20}$$

Note that μ(Ā) = μ(A) [2]. That is,

$$\mu(\bar{A}) = \max_{\|x\|=1} \left\{ \|\bar{A}x\| \;\middle|\; \|x_i\|\,\|\bar{A}x\| = \|(\bar{A}x)_i\| \right\}. \tag{21}$$

But, since v is a singular vector of Ā corresponding to the maximum singular value, we have that

$$\|\bar{A}v\| = \sup_{\|x\|=1} \{\|\bar{A}x\|\};$$

that is, μ(Ā) ≤ ‖Āv‖. But

$$\bar{A}v = f(s_0)\, u.$$

Thus, from the optimality conditions we have that ‖v_i‖ = ‖u_i‖. So,

$$\|(\bar{A}v)_i\| = f(s_0)\,\|v_i\|.$$

That is, the conditions in Eq. (21) are satisfied, and f(s_0) is in fact equal to the structured singular value of A. □
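To illustrate how Theorems 1-3 combine in practice, the following sketch (assuming numpy and the illustrative f, make_P, and subgradient_f helpers sketched above; our construction, not the authors' implementation) runs the subgradient iteration of Section 3 on f and treats a vanishing g_f(s) as the practical stand-in for the optimality test 0 ∈ G_f(s), which is conclusive when the maximum singular value is simple:

```python
import numpy as np

rng = np.random.default_rng(1)
blocks = [1, 2, 1]                              # structure K = (1, 2, 1)
n = sum(blocks)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

s = np.zeros(len(blocks))                       # start from D = I
best = f(s, A, blocks)
for k in range(1, 2001):
    g = subgradient_f(s, A, blocks)
    if np.linalg.norm(g) < 1e-10:               # g_f(s) ~ 0: stop (Theorem 3)
        break
    s = s - (1.0 / k) * g                       # step lambda_k = 1/k
    best = min(best, f(s, A, blocks))

print("unscaled bound sigma_bar(A):", np.linalg.norm(A, 2))
print("best scaled bound on mu(A) :", best)
```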

5. Conclusions

In this paper we presented a new characterization of the optimal diagonal scaling of a matrix used for finding an upper bound on the structured singular value of the matrix. This characterization was given in terms of the singular vectors corresponding to the maximum singular value of the scaled matrix. The above characterization may be used as a basis for a non-differentiable convex optimization approach to calculating the optimal scaling. One such procedure is proposed, and a simple calculation of the quantities required by the procedure is given. This convex optimization approach overcomes the difficulties encountered with non-convex optimization approaches, which may converge to local solutions. Furthermore, the ease with which this approach may be implemented overcomes the difficulties encountered with other steepest-descent type convex optimization algorithms, especially since the optimal diagonal scaling problem is a non-differentiable one.

Appendix A. Proof of Theorem 2

Let A, 𝒦, and P be as defined above. That is, A ∈ C^{n×n}, 𝒦 is some given structure, and P ∈ R^{n×m}. For given s and s_0 ∈ R^m, define h: R → R as

$$h(\beta) = u^* \mathrm{e}^{\operatorname{diag}(P(s_0+\beta s))} A\, \mathrm{e}^{\operatorname{diag}(-P(s_0+\beta s))} v, \tag{22}$$

where u, v ∈ C^n are chosen to be any singular vector pair corresponding to the maximum singular value of

$$\mathrm{e}^{\operatorname{diag}(Ps_0)} A\, \mathrm{e}^{\operatorname{diag}(-Ps_0)}.$$

It is obvious that h is infinitely differentiable in β. Careful algebraic manipulation reveals that

$$\left.\frac{\partial h}{\partial \beta}\right|_{\beta=0} = f(s_0) \left\langle \begin{bmatrix} \|u_1\|^2 - \|v_1\|^2 \\ \vdots \\ \|u_m\|^2 - \|v_m\|^2 \end{bmatrix}, \; s \right\rangle, \tag{23}$$

where v_i and u_i are the subvectors of v and u, respectively, corresponding to the i-th block of the structure 𝒦. Comparing the above with Eq. (18), we have that

$$\langle g_f(s_0), s \rangle = \left.\frac{\partial h}{\partial \beta}\right|_{\beta=0}. \tag{24}$$

To prove Theorem 2, it suffices to show [6, Theorem 23.2] that

$$f'(s_0; s) \ge \langle g_f(s_0), s \rangle \tag{25}$$

for all s_0 and s, where f'(s_0; s) is the directional derivative of f at s_0 in the direction s and is defined as

$$f'(s_0; s) = \inf_{\beta>0} \frac{f(s_0+\beta s) - f(s_0)}{\beta}. \tag{26}$$

From the definition of the maximum singular value, we have

$$f(s_0+\beta s) \ge h(\beta), \tag{27}$$

$$f(s_0) = h(0). \tag{28}$$

Subtracting, we have

$$f(s_0+\beta s) - f(s_0) \ge h(\beta) - h(0). \tag{29}$$

So, for β > 0,

$$\frac{f(s_0+\beta s) - f(s_0)}{\beta} \ge \frac{h(\beta) - h(0)}{\beta}. \tag{30}$$

Thus,

$$\inf_{\beta>0} \frac{f(s_0+\beta s) - f(s_0)}{\beta} \ge \inf_{\beta>0} \frac{h(\beta) - h(0)}{\beta}. \tag{31}$$

But h is differentiable. Thus,

$$f'(s_0; s) \ge \left.\frac{\partial h}{\partial \beta}\right|_{\beta=0} = \langle g_f(s_0), s \rangle. \tag{32}$$

□
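Theorem 2 can also be spot-checked numerically: the subgradient inequality (14) with g = g_f(s_0) must hold for every pair of points. The following sketch (assuming numpy and the illustrative f and subgradient_f helpers from Section 4) does this for random s_0 and s:

```python
import numpy as np

rng = np.random.default_rng(2)
blocks = [2, 2]
n = sum(blocks)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

for _ in range(5):
    s0 = rng.standard_normal(len(blocks))
    s1 = rng.standard_normal(len(blocks))
    lhs = f(s1, A, blocks) - f(s0, A, blocks)           # f(x) - f(x0)
    rhs = subgradient_f(s0, A, blocks) @ (s1 - s0)      # <g_f(s0), x - x0>
    print(lhs >= rhs - 1e-12, round(lhs, 6), round(rhs, 6))
```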

References

[1] R. Daniel, B. Kouvaritakis and H. Latchman, Principal direction alignment: a geometric framework for the complete solution to the μ-problem, IEE Proc. D 133 (1986) 45-56.
[2] J. Doyle, Analysis of feedback systems with structured uncertainties, IEE Proc. D 129 (1982) 242-250.
[3] M. Fan and A. Tits, Characterization and efficient computation of the structured singular value, IEEE Trans. Automat. Control 31 (1986) 734-743.
[4] G. Golub, F. Luk and M. Overton, A block Lanczos method for computing the singular values and corresponding singular vectors of a matrix, ACM Trans. Math. Software 7 (1981) 149-169.
[5] E. Osborne, On pre-conditioning of matrices, J. Assoc. Comput. Mach. 7 (1960) 338-345.
[6] R.T. Rockafellar, Convex Analysis (Princeton University Press, Princeton, NJ, 1970).
[7] M. Safonov, Stability margins of diagonally perturbed multivariable feedback systems, IEE Proc. D 129 (1982) 251-256.
[8] M. Safonov and J. Doyle, Minimizing conservativeness of robustness singular values, in: S. Tzafestas, Ed., Multivariable Control (Reidel, New York, 1984).
[9] R. Sezginer and M. Overton, The largest singular value of e^X A_0 e^{-X} is convex on convex sets of commuting matrices, IEEE Trans. Automat. Control 35 (1990) 229-230.
[10] N. Shor, Minimization Methods for Non-differentiable Functions (Springer-Verlag, Berlin-New York, 1985).