RELAXATIONS OF ROBUST LINEAR PROGRAMS AND VERIFICATION OF EXACTNESS


Preprints of the 5th IFAC Symposium on Robust Control Design

ROCOND'06, Toulouse, France, July 5-7, 2006

S.G. Dietz ∗,1   C.W. Scherer ∗



∗ Delft Center for Systems and Control, Delft University of Technology, Mekelweg 2, 2628 CD Delft

Abstract: Motivated by a robust positive real synthesis problem, we consider robust linear programming problems with the main goal of verifying whether the computed solution of a particular LMI relaxation is exact. It is shown that this requires solving a system of polynomial equations. The main contribution of this paper is an algorithm for solving such polynomial systems. Contrary to existing approaches, we suggest a technique which does not require the computation of a Gröbner basis of the ideal generated by the polynomials that define the equations.

Keywords: Robust linear programming, polynomial equations, relaxations, verification of exactness.

1. INTRODUCTION

It is by now well-known that a large number of control problems (e.g. H∞ or H2 synthesis) can be reformulated as optimization problems with LMI constraints. Moreover, robust analysis and synthesis problems translate into so-called robust LMIs that can be approximately solved using suitable relaxation schemes, as shown e.g. in El Ghaoui and Lebret (1998); Ben-Tal and Nemirovski (2000); Scherer (2005b); Hol and Scherer (2005).

In this paper, we consider the particular problem of a robust linear program (LP) and show that verifying exactness of some relaxation for a robust LP can be reduced to a system of polynomial equations. As the main result of the paper, we propose to solve such equations with an algorithm that does not require the computation of a Gröbner basis of the ideal generated by the defining polynomials.

1 Research supported by the Technology Foundation STW, applied science division of NWO and the technology programme of the Ministry of Economic Affairs.

2. PROBLEM FORMULATION

Instead of directly stating the problem considered in this paper, we begin with a control-related design problem which will lead us to the genuine challenge of this paper. With discrete-time stable SISO transfer functions $T(z,\hat\delta)$, $S(z,\hat\delta)$ that depend rationally on an uncertain parameter $\hat\delta \in \hat{\boldsymbol\delta} \subset \mathbb{R}^s$, consider the following synthesis problem: Minimize

$$\sup_{|z|\ge 1,\ \hat\delta\in\hat{\boldsymbol\delta}} 2\,\mathrm{Re}\Big[\, T(z,\hat\delta) + S(z,\hat\delta)\,Q(z) \Big] \qquad (1)$$

over Q that is parameterized with given stable basis functions $q_0, \ldots, q_k$ as

$$Q(z) = q_0(z) + \sum_{j=1}^{k} x_j\, q_j(z).$$

Hence $x = (x_1, \ldots, x_k) \in \mathbb{R}^k$ is the decision variable. In order to rewrite (1) as a robust LP with real uncertainties only, let us parameterize $1/z$ as $\mu + i\nu$ with $\mu, \nu \in \mathbb{R}$. If we define

$$\boldsymbol\delta = \{\delta = (\hat\delta, \mu, \nu) : \hat\delta \in \hat{\boldsymbol\delta},\ \mu^2 + \nu^2 \le 1\},$$


the robust LP problem that corresponds to (1) reads as: infimize γ subject to

$$\begin{pmatrix} F(\delta) \\ 1 \end{pmatrix}^{\!*} J(x,\gamma) \begin{pmatrix} F(\delta) \\ 1 \end{pmatrix} < 0 \quad \text{for all } \delta \in \boldsymbol\delta, \qquad (2)$$

where

$$J(x,\gamma) = \begin{pmatrix} 0 & v(x) \\ v(x)^T & -\gamma \end{pmatrix}, \qquad v(x) = \begin{pmatrix} 1 \\ x_1 \\ \vdots \\ x_k \end{pmatrix},$$

and 0 denotes the (k+1)×(k+1) zero matrix,
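To make the structure of (2) concrete, the following minimal numpy sketch (our own illustration, not part of the paper) assembles J(x, γ) and evaluates the quadratic constraint at one uncertainty sample; the numerical data for F(δ) is a placeholder.

```python
import numpy as np

def J(x, gamma):
    """Assemble J(x, gamma) from (2): zero block, border v(x) = (1, x1, ..., xk), corner -gamma."""
    k = len(x)
    v = np.concatenate(([1.0], x))        # v(x) = (1, x1, ..., xk)
    M = np.zeros((k + 2, k + 2))
    M[: k + 1, -1] = v                    # last column
    M[-1, : k + 1] = v                    # last row
    M[-1, -1] = -gamma
    return M

def lp_constraint(F, x, gamma):
    """Evaluate [F; 1]^* J(x, gamma) [F; 1]; the robust LP requires this to be < 0 on delta."""
    w = np.concatenate([F, [1.0]])
    return np.real(w.conj() @ J(x, gamma) @ w)

# one sampled uncertainty value, with placeholder data for F(delta) (k + 1 entries):
x, gamma = np.array([0.3, -0.1]), 2.0
F_of_delta = np.array([0.5 + 0.2j, -0.1j, 0.05])
print(lp_constraint(F_of_delta, x, gamma) < 0)
```

Note that $[F^*\ 1]\,J(x,\gamma)\,[F;\,1] = 2\,\mathrm{Re}\big(v(x)^T F(\delta)\big) - \gamma$, which recovers exactly the objective in (1).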

and the vector F(δ) is given by

$$F(\delta) = \begin{pmatrix}
T\big(\tfrac{1}{\mu+i\nu}, \hat\delta\big) + S\big(\tfrac{1}{\mu+i\nu}, \hat\delta\big)\, q_0\big(\tfrac{1}{\mu+i\nu}\big) \\
S\big(\tfrac{1}{\mu+i\nu}, \hat\delta\big)\, q_1\big(\tfrac{1}{\mu+i\nu}\big) \\
\vdots \\
S\big(\tfrac{1}{\mu+i\nu}, \hat\delta\big)\, q_k\big(\tfrac{1}{\mu+i\nu}\big)
\end{pmatrix}.$$

In the sequel, we will make use of an LFR $F(\delta) = C\Delta(\delta)(I - A\Delta(\delta))^{-1}B + D$ with ∆(δ) depending linearly on δ. It is assumed that the LFR is well-posed on δ.

2.1 SOS-relaxations

The robust LP (2) can be approximately solved using various relaxation schemes. As argued by Scherer (2005a), a broad class of algorithms is defined by choosing real-linear Hermitian-valued mappings G(y), H(y) in the auxiliary variable y ∈ Y so that

$$G(y) \le 0 \qquad (3)$$

implies

$$\begin{pmatrix} \Delta(\delta) \\ I \end{pmatrix}^{\!*} H(y) \begin{pmatrix} \Delta(\delta) \\ I \end{pmatrix} \ge 0 \quad \text{for all } \delta \in \boldsymbol\delta. \qquad (4)$$

Then the infimal γ subject to

$$G(y) < 0, \qquad \begin{pmatrix} I & 0 \\ A & B \end{pmatrix}^{\!*} H(y) \begin{pmatrix} I & 0 \\ A & B \end{pmatrix} + \begin{pmatrix} C & D \\ 0 & I \end{pmatrix}^{\!*} J(x,\gamma) \begin{pmatrix} C & D \\ 0 & I \end{pmatrix} < 0 \qquad (5)$$

defines an upper bound γrel of (2). If δ is semi-algebraic, δ = {δ | g1(δ) ≤ 0, ..., gr(δ) ≤ 0} with polynomials g1(δ), ..., gr(δ), the problem can be handled with sum-of-squares (sos) relaxations (see e.g. Parrilo (2003); Hol and Scherer (2005)). For this purpose, choose polynomial basis matrices U0(δ), ..., Ur(δ) and let Y denote the affine manifold of all y = (P, Y0, Y1, ..., Yr) which satisfy

$$\begin{pmatrix} \Delta(\delta) \\ I \end{pmatrix}^{\!*} P \begin{pmatrix} \Delta(\delta) \\ I \end{pmatrix} + \sum_{j=1}^{r} U_j(\delta)^* Y_j U_j(\delta)\, g_j(\delta) = U_0(\delta)^* Y_0 U_0(\delta). \qquad (6)$$

If we then define

$$G(y) = -\mathrm{diag}(Y_0, Y_1, \ldots, Y_r), \qquad H(y) = P,$$

it is elementary to verify that (3) does indeed imply (4). Under a suitable constraint qualification on the description of δ, one can show that the upper bound γrel does converge to (2) if the basis matrices Uj(δ) include all monomials up to a certain total degree d, and if d → ∞. For a detailed discussion the reader is referred to Hol and Scherer (2005).
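As a small aside, the LFR that underlies (5) is straightforward to evaluate numerically. The following sketch (ours; all data is hypothetical) computes F(δ) from the LFR matrices and performs a crude numerical well-posedness check.

```python
import numpy as np

def eval_lfr(A, B, C, D, Delta):
    """Evaluate the LFR F = C Delta (I - A Delta)^{-1} B + D at a given Delta;
    well-posedness on delta means I - A Delta stays invertible."""
    M = np.eye(A.shape[0]) - A @ Delta
    if np.linalg.cond(M) > 1e12:          # crude numerical well-posedness check
        raise ValueError("LFR nearly ill-posed at this Delta")
    return C @ Delta @ np.linalg.solve(M, B) + D
```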

2.2 Verifying Exactness

Since we are only able to solve the robust LP problem approximately, it is desirable to judge the quality of the relaxation. Unfortunately, systematic techniques for estimating the relaxation gap are not available. Instead, in this paper we consider the particular question of how to detect whether γrel actually equals (2), based on the following result from Scherer (2005b).

Theorem 1. The relaxation value γrel equals (2) iff there exists a dual optimal solution of the relaxation for which the Lagrange multiplier Φ of the second constraint in (5) can be written as

$$\Phi = \sum_{i=1}^{q} z_i\, R(\delta^i) R(\delta^i)^T \qquad (7)$$

where q = rank(Φ), $z_i \ge 0$, $\delta^i \in \boldsymbol\delta$ for i = 1, ..., q, and

$$R(\delta) = \begin{pmatrix} \Delta(\delta)(I - A\Delta(\delta))^{-1} B \\ 1 \end{pmatrix}.$$

This leads us to the problem of verifying whether the optimal dual multiplier Φ admits a representation as in (7). In practice, one hence computes, together with γrel, the dual optimal multiplier Φ, and then tries to solve equation (7) for the $z_i$ and $\delta^i$. Since R(δ) has only one column and the $z_i$ are scalars, these latter variables can easily be eliminated as described next. Let us collect all row vectors which form a basis of the left-kernel of Φ into the matrix $(\tilde C\ \tilde D)$. Then (7) holds iff

$$(\tilde C\ \tilde D)\, R(\delta) = 0 \quad \text{for } \delta \in \{\delta^1, \ldots, \delta^q\}.$$

Therefore, verifying (7) is reduced to solving this system of rational equations in δ. Multiplication with common denominators hence requires us to find the solutions of a system of polynomial equations in δ.

Remark 2. Let us stress again that the direct derivation of the polynomial equations relied on the fact that R(δ) is a column, which in turn was implied by T(z, δ̂) and S(z, δ̂) being SISO transfer functions. For the generalized version of Theorem 1 in which rank(R(δ)) > 1, the reader is referred to Scherer (2005b). Whereas the case q = 1 is very similar to rank(Φ) = 1 and well understood, the situation q > 1 remains to be explored.

Remark 3. Note that the success of applying Theorem 1 depends upon the fact that primal-dual LMI solvers return a Lagrange multiplier Φ of the desired structure, if it exists.

Remark 4. If the decision variable x is one-dimensional, the problem under consideration is closely related to polynomial optimization, for which techniques for extracting optimal solutions from moment relaxations have been developed in Henrion and Lasserre (2005).
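The elimination step above is easy to carry out numerically. The following sketch (our own rendition, with an SVD-based rank decision whose tolerance is an assumption) extracts q = rank(Φ) and a left-kernel basis $(\tilde C\ \tilde D)$, and evaluates the residual $(\tilde C\ \tilde D)R(\delta)$ for a candidate δ.

```python
import numpy as np

def left_kernel(Phi, tol=1e-9):
    """Rows spanning the left kernel of Phi: q = rank(Phi) via SVD,
    the remaining left singular vectors span the kernel."""
    U, s, _ = np.linalg.svd(Phi)
    q = int(np.sum(s > tol * s[0]))
    return U[:, q:].conj().T, q          # (C~ D~) stacked row-wise, and q

def R_of_delta(A, B, Delta):
    """R(delta) = [Delta (I - A Delta)^{-1} B; 1]; B has one column in the SISO case."""
    M = np.eye(A.shape[0]) - A @ Delta
    top = Delta @ np.linalg.solve(M, B)
    return np.vstack([top, [[1.0]]])

def exactness_residual(CD, A, B, Delta):
    """(C~ D~) R(delta); every delta^i in (7) must drive this residual to zero."""
    return CD @ R_of_delta(A, B, Delta)
```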

3. SOLVING THE POLYNOMIAL SYSTEM: STETTER–MÖLLER METHOD

From now on we focus on finding the solutions of the system of equations

$$p_1(x_1, \ldots, x_s) = 0, \;\; \ldots, \;\; p_t(x_1, \ldots, x_s) = 0 \qquad (8)$$

where $p_1, \ldots, p_t \in P_s$ and $P_s$ denotes the algebra of all complex polynomials in the variables $x = (x_1, \ldots, x_s)$. Different techniques are available for solving this classical problem. We mention homotopy methods as an alternative to the method we present; for references see Sommese and Wampler (2005b,a).

Let us turn to our procedure by introducing the following algebraic notions. Let $I = \langle p_1, \ldots, p_t \rangle$ be the ideal generated by the polynomials $p_1, \ldots, p_t$, and denote by V(I) the variety of I, the set of all common zeros of all polynomials in I. Moreover, introduce the factor space Ps\I (with elements denoted as [p]), which plays a fundamental role in Stetter's approach for solving (8) (Stetter (2004); Cox and O'Shea (1992)). Let us assume from now on that V(I) is finite, which implies that Ps\I is a finite-dimensional vector space. Once a basis of this vector space is known, it is well-established (and will be recalled in the next sections) how (8) can be solved by linear algebra methods. Systematic techniques for computing a basis of Ps\I are based on determining some Gröbner basis of the ideal I. As the main contribution of this paper, we present an approach that does not require finding bases of I or Ps\I, while still allowing to compute the solutions of (8) by techniques from numerical linear algebra.

Definition 5. Suppose the components of the polynomial row vector $b = (b_1, \ldots, b_n)$ form a basis of Ps\I. Then the always existing matrices $M_1, \ldots, M_s$ satisfying

$$[x_i\, b(x)] = [b(x)]\, M_i, \quad i = 1, 2, \ldots, s, \qquad (9)$$

are called multiplication or Stetter matrices corresponding to the basis b.

For more detailed information concerning this definition, see Stetter (2004); Cox and O'Shea (1992); Corless and Trager (1997); Corless (1996). It is easy to see why the matrices $M_1, \ldots, M_s$ contain information about V(I). In fact, (9) implies

$$x_i\, b(x) - b(x)\, M_i \in I^n,$$

where $I^n$ denotes the space of (row) vectors with n components in I. Evaluating at some zero $z = (z_1, \ldots, z_s) \in V(I)$ leads to $z_i\, b(z) - b(z)\, M_i = 0$, and exploiting $b(z) \ne 0$ implies that the component $z_i$ of z is an eigenvalue of $M_i$. In Stetter (2004), V(I) is computed by finding a set of joint eigenvectors of the commuting matrices $M_1, \ldots, M_s$. We will rather sketch an approach based on a block-diagonalization of the $M_i$ which lends itself to the desired generalization.
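As a toy illustration of Definition 5 (our own example, not from the paper), take $I = \langle x_1^2 - 1,\ x_2 - x_1 \rangle$, for which $[1], [x_1]$ form a basis of Ps\I and $V(I) = \{(1,1), (-1,-1)\}$:

```python
import numpy as np

# x1 * (1, x1) = (x1, x1^2) = (x1, 1) mod I   -> columns of M1
# x2 * (1, x1) = (x2, x1 x2) = (x1, 1) mod I  -> columns of M2
M1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
M2 = np.array([[0.0, 1.0],
               [1.0, 0.0]])

# Eigendecompose M1, then read off the x2-coordinates by applying M2
# in the same basis (here M1 and M2 commute by construction).
lam1, T = np.linalg.eig(M1)
D2 = np.linalg.inv(T) @ M2 @ T        # diagonal, since T also diagonalizes M2
lam2 = np.diag(D2)
for z1, z2 in zip(lam1, lam2):
    print(np.round([z1.real, z2.real], 6))   # recovers (1, 1) and (-1, -1)
```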

3.1 Algorithm: V(I) from Stetter matrices

The algorithm discussed in this section systematically constructs the zeros of I from the multiplication matrices $M_1, \ldots, M_s$.

Algorithm 1. Construct similarity transformations (applied to all matrices $M_i$ simultaneously) in the following iterative fashion.

Step 1. Suppose that $\lambda_1^1, \ldots, \lambda_1^{k_1}$ is the list of all pairwise different eigenvalues of $M_1$. We can then transform $M_1$ by similarity into $M_1 = \mathrm{diag}(M_1^1, \ldots, M_1^{k_1})$ with $\sigma(M_1^j) = \{\lambda_1^j\}$ for $j = 1, \ldots, k_1$. Since $M_2, \ldots, M_s$ commute with $M_1$, we infer (Corless and Trager, 1997, Proposition 4) that they admit the same block-diagonal structure $M_i = \mathrm{diag}(M_i^1, \ldots, M_i^{k_1})$, $i = 1, \ldots, s$. Moreover, let us record that $\sigma(M_1^j)$ is a singleton, $j = 1, \ldots, k_1$.

Step 2. For any $j \in \{1, \ldots, k_1\}$ consider the block $M_2^j$. As in Step 1, we can transform this matrix by similarity into

$$M_2^j = \mathrm{diag}(M_2^{j,1}, \ldots, M_2^{j,l_j}),$$

where each block on the diagonal has only one eigenvalue, and for different blocks these eigenvalues are different. Again by the commutation


property, all other blocks necessarily admit the same block-diagonal structure

$$M_i^j = \mathrm{diag}(M_i^{j,1}, \ldots, M_i^{j,l_j}) \quad \text{for} \quad i = 1, \ldots, s.$$

This refinement into $l_j$ sub-blocks is performed for all $j = 1, \ldots, k_1$. The number of blocks in $M_i$ is then given by $k_2 := l_1 + \cdots + l_{k_1}$:

$$M_i = \mathrm{diag}(M_i^1, \ldots, M_i^{k_2}), \quad i = 1, \ldots, s,$$

and $\sigma(M_i^j)$ is a singleton, $j = 1, \ldots, k_2$, $i = 1, 2$.

After s steps. The iteration ends with a similarity transformation such that $M_i = \mathrm{diag}(M_i^1, \ldots, M_i^{k_s})$ and $\sigma(M_i^j) = \{\lambda_i^j\}$, $j = 1, \ldots, k_s$, $i = 1, \ldots, s$.
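The following numpy sketch renders Algorithm 1 under the simplifying assumption that the commuting matrices are diagonalizable (the root-subspace and Schur variants of Remark 7 are omitted); the eigenvalue-clustering tolerance is an assumption.

```python
import numpy as np

def algorithm1(Ms, tol=1e-8):
    """Sketch of Algorithm 1 for diagonalizable, commuting M1, ..., Ms:
    each pass splits the current diagonal blocks along the eigenvalues of the next M_i."""
    n = Ms[0].shape[0]
    T = np.eye(n, dtype=complex)              # accumulated similarity transformation
    blocks = [np.arange(n)]                   # column index sets of the current blocks
    for M in Ms:
        refined = []
        for idx in blocks:
            sub = (np.linalg.inv(T) @ M @ T)[np.ix_(idx, idx)]
            lam, V = np.linalg.eig(sub)
            order = np.argsort(lam)           # lexicographic sort groups equal eigenvalues
            lam, V = lam[order], V[:, order]
            T[:, idx] = T[:, idx] @ V         # refine the basis of this invariant subspace
            cuts = np.where(np.abs(np.diff(lam)) > tol)[0] + 1
            refined += [idx[g] for g in np.split(np.arange(len(idx)), cuts)]
        blocks = refined
    # read off one candidate zero per final block (cf. Theorem 6 below):
    return [np.array([(np.linalg.inv(T) @ M @ T)[np.ix_(idx, idx)].diagonal().mean()
                      for M in Ms]).round(6) for idx in blocks]

# With M1, M2 from the toy example above, algorithm1([M1, M2]) returns [(-1,-1), (1,1)].
```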

3.2 Justification of Algorithm 1

In this section we show how to extract V(I) from the blocks $M_i^j$ as constructed in Algorithm 1. In contrast to the approach of Stetter (2004), there is no need to actually determine joint eigenvectors of the multiplication matrices.

Theorem 6. With the notation as in Algorithm 1,

$$\{(\lambda_1^j, \ldots, \lambda_s^j) : j = 1, \ldots, k_s\} = V(I).$$

Proof. We assume that b has been transformed such that (9) is valid for the block-diagonal matrices $M_i$ resulting from Algorithm 1.

Proof of ⊂. Fix $j \in \{1, \ldots, k_s\}$. Since $M_i^j$, $i = 1, \ldots, s$, are a commuting family of matrices, there exists a common eigenvector $v^j \ne 0$. Since $M_i = \mathrm{diag}(M_i^1, \ldots, M_i^{k_s})$, we can extend $v^j$ with zero components to obtain some $v \ne 0$ satisfying

$$M_i\, v = v\, \lambda_i^j \quad \text{for} \quad i = 1, \ldots, s. \qquad (10)$$

For any multi-index $\alpha = (\alpha_1, \ldots, \alpha_s) \in \mathbb{N}_0^s$, let us define $(M_1, \ldots, M_s)^\alpha = M_1^{\alpha_1} M_2^{\alpha_2} \cdots M_s^{\alpha_s}$. This operation extends in a natural fashion to an arbitrary polynomial $q(x) = \sum_\alpha c_\alpha x^\alpha$ in $P_s$ as

$$q(M_1, \ldots, M_s) = \sum_\alpha c_\alpha (M_1, \ldots, M_s)^\alpha.$$

Since $M_1, \ldots, M_s$ are pairwise commuting, it is straightforward to check that (9) implies

$$[q][b] = [qb] = [b]\, q(M_1, \ldots, M_s).$$

Therefore, by using (10), we infer

$$[q][b]\, v = [b]\, q(M_1, \ldots, M_s)\, v = [bv]\, q(\lambda_1^j, \ldots, \lambda_s^j).$$

Now note that $[bv] \ne 0$ since the components of [b] are linearly independent and $v \ne 0$. If we choose $q \in I$, we have $[q] = 0$, and we can hence conclude $q(\lambda_1^j, \ldots, \lambda_s^j) = 0$. Since $q \in I$ was arbitrary, we have proved $(\lambda_1^j, \ldots, \lambda_s^j) \in V(I)$.

Proof of ⊃. Take a zero $z = (z_1, \ldots, z_s) \in V(I)$. Since $[x_i b - b M_i] = 0$ we infer $x_i\, b(x) - b(x)\, M_i \in I^n$ for $i = 1, \ldots, s$. Evaluation at z implies $z_i\, b(z) - b(z)\, M_i = 0$ for $i = 1, \ldots, s$. Now partition $b(z) = (b^1(z), \ldots, b^{k_s}(z))$ according to the block-diagonal structure of the $M_i$. Since $b(z) \ne 0$, there exists some j with $b^j(z) \ne 0$, and for this particular index we infer $z_i\, b^j(z) = b^j(z)\, M_i^j$ for $i = 1, \ldots, s$. This implies $z_i = \lambda_i^j$ for all $i = 1, \ldots, s$ as desired.

Remark 7. The same result holds if the matrices $M_i$ are only transformed into upper block-triangular form with unitary similarities (Schur decomposition).

4. SOLVING POLYNOMIAL SYSTEMS WITHOUT KNOWING Ps\I

In the previous section we have seen how to extract V(I) from the Stetter matrices, which can be computed as soon as a basis of Ps\I is known. In this section we propose an algorithm for determining V(I) without this structural knowledge. Again, we assume that V(I) is a finite set. Define $p = (p_1, \ldots, p_t)$, and fix some monomial (row) vector $b = (b_1, b_2, \ldots, b_n)$ with $b_1 = 1$ as well as some monomial matrices $V_i$ with t columns. In the first step of the algorithm, it is required to verify whether the system of linear equations

$$x_i\, b(x) - b(x)\, N_i = p(x)\, C_i\, V_i(x), \quad i = 1, \ldots, s \qquad (11)$$

in the matrix variables $N_i$ and $C_i$ is solvable. It is not difficult to see that the mere solvability of this equation does indeed imply that the components of [b] span Ps\I.

Proposition 8. Suppose that (11) has a solution. Then span{[b_1], ..., [b_n]} = Ps\I.
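The linear system (11) is a coefficient-matching problem. The sympy sketch below (a schematic rendition of ours: we absorb $C_i$ and $V_i$ into multiplier polynomials of a fixed monomial support, and the system and basis are illustrative) solves it for the toy ideal used earlier.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
p = [x1**2 - 1, x2 - x1]        # instance of (8); here V(I) = {(1, 1), (-1, -1)}
b = [sp.Integer(1), x1]         # monomial row vector with b1 = 1
mons = [sp.Integer(1), x1]      # monomial support allowed on the right of (11)

def solve_11(var, tag):
    """Solve x_i b - b N_i = sum_l p_l * (multiplier with unknown coefficients) for N_i."""
    n, t, m = len(b), len(p), len(mons)
    N = sp.Matrix(n, n, lambda r, c: sp.Symbol(f'N{tag}_{r}{c}'))
    C = sp.Matrix(t * m, n, lambda r, c: sp.Symbol(f'C{tag}_{r}{c}'))
    eqs = []
    for j in range(n):
        lhs = var * b[j] - sum(b[r] * N[r, j] for r in range(n))
        rhs = sum(p[l] * C[l * m + k, j] * mons[k] for l in range(t) for k in range(m))
        eqs += sp.Poly(sp.expand(lhs - rhs), x1, x2).coeffs()   # all coefficients vanish
    sol = sp.solve(eqs, list(N) + list(C), dict=True)
    return N.subs(sol[0]) if sol else None    # no solution: enlarge b and mons

N1 = solve_11(x1, 1)   # -> Matrix([[0, 1], [1, 0]])
N2 = solve_11(x2, 2)   # -> Matrix([[0, 1], [1, 0]])
```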

Proposition 8 implies that one can extract a basis [b̃] of Ps\I from [b]. This basis corresponds to a subspace $V \subset \mathbb{R}^n$ that is invariant under all $N_1, \ldots, N_s$, while the restrictions of $N_1, \ldots, N_s$ to V correspond to the Stetter matrices $M_1, \ldots, M_s$ with respect to b̃. In order to prepare for the algorithm in the next subsection, let us formulate this insight in terms of similarity transformations.

Proposition 9. Suppose that (11) has a solution. Then there exists a similarity transformation

$$T = \begin{pmatrix} T_1 & T_2 \end{pmatrix}, \qquad T^{-1} = \begin{pmatrix} R_1 \\ R_2 \end{pmatrix},$$

for which

$$\begin{pmatrix} R_1 \\ R_2 \end{pmatrix} N_i \begin{pmatrix} T_1 & T_2 \end{pmatrix} = \begin{pmatrix} M_{11}^i & M_{12}^i \\ 0 & M_{22}^i \end{pmatrix},$$

and such that the $M_{11}^i$ are the Stetter matrices with respect to the basis $[\tilde b] = [b]\,T_1$ of Ps\I.

The procedure to extract the zeros of I from the $N_i$ fails unless we know the subspace V (which is equivalent to knowing a basis of Ps\I). In the next section we reveal how Algorithm 1 can be modified in order to still extract V(I) directly from the matrices $N_i$.

4.1 Extracting V(I) from $N_i$, i = 1, ..., s

Given a matrix A, we say that the similarity T transforms A into block root-subspace form if $T^{-1} A T = \mathrm{diag}(A_1, \ldots, A_k)$ where $\sigma(A_i) = \{\lambda_i\}$ and $\lambda_i \ne \lambda_j$ for $i, j = 1, \ldots, k$, $i \ne j$.

Algorithm 2. Suppose that $N_1, \ldots, N_s$ are solutions of (11). Then the following algorithm iteratively constructs s lists of complex numbers $\Lambda_j \in (\mathbb{C} \cup \{\infty\})^n$ on the basis of a sequence of similarity transformations $T^{(j)}$, $j = 1, \ldots, s$.

Step i = 1. Choose nonsingular $T^{(1)} = (T_1, \ldots, T_{k_1})$ and $R^{(1)} = \mathrm{col}(R_1, \ldots, R_{k_1}) = (T^{(1)})^{-1}$ such that

$$\begin{pmatrix} R_1 \\ \vdots \\ R_{k_1} \end{pmatrix} N_1 \begin{pmatrix} T_1 & \cdots & T_{k_1} \end{pmatrix} \qquad (12)$$

is in block root-subspace form. Let $\lambda_j$ and $d_j$ denote the eigenvalue and the dimension of the block $R_j N_1 T_j$ for $j = 1, \ldots, k_1$, and collect this information, with the all-ones vector $e_j$ of length j, as

$$\Lambda_1 = \begin{pmatrix} \lambda_1 e_{d_1} \\ \vdots \\ \lambda_{k_1} e_{d_{k_1}} \end{pmatrix}.$$

Define $V_j^1 = \mathrm{Im}\, T_j$ for $j = 1, \ldots, k_1$.

Step i = 2. For all $j = 1, \ldots, k_1$, choose a basis matrix $K_j$ of the largest subspace $\mathcal{K}_j$ that satisfies

$$N_2 \mathcal{K}_j \subseteq \mathcal{K}_j \quad \text{and} \quad \mathcal{K}_j \subseteq V_j^1.$$

Define $L_j := R_j K_j$ and let $\hat{L}_j$ be a left inverse of $L_j$. Extend $L_j$ to the nonsingular matrix $(L_j\ M_j)$. For $j = 1, \ldots, k_1$, let us now denote the $r_j$ different eigenvalues of $P_j = \hat{L}_j (R_j N_2 T_j) L_j$ with algebraic multiplicity $d_1^j, \ldots, d_{r_j}^j$ by $\lambda_1^j, \ldots, \lambda_{r_j}^j$, and transform $P_j$ as

$$\begin{pmatrix} \hat{Q}_1^j \\ \vdots \\ \hat{Q}_{r_j}^j \end{pmatrix} P_j \begin{pmatrix} Q_1^j & \cdots & Q_{r_j}^j \end{pmatrix}$$

into block root-subspace form (with blocks of dimension $d_1^j, \ldots, d_{r_j}^j$). Then define

$$U_j = \begin{pmatrix} T_j L_j Q_1^j & \ldots & T_j L_j Q_{r_j}^j \end{pmatrix}, \qquad W_j = T_j M_j,$$

and update the similarity transformation $T^{(2)}$ as

$$T^{(2)} = \begin{pmatrix} U_1 & W_1 & \cdots & U_{k_1} & W_{k_1} \end{pmatrix} \qquad (13)$$

with $r_1 + 1 + r_2 + 1 + \ldots$ column blocks of size $d_1^1, \ldots, d_{r_1}^1, \hat{d}_1, d_1^2, \ldots, d_{r_2}^2, \hat{d}_2, \ldots$ as inherited from $U_j$, $W_j$. Moreover define

$$R^{(2)} = (T^{(2)})^{-1} = \mathrm{col}\big(\hat{U}_1, \hat{W}_1, \cdots, \hat{U}_{k_1}, \hat{W}_{k_1}\big),$$

again with the inherited row partition. Note that

$$\begin{pmatrix} \hat{U}_j \\ \hat{W}_j \end{pmatrix} N_2 \begin{pmatrix} U_j & W_j \end{pmatrix} = \begin{pmatrix} \hat{U}_j N_2 U_j & * \\ 0 & \hat{W}_j N_2 W_j \end{pmatrix}.$$

One can show that the eigenvalues of the block $\hat{W}_j N_2 W_j$ cannot occur as the second component of a zero $(z_1, \ldots, z_s)$ in V(I), and only the eigenvalues $\lambda_i^j$ of the matrix $\hat{U}_j N_2 U_j$ will be collected as candidates for second components of zeros in V(I). This motivates the definition

$$\Lambda_2 = \begin{pmatrix} \lambda_1^1 e_{d_1^1} \\ \vdots \\ \lambda_{r_1}^1 e_{d_{r_1}^1} \\ \infty\, e_{\hat{d}_1} \\ \lambda_1^2 e_{d_1^2} \\ \vdots \\ \lambda_{r_2}^2 e_{d_{r_2}^2} \\ \infty\, e_{\hat{d}_2} \\ \vdots \end{pmatrix},$$

which records relevant and irrelevant eigenvalues with their respective multiplicities.
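The key computation in Step i = 2 is the largest $N_2$-invariant subspace contained in a given subspace. The following numpy sketch (ours, and assuming $N_2$ invertible for brevity) computes it with the standard fixpoint iteration $\mathcal{K} \leftarrow \mathcal{K} \cap N_2^{-1}\mathcal{K}$:

```python
import numpy as np

def orth(B, tol=1e-10):
    """Orthonormal basis of Im(B)."""
    if B.size == 0:
        return B.reshape(B.shape[0], 0)
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    return U[:, s > tol * max(s[0], 1)]

def intersect(B1, B2, tol=1e-10):
    """Orthonormal basis of Im(B1) ∩ Im(B2), via the nullspace of [B1, -B2]."""
    if min(B1.shape[1], B2.shape[1]) == 0:
        return np.zeros((B1.shape[0], 0))
    _, s, Vh = np.linalg.svd(np.hstack([B1, -B2]))
    r = int(np.sum(s > tol * s[0]))
    null = Vh[r:].conj().T                   # (u; v) with B1 u = B2 v
    return orth(B1 @ null[:B1.shape[1]], tol)

def largest_invariant_in(N2, V, tol=1e-10):
    """Largest N2-invariant subspace inside Im(V) (the K_j of Step i = 2):
    iterate K <- K ∩ N2^{-1} K until the dimension stalls."""
    K = orth(V, tol)
    while K.shape[1] > 0:
        Knew = intersect(K, np.linalg.solve(N2, K), tol)
        if Knew.shape[1] == K.shape[1]:
            break
        K = Knew
    return K
```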

Preprints of the 5th IFAC Symposium on Robust Control Design

Steps i = 3, ..., s. The iteration proceeds with $N_3$ in the same fashion, with the only difference that the refinement of $T^{(2)}$ is only performed with respect to the blocks $U_j$ in the partition (13). This is continued for $N_4, \ldots, N_s$. After s steps, we end up with s lists $\Lambda_1, \ldots, \Lambda_s$ that contain the desired information on the elements of V(I), as summarized in the next main result of this paper.

Theorem 10. Let $N_1, \ldots, N_s$ satisfy (11), and run Algorithm 2 to generate $\Lambda_1, \ldots, \Lambda_s$. Then each $z \in V(I)$ can be found as a row of $(\Lambda_1, \ldots, \Lambda_s)$ which does not contain ∞.

Proof. The proof and a coordinate-free geometric interpretation of the algorithm can be found in Dietz and Scherer (2005).

Since the eigenvalues of the $N_i$ are a superset of the eigenvalues of the Stetter matrices, one can extract V(I) from $N_1, \ldots, N_s$ by solving the linear system of equations resulting from (11) for some chosen monomial (row) vector b. If no solution is found, one should add monomial terms to b and $V_1, \ldots, V_s$ until (11) is indeed satisfied.

Remark 11. The suggested algorithm reduces to the classical Stetter method in case the elements of b in (11) form a basis of Ps\I. Indeed, in this case the matrices $N_1, \ldots, N_s$ are pairwise commuting. In Step 1, since $R^{(1)} N_1 T^{(1)}$ is in block root-subspace form, one can hence infer that $R^{(1)} N_i T^{(1)}$, $i = 2, \ldots, s$, are all block-diagonal. For Step 2, this just implies that $\mathcal{K}_j$ is equal to the full space $V_j^1$, which in turn implies that the blocks $W_j$ in (13) are all empty, so the list $\Lambda_2$ will not contain any component ∞.

Remark 12. An s-tuple as constructed from the s lists $\Lambda_1, \ldots, \Lambda_s$ is only a candidate solution and therefore not guaranteed to lie in V(I). However, referring to Proposition 9, if $\mathrm{Im}(T_1)$ is the largest invariant subspace for some $N_i$, Algorithm 2 will construct precisely V(I).

Remark 13. Due to the iterative nature of the algorithm, it is difficult to derive theoretical bounds on its computational complexity. In general, the Gröbner basis approach is inefficient when the basis is large compared to the number of isolated solutions of (8). As part of our future work, we will compare the numerical behavior of the different algorithms.

Remark 14. Let us finally stress the strong analogy between sos relaxations and the suggested procedure for solving (8). Both techniques make use of a priori chosen monomial bases, and if b and $V_i$, $i = 1, \ldots, s$, comprise sufficiently many monomials, equation (11) is guaranteed to have a solution.
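In view of Theorem 10 and Remark 12, the final step is to assemble candidate tuples from the lists, discard rows containing ∞, and verify the survivors on the original system (8). A minimal sketch of ours, reusing the running toy example:

```python
import numpy as np

def candidates_from_lists(Lambdas):
    """Stack the lists L1, ..., Ls column-wise and keep rows without inf (Theorem 10)."""
    table = np.column_stack(Lambdas)                 # row j = candidate s-tuple
    finite = np.all(np.isfinite(table), axis=1)      # inf marks irrelevant eigenvalues
    return table[finite]

def verify_zeros(polys, cands, tol=1e-6):
    """Keep only candidates that actually solve (8); polys are callables for p1, ..., pt."""
    return [z for z in cands if all(abs(p(*z)) < tol for p in polys)]

# toy usage with x1^2 - 1 = 0, x2 - x1 = 0:
L1 = np.array([1.0, -1.0]); L2 = np.array([1.0, -1.0])
polys = [lambda x1, x2: x1**2 - 1, lambda x1, x2: x2 - x1]
print(verify_zeros(polys, candidates_from_lists([L1, L2])))
```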


5. SUMMARY AND CONCLUSIONS

For robust linear programs, we have shown that the exactness of a general class of LMI relaxations can be tested by solving a system of polynomial equations. Moreover, we have proposed a new algorithm for computing the zeros of such a system of polynomial equations. As the principal difference compared to classical techniques, this algorithm only requires the solution of a linear system of equations and the block-diagonalization of matrices.

REFERENCES

A. Ben-Tal and A. Nemirovski. Robust solutions of linear programming problems contaminated with uncertain data. Math. Program. Ser. A, 88:411–424, 2000.

R.M. Corless, P.M. Gianni, and B.M. Trager. A reordered Schur factorization method for zero-dimensional polynomial systems with multiple roots. In ISSAC, pages 133–140, 1997.

R.M. Corless. Gröbner basis and matrix eigenvalue problems. ACM SIGSAM Bulletin, 30(4):26–32, 1996.

D. Cox, J. Little, and D. O'Shea. Ideals, Varieties, and Algorithms. Springer, 1992.

S.G. Dietz and C.W. Scherer. Computing zeros of a polynomial system. Technical report, 2005.

L. El Ghaoui, F. Oustry, and H. Lebret. Robust solutions to uncertain semidefinite programs. SIAM J. Optim., 9(1):33–52, 1998.

D. Henrion and J.B. Lasserre. Detecting global optimality and extracting solutions in GloptiPoly. In Positive Polynomials in Control. 2005.

C. Hol and C.W. Scherer. Sum of squares relaxations for polynomial semi-definite programming. Mathematical Programming, 2005.

P.A. Parrilo. Semidefinite programming relaxations for semialgebraic problems. Mathematical Programming, 2003.

C.W. Scherer. Relaxations for robust linear matrix inequality problems with verifications for exactness. SIAM Journal on Matrix Analysis and Applications, 27(2):365–395, 2005a.

C.W. Scherer. LMI relaxations in robust control. European Journal of Control, 2005b.

A.J. Sommese and C.W. Wampler. The Numerical Solution of Systems of Polynomials Arising in Engineering and Science. World Scientific Publishing, Singapore, 2005a.

A.J. Sommese, J. Verschelde, and C.W. Wampler. Introduction to numerical algebraic geometry. In Solving Polynomial Equations: Foundations, Algorithms, and Applications. Springer, 2005b.

H.J. Stetter. Numerical Polynomial Algebra. SIAM, 2004.