Operations Research Letters 38 (2010) 361–365
A facial reduction algorithm for finding sparse SOS representations

Hayato Waki*, Masakazu Muramatsu

Department of Computer Science, The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu-shi, Tokyo, 182-8585, Japan

Article history: Received 5 November 2009; Accepted 4 May 2010; Available online 31 May 2010.

Keywords: Facial reduction algorithms; Polynomial optimization problems; Semidefinite programming; Sum of squares

Abstract. The facial reduction algorithm reduces the size of the positive semidefinite cone in SDP. The elimination method for a sparse SOS polynomial [M. Kojima, S. Kim, H. Waki, Sparsity in sums of squares of polynomials, Math. Program. 103 (2005) 45–62] removes monomials which do not appear in any SOS representation. In this paper, we establish a relationship between a facial reduction algorithm and the elimination method for a sparse SOS polynomial.

© 2010 Elsevier B.V. All rights reserved.
1. Introduction

Since Lasserre [5] and Parrilo [8] proposed semidefinite programming (SDP) relaxations for polynomial optimization problems (POPs), various powerful algorithms for solving POPs by using SDP and sums of squares (SOS) of polynomials have been proposed. These results are summarized in the excellent survey [6]. In general, if a POP is large-scale, e.g., it has hundreds of variables, then the resulting SDP becomes too large to compute the optimal value. Furthermore, it is suggested in [16] that some SDP relaxation problems for POPs are heavily ill-posed. It is necessary to exploit the structure of a given POP, e.g., sparsity and/or symmetry, to reduce the size of the SDP. To this end, Kojima et al. [4] proposed a method to reduce the size of the SDP obtained from a sparse SOS polynomial. This method was also discussed in [7]. In this paper, we call the method the elimination method for a sparse SOS polynomial (EMSSOSP). EMSSOSP removes monomials which do not appear in any SOS representation of a sparse SOS polynomial. We call those monomials redundant.

A facial reduction algorithm (FRA) was proposed by Borwein and Wolkowicz [1,2]. Ramana et al. [11] showed that FRA for SDP can generate an SDP which has an interior feasible solution. It is well known that such an SDP can be solved by primal–dual interior-point methods without numerical difficulty.

The purpose of this paper is to establish a relationship between FRA and EMSSOSP. Specifically, we show that EMSSOSP can be regarded as a partial application of FRA. This is one of the contributions of this paper.
* Corresponding author.
E-mail addresses: [email protected] (H. Waki), [email protected] (M. Muramatsu).
doi:10.1016/j.orl.2010.05.011
As a by-product of this result, we prove that a computationally heavy part of EMSSOSP proposed in [4] is redundant for finding a set of unnecessary monomials for an SOS representation of a sparse SOS polynomial. This part enumerates all integer vectors in the convex hull of a set, and the authors of [4] reported that this part has a much higher computational cost than the other parts. In contrast, we can find the same monomial set as the original version in [4] by replacing this part with the use of all integer points in N^n_r. Because the latter set is trivial to enumerate, we can expect to obtain the same monomial set at less computational cost.

In this paper, let R and N be the sets of real and natural numbers, respectively. For n, r ∈ N, we define

  N^n_r := { α = (α_1, ..., α_n) ∈ N^n | Σ_{i=1}^n α_i ≤ r }.

For n ∈ N, S^n and S^n_+ denote the sets of n × n symmetric matrices and positive semidefinite matrices, respectively. For A ⊆ R^n and α ∈ R, we define αA := {αa | a ∈ A}.

2. The elimination method for a sparse SOS representation

EMSSOSP removes redundant monomials for f. The resulting SDP is equivalent to the original SDP constructed by Parrilo [8], and its size is smaller than that of the original SDP. In [13], it was demonstrated that EMSSOSP improves the computational efficiency of SDP relaxation.

Let f be a polynomial of degree 2r and write f(x) = Σ_{α∈F} f_α x^α, where x^α := x_1^{α_1} ··· x_n^{α_n}, f_α denotes the coefficient corresponding to the monomial x^α, and F is the set of α ∈ N^n such that f_α is nonzero. Then F, a finite subset of N^n_{2r}, is called the support of f. If the number of elements in F is small, then we call f sparse.

We assume that f is an SOS polynomial, f(x) = Σ_{j=1}^k g_j(x)^2, where k and the coefficients of the polynomials g_j are unknown. Now because g_j is a polynomial, we write g_j using a finite set G_j ⊆ N^n_r
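As a small illustration (our own sketch, not from the paper), a sparse polynomial can be represented by a dictionary mapping exponent vectors α to coefficients f_α, so that the support F is simply the key set, and the dense monomial set N^n_r can be enumerated directly. The polynomial below is the one used later in Example 3.4.

```python
from itertools import product

# Represent a polynomial by a dict mapping exponent vectors alpha to
# coefficients f_alpha; its support F is the key set.
#   f(x1, x2) = x1^4 - 2*x1^2*x2^2 + x2^4  (the polynomial of Example 3.4)
f = {(4, 0): 1.0, (2, 2): -2.0, (0, 4): 1.0}
F = set(f)  # the support of f, a finite subset of N^2_4

def N(n, r):
    """Enumerate N^n_r = {alpha in N^n : sum(alpha) <= r}."""
    return {a for a in product(range(r + 1), repeat=n) if sum(a) <= r}

# For the dense relaxation of a degree-2r polynomial in n variables, the
# candidate exponent set is N^n_r; here n = 2, r = 2 gives 6 monomials.
print(sorted(N(2, 2)))  # [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (2, 0)]
```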
as follows: g_j(x) = Σ_{α∈G_j} (g_j)_α x^α, where (g_j)_α is the coefficient corresponding to the monomial x^α. Let G = ∪_{j=1}^k G_j. Then we can rewrite the polynomials g_j with G instead of G_j. Indeed, if G \ G_j ≠ ∅, set the coefficient (g_j)_α to be zero for α ∈ G \ G_j. In this case, we say that f has an SOS representation with G. Also, if the number of elements of G is small, then we say that f has a sparse SOS representation with G. Once G is found, we can construct an SDP by using the following lemma. We remark that the lemma is equivalent to Theorem 1 in [10] if G = N^n_r.

Lemma 2.1 (Lemma 2.1 in [4]). Let G be a finite subset of N^n and u_G(x) = (x^α : α ∈ G). Then f has an SOS representation with G if and only if there exists a positive semidefinite matrix V ∈ S_+^{#(G)} such that f(x) = u_G(x)^T V u_G(x) for all x ∈ R^n.

From Lemma 2.1, to find an SOS representation with G of f, we consider the following problem:

  Find V ∈ S_+^{#(G)} subject to f(x) = u_G(x)^T V u_G(x) (∀x ∈ R^n).    (1)
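The identity in (1) amounts to finitely many linear equations on V, one per monomial. As a sketch (our own illustration), the matrices (E_α)_{β,γ} = 1 if α = β + γ and 0 otherwise, which later appear as the data of SDP (4) in Section 3.2, can be built and checked directly on a small candidate set G.

```python
import numpy as np

# Candidate exponents (this is the set G* found for Example 3.4):
G = [(2, 0), (0, 2), (1, 1)]

def E(alpha):
    """(E_alpha)_{beta,gamma} = 1 if alpha = beta + gamma, else 0."""
    m = np.zeros((len(G), len(G)))
    for i, beta in enumerate(G):
        for j, gamma in enumerate(G):
            if tuple(b + g for b, g in zip(beta, gamma)) == alpha:
                m[i, j] = 1.0
    return m

# A feasible V of (1) must reproduce every coefficient of f: try the Gram
# matrix of (x1^2 - x2^2)^2, padded with a zero row/column for x1*x2.
V = np.array([[1.0, -1.0, 0.0], [-1.0, 1.0, 0.0], [0.0, 0.0, 0.0]])
f = {(4, 0): 1.0, (2, 2): -2.0, (0, 4): 1.0}
sums = {tuple(b + g for b, g in zip(x, y)) for x in G for y in G}
for alpha in sums:  # one linear equation f_alpha = E_alpha • V per monomial
    assert abs(np.sum(E(alpha) * V) - f.get(alpha, 0.0)) < 1e-12
```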
We regard the constraint of (1) as an identity in x. By comparing the coefficients of all monomials on both sides of the identity, we obtain an SDP. If G = N^n_r, then the resulting SDP is identical to Parrilo's SDP relaxation. If f is a sparse SOS polynomial, we can expect that an SOS representation of f is sparse, i.e., that the number of elements in G is small. To find such a small set G, EMSSOSP was proposed in [4]. Later, we describe the details of EMSSOSP. To this end, we give the following theorem and lemma, which play an essential role in EMSSOSP.

Theorem 2.2 (Theorem 1 and Lemma in Section 3 of [12]). Let f and F be a polynomial and its support, respectively. We define F^e := F ∩ (2N^n). Assume that f is a nonnegative polynomial, i.e., f(x) ≥ 0 for all x ∈ R^n. Then F is included in conv(F^e), where conv(A) is the convex hull of A ⊆ R^n. Moreover, if f has an SOS representation with G, then (g_j)_α = 0 for all j = 1, ..., k and α ∈ G \ Ḡ, where Ḡ := (1/2)conv(F^e) ∩ N^n.

Lemma 2.3 (Lemma 3.1 in [4]). Assume that f has an SOS representation with G and that there exist nonempty sets B, H ⊆ G such that the triplet (B, H, G) satisfies

  H ⊆ G, B ⊆ G, G = H ∪ B, (B+B) ∩ F = ∅ and (B+B) ∩ (G+H) = ∅.    (2)

Then f has an SOS representation with H.

Note that (2) implies B ∩ H = ∅. Lemma 2.3 is an extension of Proposition 3.7 in [3]. For a given G ⊆ N^n_r, we give an algorithm EMSSOSP(G):

Algorithm 2.4 (EMSSOSP(G)).
Input: G ⊆ N^n_r.
Step 1: Set G^0 = G and i = 0.
Step 2: If there do not exist nonempty sets B^i, H^i ⊆ G^i such that the triplet (B, H, G) = (B^i, H^i, G^i) satisfies (2), then stop and return G^i.
Step 3: Otherwise set G^{i+1} = H^i and i = i + 1, and go back to Step 2.
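The elimination loop above can be sketched in a few lines. This is our own minimal implementation of the single-monomial variant (following the paper's remark that the practical algorithm removes one exponent vector α at a time, i.e., B = {α} and H = G \ {α} in condition (2)); it is an illustration, not the authors' code.

```python
from itertools import product

def emssosp(G, F):
    """Single-monomial sketch of Algorithm 2.4: repeatedly drop alpha from G
    when B = {alpha}, H = G - {alpha} satisfy condition (2)."""
    G = set(G)
    changed = True
    while changed:
        changed = False
        for alpha in sorted(G):
            two_alpha = tuple(2 * a for a in alpha)
            H = G - {alpha}
            # Condition (2) with B = {alpha}: 2*alpha must lie neither in the
            # support F nor in the sum set G + H.
            in_sum = any(tuple(b + h for b, h in zip(beta, eta)) == two_alpha
                         for beta in G for eta in H)
            if two_alpha not in F and not in_sum:
                G.remove(alpha)  # by Lemma 2.3, f still has an SOS repr. with H
                changed = True
    return G

# f = x1^4 - 2 x1^2 x2^2 + x2^4, the polynomial of Example 3.4:
F = {(4, 0), (2, 2), (0, 4)}
G0 = {a for a in product(range(3), repeat=2) if sum(a) <= 2}  # N^2_2
print(sorted(emssosp(G0, F)))  # [(0, 2), (1, 1), (2, 0)]
```

The returned set matches G* = {(2,0), (0,2), (1,1)} of Example 3.4.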
We remark that EMSSOSP(Ḡ) is the EMSSOSP proposed in [4]. Step 1 and Step 2 of EMSSOSP(G) are based on Theorem 2.2 and Lemma 2.3, respectively. In particular, in [4], the authors proposed a practical algorithm for finding (B, H, G) in Step 2, which uses a directed graph constructed from F^e and G^i. In fact, their algorithm finds a single integer vector α ∈ G as B at each iteration. In [4], it was demonstrated that the algorithm is very efficient for randomly generated problems.

Let G* be the finite set returned by EMSSOSP(Ḡ). Clearly, we have (g_j)_α = 0 for all j = 1, ..., k and α ∈ N^n_r \ G*. Moreover, it follows that (1/2)F^e ⊆ G*. EMSSOSP(G) removes redundant monomials for f.

In [4], the authors reported that before executing EMSSOSP(Ḡ), we need to enumerate all integer points of Ḡ, and that this part requires a high computational cost. In contrast, we can find G* by EMSSOSP(N^n_r). Because enumerating all integer points of N^n_r is trivial, we can obtain G* at less computational cost. In fact, SparsePOP [14] uses EMSSOSP(N^n_r) instead of EMSSOSP(Ḡ) for practical efficiency. The following theorem justifies this usage in SparsePOP. This is one of the contributions of this paper.

Theorem 2.5. Assume that f is an SOS polynomial. If G ⊇ Ḡ, then EMSSOSP(G) returns G*.

We postpone the proof to the Appendix.

3. An FRA and a relationship between FRA and EMSSOSP

3.1. A facial reduction algorithm

We consider the following SDP:

  sup b^T y subject to C − Σ_{i=1}^m A_i y_i ∈ S^n_+,    (3)

where b ∈ R^m and C, A_1, ..., A_m ∈ S^n. For A, B ∈ S^n, we define A • B = Σ_{i,j=1}^n A_{ij} B_{ij}. For SDP (3) which does not have any interior feasible solution, FRA reduces the closed convex cone S^n_+ to a smaller closed convex subcone. If we generate a smaller SDP by replacing S^n_+ by the smaller subcone, then (i) the resulting SDP is equivalent to (3), and (ii) it has an interior feasible solution. Because of (ii), we can expect that the numerical stability of the primal–dual interior-point methods is improved for the resulting SDP. FRA was first proposed by Borwein and Wolkowicz [1,2], and later simplified by Pataki [9]. Although this FRA works for conic programming (CP) with a nonempty feasible region, an FRA for CP without the feasibility assumption was proposed in [15].

We give the details of FRA. A closed subcone F of S^n_+ is a face of S^n_+ if for any X, Y ∈ S^n_+, X + Y ∈ F implies X, Y ∈ F. F* denotes the dual cone of a cone F, i.e., F* = {S ∈ S^n | X • S ≥ 0 for all X ∈ F}. Let A = {C − Σ_{i=1}^m A_i y_i | y ∈ R^m}. FRA reduces S^n_+ to the smallest face including A ∩ S^n_+, which is called the minimal cone for (3). We give an algorithm of FRA for SDP (3):

Algorithm 3.1 (Facial Reduction Algorithm).
Step 1: Set i = 0 and F_0 = S^n_+.
Step 2: If ker A ∩ H_C^− ∩ F_i* ⊆ span(W_1, ..., W_i), then stop and return F_i.
Step 3: Find W_{i+1} ∈ (ker A ∩ H_C^− ∩ F_i*) \ span(W_1, ..., W_i).
Step 4: If C • W_{i+1} < 0, then stop. SDP (3) is infeasible.
Step 5: Set F_{i+1} = F_i ∩ {W_{i+1}}^⊥ and i = i + 1, and go back to Step 2.
In this algorithm, H_C^− denotes the half-space {W ∈ S^n | C • W ≤ 0}, and we define ker A := {W ∈ S^n | A_i • W = 0 (i = 1, ..., m)}. Moreover, we set span(∅) = {0}. In [15], it was shown that Algorithm 3.1 can find the minimal cone for SDP (3) or detect the infeasibility of SDP (3) in a finite number of iterations. In addition, if we know in advance that SDP (3) has a feasible solution, then we can replace H_C^− by ker c^T := {W ∈ S^n | C • W = 0} in FRA.
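To make one FRA step concrete, here is a minimal numpy check on a hand-constructed 2 × 2 instance of SDP (3) without interior feasible solutions; the instance and the reducing direction W1 are our own illustration, not from the paper.

```python
import numpy as np

# Toy instance of (3):  sup y  s.t.  C - A1*y PSD, with
#   C = [[0,0],[0,1]],  A1 = [[0,0],[0,1]].
# Every feasible slack S(y) = C - A1*y has S(y)[0,0] = 0, so no feasible
# point is positive definite (no interior).
C = np.array([[0.0, 0.0], [0.0, 1.0]])
A1 = np.array([[0.0, 0.0], [0.0, 1.0]])

# One FRA step: W1 = e1 e1^T lies in ker A (A1 • W1 = 0), in the half-space
# H_C^- (C • W1 <= 0), and in F_0* = S^2_+ (self-dual), so it is a valid
# reducing direction in Step 3 of Algorithm 3.1.
W1 = np.array([[1.0, 0.0], [0.0, 0.0]])
assert abs(np.sum(A1 * W1)) < 1e-12           # W1 in ker A
assert np.sum(C * W1) <= 1e-12                # W1 in H_C^-
assert np.all(np.linalg.eigvalsh(W1) >= 0.0)  # W1 in S^2_+

# Consequently every feasible slack satisfies S(y) • W1 = 0, i.e. the feasible
# region lies on the face F1 = S^2_+ ∩ {W1}^⊥ = {diag(0, t) : t >= 0}.
for y in np.linspace(-2.0, 1.0, 7):
    S = C - y * A1
    assert abs(np.sum(S * W1)) < 1e-12
```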
3.2. A relationship between FRA and EMSSOSP

In this subsection, we reveal a relationship between FRA and EMSSOSP(G). Specifically, we show that EMSSOSP can be regarded as a partial application of FRA. When we fully apply FRA, we obtain a smaller SDP in general. We give such an example at the end of this subsection.

Let f be a polynomial of degree 2r. We assume that f has a sparse SOS representation with G satisfying Ḡ ⊆ G ⊆ N^n_r. Applying EMSSOSP(G) to f generates G^s and the sequence {(B^i, H^i, G^i)}_{i=0}^{s−1} satisfying (2) for each i = 0, 1, ..., s−1, where G^0 = G and G^s = G*. Then we can construct an SDP by G*. From (1), we obtain the following SDP:

  sup 0 subject to f_α = E_α • V (α ∈ G + G), V ∈ S_+^{#(G)},    (4)

where we define E_α by

  (E_α)_{β,γ} = 1 if α = β + γ, and 0 otherwise, for all β, γ ∈ G.

SDP (4) has a feasible solution because we have assumed that f has a sparse SOS representation with G. Therefore we can replace H_C^− by ker c^T in FRA. It is not difficult to verify that the set corresponding to ker A ∩ ker c^T is

  { W | W = Σ_{α∈G+G} y_α E_α = (y_{α+β})_{α,β∈G} for some y_α, Σ_{α∈F} f_α y_α = 0 }.    (5)

The following lemma shows that one can construct W ∈ ker A ∩ ker c^T ∩ F_i* from (B^i, H^i, G^i) satisfying (2).

Lemma 3.2. Let B ⊆ N^n be a nonempty finite set. We define y_α for all α ∈ B + B by

  y_α = ∫_S x^α dx,

where S ⊆ R^n is a compact set with nonempty interior. Then W = (y_{α+β})_{α,β∈B} is positive definite.

Proof. Clearly, W is positive semidefinite. We prove that z^T W z = 0 implies z = 0. From the definition of W, z^T W z = 0 implies that the polynomial z(x) := Σ_{α∈B} z_α x^α is zero on S. Because S has nonempty interior, z(x) is the zero polynomial, and thus z = 0. □

We give our main theorem. This theorem implies that EMSSOSP(G) can be interpreted as FRA.

Theorem 3.3. Let G ⊆ N^n_r. We assume that f has a sparse SOS representation with G and that EMSSOSP(G) generates G* = G^s and the triplets (B^i, H^i, G^i) for all i = 0, ..., s−1 which satisfy (2). Then FRA for SDP (4) can generate the following faces:

  F_{i+1} = { V ∈ S^{#(G)} | V = [ Ṽ, O, O ; O, O, O ; O, O, O ] and Ṽ ∈ S_+^{#(H^i)} }.    (6)

In particular, the face F_s is

  F_s = { V ∈ S^{#(G)} | V = [ Ṽ, O ; O, O ] and Ṽ ∈ S_+^{#(G*)} }.    (7)

Proof. The triplets (B^i, H^i, G^i) for all i = 0, ..., s−1 satisfy (2). We construct W_{i+1} = (y^{i+1}_{α+β})_{α,β∈G} from B^i as follows:

  y^{i+1}_α = ∫_S x^α dx for all α ∈ B^i + B^i, and y^{i+1}_α = 0 for all α ∈ (G+G) \ (B^i + B^i),

where S is a compact set with nonempty interior.

Claim 1. W_{i+1} ∉ span(W_1, ..., W_i).

Proof of Claim 1. Because (B^i + B^i) ∩ (G^i + H^i) = ∅, we have, with rows and columns indexed by H^i, B^i and G \ G^i,

  W_{i+1} = [ O, O, S_1^i ; O, S_2^i, S_3^i ; (S_1^i)^T, (S_3^i)^T, S_4^i ],    (8)

where S_2^i is positive definite for all i = 0, 1, ..., s−1 by Lemma 3.2. On the other hand, because B^i ⊆ G^i and the submatrices (W_j)_{G^i,G^i} = O for all j = 1, ..., i, we obtain W_{i+1} ∉ span(W_1, ..., W_i). □

We prove this theorem by induction on i. We consider the case of i = 0. From G \ G^0 = ∅ and the form of W_{i+1} in (8), we have, with rows and columns indexed by H^0 and B^0,

  W_1 = [ O, O ; O, S_2^0 ],

where S_2^0 is positive semidefinite. Therefore, we have

  S_+^{#(G)} ∩ {W_1}^⊥ = { V | V = [ Ṽ, O ; O, O ], Ṽ ∈ S_+^{#(H^0)} },

and this coincides with the face F_1. We assume that the ith face F_i is as follows:

  F_i = { V ∈ S^{#(G)} | V = [ Ṽ, O, O ; O, O, O ; O, O, O ] and Ṽ ∈ S_+^{#(H^{i−1})} }.

Because H^{i−1} = G^i, the dual F_i* is

  F_i* = { W ∈ S^{#(G)} | W = [ W̃, S_1 ; S_1^T, S_2 ], W̃ ∈ S_+^{#(G^i)}, S_1 ∈ R^{#(G^i)×#(G\G^i)}, S_2 ∈ S^{#(G\G^i)} }.

Claim 2. W_{i+1} ∈ (ker A ∩ ker c^T ∩ F_i*) \ span(W_1, ..., W_i).

Proof of Claim 2. From Claim 1, W_{i+1} ∉ span(W_1, ..., W_i). It follows from Lemma 3.2 that S_2^i in (8) is positive definite. From this fact and G^i = H^i ∪ B^i, W_{i+1} ∈ F_i*. In addition, from the definition of W_{i+1} and F ⊆ H^i + H^i, W_{i+1} belongs to the set of (5). Consequently, we obtain the desired result. □

Now, the face F_{i+1} = F_i ∩ {W_{i+1}}^⊥ is

  F_{i+1} = { V ∈ S^{#(G)} | V = [ Ṽ, O, O ; O, O, O ; O, O, O ] and Ṽ ∈ S_+^{#(H^i)} }.

Therefore, (6) is proved by induction. In particular, it follows from B^{s−1} = G^{s−1} \ H^{s−1} and H^{s−1} = G* that B^{s−1} ∪ (G \ G^{s−1}) = G \ G*. Therefore, we obtain the sth face (7) written by G*. □

We show that the SDP obtained by EMSSOSP(G) is equivalent to the SDP obtained by replacing S_+^{#(G)} by F_s. From (1), the SDP obtained by EMSSOSP(G) is

  sup 0 subject to f_α = Ẽ_α • Ṽ (α ∈ G* + G*), Ṽ ∈ S_+^{#(G*)},    (9)

where Ẽ_α := (E_α)_{G*,G*}.
On the other hand, if we generate an SDP by replacing S_+^{#(G)} by F_s in SDP (4), we obtain the following SDP:

  sup 0 subject to f_α = E_α • V (α ∈ G + G), V ∈ F_s.    (10)

Then, the feasible region of SDP (10) is equivalent to that of SDP (4). From the form of V ∈ F_s, we obtain the following SDP:

  sup 0 subject to f_α = E_α • [ Ṽ, O ; O, O ] (α ∈ G + G), Ṽ ∈ S_+^{#(G*)}.    (11)

For α ∈ (G+G) \ (G*+G*), from the definition of E_α, (E_α)_{β,γ} = 0 for all β, γ ∈ G*, and thus the linear equalities f_α = E_α • V for all α ∈ (G+G) \ (G*+G*) reduce to f_α = 0. On the other hand, because F ⊆ G* + G*, we have f_α = 0 for all α ∈ (G+G) \ (G*+G*), and thus these equalities are trivial in SDP (11). It follows from this discussion that SDP (11) is equivalent to SDP (9). Consequently, we conclude that the SDP obtained by EMSSOSP(G) is equivalent to the SDP obtained by replacing S_+^{#(G)} by F_s.

Note that FRA is not equivalent to EMSSOSP(G) and may generate a smaller SDP. We give such an example.

Example 3.4. We consider f = x_1^4 − 2x_1^2 x_2^2 + x_2^4. Applying EMSSOSP(N^2_2) to f, we obtain G* = {(2,0), (0,2), (1,1)} and the following SDP from SDP (9):

  sup 0
  subject to 1 = [ 1, 0, 0 ; 0, 0, 0 ; 0, 0, 0 ] • V,  0 = [ 0, 0, 1 ; 0, 0, 0 ; 1, 0, 0 ] • V,
             −2 = [ 0, 1, 0 ; 1, 0, 0 ; 0, 0, 1 ] • V,  0 = [ 0, 0, 0 ; 0, 0, 1 ; 0, 1, 0 ] • V,
             1 = [ 0, 0, 0 ; 0, 1, 0 ; 0, 0, 0 ] • V,  V ∈ S_+^3.    (12)

The set (5) is

  { W = (y_{α+β})_{α,β∈G*} for some y_α | W ∈ S^3, y_{(4,0)} − 2y_{(2,2)} + y_{(0,4)} = 0 }.

For W in the set (5), we define y_α as follows: y_{(4,0)} = y_{(0,4)} = y_{(2,2)} = 1 and y_{(3,1)} = y_{(1,3)} = 0. Then we have

  W = [ 1, 1, 0 ; 1, 1, 0 ; 0, 0, 1 ] ∈ ker A ∩ ker c^T ∩ S_+^3.

The face F generated by W is

  F = { V ∈ S_+^3 | V • [ 1, 1, 0 ; 1, 1, 0 ; 0, 0, 1 ] = 0 },

which is smaller than S_+^3. By replacing S_+^3 by F in SDP (12) and removing trivial equalities, we obtain the following SDP:

  sup 0
  subject to Ṽ_{(2,0),(2,0)} = 1, Ṽ_{(2,0),(0,2)} = −1, Ṽ_{(0,2),(0,2)} = 1,
             Ṽ_{(2,0),(2,0)} + 2Ṽ_{(2,0),(0,2)} + Ṽ_{(0,2),(0,2)} = 0,
             Ṽ = [ Ṽ_{(2,0),(2,0)}, Ṽ_{(2,0),(0,2)} ; Ṽ_{(2,0),(0,2)}, Ṽ_{(0,2),(0,2)} ] ∈ S_+^2.    (13)

From SDP (13), we can find an SOS representation of f, which is f = (x_1^2 − x_2^2)^2. From this example, we see that FRA can find a smaller set G̃ = {(2,0), (0,2)} than G* = {(2,0), (0,2), (1,1)}. The reason is that EMSSOSP(G) uses only the information of the support F of f, while FRA exploits the information of the coefficients of f.

Acknowledgements

We thank Dr. Johan Löfberg for his comments. The first author was supported by Grant-in-Aid for JSPS Fellows 20003236. The second author was partially supported by Grant-in-Aid for Scientific Research (C) 19560063.

Appendix. A proof of Theorem 2.5

We define a set family Γ as follows:
1. Ḡ := (1/2)conv(F^e) ∩ N^n ∈ Γ.
2. If G ∈ Γ and there exist nonempty sets B, H ⊆ G such that the triplet (B, H, G) satisfies (2), then H ∈ Γ.

In [4], the authors proved that if G, G' ∈ Γ, then G ∩ G' ∈ Γ. This implies the existence of the smallest finite set Ǧ in Γ, in the sense that Ǧ ⊆ G for any G ∈ Γ. From the definition of Γ, we can construct triplets (B^i, H^i, G^i) for all i = 0, ..., s−1 which satisfy (2) such that G^0 = Ḡ and G^s = Ǧ. Therefore, EMSSOSP(Ḡ) can return Ǧ. It is not difficult to verify by using Lemma 3.3 in [4] that the set G* returned by EMSSOSP(Ḡ) is always Ǧ.

To prove Theorem 2.5, we fix G^0 ⊇ Ḡ and replace Ḡ in the definition of Γ by G^0. We define Γ' as follows:
1. G^0 ∈ Γ'.
2. If G ∈ Γ' and there exist nonempty sets B, H ⊆ G such that the triplet (B, H, G) satisfies (2), then H ∈ Γ'.

By applying a similar argument to that in the proofs of Lemma 3.3 and Theorem 3.1 in [4], we can prove the existence of the smallest finite set Ĝ ∈ Γ'. Furthermore, using the same argument as in the case of EMSSOSP(Ḡ), we can see that EMSSOSP(G^0) returns Ĝ. This implies that for Theorem 2.5, it is sufficient to prove G* = Ĝ.

To prove G* ⊆ Ĝ, we use the following lemma:

Lemma A.1. Assume that ∅ ≠ B ⊆ P, G ⊆ P, B ∩ G ≠ ∅ and that the triplet (B, H, P) satisfies

  H ⊆ P, B ⊆ P, P = H ∪ B, (B+B) ∩ F = ∅ and (B+B) ∩ (P+H) = ∅.    (14)

Then the triplet (B ∩ G, H ∩ G, G) satisfies (2).

Proof. It is sufficient to prove G = (H∩G) ∪ (B∩G), ((B∩G)+(B∩G)) ∩ F = ∅ and ((B∩G)+(B∩G)) ∩ (G+(H∩G)) = ∅. We omit the proofs because it is easy to check these equalities. □

For the triplet (B, H, G^0) satisfying (14), if B ∩ Ḡ ≠ ∅, we can remove at least B ∩ Ḡ from Ḡ, and thus Ḡ \ (B ∩ Ḡ) ⊆ G^0 \ B. Otherwise, we have Ḡ ⊆ G^0 \ B. These imply that the resulting set obtained by the first iteration of EMSSOSP(Ḡ) is included in the resulting set obtained by the first iteration of EMSSOSP(G^0) because Ḡ ⊆ G^0. By applying Lemma A.1 to these sets repeatedly, we have G* ⊆ Ĝ.

On the other hand, to prove G* ⊇ Ĝ, it is sufficient to show that Γ ⊆ Γ'. From the definition of Γ', if Ḡ ∈ Γ', then Γ ⊆ Γ'. To prove Ḡ ∈ Γ', we use Algorithm A.3, which is based on the following lemma. Because we have assumed that f is an SOS polynomial, it follows from Theorem 2.2 that F ⊆ conv(F^e).

Lemma A.2. Assume that there exist nonempty sets B, H ⊆ G such that the triplet (B, H, G) satisfies

  H ⊆ G, B ⊆ G, G = H ∪ B, (B+B) ∩ conv(F^e) = ∅ and (B+B) ∩ (G+H) = ∅.    (15)

Then the triplet (B, H, G) also satisfies (2).
Proof. Because F ⊆ conv(F^e), the triplet (B, H, G) satisfies (2). □

We give an algorithm to find Ḡ from G^0.

Algorithm A.3 (The Restricted Version of EMSSOSP(G^0)).
Step 1: Set i = 0.
Step 2: If there do not exist B^i, H^i ⊆ G^i such that the triplet (B^i, H^i, G^i) satisfies (15), then stop and return G^i.
Step 3: Otherwise set G^{i+1} = H^i and i = i + 1, and go back to Step 2.

We assume that Algorithm A.3 requires s iterations. Then it generates triplets (B^i, H^i, G^i) for i = 0, 1, ..., s−1 which satisfy (15), and we obtain a finite set G^s. From Lemma A.2 and the definition of Γ', it follows that G^i ∈ Γ' for all i = 0, 1, ..., s. The following proposition ensures that Ḡ ∈ Γ'.

Proposition A.4. We have G^s = Ḡ.

Proof. It follows from assumption (15) in Lemma A.2 that (1/2)conv(F^e) ∩ N^n ⊆ G^i for all i = 0, ..., s. Thus Ḡ ⊆ G^i for all i = 0, 1, ..., s.

We prove G^s ⊆ Ḡ. Let α ∈ G^s be a vertex of conv(G^s). Then α ∈ N^n and 2α ∈ conv(F^e). Indeed, because (B, H, G) = ({α}, G^s \ {α}, G^s) does not satisfy (15), we obtain the condition 2α ∈ conv(F^e) or 2α ∈ G^s + (G^s \ {α}). The latter does not hold because α is a vertex of conv(G^s). This shows 2α ∈ conv(F^e). This implies 2conv(G^s) ⊆ conv(F^e), and thus 2G^s ⊆ conv(F^e). Therefore, we obtain G^s ⊆ (1/2)conv(F^e) ∩ N^n = Ḡ. □

From Proposition A.4, we have Ḡ ∈ Γ', and thus Γ ⊆ Γ' and Ĝ ⊆ G*. This completes the proof of Theorem 2.5.
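As a closing numerical check of Example 3.4 in Section 3.2 (our own sketch using numpy), the Gram matrix read off from SDP (13) satisfies its constraints and reproduces f = (x_1^2 − x_2^2)^2.

```python
import numpy as np

# Gram matrix from SDP (13) on the monomials (x1^2, x2^2):
#   V~_{(2,0),(2,0)} = 1, V~_{(2,0),(0,2)} = -1, V~_{(0,2),(0,2)} = 1.
V = np.array([[1.0, -1.0], [-1.0, 1.0]])

# V~ is positive semidefinite (eigenvalues 0 and 2) ...
assert np.all(np.linalg.eigvalsh(V) >= -1e-12)
# ... and satisfies the face constraint of (13): V11 + 2*V12 + V22 = 0.
assert abs(V[0, 0] + 2 * V[0, 1] + V[1, 1]) < 1e-12

# With u_G(x) = (x1^2, x2^2), check u^T V u == f(x) on random points,
# confirming the SOS representation f = (x1^2 - x2^2)^2.
rng = np.random.default_rng(0)
for x1, x2 in rng.standard_normal((20, 2)):
    u = np.array([x1 ** 2, x2 ** 2])
    lhs = u @ V @ u
    rhs = x1 ** 4 - 2 * x1 ** 2 * x2 ** 2 + x2 ** 4
    assert abs(lhs - rhs) < 1e-8
```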
References

[1] J.M. Borwein, H. Wolkowicz, Facial reduction for a cone-convex programming problem, J. Aust. Math. Soc. 30 (1981) 369–380.
[2] J.M. Borwein, H. Wolkowicz, Regularizing the abstract convex program, J. Math. Anal. Appl. 83 (1981) 495–530.
[3] M.D. Choi, T.Y. Lam, B. Reznick, Sums of squares of real polynomials, Proc. Sympos. Pure Math. 58 (1995) 103–126.
[4] M. Kojima, S. Kim, H. Waki, Sparsity in sums of squares of polynomials, Math. Program. 103 (2005) 45–62.
[5] J.B. Lasserre, Global optimization with polynomials and the problems of moments, SIAM J. Optim. 11 (2001) 796–817.
[6] M. Laurent, Sums of squares, moment matrices and optimization over polynomials, in: M. Putinar, S. Sullivant (Eds.), Emerging Applications of Algebraic Geometry, Springer, 2009, pp. 157–270.
[7] J. Löfberg, Pre- and post-processing sum-of-squares programs in practice, IEEE Trans. Automat. Control 54 (5) (2009) 1007–1011.
[8] P.A. Parrilo, Semidefinite programming relaxations for semialgebraic problems, Math. Program. 96 (2003) 293–320.
[9] G. Pataki, A simple derivation of a facial reduction algorithm and extended dual systems, Dept. of Statistics and OR, University of North Carolina at Chapel Hill, 1996.
[10] V. Powers, T. Wörmann, An algorithm for sums of squares of real polynomials, J. Pure Appl. Algebra 127 (1998) 99–104.
[11] M.V. Ramana, L. Tunçel, H. Wolkowicz, Strong duality for semidefinite programming, SIAM J. Optim. 7 (1997) 641–662.
[12] B. Reznick, Extremal PSD forms with few terms, Duke Math. J. 45 (1978) 363–374.
[13] H. Waki, S. Kim, M. Kojima, M. Muramatsu, Sums of squares and semidefinite programming relaxations for polynomial optimization problems with structured sparsity, SIAM J. Optim. 17 (2006) 218–242.
[14] H. Waki, S. Kim, M. Kojima, M. Muramatsu, H. Sugimoto, SparsePOP: a sparse SDP relaxation of polynomial optimization problems, ACM Trans. Math. Softw. 35 (2) (2008) 15. Available from: http://www.is.titech.ac.jp/~kojima/SparsePOP.
[15] H. Waki, M. Muramatsu, Facial reduction algorithms for conic optimization problems, Department of Computer Science, The University of Electro-Communications, 2009.
[16] H. Waki, M. Nakata, M. Muramatsu, Strange behaviors of interior-point methods for solving semidefinite programming problems in polynomial optimization, Comput. Optim. Appl. (in press).