Discrete Applied Mathematics 142 (2004) 133–149
www.elsevier.com/locate/dam

Improved approximations for max set splitting and max NAE SAT

Jiawei Zhang^a, Yinyu Ye^{a,1}, Qiaoming Han^{b,2}

^a Department of Management Science and Engineering, Terman Engineering Center 492, Stanford University, Stanford, CA 94305, USA
^b School of Management Science and Engineering, Nanjing University, Nanjing 210093, PR China

Received 5 July 2000; received in revised form 26 July 2002; accepted 30 July 2002

Abstract

We present a 0.7499-approximation algorithm for Max Set Splitting in this paper. The previously best known result for this problem is a 0.7240-approximation by Andersson and Engebretsen (Inform. Process. Lett. 65 (1998) 305), which is based on a semidefinite programming (SDP) relaxation. Our improvement results from a strengthened SDP relaxation, an improved rounding method, and a tighter analysis compared with that in Andersson and Engebretsen (1998).
© 2004 Elsevier B.V. All rights reserved.

Keywords: Max-Set-Splitting; Max NAE SAT; Approximation algorithm; Semidefinite programming relaxation

1. Introduction

In the Max Set Splitting problem, we are given a finite set U = {1, 2, ..., m}, a collection C = {S_1, S_2, ..., S_n} of subsets of U, and a nonnegative weight w_j for each subset S_j ∈ C. The goal is to find a partition of U into two subsets, U = U_1 ∪ U_2, that maximizes the total weight of the subsets in C which are split. A subset S_j ∈ C is said to be split by the partition U = U_1 ∪ U_2 if S_j ∩ U_1 ≠ ∅ and S_j ∩ U_2 ≠ ∅. This problem is also called Max Hypergraph Cut. Max Ek-Set Splitting is the special case of Max Set Splitting in which all subsets in C have cardinality exactly k.

For any fixed k ≥ 2, Max Ek-Set Splitting was shown to be NP-hard by Lovász [12]. Petrank [14] proved that there exists some constant ε > 0 such that the existence of a polynomial-time (1 − ε)-approximation algorithm would imply P = NP. Max E2-Set Splitting is exactly the extensively studied MAX CUT problem, which is known to be NP-hard to approximate within 16/17 + ε for any constant ε > 0 [15]. Goemans and Williamson [6], in a major breakthrough, used semidefinite programming (SDP) to obtain an approximation algorithm for MAX CUT with performance guarantee 0.87856. For Max E3-Set Splitting, it was shown by Kann et al. [10] to be approximable within the same performance guarantee as MAX CUT; this bound has been improved to 0.87867 [18] and 0.90871 [17] by Zwick. Very recently, Guruswami [7] showed that Max E3-Set Splitting is not approximable within a factor of 19/20 + ε. For k ≥ 4, Max Ek-Set Splitting is approximable within 1 − 2^{1−k} [2,10], and this is best possible since it is hard to approximate within a factor of 1 − 2^{1−k} + ε for any constant ε > 0 [9,7]. Arora et al. [4] designed a PTAS for dense instances of the Max Set Splitting problem. For the general Max Set Splitting problem, the previously best known result was a 0.72405-approximation algorithm due to Andersson and Engebretsen [3].

This research was supported in part by NSF grants DMI-9908077 and DMS-9703490.
1 This work was done while the author was visiting Fudan University, Shanghai, PR China.
2 This work was done while the author was visiting the Computational Optimization Laboratory, Department of Management Sciences, University of Iowa. The author is supported in part by NSFC grants 10226017 and 10201011.
E-mail address: [email protected] (J. Zhang).

0166-218X/$ - see front matter © 2004 Elsevier B.V. All rights reserved.
doi:10.1016/j.dam.2002.07.001


Ageev and Sviridenko [1] gave a 1/2-approximation algorithm for this problem with the constraint |U_1| = l for any given integer l (1 ≤ l ≤ m). Note that Zwick [18] also constructed an approximation algorithm for the general Max Set Splitting problem. Based on strong numerical evidence, he conjectured that the algorithm has performance guarantee 0.7977; his conjecture has not been proved yet.

A problem closely related to Max Set Splitting is Max NAE SAT, a variant of the well-known Max SAT problem. In the Max NAE SAT problem, we are given a set of variables U = {x_1, x_2, ..., x_m} and a collection C of disjunctive Boolean clauses of literals, where a literal is either a variable x_i ∈ U or its negation x̄_i. Each clause S_j ∈ C is associated with a nonnegative weight w_j. The objective is to maximize the total weight of the satisfied clauses, i.e., those that contain at least one true literal and at least one false literal, over all assignments of truth values to the variables of U. Max (Ek-)Set Splitting is the special case of Max (Ek-)NAE SAT in which all literals appear unnegated. All of the results for Max (Ek-)Set Splitting mentioned above hold for Max (Ek-)NAE SAT.

In this paper, building on the algorithm of Andersson and Engebretsen [3], we present an improved 0.74996-approximation algorithm for Max Set Splitting. Recall that the algorithm of Andersson and Engebretsen [3] has three steps: (1) solve a semidefinite programming (SDP) relaxation of Max Set Splitting; (2) find a partition by using the rounding method of Goemans and Williamson [6]; (3) add a probabilistic postprocessing step, in which the partition of the second step is perturbed. We improve the algorithm of Andersson and Engebretsen [3] in the following ways. First, we present a strengthened SDP relaxation for Max Set Splitting by adding some valid inequalities. Second, we round the SDP solution by an improved rounding method, which was introduced by Nesterov [13], Zwick [18] and Ye [16]. Our improvement also relies on a tighter analysis compared with that in [3].

We also consider satisfiable instances of the Max NAE SAT problem, i.e., instances in which all the clauses in C can be satisfied by an optimal assignment. In this case, the performance ratio of our algorithm can be improved to 0.8097. The only known result for satisfiable instances is a conjectured 0.8638-approximation algorithm [18]. For satisfiable Max E3-NAE SAT, there is a 0.912-approximation algorithm [18].

This paper is organized as follows. In Section 2, we present a strengthened SDP relaxation-based approximation algorithm for Max Set Splitting. In Section 3, we analyze the quality of the partition resulting from rounding an optimal SDP solution. We analyze the performance ratio of the perturbation step and the final partition in Section 4. In Section 5, we extend our algorithm to Max NAE SAT. In Section 6, we show that satisfiable Max NAE SAT can be approximated within a factor of (at least) 0.8097.

2. Semidefinite programming relaxation

Max Set Splitting can be formulated as

(MSS)
  w^* := maximize  \sum_{j=1}^n w_j z_j
  subject to  \frac{1}{|S_j|-1} \sum_{i,k \in S_j,\, i<k} \frac{1 - x_i x_k}{2} \ge z_j   for all S_j \in C,
              x_i \in \{-1, 1\}   for i = 1, ..., m,
              z_j \in \{0, 1\}   for j = 1, ..., n.

In the above formulation, z_j = 1 if S_j is split and z_j = 0 otherwise; x_i = 1 if i ∈ U_1 and x_i = −1 if i ∈ U_2. The constraint

  \frac{1}{|S_j|-1} \sum_{i,k \in S_j,\, i<k} \frac{1 - x_i x_k}{2} \ge z_j   for all S_j \in C

is valid since

  \sum_{i,k \in S_j,\, i<k} \frac{1 - x_i x_k}{2}  \begin{cases} = 0 & \text{if } x_i = x_k \text{ for all } i, k \in S_j, \\ \ge |S_j| - 1 & \text{otherwise.} \end{cases}
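This case analysis is easy to confirm exhaustively for small sets. The following brute-force check is our own illustration, not part of the paper; the function name is ours:

```python
# Verify: for x in {-1,1}^|S|, sum over pairs of (1 - x_i x_k)/2 is 0 when S
# is unsplit (all signs equal) and at least |S| - 1 when S is split.
from itertools import product

def pair_sum(x):
    n = len(x)
    return sum((1 - x[i] * x[k]) / 2 for i in range(n) for k in range(i + 1, n))

for size in range(2, 7):
    for x in product([-1, 1], repeat=size):
        split = len(set(x)) == 2          # S is split iff both signs occur
        s = pair_sum(x)
        assert (s == 0) == (not split)    # s = 0 exactly when S is unsplit
        assert (not split) or s >= size - 1
print("constraint validity verified for |S| = 2..6")
```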


The following SDP relaxation of (MSS) is due to Andersson and Engebretsen [3]:

(AE-SDP)
  maximize  \sum_{j=1}^n w_j z_j
  subject to  \frac{1}{|S_j|-1} \sum_{i,k \in S_j,\, i<k} \frac{1 - X_{ik}}{2} \ge z_j   for all S_j \in C,
              X_{ii} = 1   for i = 1, ..., m,
              X \succeq 0,  0 \le z_j \le 1   for j = 1, ..., n.

We observe that the SDP relaxation (AE-SDP) can be strengthened via the following lemma.

Lemma 1. If t_i ∈ {−1, 1} for i = 1, 2, ..., L, then

  \sum_{1 \le i < k \le L} t_i t_k \ge -\left\lfloor \frac{L}{2} \right\rfloor,

where ⌊x⌋ denotes the largest integer less than or equal to x.

Proof. If t_i ∈ {−1, 1} for i = 1, 2, ..., L, then

  \left( \sum_{i=1}^L t_i \right)^2 \begin{cases} \ge 0 & \text{if } L \text{ is even}, \\ \ge 1 & \text{otherwise.} \end{cases}

Since (\sum_{i=1}^L t_i)^2 = L + 2 \sum_{1 \le i < k \le L} t_i t_k, the lemma follows. □

Corollary 1. If t_i ∈ {−1, 1} for i = 1, 2, ..., L, then

  \sum_{1 \le i < k \le L} \frac{1 - t_i t_k}{2} \le \begin{cases} \dfrac{L^2}{4} & \text{if } L \text{ is even}, \\ \dfrac{(L+1)(L-1)}{4} & \text{if } L \text{ is odd.} \end{cases}
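Both bounds can be confirmed exhaustively for small L; the sketch below is our illustration, not from the paper:

```python
# Brute-force verification of Lemma 1 and Corollary 1 for L = 2..7.
from itertools import product

for L in range(2, 8):
    floor_half = L // 2
    cap = L * L // 4 if L % 2 == 0 else (L + 1) * (L - 1) // 4
    pairs = L * (L - 1) // 2
    for t in product([-1, 1], repeat=L):
        s = sum(t[i] * t[k] for i in range(L) for k in range(i + 1, L))
        assert s >= -floor_half        # Lemma 1
        assert (pairs - s) / 2 <= cap  # Corollary 1: sum of (1 - t_i t_k)/2
print("Lemma 1 and Corollary 1 verified for L = 2..7")
```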

Now we are ready to give a strengthened SDP relaxation of (MSS) by adding the valid inequalities provided by Lemma 1:

(MSS-SDP)
  w_{SDP} := maximize  \sum_{j=1}^n w_j z_j
  subject to  \frac{1}{|S_j|-1} \sum_{i,k \in S_j,\, i<k} \frac{1 - X_{ik}}{2} \ge z_j   for all S_j \in C,
              \sum_{i,k \in S_j,\, i<k} X_{ik} \ge -\left\lfloor \frac{|S_j|}{2} \right\rfloor   for all S_j \in C,
              X_{ii} = 1   for i = 1, ..., m,
              X \succeq 0,  0 \le z_j \le 1   for j = 1, ..., n.
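(MSS-SDP) can be fed to any SDP solver. As one concrete possibility, here is a sketch using the cvxpy modeling package; the instance encoding, the function name, and the variable names are ours, and an SDP-capable backend (e.g., SCS) is assumed:

```python
# Sketch: model and solve (MSS-SDP) with cvxpy.
import cvxpy as cp
import numpy as np

def solve_mss_sdp(m, subsets, w):
    """subsets: lists of ground elements in {0,...,m-1}; w: their weights."""
    n = len(subsets)
    X = cp.Variable((m, m), symmetric=True)
    z = cp.Variable(n)
    cons = [X >> 0, cp.diag(X) == 1, z >= 0, z <= 1]
    for j, S in enumerate(subsets):
        pairs = [(i, k) for a, i in enumerate(S) for k in S[a + 1:]]
        lhs = sum((1 - X[i, k]) / 2 for i, k in pairs)
        cons.append(lhs >= (len(S) - 1) * z[j])       # splitting constraint
        cons.append(sum(X[i, k] for i, k in pairs) >= -(len(S) // 2))  # Lemma 1
    prob = cp.Problem(cp.Maximize(np.array(w) @ z), cons)
    prob.solve()
    return X.value, z.value
```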

Let (X^*, z^*) be an optimal solution of (MSS-SDP). Then the approximation algorithm for Max Set Splitting is as follows.


Algorithm MSS

1. SDP solving: Solve (MSS-SDP) to obtain the semidefinite matrix X^*. Repeat the next two steps sufficiently many times and output the best partition.
2. Randomized rounding: Generate a vector u from a multivariate normal distribution with mean 0 and covariance matrix θX^* + (1 − θ)I, where I is the identity matrix and 0 < θ ≤ 1; that is, generate u ∈ N(0, θX^* + (1 − θ)I). Then assign x̂ = sign(u), i.e.,

  \hat{x}_i = \begin{cases} 1 & \text{if } u_i \ge 0, \\ -1 & \text{if } u_i < 0. \end{cases}

The value of θ will be specified later, in Section 4.
3. Perturbation: For each i ∈ U, we let

  \tilde{x}_i = \begin{cases} -\hat{x}_i & \text{with probability } p, \\ \hat{x}_i & \text{with probability } 1 - p. \end{cases}

Again, the value of p ≤ 1/2 will be specified later, in Section 4.

Note that this rounding method has been used by Zwick [18] and Ye [16], and was generalized recently by Han et al. [8]. When θ = 1, it is exactly the method of Goemans and Williamson [6].

3. Analysis of rounding

Let u(S_j) = 1 if the subset S_j is split by the partition given by x̂, and u(S_j) = 0 otherwise. Then the total weighted splitting value of the partition is \sum_{j=1}^n w_j u(S_j). Since u(S_j) is a random variable, we are interested in the expected value of \sum_{j=1}^n w_j u(S_j), i.e., \sum_{j=1}^n w_j E[u(S_j)].

Now consider any subset S_j ∈ C. We want to prove that, for some α_{|S_j|} > 0, E[u(S_j)] ≥ α_{|S_j|} · z_j^*. Recall that (X^*, z^*) is an optimal solution of (MSS-SDP). For simplicity, we drop the subscript j in the rest of this section. We start by establishing a lower bound for u(S). Let

  \lambda_{|S|} = \begin{cases} \dfrac{4}{|S|^2} & \text{if } |S| \text{ is even}, \\ \dfrac{4}{(|S|+1)(|S|-1)} & \text{if } |S| \text{ is odd.} \end{cases}

Lemma 2. With probability 1, the following two inequalities hold:
(i) u(S) \ge \max_{i,k \in S} (1 - \hat{x}_i \hat{x}_k)/2;
(ii) u(S) \ge \lambda_{|S|} \sum_{i,k \in S,\, i<k} (1 - \hat{x}_i \hat{x}_k)/2.

Proof. We first observe that S is split if any pair of nodes in S is split by x̂, i.e.,

  u(S) \ge \max_{i,k \in S} \frac{1 - \hat{x}_i \hat{x}_k}{2}

with probability 1. This proves (i). To prove (ii), note that u(S) ∈ {0, 1}. By Corollary 1, we have

  \lambda_{|S|} \sum_{i,k \in S,\, i<k} \frac{1 - \hat{x}_i \hat{x}_k}{2} \le 1.

Therefore, if u(S) = 1, we have

  \lambda_{|S|} \sum_{i,k \in S,\, i<k} \frac{1 - \hat{x}_i \hat{x}_k}{2} \le 1 = u(S).

If u(S) = 0, then x̂_i x̂_k = 1 for all i, k ∈ S, which implies that

  \lambda_{|S|} \sum_{i,k \in S,\, i<k} \frac{1 - \hat{x}_i \hat{x}_k}{2} = 0 \le u(S). □
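Before turning to the analysis of E[u(S)], here is a small NumPy sketch of Steps 2 and 3 of Algorithm MSS; it is our illustration, assuming an optimal X^* from Step 1 is available, and the function name and seed are ours:

```python
# Sketch of the randomized rounding and perturbation steps of Algorithm MSS.
import numpy as np

rng = np.random.default_rng(0)

def round_and_perturb(X_star, theta=0.9795, p=0.0):
    m = X_star.shape[0]
    cov = theta * X_star + (1 - theta) * np.eye(m)   # PSD by construction
    u = rng.multivariate_normal(np.zeros(m), cov)    # Step 2: u ~ N(0, cov)
    x_hat = np.where(u >= 0, 1, -1)                  # x_hat = sign(u)
    flip = rng.random(m) < p                         # Step 3: flip w.p. p
    return np.where(flip, -x_hat, x_hat)
```

In practice X_star would come from the (MSS-SDP) sketch of Section 2, and one keeps the best partition over many independent repetitions, as Step 1 prescribes.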


In order to prove a lower bound on E[u(S)], we need the following theorem, which is due to Goemans and Williamson [6, Theorem 3.1]; see also [5, Proposition 1].

Theorem 1 (Goemans and Williamson [6]). Let X be a positive semidefinite matrix with X_{ii} = 1 for all i, and choose u from the multivariate normal distribution with mean 0 and covariance matrix X. Let x̂ = sign(u). Then E[x̂_i x̂_k] = (2/π) arcsin(X_{ik}) for all i ≠ k.

Then we have

Corollary 2. Let X be a positive semidefinite matrix with X_{ii} = 1 for all i, let θ ∈ (0, 1], and choose u from the multivariate normal distribution with mean 0 and covariance matrix θX + (1 − θ)I, where I is the identity matrix. Let x̂ = sign(u). Then E[x̂_i x̂_k] = (2/π) arcsin(θX_{ik}) for i ≠ k.

Now we are ready to analyze the (expected) quality of the partition given by x̂. For a given θ ∈ (0, 1], let X_S be an |S| × |S| real symmetric matrix and let

  \alpha_{|S|} := \min_{X_S, y, z}  y
  subject to  y \ge \max_{1 \le i < k \le |S|} \frac{1 - (2/\pi)\arcsin(\theta X_{ik})}{2z},
              y \ge \lambda_{|S|}\, \frac{\sum_{1 \le i < k \le |S|} \bigl(1 - (2/\pi)\arcsin(\theta X_{ik})\bigr)}{2z},        (1)
              z = \min\left\{1,\; \frac{1}{|S|-1} \sum_{1 \le i < k \le |S|} \frac{1 - X_{ik}}{2}\right\} \ge 0,
              \sum_{1 \le i < k \le |S|} X_{ik} \ge -\left\lfloor \frac{|S|}{2} \right\rfloor,
              X_{ii} = 1   for i = 1, ..., |S|,
              X_S \succeq 0.

Then we have

Lemma 3. E[u(S)] \ge \alpha_{|S|} z^*.

Proof. Note that E[u(S)] ≥ 0. Therefore, the lemma is true if z^* = 0. Now consider the case z^* > 0. Let X^* be the optimal matrix solution of (MSS-SDP), and let X_S^* be the principal submatrix (of X^*) consisting of all components X_{ik}^* such that i, k ∈ S. Note that X_S^* is |S| × |S|, symmetric and positive semidefinite, and its diagonal components equal 1. By (i) of Lemma 2 and Corollary 2, we have

  E[u(S)] \ge E\left[\max_{i,k \in S} \frac{1 - \hat{x}_i \hat{x}_k}{2}\right]
          \ge \max_{i,k \in S} \frac{1 - E[\hat{x}_i \hat{x}_k]}{2}
          = \max_{i,k \in S} \frac{1 - (2/\pi)\arcsin(\theta X_{ik}^*)}{2}
          = \max_{i,k \in S} \frac{1 - (2/\pi)\arcsin(\theta X_{ik}^*)}{2z^*}\, z^*.        (2)


On the other hand, by (ii) of Lemma 2 and Corollary 2, we have

  E[u(S)] \ge \lambda_{|S|}\, E\left[\sum_{i,k \in S,\, i<k} \frac{1 - \hat{x}_i \hat{x}_k}{2}\right]
          = \lambda_{|S|} \sum_{i,k \in S,\, i<k} \frac{1 - E[\hat{x}_i \hat{x}_k]}{2}
          = \lambda_{|S|} \sum_{i,k \in S,\, i<k} \frac{1 - (2/\pi)\arcsin(\theta X_{ik}^*)}{2}
          = \lambda_{|S|}\, \frac{\sum_{i,k \in S,\, i<k} \bigl(1 - (2/\pi)\arcsin(\theta X_{ik}^*)\bigr)}{2z^*}\, z^*.        (3)

Since (X^*, z^*) is a maximal solution of (MSS-SDP), we must have

  z^* = \min\left\{1,\; \frac{1}{|S|-1} \sum_{i,k \in S,\, i<k} \frac{1 - X_{ik}^*}{2}\right\}.

Then one can verify that (X, y, z) = (X_S^*, E[u(S)]/z^*, z^*) is a feasible solution of (1). Thus

  y = \frac{E[u(S)]}{z^*} \ge \alpha_{|S|},

which gives us the desired result. □

However, it is not easy to compute α_{|S|} directly from (1). What we are able to do is to provide a good lower bound for α_{|S|}. Table 1 lists lower bounds for α_{|S|}, |S| = 2, ..., 15, when θ = 0.9795. The details of how to compute these lower bounds are presented in the appendix.

4. Perturbation

First, we strengthen the approximation quality for size |S_j| = 2 by the following analysis. Let φ(t) = arccos((1 − 2t)θ)/π, and let β (≥ 1/2) be the minimizer of φ(t)/t in the interval (0, 1]. It is easy to see that

  \frac{\phi(\beta)}{\beta} = \min_{-1 \le x < 1} \frac{1 - (2/\pi)\arcsin(\theta x)}{1 - x}.

Let

  W_2 = \sum_{j:|S_j|=2} w_j   and   B_2 = \left(\sum_{j:|S_j|=2} w_j z_j^*\right) \Big/ W_2.

The following lemma is an extension of Theorem 3.1.1 of [6].

Lemma 4. If B_2 \ge \beta, then

  \sum_{j:|S_j|=2} w_j E[u(S_j)] \ge \frac{\phi(B_2)}{B_2} \sum_{j:|S_j|=2} w_j z_j^*.

Proof. Let γ_j = w_j / W_2; then B_2 = \sum_{j:|S_j|=2} γ_j z_j^*. Note that z_j^* = (1 − X_{i_j k_j}^*)/2 for S_j = {i_j, k_j}. Then we get

  \sum_{j:|S_j|=2} w_j E[u(S_j)] = \sum_{j:|S_j|=2} w_j\, \frac{1 - (2/\pi)\arcsin(\theta X_{i_j k_j}^*)}{2}
    = \sum_{j:|S_j|=2} w_j\, \frac{\arccos(\theta X_{i_j k_j}^*)}{\pi}
    = W_2 \sum_{j:|S_j|=2} \gamma_j\, \frac{\arccos((1 - 2z_j^*)\theta)}{\pi}
    = W_2 \sum_{j:|S_j|=2} \gamma_j \phi(z_j^*).

Let

  \tilde{\phi}(t) = \begin{cases} \phi(t) & \text{if } t \ge \beta, \\ \dfrac{\phi(\beta)}{\beta}\, t & \text{if } 0 \le t \le \beta. \end{cases}

By an analysis similar to [6], one can show that \tilde{\phi}(t) is a convex function on [0, 1] and \tilde{\phi}(t) \le \phi(t). Note that \sum_{j:|S_j|=2} \gamma_j = 1; then

  \sum_{j:|S_j|=2} \gamma_j \phi(z_j^*) \ge \sum_{j:|S_j|=2} \gamma_j \tilde{\phi}(z_j^*) \ge \tilde{\phi}\left(\sum_{j:|S_j|=2} \gamma_j z_j^*\right) = \tilde{\phi}(B_2) = \phi(B_2).

The last equality holds since B_2 \ge \beta. It follows that

  \sum_{j:|S_j|=2} w_j E[u(S_j)] \ge W_2\, \phi(B_2) = \frac{\phi(B_2)}{B_2} \sum_{j:|S_j|=2} w_j z_j^*. □
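The quantities φ and β are easy to evaluate numerically. The grid search below is our illustration (grid resolution and names are ours), with θ = 0.9795 as used later in this section:

```python
# Locate beta, the minimizer of phi(t)/t on (0,1], for a given theta.
import numpy as np

theta = 0.9795
t = np.linspace(1e-9, 1.0, 200001)
ratio = np.arccos((1 - 2 * t) * theta) / np.pi / t   # phi(t)/t
i = ratio.argmin()
print(f"beta ~ {t[i]:.4f}, phi(beta)/beta ~ {ratio[i]:.4f}")
```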

By the analysis of the last two sections, after the first two steps of the algorithm the partition given by x̂ has the following property:

  E[u(S_j)] \ge \alpha_{|S_j|} z_j^*,

where α_k, k ≥ 2, is defined by (1). In particular, for subsets S_j of size |S_j| = 2,

  \alpha_2 \ge \min_{-1 \le x < 1} \frac{1 - (2/\pi)\arcsin(\theta x)}{1 - x},

and when B_2 \ge \beta,

  \sum_{j:|S_j|=2} w_j E[u(S_j)] \ge \frac{\phi(B_2)}{B_2} \sum_{j:|S_j|=2} w_j z_j^*.

Now, let ũ(S_j) = 1 if the subset S_j is split by the partition given by x̃, and ũ(S_j) = 0 otherwise (i.e., after the perturbation step). Recall that

  \tilde{x}_i = \begin{cases} -\hat{x}_i & \text{with probability } p, \\ \hat{x}_i & \text{with probability } 1 - p. \end{cases}

The following lemma is due to [3].

Lemma 5.

  E[\tilde{u}(S_j)] \ge E[u(S_j)]\left(1 - p(1-p)^{|S_j|-1} - p^{|S_j|-1}(1-p)\right) + \left(1 - E[u(S_j)]\right)\left(1 - p^{|S_j|} - (1-p)^{|S_j|}\right).

Then we analyze the ratio between Algorithm MSS and the SDP relaxation for subsets of different sizes.

Corollary 3. For k ≥ 2 and p ≤ 1/2,

  \sum_{j:|S_j|=k} w_j E[\tilde{u}(S_j)] \ge \sum_{j:|S_j|=k} w_j z_j^* \left\{\alpha_k\left(1 - p(1-p)^{k-1} - p^{k-1}(1-p)\right) + (1 - \alpha_k)\left(1 - p^k - (1-p)^k\right)\right\}.


For subsets of size |S_j| = 2, the ratio can be further improved.

Corollary 4.

  \sum_{j:|S_j|=2} w_j E[\tilde{u}(S_j)] \ge \min_{\beta \le B_2 \le 1} \frac{\phi(B_2)(1-2p)^2 + (2p - 2p^2)}{B_2} \sum_{j:|S_j|=2} w_j z_j^*.

Proof. If B_2 \ge \beta, then by Lemma 5 and the definition of B_2 we get

  \sum_{j:|S_j|=2} w_j E[\tilde{u}(S_j)]
    \ge \sum_{j:|S_j|=2} w_j \left\{E[u(S_j)]\bigl(1 - 2p(1-p)\bigr) + \bigl(1 - E[u(S_j)]\bigr)\bigl(1 - p^2 - (1-p)^2\bigr)\right\}
    = \sum_{j:|S_j|=2} w_j E[u(S_j)](1-2p)^2 + \sum_{j:|S_j|=2} w_j (2p - 2p^2)
    \ge \frac{\phi(B_2)}{B_2} \sum_{j:|S_j|=2} w_j z_j^* (1-2p)^2 + \frac{1}{B_2} \sum_{j:|S_j|=2} w_j z_j^* (2p - 2p^2)
    = \frac{\phi(B_2)(1-2p)^2 + (2p - 2p^2)}{B_2} \sum_{j:|S_j|=2} w_j z_j^*.

If B_2 < \beta, we have

  \sum_{j:|S_j|=2} w_j E[\tilde{u}(S_j)]
    \ge \sum_{j:|S_j|=2} w_j E[u(S_j)](1-2p)^2 + \sum_{j:|S_j|=2} w_j (2p - 2p^2)
    \ge \alpha_2 \sum_{j:|S_j|=2} w_j z_j^* (1-2p)^2 + \frac{1}{B_2} \sum_{j:|S_j|=2} w_j z_j^* (2p - 2p^2)
    = \left(\alpha_2 (1-2p)^2 + \frac{2p - 2p^2}{B_2}\right) \sum_{j:|S_j|=2} w_j z_j^*.

But

  \alpha_2 (1-2p)^2 + \frac{2p - 2p^2}{B_2}
    \ge \min_{-1 \le x < 1} \frac{1 - (2/\pi)\arcsin(\theta x)}{1 - x}\, (1-2p)^2 + \frac{2p - 2p^2}{B_2}
    = \frac{\phi(\beta)}{\beta}(1-2p)^2 + \frac{2p - 2p^2}{B_2}
    \ge \frac{\phi(\beta)}{\beta}(1-2p)^2 + \frac{2p - 2p^2}{\beta}
    \ge \min_{\beta \le B_2 \le 1} \frac{\phi(B_2)(1-2p)^2 + (2p - 2p^2)}{B_2}. □

For a given θ, we can run Algorithm MSS for p ∈ {0, 0.005, 0.01, 0.015, 0.02, ..., 0.495, 0.5} and then choose the best output. Let p_i = 0.005i for i = 0, 1, 2, ..., 100. For each p_i, we get a ratio ρ_k^{(i)} for the subsets of size k, where for k ≥ 3,

  \rho_k^{(i)} = \alpha_k\left(1 - p_i(1-p_i)^{k-1} - p_i^{k-1}(1-p_i)\right) + (1 - \alpha_k)\left(1 - p_i^k - (1-p_i)^k\right),

and

  \rho_2^{(i)} = \min_{\beta \le B_2 \le 1} \frac{\phi(B_2)(1-2p_i)^2 + (2p_i - 2p_i^2)}{B_2}.
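These ratios are straightforward to compute; a sketch of ours follows, with the α_k values taken from Table 1 (truncated here to k ≤ 6) and the one-dimensional minimizations done by grid search:

```python
# Compute the ratios rho_k^(i) of Section 4.
import numpy as np

alpha = {2: 0.8711, 3: 0.9085, 4: 0.6952, 5: 0.6489, 6: 0.5778}  # Table 1

def rho_k(k, p):
    a = alpha[k]
    return (a * (1 - p * (1 - p) ** (k - 1) - p ** (k - 1) * (1 - p))
            + (1 - a) * (1 - p ** k - (1 - p) ** k))

def rho_2(p, theta=0.9795):
    t = np.linspace(1e-9, 1.0, 200001)            # locate beta by grid search
    beta = t[(np.arccos((1 - 2 * t) * theta) / np.pi / t).argmin()]
    B2 = np.linspace(beta, 1.0, 10001)
    phi = np.arccos((1 - 2 * B2) * theta) / np.pi
    return ((phi * (1 - 2 * p) ** 2 + 2 * p - 2 * p * p) / B2).min()

p_grid = [0.005 * i for i in range(101)]
print(rho_2(p_grid[20]), rho_k(6, p_grid[20]))    # ratios at p = 0.1
```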

It is easy to see that the performance guarantee of Algorithm MSS for a given 0 < θ ≤ 1 is at least max_{i=0,1,...,100} min_{k=2,...,T} ρ_k^{(i)}. We can see that when p_i increases, ρ_k^{(i)} increases for larger k and decreases for smaller k. This motivates us to analyze the performance guarantee of Algorithm MSS by considering the contributions of the different sizes (see [3]). We let

  r_k = \frac{\sum_{j:|S_j|=k} w_j z_j^*}{\sum_{1 \le j \le n} w_j z_j^*};

then the following minimization problem gives a lower bound on the overall performance guarantee:

  minimize  R
  subject to  \sum_{k=2}^T \rho_k^{(i)} r_k \le R   for i = 0, 1, ..., 100,        (4)
              \sum_{k=2}^T r_k = 1,
              0 \le r_k \le 1   for k = 2, 3, ..., T.
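(4) is a small linear program. A sketch of ours using scipy.optimize.linprog, where `rho` is assumed to be the matrix of ratios ρ_k^{(i)} (rows i = 0..100, columns k = 2..T) computed as above:

```python
# Solve LP (4): min_r max_i sum_k rho_ik r_k.
import numpy as np
from scipy.optimize import linprog

def lp_bound(rho):
    n_i, n_k = rho.shape
    c = np.r_[np.zeros(n_k), 1.0]                  # variables: r_2..r_T, R
    A_ub = np.hstack([rho, -np.ones((n_i, 1))])    # sum_k rho_ik r_k - R <= 0
    b_ub = np.zeros(n_i)
    A_eq = np.r_[np.ones(n_k), 0.0].reshape(1, -1) # sum_k r_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * n_k + [(None, None)])
    return res.fun                                 # the bound R
```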

If we only consider subsets of size k ≤ 15, that is, T = 15, we have R ≥ 0.74996 when we set θ = 0.9795. The worst case is r_2 = 0.543, r_6 = 0.457 and r_k = 0 for all other k (≤ 15). Note that for k > 15, ρ_k^{(i)} ≥ 0.79 > 0.7499 even if we set α_k = 0. Therefore it is sufficient to consider k ≤ 15 when analyzing the performance guarantee of our algorithm. To summarize, we have

Theorem 2. The worst-case performance ratio of Algorithm MSS is at least 0.7499.

5. Max NAE SAT

As mentioned in the introduction, Max Set Splitting is the special case of Max NAE SAT in which all literals appear unnegated. A small modification of Algorithm MSS described in Section 2 also gives a 0.7499-approximation algorithm for Max NAE SAT. This can be seen as follows. If a variable x_i occurs negated in a clause, we define a new variable x_{m+i} which equals −x_i [11]. This means that we will have 2m variables, and the corresponding SDP relaxation then has an unknown variable


which is a 2m × 2m matrix. The SDP relaxation for Max NAE SAT is

(MNS-SDP)
  w_{SDP} := maximize  \sum_{j=1}^n w_j z_j
  subject to  \frac{1}{|S_j|-1} \sum_{i,k \in S_j,\, i<k} \frac{1 - X_{ik}}{2} \ge z_j   for all S_j \in C,
              \sum_{i,k \in S_j,\, i<k} X_{ik} \ge -\left\lfloor \frac{|S_j|}{2} \right\rfloor   for all S_j \in C,
              X_{ii} = 1   for i = 1, ..., 2m,
              X_{i,m+i} = -1   for i = 1, ..., m,
              X \succeq 0,  0 \le z_j \le 1   for j = 1, ..., n.

The algorithm for Max NAE SAT is the same as Algorithm MSS except for the rounding procedure. When we round the optimal solution X^* of (MNS-SDP) to a {−1, 1} solution x̂, then to make sure that x̂_i = −x̂_{m+i} we have to use the covariance matrix θX^* + (1 − θ)Ī instead of θX^* + (1 − θ)I, where

  \bar{I} = \begin{pmatrix} I & -I \\ -I & I \end{pmatrix}.

Note that Ī is also positive semidefinite, and

  E[\hat{x}_i \hat{x}_{m+i}] = \frac{2}{\pi}\arcsin\bigl(\theta X_{i,m+i}^* + (1-\theta)\cdot(-1)\bigr) = \frac{2}{\pi}\arcsin(-1) = -1.

Therefore, with probability 1, x̂_i x̂_{m+i} = −1. Using the same analysis as for Max Set Splitting, we get a 0.7499-approximation algorithm for Max NAE SAT.

6. Satisfiable instances

In this section, we consider satisfiable instances of the Max NAE SAT problem, i.e., instances in which all the clauses in C can be satisfied by an optimal assignment. Our main result is

Theorem 3. The satisfiable instances of Max NAE SAT can be approximated within a factor of (at least) 0.8097 in polynomial time.

We first consider the case in which each clause contains at least 3 literals. Recall the definitions of α_{|S|} and r_k. For satisfiable instances, we have every z_j^* = 1 in the optimal solution of (MNS-SDP), and thus both of them can be strengthened as

  \alpha_{|S|} := \min_{X, y, z}  y
  subject to  y \ge \max_{1 \le i < k \le |S|} \frac{1 - (2/\pi)\arcsin(\theta X_{ik})}{2},
              y \ge \lambda_{|S|}\, \frac{\sum_{1 \le i < k \le |S|} \bigl(1 - (2/\pi)\arcsin(\theta X_{ik})\bigr)}{2},        (5)
              \frac{(|S|-4)(|S|-1)}{2} \ge \sum_{1 \le i < k \le |S|} X_{ik} \ge -\left\lfloor \frac{|S|}{2} \right\rfloor,
              X_{ii} = 1   for i = 1, ..., |S|,
              X \succeq 0,

and

  r_k = \frac{\sum_{j:|S_j|=k} w_j z_j^*}{\sum_{1 \le j \le n} w_j z_j^*} = \frac{\sum_{j:|S_j|=k} w_j}{\sum_{1 \le j \le n} w_j}.


Table 1
The lower bounds of α_{|S|} when θ = 0.9795

  |S|          2       3       4       5       6       7       8
  α_{|S|} ≥    0.8711  0.9085  0.6952  0.6489  0.5778  0.5382  0.4940

  |S|          9       10      11      12      13      14      15
  α_{|S|} ≥    0.4646  0.4297  0.4087  0.3845  0.3687  0.3510  0.3388

Table 2
The lower bounds of α_{|S|} for satisfiable instances when θ = 0.90

  |S|          3       4       5       6       7       8       9       10
  α_{|S|} ≥    0.8954  0.7203  0.6876  0.6203  0.5962  0.5545  0.5400  0.5150

  |S|          11      12      13      14      15      16      17      18
  α_{|S|} ≥    0.5008  0.4848  0.4752  0.4617  0.4547  0.4447  0.4381  0.4299

  |S|          19      20      21      22      23      24      25
  α_{|S|} ≥    0.4250  0.4182  0.4139  0.4086  0.4048  0.4001  0.3969

In particular, the lower bound on α_3 can be improved by the following lemma.

Lemma 6. For satisfiable instances, α_3 is bounded below by the optimal value of the following minimization problem:

  minimize  \frac{1}{4}\left(3 - \frac{2}{\pi}\arcsin(\theta X_{ij}) - \frac{2}{\pi}\arcsin(\theta X_{il}) - \frac{2}{\pi}\arcsin(\theta X_{jl})\right)
  subject to  X_{ij} + X_{il} + X_{jl} = -1,        (6)
              -1 \le X_{ij}, X_{il}, X_{jl} \le 1.

Proof. Note that, for satisfiable instances, z^* = 1, so the first constraint of (MNS-SDP) gives X_{ij}^* + X_{il}^* + X_{jl}^* ≤ −1, while the valid inequality of Lemma 1 gives X_{ij}^* + X_{il}^* + X_{jl}^* ≥ −1; hence X_{ij}^* + X_{il}^* + X_{jl}^* = −1. By the second constraint of (1) and the fact that λ_3 = 1/2, we conclude the proof. □

The optimal solution of (6) can be easily computed; see Lemma A.4 in the appendix and note that minimization problem (6) is a special case of minimization problem (A.2) in which σ = −1 and ℓ = −1. We have verified that, if θ = 0.9795, then α_3 ≥ 0.9088, which is slightly better than the lower bound provided by (1) (see Table 1). However, for |S| ≥ 4 we simply use the lower bound on α_{|S|} as defined in (1), since we have found that this lower bound is always achieved at z^* = 1. Table 2 lists the lower bounds for α_{|S|}, |S| = 3, ..., 25, when θ = 0.90 for satisfiable instances.

In order to give a lower bound on the performance guarantee of our algorithm, we consider LP (4) with one more constraint, r_2 = 0 (since we are dealing with the case in which all clauses contain at least 3 literals). We choose θ = 0.90 and need to consider k ≤ 25. Then we get that the optimal R is at least 0.8097, and the worst case is r_3 = 0.5417, r_4 = 0.4381, r_14 = 0.0085, r_15 = 0.0019, r_16 = 0.0060, r_17 = 0.0012, r_18 = 0.0026 and r_k = 0 for all other k (≤ 25).

Lemma 7. For satisfiable instances of Max NAE SAT in which every clause contains at least 3 literals, Algorithm MSS has a performance guarantee of at least 0.8097.
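Problem (6) is small enough to check by brute force as well. The grid search below is our illustration (grid resolution and names are ours); with θ = 0.9795 the minimum appears at X_{ij} = X_{il} = X_{jl} = −1/3 and agrees with the α_3 ≥ 0.9088 figure quoted above:

```python
# Grid search over problem (6): minimize over X_ij + X_il + X_jl = -1.
import numpy as np

theta = 0.9795
g = np.linspace(-1, 1, 1201)
a, b = np.meshgrid(g, g)
c = -1 - a - b                       # enforce the equality constraint
mask = (c >= -1) & (c <= 1)          # keep only feasible box points
obj = (3 - (2 / np.pi) * (np.arcsin(theta * a) + np.arcsin(theta * b)
                          + np.arcsin(theta * c))) / 4
print(obj[mask].min())               # ~ 0.9088
```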


The following lemma, combined with Lemma 7, enables us to prove Theorem 3.

Lemma 8. For satisfiable instances of Max NAE SAT, if there exists an R-approximation algorithm for the case in which every clause contains at least 3 literals, then there exists an R-approximation algorithm for the case in which every clause contains at least 2 literals.

Proof. Suppose that there exists at least one clause that contains 2 literals. If the clause is y_i ∨ y_j, then we must have y_i = ȳ_j; if the clause is y_i ∨ ȳ_j, then we must have y_i = y_j. Then we can substitute y_i by ȳ_j or y_j, respectively, so that all the clauses are still satisfiable. By this substitution, we remove one variable and one clause that contains 2 literals. Note that after the substitution, y_j may appear twice in a single clause, or both y_j and ȳ_j may appear in a single clause. In the former case, we can simply delete one occurrence of y_j; in the latter case, we can simply remove the clause. By repeating this substitution, we get a reduced satisfiable instance in which each clause contains at least 3 literals and each variable appears at most once in a single clause. Since there are at most m variables and n clauses, the substitution is performed at most mn times. Now we apply the R-approximation algorithm to the reduced satisfiable instance, which gives us a truth assignment for the reduced set of variables. We can easily extend this assignment to the original variables so that all the clauses that contain two literals are still satisfied. Then we get an R-approximation for the original satisfiable instance. □

Since Max Set Splitting is a special case of Max NAE SAT, we also obtain a 0.8097-approximation algorithm for satisfiable Max Set Splitting.
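The substitution in the proof of Lemma 8 is easy to mechanize. The sketch below is ours (representation and names are ours): literals are nonzero integers, +v for y_v and −v for ȳ_v, and the instance is assumed NAE-satisfiable, so a 2-clause [a, b] forces the identification a = −b:

```python
# Sketch of the 2-clause elimination used in the proof of Lemma 8.
def reduce_two_clauses(clauses):
    clauses = [list(c) for c in clauses]
    while True:
        two = next((c for c in clauses if len(c) == 2), None)
        if two is None:
            return clauses
        a, b = two                    # NAE(a, b) => literal a equals literal -b
        rep = {a: -b, -a: b}          # substitute a by -b (and -a by b) everywhere
        new = []
        for c in clauses:
            if c is two:
                continue              # the 2-clause itself is removed
            cc = [rep.get(l, l) for l in c]
            seen = set()
            cc = [l for l in cc if not (l in seen or seen.add(l))]  # drop repeats
            if not any(-l in cc for l in cc):   # a clause with l and -l is always NAE-satisfied
                new.append(cc)
        clauses = new
```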

Acknowledgements

We would like to thank the referees for their helpful comments, which improved the presentation of the paper. In particular, Lemma 8 was suggested by one of the referees, which led to the 0.8097-approximation for satisfiable instances of Max NAE SAT.

Appendix A. A lower bound for α_{|S|}

In this appendix, we show how to numerically compute the lower bound for α_{|S|} that is defined by the minimum value of the optimization problem (1). We first parameterize the problem by introducing a parameter of X in (1):

  \sigma := \sum_{i,k \in S,\, i<k} X_{ik}.

Noting that the total number of variables is

  N_{|S|} := \left|\{X_{ik} : 1 \le i < k \le |S|\}\right| = \frac{|S|(|S|-1)}{2},

that every variable satisfies −1 ≤ X_{ik} ≤ 1 for i < k, and using the third and fourth constraints of (1), we have

  -\left\lfloor \frac{|S|}{2} \right\rfloor \le \sigma < N_{|S|}.

Secondly, based on the range of σ, we can construct a closed form for z in (1):

Lemma A.1.

  z = 1   if and only if   \sigma \in \left[-\left\lfloor \frac{|S|}{2} \right\rfloor,\; \frac{(|S|-4)(|S|-1)}{2}\right];
  z = \frac{N_{|S|} - \sigma}{2(|S|-1)} > 0   if and only if   \sigma \in \left[\frac{(|S|-4)(|S|-1)}{2},\; N_{|S|}\right).        (A.1)

Proof. From the third constraint of (1),

  z = \min\left\{1,\; \frac{1}{|S|-1} \sum_{i<k} \frac{1 - X_{ik}}{2}\right\} \ge 0.

If σ ∈ [−⌊|S|/2⌋, (|S|−4)(|S|−1)/2], then \frac{1}{|S|-1}\sum_{i<k}(1 - X_{ik})/2 \ge 1, and thus z = 1. If σ ∈ [(|S|−4)(|S|−1)/2, N_{|S|}), then \frac{1}{|S|-1}\sum_{i<k}(1 - X_{ik})/2 \le 1, and thus z = (N_{|S|} − σ)/(2(|S|−1)) > 0. One can verify that the reverse directions also hold. □

Now we can construct a lower bound for α_{|S|} by relaxing the minimization problem (1) to a relatively "easier" parameterized problem, for any given constants θ ∈ (0, 1] and ℓ ∈ [−1, 1) and any given parameter σ ∈ [−⌊|S|/2⌋, N_{|S|}). Let

  f(\sigma; \theta, \ell) := \min \sum_{i<k} \left(1 - \frac{2}{\pi}\arcsin(\theta X_{ik})\right)
  subject to  \sum_{i<k} X_{ik} = \sigma,        (A.2)
              \ell \le X_{ik} \le 1   for all 1 \le i < k \le |S|.

Note that the constraints of (A.2) are linear. Furthermore, via Lemma A.1, let

  \eta_{\theta,\ell}(\sigma) := \begin{cases} \dfrac{\lambda_{|S|}}{2}\, f(\sigma; \theta, \ell) & \text{if } \sigma \in \left[-\lfloor |S|/2 \rfloor,\; \tfrac{(|S|-4)(|S|-1)}{2}\right], \\[4pt] \lambda_{|S|}(|S|-1)\, \dfrac{f(\sigma; \theta, \ell)}{N_{|S|} - \sigma} & \text{if } \sigma \in \left[\tfrac{(|S|-4)(|S|-1)}{2},\; N_{|S|}\right), \end{cases}        (A.3)

and, for given constants θ ∈ (0, 1] and ℓ ∈ [−1, 1), consider the one-dimensional minimization problem

  \eta^*_{\theta,\ell} := \min_{-\lfloor |S|/2 \rfloor \le \sigma < N_{|S|}} \eta_{\theta,\ell}(\sigma).        (A.4)

Then we have

Lemma A.2. For any given θ ∈ (0, 1] and ℓ ∈ [−1, 1),

  \alpha_{|S|} \ge \min\left\{\frac{1 - (2/\pi)\arcsin(\theta\ell)}{2},\; \eta^*_{\theta,\ell}\right\}.

Proof. Consider any minimal solution (y^*, X^*, z^*) of (1). If there exist i < k such that X^*_{ik} < ℓ, then

  \alpha_{|S|} \ge y^* \ge \frac{1 - (2/\pi)\arcsin(\theta\ell)}{2}

by the first inequality constraint of (1). Otherwise X^*_{ik} ≥ ℓ for all i < k, and then X^* is a feasible solution of (A.2) for some σ(X^*) ∈ [−⌊|S|/2⌋, N_{|S|}). From the second inequality constraint of (1), and then from Lemma A.1, (A.3) and (A.4), we must have

  \alpha_{|S|} \ge y^* \ge \lambda_{|S|}\, \frac{\sum_{i<k}\bigl(1 - (2/\pi)\arcsin(\theta X^*_{ik})\bigr)}{2z^*} \ge \frac{\lambda_{|S|}}{2z^*}\, f(\sigma(X^*); \theta, \ell) = \eta_{\theta,\ell}(\sigma(X^*)) \ge \eta^*_{\theta,\ell}. □

In order to provide a good lower bound on α_{|S|}, for a given θ ∈ (0, 1] we can choose a suitable ℓ ∈ [−1, 1) such that (1 − (2/π)arcsin(θℓ))/2 and η*_{θ,ℓ} are numerically close. For example, for |S| = 6 and θ = 0.9795, we choose ℓ = −0.2471. It is easy to verify that (1 − (2/π)arcsin(θℓ))/2 ≥ 0.5778. If we can show that η*_{θ,ℓ} ≥ 0.5778, then we can bound α_{|S|} ≥ 0.5778 for |S| = 6 and θ = 0.9795. (We choose θ to maximize the overall approximation bound R.) Since (1 − (2/π)arcsin(θℓ))/2 is easy to compute for any given θ and ℓ, we focus on how to numerically calculate η*_{θ,ℓ}. Note that η*_{θ,ℓ} is computed by a one-dimensional search over σ on η_{θ,ℓ}(σ) of (A.3) and (A.2). Thus, the key is to construct an analytical form of f(σ; θ, ℓ) of (A.2) for any given parameter σ and constants θ and ℓ. We present a minimal-solution structure for the linearly constrained minimization problem (A.2).


Lemma A.3. There is a minimal solution X^* of (A.2) such that every variable X^*_{ik} takes one of the three values {ℓ, μ, 1}, for some μ ∈ (ℓ, 1).

Proof. First, in any minimal solution X^* of (A.2), each variable X^*_{ik} can take the value ℓ (the lower bound), the value 1 (the upper bound), or some value in (ℓ, 1) (the interior of [ℓ, 1]), and every X^*_{ik} in (ℓ, 1) must satisfy the first-order necessary, or Karush–Kuhn–Tucker (KKT), condition. That is, there exists a Lagrange multiplier u for the only equality (linear) constraint in (A.2) such that every interior X^*_{ik} meets

  -\frac{2\theta}{\pi} \cdot \frac{1}{\sqrt{1 - \theta^2 (X^*_{ik})^2}} = u.        (A.5)

Therefore, every interior X^*_{ik} (if one exists) satisfies

  (X^*_{ik})^2 = \frac{1 - (2\theta/(\pi u))^2}{\theta^2}.

Let \mu' = \sqrt{1 - (2\theta/(\pi u))^2}\,/\theta. Then μ' ≥ 0, and by the constraint ℓ ≤ X_{ik} ≤ 1 every interior value equals μ' or −μ'. Now we complete the proof by considering the following cases.

Case 1: All the interior variables equal μ'. Then we must have ℓ < μ' < 1, and the lemma is true with μ = μ'.

Case 2: All the interior variables equal −μ'. Then we must have ℓ < −μ' < 1, and the lemma is true with μ = −μ'.

Case 3: There exist at least two interior variables, say X^*_{12} and X^*_{13}, such that 1 > X^*_{12} = μ' and X^*_{13} = −μ' > ℓ. Suppose that μ' ≥ 0. Since

  \left(1 - \frac{2}{\pi}\arcsin(\theta\mu')\right) + \left(1 - \frac{2}{\pi}\arcsin(-\theta\mu')\right) = 2 - 2\cdot\frac{2}{\pi}\arcsin(0),

we can construct a new minimal solution by switching both X^*_{12} and X^*_{13} to 0 while keeping all other variables unchanged (note that all the constraints of (A.2) are still satisfied). In the new minimal solution there should be no further interior variable equal to μ' or −μ', because otherwise we would have a minimal solution whose interior variables take both the value 0 and the value μ' (> 0) or −μ' (< 0), contradicting the consequence of (A.5) that all interior variables have equal squares (X^*_{ik})^2. Therefore μ' = 0, and Case 3 reduces to either Case 1 or Case 2. □

Define the mixed integer and continuous function

  f_{\theta,\ell}(\sigma; \bar{N}, \underline{N}) = N_{|S|} - \bar{N}\cdot\frac{2}{\pi}\arcsin(\theta) - \underline{N}\cdot\frac{2}{\pi}\arcsin(\theta\ell) - (N_{|S|} - \bar{N} - \underline{N})\cdot\frac{2}{\pi}\arcsin\!\left(\theta\, \frac{\sigma - \bar{N} - \underline{N}\ell}{N_{|S|} - \bar{N} - \underline{N}}\right).        (A.6)

Lemma A.4. For any σ ∈ [−⌊|S|/2⌋, N_{|S|}), there exist two nonnegative integers \bar{N}(σ) and \underline{N}(σ) such that
(i) \bar{N}(σ) + \underline{N}(σ) ≤ N_{|S|} − 1;
(ii) N_{|S|}ℓ + \bar{N}(σ)(1 − ℓ) ≤ σ ≤ N_{|S|} − \underline{N}(σ)(1 − ℓ);
(iii) the optimal value of (A.2) is f(σ; θ, ℓ) = f_{θ,ℓ}(σ; \bar{N}(σ), \underline{N}(σ)).

Proof. Suppose that, in the optimal solution of (A.2), the number of X^*_{ik}'s equal to 1 is \bar{N} and the number of X^*_{ik}'s equal to ℓ is \underline{N}. It is easy to see that \bar{N} and \underline{N} are nonnegative integers and \bar{N} + \underline{N} ≤ N_{|S|}.

If \bar{N} + \underline{N} = N_{|S|}, we must have \bar{N} < N_{|S|} and \underline{N} − 1 ≥ 0, since σ < N_{|S|}. In this case, let \bar{N}(σ) = \bar{N} ≥ 0 and \underline{N}(σ) = \underline{N} − 1 ≥ 0, so that \bar{N}(σ) + \underline{N}(σ) ≤ N_{|S|} − 1. Then N_{|S|} − \bar{N}(σ) − \underline{N}(σ) = 1, and by the linear constraint of (A.2),

  \frac{\sigma - \bar{N}(\sigma) - \underline{N}(\sigma)\ell}{N_{|S|} - \bar{N}(\sigma) - \underline{N}(\sigma)} = \ell.

Therefore, the optimal value of (A.2) is

  f(\sigma; \theta, \ell) = N_{|S|} - \bar{N}\cdot\frac{2}{\pi}\arcsin(\theta) - \underline{N}\cdot\frac{2}{\pi}\arcsin(\theta\ell) = f_{\theta,\ell}(\sigma; \bar{N}(\sigma), \underline{N}(\sigma)).

If \bar{N} + \underline{N} ≤ N_{|S|} − 1, then by Lemma A.3 all the other X^*_{ik}'s must equal

  \mu = \frac{\sigma - \bar{N} - \underline{N}\ell}{N_{|S|} - \bar{N} - \underline{N}} \in (\ell, 1).

In this case, let \bar{N}(σ) = \bar{N} ≥ 0 and \underline{N}(σ) = \underline{N} ≥ 0, so that \bar{N}(σ) + \underline{N}(σ) ≤ N_{|S|} − 1. Therefore, the minimal objective value of (A.2) again has the form

  f(\sigma; \theta, \ell) = f_{\theta,\ell}(\sigma; \bar{N}(\sigma), \underline{N}(\sigma)).

We have proved (i) and (iii). To prove (ii), note that in either case we have

  \ell \le \frac{\sigma - \bar{N}(\sigma) - \underline{N}(\sigma)\ell}{N_{|S|} - \bar{N}(\sigma) - \underline{N}(\sigma)} \le 1,

and thus N_{|S|}ℓ + \bar{N}(σ)(1 − ℓ) ≤ σ ≤ N_{|S|} − \underline{N}(σ)(1 − ℓ). □

Let

  \eta_{\theta,\ell}(\sigma; \bar{N}, \underline{N}) := \begin{cases} \dfrac{\lambda_{|S|}}{2}\, f_{\theta,\ell}(\sigma; \bar{N}, \underline{N}) & \text{if } \sigma \in \left[-\lfloor |S|/2 \rfloor,\; \tfrac{(|S|-4)(|S|-1)}{2}\right], \\[4pt] \lambda_{|S|}(|S|-1)\, \dfrac{f_{\theta,\ell}(\sigma; \bar{N}, \underline{N})}{N_{|S|} - \sigma} & \text{if } \sigma \in \left[\tfrac{(|S|-4)(|S|-1)}{2},\; N_{|S|}\right). \end{cases}        (A.7)

Furthermore, for any given nonnegative integers \bar{N} and \underline{N}, define the one-dimensional minimization problem

  \eta_{\theta,\ell}(\bar{N}, \underline{N}) := \min_{\sigma \in [-\lfloor |S|/2 \rfloor,\, N_{|S|}) \cap [N_{|S|}\ell + \bar{N}(1-\ell),\, N_{|S|} - \underline{N}(1-\ell)]} \eta_{\theta,\ell}(\sigma; \bar{N}, \underline{N}).        (A.8)

Lemma A.4 leads to the following:

Corollary A.1. There exist two nonnegative integers \bar{N}^* and \underline{N}^* such that \bar{N}^* + \underline{N}^* ≤ N_{|S|} − 1 and \eta^*_{\theta,\ell} \ge \eta_{\theta,\ell}(\bar{N}^*, \underline{N}^*).

Proof. Let σ^* be the minimizer of the minimization problem (A.4), i.e., \eta^*_{\theta,\ell} = \eta_{\theta,\ell}(\sigma^*). By Lemma A.4, there exist two nonnegative integers \bar{N}^* := \bar{N}(\sigma^*) and \underline{N}^* := \underline{N}(\sigma^*) such that

  f(\sigma^*; \theta, \ell) = f_{\theta,\ell}(\sigma^*; \bar{N}^*, \underline{N}^*).

Then, by the definitions of \eta_{\theta,\ell}(\sigma; \bar{N}, \underline{N}) and \eta_{\theta,\ell}(\bar{N}, \underline{N}), we get \eta^*_{\theta,\ell} \ge \eta_{\theta,\ell}(\bar{N}^*, \underline{N}^*). □

Remark. For our purpose, we only need that \eta_{\theta,\ell}(\bar{N}^*, \underline{N}^*) is a lower bound on \eta^*_{\theta,\ell}. However, one can easily verify that \eta^*_{\theta,\ell} = \eta_{\theta,\ell}(\bar{N}^*, \underline{N}^*).

Now, suppose we knew \bar{N}^* and \underline{N}^*. Then the minimization problem (A.4) would reduce to the easier problem (A.8). Of course, we do not know \bar{N}^* and \underline{N}^* in advance. However, we can enumerate all pairs of nonnegative integers (\bar{N}, \underline{N}) with \bar{N} + \underline{N} ≤ N_{|S|} − 1, of which there are no more than O(|S|^4) many. For each pair, we numerically solve (A.8) and obtain \eta_{\theta,\ell}(\bar{N}, \underline{N}). Then we select \eta^*_{\theta,\ell} among all candidate pairs as

  \eta^*_{\theta,\ell} = \min_{(\bar{N}, \underline{N})\,:\, \bar{N} \ge 0,\; \underline{N} \ge 0,\; \bar{N} + \underline{N} \le N_{|S|} - 1} \eta_{\theta,\ell}(\bar{N}, \underline{N}).

For any fixed pair (\bar{N}, \underline{N}), solving problem (A.8) is straightforward. If σ ∈ [−⌊|S|/2⌋, (|S|−4)(|S|−1)/2], then from (A.7),

  \eta_{\theta,\ell}(\sigma; \bar{N}, \underline{N}) = \frac{\lambda_{|S|}}{2}\, f_{\theta,\ell}(\sigma; \bar{N}, \underline{N}).

Since f_{θ,ℓ}(σ; \bar{N}, \underline{N}), and thus η_{θ,ℓ}(σ; \bar{N}, \underline{N}), is monotonically decreasing in σ by (A.6), the minimum of η_{θ,ℓ}(σ; \bar{N}, \underline{N}) over this range is attained when σ reaches its upper limit, i.e.,

  \sigma^* = \min\left\{\frac{(|S|-4)(|S|-1)}{2},\; N_{|S|} - \underline{N}(1-\ell)\right\}.

Table 3
The values of ℓ used in the computation of the lower bounds in Table 1

  |S|  2    3        4        5        6        7        8
  ℓ    −1   −0.9791  −0.5876  −0.4604  −0.2471  −0.1223  0.0192

  |S|  9       10      11      12      13      14      15
  ℓ    0.1133  0.2236  0.2888  0.3623  0.4092  0.4606  0.4952

In the case σ ∈ [(|S|−4)(|S|−1)/2, N_{|S|}), from (A.7),

  \eta_{\theta,\ell}(\sigma; \bar{N}, \underline{N}) = \lambda_{|S|}(|S|-1)\, \frac{f_{\theta,\ell}(\sigma; \bar{N}, \underline{N})}{N_{|S|} - \sigma}
    = \lambda_{|S|}(|S|-1) \cdot \frac{N_{|S|} - \bar{N}\cdot\frac{2}{\pi}\arcsin(\theta) - \underline{N}\cdot\frac{2}{\pi}\arcsin(\theta\ell) - (N_{|S|} - \bar{N} - \underline{N})\cdot\frac{2}{\pi}\arcsin\!\left(\theta\,\frac{\sigma - \bar{N} - \underline{N}\ell}{N_{|S|} - \bar{N} - \underline{N}}\right)}{N_{|S|} - \sigma},

where again the variable σ satisfies

  N_{|S|}\ell + \bar{N}(1-\ell) \le \sigma \le N_{|S|} - \underline{N}(1-\ell).

Define a new variable

  x = \frac{\sigma - \bar{N} - \underline{N}\ell}{N_{|S|} - \bar{N} - \underline{N}} \in [\ell, 1].

Then

  \eta_{\theta,\ell}(\sigma; \bar{N}, \underline{N}) = h(x) := \lambda_{|S|}(|S|-1) \cdot \frac{A - \frac{2}{\pi}\arcsin(\theta x)}{B - x},

where

  A = \frac{N_{|S|} - \bar{N}\cdot\frac{2}{\pi}\arcsin(\theta) - \underline{N}\cdot\frac{2}{\pi}\arcsin(\theta\ell)}{N_{|S|} - \bar{N} - \underline{N}}   and   B = \frac{N_{|S|} - \bar{N} - \underline{N}\ell}{N_{|S|} - \bar{N} - \underline{N}}.

Note that A, B ≥ 1. Therefore, minimizing η_{θ,ℓ}(σ; \bar{N}, \underline{N}) over σ ∈ [N_{|S|}ℓ + \bar{N}(1−ℓ), N_{|S|} − \underline{N}(1−ℓ)] ∩ [(|S|−4)(|S|−1)/2, N_{|S|}) is equivalent to minimizing h(x) over

  x \in [\ell, 1] \cap \left[\frac{(|S|-4)(|S|-1)/2 - \bar{N} - \underline{N}\ell}{N_{|S|} - \bar{N} - \underline{N}},\; \frac{N_{|S|} - \bar{N} - \underline{N}\ell}{N_{|S|} - \bar{N} - \underline{N}}\right].

Minimizing the continuous function h(x) over this interval can be done by computing its (at most two) stationary points and the two boundary points, and then choosing the best among these points. In fact, for A = B = 1 and |S| = 2, h(x) was numerically calculated by Goemans and Williamson [6] in proving their 0.878 MAX CUT approximation result.

To summarize, for any given |S|, we can construct a lower bound for α_{|S|} by the following procedure:

Step 1: Choose suitable θ ∈ (0, 1] and ℓ ∈ [−1, 1).
Step 2: For every nonnegative integer pair (\bar{N}, \underline{N}) such that \bar{N} + \underline{N} ≤ N_{|S|} − 1, numerically solve the one-dimensional continuous minimization problem (A.8) to obtain its minimal value. Among all the minimal values η_{θ,ℓ}(\bar{N}, \underline{N}), choose the smallest one as η*_{θ,ℓ}.
Step 3: Take min{(1 − (2/π)arcsin(θℓ))/2, η*_{θ,ℓ}} as the lower bound for α_{|S|}.

The lower bounds presented in Tables 1 and 2 were generated by this procedure. We have explained how to choose the values of θ and ℓ; Table 3 lists the values of ℓ used in the computation of the lower bounds of α_{|S|} in Table 1, where θ = 0.9795.

References

[1] A.A. Ageev, M.I. Sviridenko, An approximation algorithm for hypergraph Max k-Cut with given sizes of parts, in: Proceedings of ESA 2000, Lecture Notes in Computer Science, Vol. 1879, Springer, Berlin, 2000, pp. 32–41.


[2] P. Alimonti, Non-oblivious local search for graph and hypergraph coloring problems, in: Proceedings of the 21st International Workshop on Graph-Theoretic Concepts in Computer Science, Lecture Notes in Computer Science, Vol. 1017, Springer, Berlin, 1995, pp. 167–180.
[3] G. Andersson, L. Engebretsen, Better approximation algorithms for set splitting and not-all-equal SAT, Inform. Process. Lett. 65 (1998) 305–311.
[4] S. Arora, D. Karger, M. Karpinski, Polynomial time approximation schemes for dense instances of NP-hard problems, J. Comput. System Sci. 58 (1999) 193–210.
[5] D. Bertsimas, Y. Ye, Semidefinite relaxations, multivariate normal distributions, and order statistics, in: D.-Z. Du, P.M. Pardalos (Eds.), Handbook of Combinatorial Optimization, Vol. 3, Kluwer Academic Publishers, Dordrecht, 1998, pp. 1–19.
[6] M.X. Goemans, D.P. Williamson, Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming, J. ACM 42 (1995) 1115–1145.
[7] V. Guruswami, Inapproximability results for set splitting and satisfiability problems with no mixed clauses, Algorithmica, to appear.
[8] Q. Han, Y. Ye, J. Zhang, An improved rounding method and semidefinite programming relaxation for graph partition, Math. Programming 92 (2002) 509–535.
[9] J. Håstad, Some optimal inapproximability results, J. ACM 48 (2001) 798–859.
[10] V. Kann, J. Lagergren, A. Panconesi, Approximability of maximum splitting of k-sets and some other APX-complete problems, Inform. Process. Lett. 58 (1996) 105–110.
[11] H. Karloff, U. Zwick, A 7/8-approximation algorithm for MAX 3SAT?, in: Proceedings of the 38th FOCS, 1997, pp. 406–415.
[12] L. Lovász, Coverings and colorings of hypergraphs, in: Proceedings of the 4th Southeastern Conference on Combinatorics, Graph Theory, and Computing, Utilitas Mathematica Publishing, Winnipeg, 1973, pp. 3–12.
[13] Yu.E. Nesterov, Semidefinite relaxation and nonconvex quadratic optimization, Optimization Methods and Software 9 (1998) 141–160.
[14] E. Petrank, The hardness of approximation: gap location, Comput. Complexity 4 (1994) 133–157.
[15] L. Trevisan, G.B. Sorkin, M. Sudan, D.P. Williamson, Gadgets, approximation and linear programming, SIAM J. Comput. 29 (6) (2000) 2074–2097.
[16] Y. Ye, A .699-approximation algorithm for Max-Bisection, Math. Programming 90 (2001) 101–111.
[17] U. Zwick, Approximation algorithms for constraint satisfaction problems involving at most three variables per constraint, in: Proceedings of the Ninth SODA, 1998, pp. 201–210.
[18] U. Zwick, Outward rotations: a tool for rounding solutions of semidefinite programming relaxations, with applications to MAX CUT and other problems, in: Proceedings of the 31st STOC, 1999, pp. 679–687.