A GAP FOR PPT ENTANGLEMENT

D. CARIELLO
Abstract. Let W be a finite dimensional vector space over a field with characteristic not equal to 2. Denote by V_S and V_A the subspaces of symmetric and antisymmetric tensors of a subspace V of W ⊗ W, respectively. In this paper we show that if V is generated by tensors with tensor rank 1, V = V_S ⊕ V_A and W is the smallest vector space such that V ⊂ W ⊗ W, then

dim(V_S) ≥ max{ 2 dim(V_A)/dim(W), dim(W)/2 }.

This result has a straightforward application to the separability problem in Quantum Information Theory: if ρ ∈ M_k ⊗ M_k ≃ M_{k^2} is separable then

rank((Id + F)ρ(Id + F)) ≥ max{ (2/r) rank((Id − F)ρ(Id − F)), r/2 },

where M_n is the set of complex matrices of order n, F ∈ M_k ⊗ M_k is the flip operator, Id ∈ M_k ⊗ M_k is the identity and r is the marginal rank of ρ + FρF. We prove the sharpness of this inequality. This inequality is a necessary condition for separability.

Moreover, we show that if ρ ∈ M_k ⊗ M_k is positive under partial transposition (PPT) and rank((Id + F)ρ(Id + F)) = 1 then ρ is separable. This result follows from Perron-Frobenius theory. We also present a large family of PPT matrices satisfying rank((Id + F)ρ(Id + F)) ≥ r ≥ (2/(r−1)) rank((Id − F)ρ(Id − F)). There is a possibility that a PPT matrix ρ ∈ M_k ⊗ M_k satisfying 1 < rank((Id + F)ρ(Id + F)) < (2/r) rank((Id − F)ρ(Id − F)) exists. In this case ρ is entangled. This is a gap where we can look for PPT entanglement.

2010 Mathematics Subject Classification. 15A69, 15B48, 15B57.
Keywords and phrases: Symmetric Tensors, Tensor Rank 1, PPT matrix, Entanglement, Separability.
1. Introduction

Let W be a finite dimensional vector space over a field with characteristic not equal to 2. Let V be a subspace of W ⊗ W and denote by V_S and V_A the subspaces of symmetric and antisymmetric tensors of V, respectively. If V = V_S ⊕ V_A and V is generated by tensors with tensor rank 1 then dim(V_S) ≠ 0, since the tensor rank of every antisymmetric tensor is not 1. Thus, we can ask the following question: How small can dim(V_S) be compared with dim(V_A), if V is generated by tensors with tensor rank 1 and V = V_S ⊕ V_A?

This question is quite interesting for Quantum Information Theory. Let us identify M_k ⊗ M_k with M_{k^2} and C^k ⊗ C^k with C^{k^2} via the Kronecker product, where M_n is the set of complex matrices of order n. One of the main problems in Quantum Information Theory is discovering whether a positive semidefinite Hermitian matrix ρ ∈ M_k ⊗ M_k ≃ M_{k^2} is separable or not (see definition 3.1). Several necessary conditions for separability are known ([1–5]). One of these conditions is the so-called range criterion ([2]), i.e., the range (or the image) of a separable matrix ρ ∈ M_k ⊗ M_k ≃ M_{k^2} must be generated by tensors with tensor rank 1.

Observe that if ρ ∈ M_k ⊗ M_k is separable then the range of 2(ρ + FρF) = (Id + F)ρ(Id + F) + (Id − F)ρ(Id − F)
has the same properties as V in the previous question, where F ∈ M_k ⊗ M_k is the flip operator (see definition 2.1). Thus, a solution for the previous question provides a necessary condition for the separability of ρ.

Here, we show that if V ⊂ W ⊗ W, V = V_S ⊕ V_A and V is generated by tensors with tensor rank 1 then

dim(V_S) ≥ 2 dim(V_A)/dim(W)

(theorem 2.5). For every W, we give an example of V satisfying these two conditions such that dim(V_S) = 2 dim(V_A)/dim(W) (theorem 2.6). Moreover, if W is the smallest vector space such that V ⊂ W ⊗ W then dim(V_S) ≥ dim(W)/2. Therefore,

dim(V_S) ≥ max{ 2 dim(V_A)/dim(W), dim(W)/2 }     (1.1)
(theorem 2.7).

Let ρ ∈ M_k ⊗ M_k and let r denote the marginal rank of ρ + FρF (see definition 3.2). Inequality (1.1) implies the following necessary condition for separability: if the range of a positive semidefinite Hermitian matrix ρ is generated by tensors with tensor rank 1 then

rank((Id + F)ρ(Id + F)) ≥ max{ (2/r) rank((Id − F)ρ(Id − F)), r/2 }     (1.2)

(theorem 3.4 and definition 3.2). We prove the sharpness of this inequality (corollary 3.7). The range criterion is usually used when the range of a matrix does not contain tensors with tensor rank 1 ([6]). This inequality provides a very easy way to construct matrices whose range contains tensors with tensor rank 1, but is not generated by them (example 3.6).

We shall denote by A^{t_2} the matrix Σ_{i=1}^n A_i ⊗ B_i^t, which is called the partial transposition of A = Σ_{i=1}^n A_i ⊗ B_i ∈ M_k ⊗ M_m ≃ M_{km}. Another necessary condition for the separability of ρ is to be positive under partial transposition, i.e., ρ and ρ^{t_2} are positive semidefinite Hermitian matrices ([1]). We can wonder if inequality (1.2) holds for matrices that are positive under partial transposition (PPT matrices). We are only able to prove this inequality for PPT matrices ρ such that the marginal rank of ρ + FρF is smaller than or equal to 3 (corollary 4.6), but we obtain some partial results, which are of independent interest.

Firstly, we prove that if ρ is positive under partial transposition and rank((Id + F)ρ(Id + F)) = 1 then ρ is separable (theorem 4.5). The proof of this theorem is quite technical and requires a theorem from the Perron-Frobenius theory and some properties of the realignment map (definition 4.2).

Next, one possible approach to prove that (1.2) holds for every PPT matrix is to find a lower bound for the rank of (Id + F)ρ(Id + F). For example, we know that the marginal ranks of a PPT matrix (definition 3.2) are lower bounds for its rank ([7, Theorem 1]). Unfortunately, (Id + F)ρ(Id + F) need not be PPT, if ρ is PPT. Nevertheless, we can impose some natural conditions on ρ, in order to obtain the PPT property for (Id + F)ρ(Id + F). Notice that the range of (Id + F)ρ(Id + F) is a subspace of the symmetric tensors of C^k ⊗ C^k. In order to be PPT, this matrix must have a symmetric Schmidt decomposition with positive coefficients (see [8, Section 3]). Here, we follow the nomenclature of ([9–11]) and we call matrices that have a symmetric Schmidt decomposition with positive coefficients symmetric with positive coefficients, or simply SPC matrices (definition 5.2). These papers showed that SPC matrices have strong connections with PPT matrices even if their ranges are not subspaces of the symmetric tensors.
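For readers who want to experiment numerically with these notions, the following short Python/NumPy sketch (the helper names are ours, not from any quantum library) implements the partial transposition ρ^{t_2} for ρ ∈ M_k ⊗ M_k viewed as a k² × k² matrix and tests the PPT condition on a random separable matrix; any separable matrix should pass the test.

```python
import numpy as np

def partial_transpose(rho, k):
    """Partial transposition rho^{t_2}: transpose the second tensor factor."""
    r = rho.reshape(k, k, k, k)   # r[i, j, i2, j2] multiplies e_i e_{i2}^t (x) e_j e_{j2}^t
    return r.transpose(0, 3, 2, 1).reshape(k * k, k * k)

def is_psd(a, tol=1e-10):
    """Positive semidefiniteness test for a Hermitian matrix."""
    return np.min(np.linalg.eigvalsh((a + a.conj().T) / 2)) > -tol

k = 3
rng = np.random.default_rng(0)
rho = np.zeros((k * k, k * k), dtype=complex)
for _ in range(5):                # a random separable matrix: a sum of product projectors
    v = rng.normal(size=k) + 1j * rng.normal(size=k)
    w = rng.normal(size=k) + 1j * rng.normal(size=k)
    x = np.kron(v, w)
    rho += np.outer(x, x.conj())

print(is_psd(rho), is_psd(partial_transpose(rho, k)))   # True True: separable implies PPT
```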
Now, if we assume that ρ is PPT and SPC then (Id + F)ρ(Id + F) is PPT and

rank((Id + F)ρ(Id + F)) ≥ r ≥ (2/(r−1)) rank((Id − F)ρ(Id − F))

(theorem 5.4 and corollary 5.6). Thus, there are plenty of non-trivial examples of PPT matrices satisfying (1.2).

Finally, since we don't know if (1.2) holds for PPT matrices, there is a possibility that a PPT matrix ρ satisfying

1 < rank((Id + F)ρ(Id + F)) < (2/r) rank((Id − F)ρ(Id − F))

exists. In this case ρ is PPT and not separable. So this is a gap where we can look for PPT entanglement. The interested reader can find more information concerning PPT matrices and matrices with symmetric Schmidt decomposition in [14–16].

This paper is organized as follows:

• In Section 2, we show that dim(V_S) ≥ 2 dim(V_A)/dim(W) (theorem 2.5), if a subspace V of W ⊗ W is such that V = V_S ⊕ V_A and V is generated by tensors with tensor rank 1. We also prove the sharpness of this inequality (theorem 2.6). Furthermore, if W is the smallest vector space such that V ⊂ W ⊗ W then dim(V_S) ≥ max{ 2 dim(V_A)/dim(W), dim(W)/2 } (theorem 2.7).

• In Section 3, we show that if ρ ∈ M_k ⊗ M_k ≃ M_{k^2} is separable then rank((Id + F)ρ(Id + F)) ≥ max{ (2/r) rank((Id − F)ρ(Id − F)), r/2 } (corollary 3.5). This inequality is also sharp (corollary 3.7).

• In Section 4, we demonstrate the separability of ρ, if ρ is positive under partial transposition and rank((Id + F)ρ(Id + F)) = 1 (theorem 4.5). Moreover, if r is smaller than or equal to 3 and ρ is PPT then rank((Id + F)ρ(Id + F)) ≥ max{ (2/r) rank((Id − F)ρ(Id − F)), r/2 } (corollary 4.6).

• In Section 5, we prove that (Id + F)ρ(Id + F) is PPT and rank((Id + F)ρ(Id + F)) ≥ r ≥ (2/(r−1)) rank((Id − F)ρ(Id − F)), if ρ is PPT and ρ + FρF is SPC (theorem 5.4).
2. Main Results

Let us begin this section with the following definition:

Definition 2.1. Let W be a finite dimensional vector space over a field with characteristic not equal to 2.
(1) Let F : W ⊗ W → W ⊗ W be the flip operator, i.e., F(Σ_i a_i ⊗ b_i) = Σ_i b_i ⊗ a_i. If W = C^k then F = Σ_{i,j=1}^k e_i e_j^t ⊗ e_j e_i^t ∈ M_k ⊗ M_k, where {e_1, ..., e_k} is the canonical basis of C^k.
(2) Let V ⊂ W ⊗ W and define V_A = {w ∈ V | F(w) = −w}, V_S = {w ∈ V | F(w) = w}.
(3) Let v ⊗ W = {v ⊗ w | w ∈ W}, W ⊗ v = {w ⊗ v | w ∈ W}, v ⊗ W + W ⊗ v = {v ⊗ w_1 + w_2 ⊗ v | w_1, w_2 ∈ W}.
(4) Let v ∧ w = v ⊗ w − w ⊗ v and v ⊗ W − W ⊗ v = {v ∧ w | w ∈ W}.

In this section, we show that if a subspace V of W ⊗ W is invariant under the flip operator (i.e., V = V_S ⊕ V_A) and generated by tensors with tensor rank 1 then dim(V_S) ≥ 2 dim(V_A)/dim(W) (theorem 2.5). We also show that this inequality is sharp (theorem 2.6). Moreover, if W is the smallest vector space such that V ⊂ W ⊗ W then dim(V_S) ≥ max{ 2 dim(V_A)/dim(W), dim(W)/2 } (theorem 2.7). In the next section, we provide applications to Quantum Information Theory.
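As an aside, the flip operator of definition 2.1(1) and the decomposition V = V_S ⊕ V_A are easy to realize numerically. The sketch below is a minimal illustration, assuming NumPy (the helper name is ours).

```python
import numpy as np

def flip(k):
    """F = sum_{i,j} e_i e_j^t (x) e_j e_i^t, so that F(a (x) b) = b (x) a."""
    F = np.zeros((k * k, k * k))
    I = np.eye(k)
    for i in range(k):
        for j in range(k):
            F += np.kron(np.outer(I[i], I[j]), np.outer(I[j], I[i]))
    return F

k = 3
F = flip(k)
a, b = np.random.rand(k), np.random.rand(k)
print(np.allclose(F @ np.kron(a, b), np.kron(b, a)))   # True: F flips the factors

# every tensor splits into its symmetric and antisymmetric parts
v = np.kron(a, b)
v_sym, v_anti = (v + F @ v) / 2, (v - F @ v) / 2
print(np.allclose(F @ v_sym, v_sym), np.allclose(F @ v_anti, -v_anti))   # True True
```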
In order to obtain our main theorem, we need the following two lemmas:

Lemma 2.2. Let V be a subspace of W ⊗ W, where W is a finite dimensional vector space over a field with characteristic not equal to 2. Let us assume that F(V) ⊂ V and V has a generating subset formed by tensors with tensor rank 1. If w_1 ⊗ W − W ⊗ w_1 ⊂ V, for some w_1 ≠ 0, then dim(V_S) ≥ dim(W) − 1.

Proof. Since F(V) ⊂ V then V = V_S ⊕ V_A. If dim(V_S) = 0 then every element of V is antisymmetric. Therefore, the tensor rank of every element of V would not be 1, which is absurd. Thus, dim(V_S) ≥ 1. So if dim(W) = 2 then dim(V_S) ≥ 2 − 1 and the result follows.

By induction, let us assume that this lemma is valid when dim(W) ∈ {2, ..., n − 1}. Let dim(W) = n.

Since dim(W) > 2 then w_1 ⊗ W − W ⊗ w_1 ≠ {0}. There is a ⊗ b ∈ V \ {0} such that a or b is not a multiple of w_1, otherwise V = span{w_1 ⊗ w_1} and w_1 ⊗ W − W ⊗ w_1 would not be a subset of V. Since b ⊗ a = F(a ⊗ b) ∈ V, we may assume that a is not a multiple of w_1.

Let P : W → W be a linear transformation such that rank(P) = n − 1, Pa = 0 and Pw_1 ≠ 0. Let P ⊗ P : W ⊗ W → W ⊗ W be the linear transformation such that P ⊗ P(v ⊗ w) = Pv ⊗ Pw.

Now, P ⊗ P(V) ⊂ P(W) ⊗ P(W), dim(P(W)) = n − 1 and P ⊗ P(V) is also generated by tensors with tensor rank 1, since V is generated by tensors with tensor rank 1. Next, notice that F(P ⊗ P(V)) = P ⊗ P(F(V)) ⊂ P ⊗ P(V) and Pw_1 ⊗ P(W) − P(W) ⊗ Pw_1 ⊂ P ⊗ P(w_1 ⊗ W − W ⊗ w_1) ⊂ P ⊗ P(V).

Thus, by the induction hypothesis, dim(P ⊗ P(V)_S) ≥ (n − 1) − 1 = n − 2. Since P ⊗ P(V_S) ⊂ P ⊗ P(V)_S, P ⊗ P(V_A) ⊂ P ⊗ P(V)_A and V = V_S ⊕ V_A, the map P ⊗ P : V_S → P ⊗ P(V)_S is surjective and dim(V_S) = dim(P ⊗ P(V)_S) + dim(ker(P ⊗ P) ∩ V_S).

Finally, a ⊗ b + b ⊗ a ∈ (V_S \ {0}) ∩ ker(P ⊗ P), therefore dim(V_S) ≥ n − 2 + 1 = n − 1.
Lemma 2.3. Let V be a subspace of W ⊗ W, where W is a finite dimensional vector space over a field K with characteristic not equal to 2 and F(V) ⊂ V. Let G be a generating subset of V such that F(G) = G, the tensor rank of every element of G is 1 and span{v | v ⊗ w ∈ G} = W. Moreover, assume that there exists w_1 ⊗ w_2 ∈ G such that if c ⊗ d ∈ G then w_1 ∧ c ∈ V \ {0} or there exists w_c ∈ W such that w_1 ∧ w_c ∈ V \ {0} and c ∧ w_c ∈ V \ {0}. Then, dim(V_S) ≥ dim(W) − 1.

Proof. Since F(V) ⊂ V then V = V_S ⊕ V_A. If dim(V_S) = 0 then every element of V is antisymmetric. Therefore, the tensor rank of every element of V would not be 1, which is absurd. Thus, dim(V_S) ≥ 1. If dim(W) = 2 then dim(V_S) ≥ 2 − 1 and the result follows.

By induction, let us assume that this lemma is valid when dim(W) ∈ {2, ..., n − 1}. Let dim(W) = n.

There is e ⊗ f ∈ G such that e ∉ span{w_1}, since span{v | v ⊗ w ∈ G} = W. Therefore, w_1 ∧ e ∈ V \ {0} or there exists w_e ∈ W such that w_1 ∧ w_e ∈ V \ {0}. So (w_1 ⊗ W − W ⊗ w_1) ∩ V ≠ {0}.

Now, if w_1 ⊗ W − W ⊗ w_1 ⊂ V then dim(V_S) ≥ n − 1, by lemma 2.2. Next, let us assume that w_1 ⊗ W − W ⊗ w_1 is not contained in V, i.e., (w_1 ⊗ W − W ⊗ w_1) ∩ V = span{w_1 ∧ m_1, ..., w_1 ∧ m_l} and span{w_1, m_1, ..., m_l} ≠ W.
Thus, there is a ⊗ b_1 ∈ G such that a ∉ span{w_1, m_1, ..., m_l}. Let P : W → W be a linear transformation such that P restricted to span{w_1, m_1, ..., m_l} is the identity and ker(P) = span{a}. Consider P ⊗ P : W ⊗ W → W ⊗ W and notice that P ⊗ P(z) = z, for every z ∈ (w_1 ⊗ W − W ⊗ w_1) ∩ V.

Let V' = P ⊗ P(V), G' = {Pv ⊗ Pw | v ⊗ w ∈ G \ ker(P ⊗ P)} and W' = span{Pv | Pv ⊗ Pw ∈ G'}. Notice that G' is a generating subset of V' formed by tensors with tensor rank 1, F(G') = G' and V' ⊂ W' ⊗ W'. Moreover, F(V') = F(P ⊗ P(V)) = P ⊗ P(F(V)) ⊂ P ⊗ P(V) = V'. Recall that W' is a subset of the image of P, hence dim(W') ≤ n − 1.

In order to complete this proof, we must show that
(1) G' satisfies the last property of G, and
(2) if (W ⊗ a) ∩ V = {w ⊗ a | w ∈ W and w ⊗ a ∈ V} has dimension s then dim(W') ≥ n − s.

By item (1), the hypotheses of this lemma hold for V' ⊂ W' ⊗ W' and dim(W') ≤ n − 1. By the induction hypothesis, dim(V'_S) ≥ dim(W') − 1.

Let {b_1 ⊗ a, ..., b_s ⊗ a} be a basis of (W ⊗ a) ∩ V. By item (2), dim(W') ≥ n − s. Notice that {b_1 ⊗ a + a ⊗ b_1, ..., b_s ⊗ a + a ⊗ b_s} is a linearly independent subset of ker(P ⊗ P) ∩ V_S. Thus, dim(ker(P ⊗ P) ∩ V_S) ≥ s. Since P ⊗ P(V_S) ⊂ V'_S, P ⊗ P(V_A) ⊂ V'_A and V = V_S ⊕ V_A, then P ⊗ P : V_S → V'_S is surjective. Therefore,

dim(V_S) = dim(ker(P ⊗ P) ∩ V_S) + dim(V'_S) ≥ s + dim(W') − 1 ≥ s + n − s − 1 = n − 1.

Proof of (1): Since w_1 ⊗ w_2 ∈ V then w_1 ∧ w_2 ∈ (w_1 ⊗ W − W ⊗ w_1) ∩ V. Thus, w_2 ∈ span{w_1, m_1, ..., m_l} and P(w_2) = w_2. So Pw_1 ⊗ Pw_2 = w_1 ⊗ w_2 ∈ G'.

Let Pc ⊗ Pd ∈ G', where c ⊗ d ∈ G. Notice that Pc ≠ 0, by the definition of G'. Now, we must show that w_1 ∧ Pc ∈ V' \ {0} or there exists r ∈ W' such that w_1 ∧ r ∈ V' \ {0} and Pc ∧ r ∈ V' \ {0}. Thus, G' satisfies the last property of G.

First, if Pw_1 ∧ Pc = 0 then Pc = λPw_1 = λw_1, λ ∈ K \ {0} (since Pc ≠ 0). Since P ⊗ P(z) = z for all z ∈ (w_1 ⊗ W − W ⊗ w_1) ∩ V, then (w_1 ⊗ W − W ⊗ w_1) ∩ V ⊂ V'. Thus, there is r ∈ W such that w_1 ∧ r ∈ V' \ {0}. So, λ(w_1 ∧ r) = Pc ∧ r ∈ V' \ {0}. Notice that w_1 ∧ r ∈ V' \ {0} ⊂ W' ⊗ W'. Hence, r ∈ W'.

Second, if Pw_1 ∧ Pc ∉ V' = P ⊗ P(V) then w_1 ∧ c ∉ V. By the last property of G, there exists w_c ∈ W such that w_1 ∧ w_c ∈ V \ {0} and c ∧ w_c ∈ V \ {0}. Notice that w_1 ∧ w_c = P ⊗ P(w_1 ∧ w_c) = w_1 ∧ Pw_c ∈ V' \ {0} ⊂ W' ⊗ W'. Hence, Pw_c ∈ W' \ {0}. Now, if 0 = Pc ∧ Pw_c then Pc = μPw_c, μ ∈ K \ {0} (since Pc ≠ 0 and Pw_c ≠ 0). So Pw_1 ∧ Pc = μ(Pw_1 ∧ Pw_c) and Pw_1 ∧ Pc ∈ V', which is a contradiction. Therefore, Pc ∧ Pw_c ∈ V' \ {0} (since c ∧ w_c ∈ V) and w_1 ∧ Pw_c ∈ V' \ {0} (since w_1 ∧ w_c ∈ V and Pw_1 = w_1). The proof of (1) is complete.

Proof of (2): Let {b_1 ⊗ a, ..., b_s ⊗ a} be a basis of (W ⊗ a) ∩ V and recall that a ⊗ b_1 ∈ G, so b_1 ⊗ a = F(a ⊗ b_1) ∈ G too. Since span{v | v ⊗ w ∈ G} = W, there exists {b_{s+1} ⊗ a_{s+1}, ..., b_n ⊗ a_n} ⊂ G such that {b_1, ..., b_n} is a basis for W.
Observe that if v ⊗ w ∈ ker(P ⊗ P) then v ∈ span{a} or w ∈ span{a}. Furthermore, if v ⊗ w ∈ G and v ∈ span{a} then w ⊗ v = F(v ⊗ w) ∈ (W ⊗ a) ∩ V and w ∈ span{b_1, ..., b_s}. Now, if w ∈ span{a} then v ⊗ w ∈ (W ⊗ a) ∩ V and v ∈ span{b_1, ..., b_s}. Thus, if v ⊗ w ∈ G ∩ ker(P ⊗ P) then v ∈ span{a} and w ∈ span{b_1, ..., b_s}, or v ∈ span{b_1, ..., b_s} and w ∈ span{a}.

Next, if b_i ⊗ a_i ∈ ker(P ⊗ P), for some i > s, and a_i ∈ span{a}, then b_i ∈ span{b_1, ..., b_s}, which is a contradiction (since {b_1, ..., b_n} is linearly independent). Hence, if b_i ⊗ a_i ∈ ker(P ⊗ P), for i > s, then b_i ∈ span{a}.

Now, we consider three cases:

Case I: a ∉ span{b_{s+1}, ..., b_n}.

Notice that if Σ_{i=s+1}^n λ_i Pb_i = 0, for λ_i ∈ K, then Σ_{i=s+1}^n λ_i b_i ∈ span{a}. Thus, 0 = λ_{s+1} = ... = λ_n and {Pb_{s+1}, ..., Pb_n} is a linearly independent set. In this case, every b_i ∉ span{a}, for i > s, thus Pb_i ⊗ Pa_i ≠ 0. So {Pb_{s+1} ⊗ Pa_{s+1}, ..., Pb_n ⊗ Pa_n} ⊂ G', span{Pb_{s+1}, ..., Pb_n} ⊂ W' and dim(W') ≥ n − s.

Case II: a ∈ span{b_{s+1}, ..., b_n} and there exists p ⊗ q ∈ G \ ker(P ⊗ P) such that {p, q} is not contained in span{b_{s+1}, ..., b_n}.

Since q ⊗ p = F(p ⊗ q) ∈ G, we can assume that p ∉ span{b_{s+1}, ..., b_n}. Since ker(P) ∩ span{p, b_{s+1}, ..., b_n} = span{a}, then

dim(span{Pp, Pb_{s+1}, ..., Pb_n}) = dim(span{p, b_{s+1}, ..., b_n}) − dim(ker(P) ∩ span{p, b_{s+1}, ..., b_n}) = (n − s + 1) − 1 = n − s.

Recall that if b_i ⊗ a_i ∈ ker(P ⊗ P), for i > s, then b_i ∈ span{a} and Pb_i = 0 ∈ W'. If b_i ⊗ a_i ∉ ker(P ⊗ P), for i > s, then P ⊗ P(b_i ⊗ a_i) ∈ G' and Pb_i ∈ W'. In any case, span{Pb_{s+1}, ..., Pb_n} ⊂ W'. Recall that Pp ∈ W', since p ⊗ q ∈ G \ ker(P ⊗ P). Thus, span{Pp, Pb_{s+1}, ..., Pb_n} ⊂ W' and dim(W') ≥ n − s.

Case III: a ∈ span{b_{s+1}, ..., b_n} and, for every p ⊗ q ∈ G \ ker(P ⊗ P), {p, q} ⊂ span{b_{s+1}, ..., b_n}.

We demonstrate the impossibility of case III by considering three scenarios: w_1 ∧ b_1 = 0, w_1 ∧ b_1 ∈ V \ {0} and w_1 ∧ b_1 ∉ V.

• If w_1 ∧ b_1 = 0 then b_1 = δw_1, δ ∈ K \ {0}, and δ(w_1 ∧ a) = b_1 ∧ a ∈ V. Now, we have either w_1 ∧ a = 0 or w_1 ∧ a ∈ V \ {0}. It is easy to see that either case implies a ∈ span{w_1, m_1, ..., m_l}, which contradicts the choice of a.

• If w_1 ∧ b_1 ∈ V \ {0} then w_1 ∧ b_1 = Σ_j λ_j v_j ⊗ w_j + Σ_m μ_m c_m ⊗ d_m, where λ_j ∈ K, μ_m ∈ K, v_j ⊗ w_j ∈ G ∩ ker(P ⊗ P) and c_m ⊗ d_m ∈ G \ ker(P ⊗ P). By the assumption of case III, {c_m, d_m} ⊂ span{b_{s+1}, ..., b_n}, for every m. Recall that if v_j ⊗ w_j ∈ G ∩ ker(P ⊗ P) then either v_j ∈ span{a} and w_j ∈ span{b_1, ..., b_s}, or v_j ∈ span{b_1, ..., b_s} and w_j ∈ span{a}.

Let Q : W → W be a linear transformation such that Qb_i = 0, for 1 ≤ i ≤ s, and Qb_i = b_i, for s + 1 ≤ i ≤ n. Thus, 0 = Q ⊗ Q(w_1 ∧ b_1) = Σ_m μ_m c_m ⊗ d_m and w_1 ∧ b_1 = Σ_j λ_j v_j ⊗ w_j, where v_j ∈ span{a} or w_j ∈ span{a}. So 0 ≠ w_1 ∧ b_1 = a ⊗ r + r' ⊗ a for some r, r' ∈ W. Since a ⊗ r + r' ⊗ a is antisymmetric, r' = −r and 0 ≠ w_1 ∧ b_1 = a ∧ r.

Thus, a = λ'w_1 + λ''b_1, where {λ', λ''} ⊂ K. If λ'' = 0 then a = λ'w_1, which is a contradiction (a ∉ span{w_1, m_1, ..., m_l}). Therefore λ''(w_1 ∧ b_1) = w_1 ∧ (λ'w_1 + λ''b_1) = w_1 ∧ a. So w_1 ∧ a ∈ V \ {0}. But this is impossible, since
a ∉ span{w_1, m_1, ..., m_l}.

• If w_1 ∧ b_1 ∉ V then there is w_{b_1} ∈ W such that w_1 ∧ w_{b_1} ∈ V \ {0} and b_1 ∧ w_{b_1} ∈ V \ {0}, since b_1 ⊗ a ∈ G and by the last property of G. Let b_1 ∧ w_{b_1} = Σ_j α_j v_j ⊗ w_j + Σ_m β_m c_m ⊗ d_m, where {α_j, β_m} ⊂ K, v_j ⊗ w_j ∈ G ∩ ker(P ⊗ P) and c_m ⊗ d_m ∈ G \ ker(P ⊗ P). We can repeat the argument used in the previous scenario to obtain a = δ'w_{b_1} + δ''b_1, where {δ', δ''} ⊂ K.

If δ' = 0 then δ''b_1 = a ∈ span{b_{s+1}, ..., b_n}, which is a contradiction ({b_1, ..., b_n} is linearly independent). If δ'' = 0 then a = δ'w_{b_1} and δ'(w_1 ∧ w_{b_1}) = w_1 ∧ a. Hence, w_1 ∧ a ∈ V \ {0}, which is a contradiction (a ∉ span{w_1, m_1, ..., m_l}). So δ' ≠ 0 and δ'' ≠ 0.

Next, δ'w_{b_1} = a − δ''b_1 and δ'(w_1 ∧ w_{b_1}) = w_1 ∧ (a − δ''b_1). Hence, w_1 ∧ (a − δ''b_1) ∈ V \ {0}. Let w_1 ∧ (a − δ''b_1) = Σ_j ε_j v_j ⊗ w_j + Σ_m γ_m c_m ⊗ d_m, where {ε_j, γ_m} ⊂ K, v_j ⊗ w_j ∈ G ∩ ker(P ⊗ P) and c_m ⊗ d_m ∈ G \ ker(P ⊗ P). Since P ⊗ P(z) = z for every z ∈ (w_1 ⊗ W − W ⊗ w_1) ∩ V, then

0 ≠ w_1 ∧ (a − δ''b_1) = P ⊗ P(w_1 ∧ (a − δ''b_1)) = −δ''(Pw_1 ∧ Pb_1) = Σ_m γ_m Pc_m ⊗ Pd_m.

Recall that {c_m, d_m} ⊂ span{b_{s+1}, ..., b_n}, therefore {Pc_m, Pd_m} ⊂ span{Pb_{s+1}, ..., Pb_n}. Thus, Pb_1 = Σ_{i=s+1}^n ζ_i Pb_i, ζ_i ∈ K. Hence, b_1 − Σ_{i=s+1}^n ζ_i b_i ∈ ker(P) = span{a} and

b_1 ∈ span{a, b_{s+1}, ..., b_n} = span{b_{s+1}, ..., b_n},

which is our final contradiction ({b_1, ..., b_n} is linearly independent).
Corollary 2.4. Let V be a subspace of W ⊗ W, where W is a finite dimensional vector space over a field with characteristic not equal to 2. Let us assume that F(V) ⊂ V and V has a generating subset formed by tensors with tensor rank 1. If span{v | v ⊗ w ∈ V \ {0}} = W and

dim((a ⊗ W − W ⊗ a) ∩ V) > dim(W)/2

for every a ⊗ b ∈ V \ {0}, then dim(V_S) ≥ dim(W) − 1.

Proof. Let G be the set of all tensors in V with tensor rank 1. Notice that F(G) = G, G generates V and span{v | v ⊗ w ∈ G} = span{v | v ⊗ w ∈ V \ {0}} = W. Let w_1 ⊗ w_2 ∈ G and c ⊗ d ∈ G.

First, if w_1 ∧ c = 0 then c = λw_1, λ ≠ 0. Since dim((w_1 ⊗ W − W ⊗ w_1) ∩ V) > dim(W)/2, there is w_c ∈ W such that w_1 ∧ w_c ∈ V \ {0}. So c ∧ w_c = λ(w_1 ∧ w_c). Thus, w_1 ∧ w_c ∈ V \ {0} and c ∧ w_c ∈ V \ {0}.

Second, if w_1 ∧ c ∉ V then let {w_1 ∧ m_1, ..., w_1 ∧ m_u} be a basis of (w_1 ⊗ W − W ⊗ w_1) ∩ V and {c ∧ n_1, ..., c ∧ n_t} be a basis of (c ⊗ W − W ⊗ c) ∩ V. Notice that w_1, m_1, ..., m_u are linearly independent and c, n_1, ..., n_t are linearly independent. By assumption u > dim(W)/2 and t > dim(W)/2. Thus, span{m_1, ..., m_u} ∩ span{n_1, ..., n_t} ≠ {0}. Let w_c ∈ (span{m_1, ..., m_u} ∩ span{n_1, ..., n_t}) \ {0}. Since w_c and w_1 are linearly independent (w_1 ∉ span{m_1, ..., m_u}), then w_1 ∧ w_c ∈ V \ {0}. Analogously we obtain c ∧ w_c ∈ V \ {0}.

Finally, G satisfies the hypotheses of lemma 2.3, therefore dim(V_S) ≥ dim(W) − 1.

Theorem 2.5. Let V be a subspace of W ⊗ W, where W is a finite dimensional vector space over a field with characteristic not equal to 2. Let us assume that F(V) ⊂ V and V has a generating subset formed by tensors with tensor rank 1. Then, dim(V_S) ≥ 2 dim(V_A)/dim(W).

Proof. Since F(V) ⊂ V then V = V_S ⊕ V_A. If dim(V_S) = 0 then every element of V is antisymmetric. Therefore the tensor rank of every element of V is not 1, which is absurd. Thus, dim(V_S) ≥ 1. If dim(W) = 2 then 1 ≥ dim(V_A) and dim(V_S) ≥ (2/2) dim(V_A).
By induction, let us assume that this theorem is valid when dim(W) ∈ {2, ..., n − 1} and let dim(W) = n.

Observe that if R = span{v | v ⊗ w ∈ V \ {0}} ≠ W then dim(R) ≤ n − 1. Since F(V) ⊂ V and V is generated by tensors with tensor rank 1, then V ⊂ R ⊗ R. By the induction hypothesis,

dim(V_S) ≥ (2/dim(R)) dim(V_A) ≥ (2/n) dim(V_A).

Now, let us assume that span{v | v ⊗ w ∈ V \ {0}} = W. If dim((a ⊗ W − W ⊗ a) ∩ V) > n/2, for every a ⊗ b ∈ V \ {0}, then dim(V_S) ≥ n − 1, by corollary 2.4. Since dim(V_A) ≤ n(n−1)/2, then dim(V_S) ≥ (2/n) dim(V_A).

Next, let us assume that there is a ⊗ b ∈ V \ {0} such that dim((a ⊗ W − W ⊗ a) ∩ V) ≤ n/2. Let P : W → W be a linear transformation such that ker(P) = span{a}. Recall that a ⊗ W = {a ⊗ w | w ∈ W}, W ⊗ a = {w ⊗ a | w ∈ W} and ker(P ⊗ P) = a ⊗ W + W ⊗ a. Let us consider two cases:

Case I: V ⊂ ker(P ⊗ P).

Since V is generated by tensors with tensor rank 1, V is generated by ((a ⊗ W) ∪ (W ⊗ a)) ∩ V. Moreover, since F(V) ⊂ V, the following linear transformations are surjective:

P_1 : (a ⊗ W) ∩ V → V_S,   P_1(a ⊗ w) = a ⊗ w + w ⊗ a,
P_2 : (a ⊗ W) ∩ V → V_A,   P_2(a ⊗ w) = a ∧ w.

Note that P_1 is also injective, since the characteristic of the underlying field is not 2. Thus,

dim(V_S) = dim((a ⊗ W) ∩ V) ≥ dim(V_A) ≥ (2/n) dim(V_A), since n > 2.

Case II: V is not contained in ker(P ⊗ P).

Let W' = span{Pv | v ⊗ w ∈ V \ ker(P ⊗ P)}. Since V is not contained in ker(P ⊗ P), dim(W') > 0. Observe that F(P ⊗ P(V)) = P ⊗ P(F(V)) ⊂ P ⊗ P(V) and P ⊗ P(V) is generated by tensors with tensor rank 1. Therefore, P ⊗ P(V) ⊂ W' ⊗ W' and 0 < dim(W') ≤ rank(P) = n − 1. By the induction hypothesis, dim(P ⊗ P(V)_S) ≥ (2/dim(W')) dim(P ⊗ P(V)_A). So

dim(P ⊗ P(V)_S) ≥ (2/n) dim(P ⊗ P(V)_A).

Since a ⊗ b + b ⊗ a ∈ (ker(P ⊗ P) ∩ V_S) \ {0}, then dim(ker(P ⊗ P) ∩ V_S) ≥ 1. Observe that (a ⊗ W − W ⊗ a) ∩ V = ker(P ⊗ P) ∩ V_A, hence dim(ker(P ⊗ P) ∩ V_A) ≤ n/2.

Since P ⊗ P(V_S) ⊂ P ⊗ P(V)_S, P ⊗ P(V_A) ⊂ P ⊗ P(V)_A and V = V_S ⊕ V_A, then P ⊗ P : V_S → P ⊗ P(V)_S and P ⊗ P : V_A → P ⊗ P(V)_A are surjective. Thus,

• dim(V_S) = dim(ker(P ⊗ P) ∩ V_S) + dim(P ⊗ P(V)_S) ≥ 1 + (2/n) dim(P ⊗ P(V)_A),
• dim(V_A) = dim(ker(P ⊗ P) ∩ V_A) + dim(P ⊗ P(V)_A) ≤ n/2 + dim(P ⊗ P(V)_A).

Finally, (2/n) dim(V_A) ≤ 1 + (2/n) dim(P ⊗ P(V)_A) ≤ dim(V_S).
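Theorem 2.5 (and the max with dim(W)/2 from theorem 2.7 below) can be probed numerically. The following sketch, assuming NumPy and using our own helper names, spans V by random product tensors together with their flips, so that F(V) ⊂ V and V is generated by tensors with tensor rank 1, and then compares dim(V_S) and dim(V_A); for generic random data the span condition of theorem 2.7 also holds.

```python
import numpy as np

def flip_vec(v, k):
    """Apply the flip operator to v in C^k (x) C^k without building F explicitly."""
    return v.reshape(k, k).T.reshape(k * k)

k, m = 4, 6
rng = np.random.default_rng(1)
products = []
for _ in range(m):
    v, w = rng.normal(size=k), rng.normal(size=k)
    products += [np.kron(v, w), np.kron(w, v)]   # include the flip of every generator

sym = np.array([p + flip_vec(p, k) for p in products])    # spans V_S
anti = np.array([p - flip_vec(p, k) for p in products])   # spans V_A
dim_s = np.linalg.matrix_rank(sym)
dim_a = np.linalg.matrix_rank(anti)
print(dim_s, dim_a, dim_s >= max(2 * dim_a / k, k / 2))   # the bound of theorems 2.5 and 2.7 holds
```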
Theorem 2.6. Let W be a k-dimensional vector space over a field K with characteristic not equal to 2. There is a subspace V of W ⊗ W such that
(1) F(V) ⊂ V,
(2) span{v | v ⊗ w ∈ V \ {0}} = W,
(3) V has a generating subset formed by tensors with tensor rank 1,
(4) dim(V_S) = k − 1 and V_A = (W ⊗ W)_A.
Thus, dim(V_S) = (2/k) dim(V_A) and the inequality in theorem 2.5 is sharp.

Proof. Let w_1, ..., w_k be a basis of W and let e_1, ..., e_k be the canonical basis of K^k. Let G : W → K^k be the linear transformation such that G(w) is the vector of the coordinates of w in the basis w_1, ..., w_k. Observe that G ⊗ G : W ⊗ W → K^k ⊗ K^k, G ⊗ G(Σ_i c_i ⊗ d_i) = Σ_i G(c_i) ⊗ G(d_i), is an isomorphism and the tensor rank of m ∈ W ⊗ W is the tensor rank of G ⊗ G(m) ∈ K^k ⊗ K^k. Notice also that G ⊗ G : (W ⊗ W)_S → (K^k ⊗ K^k)_S and G ⊗ G : (W ⊗ W)_A → (K^k ⊗ K^k)_A are also isomorphisms. Thus, if we can find a subspace V of K^k ⊗ K^k satisfying the required properties then (G ⊗ G)^{-1}(V) is a subspace of W ⊗ W satisfying the same properties. Now, let us construct this V inside K^k ⊗ K^k.

Let M_k(K) be the set of matrices of order k with coefficients in K. Consider the linear transformation T : K^k ⊗ K^k → M_k(K), T(Σ_i f_i ⊗ g_i) = Σ_i f_i g_i^t. Observe that T(F(v)) = T(v)^t, where F : K^k ⊗ K^k → K^k ⊗ K^k is the flip operator (definition 2.1). Define
• W_i = span{e_1, ..., e_i},
• a_2 = e_2 ⊗ (e_1 + e_2), a_3 = e_3 ⊗ (e_1 + 2e_2 + e_3), ..., a_k = e_k ⊗ (e_1 + 2e_2 + ... + 2e_{k−1} + e_k),
• V_i = span{a_2, F(a_2), ..., a_i, F(a_i)} + (W_i ⊗ W_i)_A.

Notice that F(V_i) ⊂ V_i and (V_i)_S = span{a_2 + F(a_2), ..., a_i + F(a_i)} ⊂ W_i ⊗ W_i. Since a_i + F(a_i) ∉ W_{i−1} ⊗ W_{i−1}, the set {a_2 + F(a_2), ..., a_i + F(a_i)} is linearly independent, for i ≥ 2. Hence,

dim((V_i)_S) = i − 1 = (2/i) dim((W_i ⊗ W_i)_A) = (2/i) dim((V_i)_A).

Furthermore, span{v | v ⊗ w ∈ V_i \ {0}} = span{e_1 + e_2, e_2, ..., e_i} = W_i, for i ≥ 2.

In order to complete this proof, we must show, by induction on j, that each V_j contains a generating subset formed by tensors with tensor rank 1, and then we choose V = V_k. Notice that (W_2 ⊗ W_2)_A = span{e_1 ∧ e_2} ⊂ span{a_2, F(a_2)}. So span{a_2, F(a_2)} is a generating subset of V_2. By induction, let us assume that there is a generating subset of V_{n−1} formed by tensors with tensor rank 1. For 2 ≤ i ≤ n, let
• s_i = a_i + F(a_i) + ... + a_n + F(a_n),
• r_i = (e_1 + 2e_2 + ... + 2e_{i−1} + e_i + ... + e_n) ⊗ (e_i + ... + e_n).

In order to obtain a generating subset for V_n (and complete the induction), we must prove that

V_n = V_{n−1} + span{a_n, F(a_n), r_2, r_3, ..., r_n}.     (2.1)

Since V_{n−1} contains a generating subset formed by tensors with tensor rank 1, by the induction hypothesis, then V_n contains a generating subset formed by tensors with tensor rank 1.

Proof of equation (2.1): In order to prove this equation, we need the following two facts:
(1) s_i = r_i + F(r_i), for 2 ≤ i ≤ n,
(2) (W_n ⊗ W_n)_A = (W_{n−1} ⊗ W_{n−1})_A + span{r_2 − F(r_2), r_3 − F(r_3), ..., r_n − F(r_n)}.

By item (2),

V_n = (W_n ⊗ W_n)_A + span{a_2, F(a_2), ..., a_n, F(a_n)}
    = (W_{n−1} ⊗ W_{n−1})_A + span{r_2 − F(r_2), ..., r_n − F(r_n)} + span{a_2, F(a_2), ..., a_n, F(a_n)}.

By item (1), r_i + F(r_i) ∈ span{a_2, F(a_2), ..., a_n, F(a_n)}, for 2 ≤ i ≤ n. Thus,

span{r_2 − F(r_2), ..., r_n − F(r_n)} + span{a_2, F(a_2), ..., a_n, F(a_n)} = span{a_2, F(a_2), ..., a_{n−1}, F(a_{n−1}), a_n, F(a_n), r_2, r_3, ..., r_n}.

Therefore,

V_n = (W_{n−1} ⊗ W_{n−1})_A + span{a_2, F(a_2), ..., a_{n−1}, F(a_{n−1}), a_n, F(a_n), r_2, r_3, ..., r_n} = V_{n−1} + span{a_n, F(a_n), r_2, r_3, ..., r_n}.

Now, let us prove (1) and (2).

Proof of (1): Observe that T(r_i) = (e_1 + 2e_2 + ... + 2e_{i−1} + e_i + ... + e_n)(e_i + ... + e_n)^t, so the columns i, ..., n of T(r_i) are all equal to (1, 2, ..., 2, 1, ..., 1, 0, ..., 0)^t (with 1 in row 1, 2's in rows 2, ..., i − 1 and 1's in rows i, ..., n) and the remaining columns are zero. In T(s_i), the first i − 1 rows and columns are multiples of (0, ..., 0, 1, ..., 1, 0, ..., 0) (with the 1's in positions i, ..., n), the next n − i + 1 rows and columns are equal to (1, 2, ..., 2, ..., 2, 0, ..., 0) (with the 2's in positions 2, ..., n) and the last k − n rows and columns are zero. Notice that T(s_i) = T(r_i) + T(r_i)^t = T(r_i + F(r_i)). Thus, s_i = r_i + F(r_i), for 2 ≤ i ≤ n.

Proof of (2): Notice that
• the n-th row of T(r_2 − F(r_2)) = T(r_2) − T(r_2)^t is (−1, 0, ..., 0),
• the n-th row of T(r_3 − F(r_3)) = T(r_3) − T(r_3)^t is (−1, −2, 0, ..., 0),
...
• the n-th row of T(r_n − F(r_n)) = T(r_n) − T(r_n)^t is (−1, −2, ..., −2, 0, ..., 0).

Hence,

{A ∈ M_k(K) | A = −A^t, a_{ij} = 0 if i > n or j > n} = {A ∈ M_k(K) | A = −A^t, a_{ij} = 0 if i > n − 1 or j > n − 1} + span{T(r_2 − F(r_2)), ..., T(r_n − F(r_n))}.

Since T((W_s ⊗ W_s)_A) = {A ∈ M_k(K) | A = −A^t, a_{ij} = 0 if i > s or j > s}, then

(W_n ⊗ W_n)_A = (W_{n−1} ⊗ W_{n−1})_A + span{r_2 − F(r_2), r_3 − F(r_3), ..., r_n − F(r_n)}.
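The construction of theorem 2.6 is concrete enough to check by computer. The sketch below, assuming NumPy and K = R (variable names are ours), spans V_k by the tensors a_i, F(a_i) and a spanning set of (W ⊗ W)_A (the antisymmetric basis tensors are used only to span the antisymmetric part, not as the rank-1 generators exhibited in the proof), and confirms dim(V_S) = k − 1 and dim(V_A) = k(k − 1)/2.

```python
import numpy as np

def flip_vec(v, k):
    return v.reshape(k, k).T.reshape(k * k)

k = 5
e = np.eye(k)
gens = []
for i in range(1, k):                  # a_{i+1} = e_{i+1} (x) (e_1 + 2e_2 + ... + 2e_i + e_{i+1}) in 1-based notation
    u = e[0] + 2 * e[1:i].sum(axis=0) + e[i]
    a = np.kron(e[i], u)
    gens += [a, flip_vec(a, k)]
for i in range(k):                     # spanning set of (W (x) W)_A
    for j in range(i + 1, k):
        gens.append(np.kron(e[i], e[j]) - np.kron(e[j], e[i]))

sym = np.array([g + flip_vec(g, k) for g in gens])
anti = np.array([g - flip_vec(g, k) for g in gens])
dim_s, dim_a = np.linalg.matrix_rank(sym), np.linalg.matrix_rank(anti)
print(dim_s, dim_a)                    # k - 1 = 4 and k(k-1)/2 = 10, so dim(V_S) = (2/k) dim(V_A)
```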
We complete this section by adding one assumption to theorem 2.5. We prove that if V satisfies the hypotheses of theorem 2.5 and span{v | v ⊗ w ∈ V \ {0}} = W then dim(V_S) ≥ max{ 2 dim(V_A)/dim(W), dim(W)/2 }. Moreover, we analyse both cases: dim(V_S) = dim(W)/2 and dim(V_S) = 2 dim(V_A)/dim(W).
Theorem 2.7. Let V be a subspace of W ⊗ W, where W is a finite dimensional vector space over a field K with characteristic not equal to 2. Let us assume that F(V) ⊂ V and V has a generating subset formed by tensors with tensor rank 1. If span{v | v ⊗ w ∈ V \ {0}} = W then dim(V_S) ≥ max{ 2 dim(V_A)/dim(W), dim(W)/2 }. Furthermore,
a) If dim(V_S) = dim(W)/2 then dim(V_A) = dim(V_S).
b) If dim(V_S) = 2 dim(V_A)/dim(W) then dim(V_S) = dim(W) − 1 and V_A = (W ⊗ W)_A.

Proof. First, let us prove dim(V_S) ≥ dim(W)/2. The inequality dim(V_S) ≥ 2 dim(V_A)/dim(W) was proved in theorem 2.5.

Let {a_1 ⊗ b_1, ..., a_n ⊗ b_n} be a basis of V. Thus, {a_1 ⊗ b_1 + b_1 ⊗ a_1, ..., a_n ⊗ b_n + b_n ⊗ a_n} is a generating subset of V_S. Without loss of generality, assume {a_1 ⊗ b_1 + b_1 ⊗ a_1, ..., a_t ⊗ b_t + b_t ⊗ a_t} is a basis of V_S. Let v ⊗ w ∈ V \ {0}. Since v ⊗ w + w ⊗ v ∈ span{a_1 ⊗ b_1 + b_1 ⊗ a_1, ..., a_t ⊗ b_t + b_t ⊗ a_t}, then v ∈ span{a_1, ..., a_t, b_1, ..., b_t}. Therefore,

W = span{v | v ⊗ w ∈ V \ {0}} ⊂ span{a_1, ..., a_t, b_1, ..., b_t}.

So dim(W) ≤ 2t = 2 dim(V_S).

Now, let us prove item a). If dim(W) = 2 dim(V_S) = 2t then W = span{a_1, ..., a_t, b_1, ..., b_t} and {a_1, ..., a_t, b_1, ..., b_t} is a basis of W. Let v ⊗ w + w ⊗ v = Σ_{j=1}^s α_j (a_{i_j} ⊗ b_{i_j} + b_{i_j} ⊗ a_{i_j}), where α_j ≠ 0 for every j. Since {a_{i_1}, ..., a_{i_s}, b_{i_1}, ..., b_{i_s}} ⊂ {a_1, ..., a_t, b_1, ..., b_t}, the set {a_{i_1}, ..., a_{i_s}, b_{i_1}, ..., b_{i_s}} is linearly independent. Since α_j ≠ 0, for every j, the tensor rank of Σ_{j=1}^s α_j (a_{i_j} ⊗ b_{i_j} + b_{i_j} ⊗ a_{i_j}) is 2s. But the tensor rank of v ⊗ w + w ⊗ v is 1 or 2. Thus, s = 1, the tensor rank of v ⊗ w + w ⊗ v is 2 and v ⊗ w + w ⊗ v = α_1 (a_{i_1} ⊗ b_{i_1} + b_{i_1} ⊗ a_{i_1}).

Therefore, for any v ⊗ w ∈ V \ {0}, there is i_1 ∈ {1, ..., t} such that span{v, w} = span{a_{i_1}, b_{i_1}}. So β(a_{i_1} ∧ b_{i_1}) = v ∧ w, for some β ∈ K. Since V is generated by {v ⊗ w | v ⊗ w ∈ V \ {0}} and every such v ∧ w is a multiple of some a_i ∧ b_i, 1 ≤ i ≤ t, then {a_1 ∧ b_1, ..., a_t ∧ b_t} is a generating subset of V_A. Since {a_1, ..., a_t, b_1, ..., b_t} is a linearly independent set, {a_1 ∧ b_1, ..., a_t ∧ b_t} is also a linearly independent set. Therefore, dim(V_A) = t = dim(V_S).

Next, let us prove item b). If dim(W) = 1 then dim(V_A) = dim((W ⊗ W)_A) = 0. So the equality 2 dim(V_A)/dim(V_S) = dim(W) implies dim(W) ≥ 2 and dim(V_A) ≥ dim(V_S).

Let a' ⊗ b' ∈ V \ {0} and let P : W → W be a linear transformation such that ker(P) = span{a'}. Denote a' ⊗ W = {a' ⊗ w | w ∈ W} and W ⊗ a' = {w ⊗ a' | w ∈ W}. Thus, ker(P ⊗ P) = a' ⊗ W + W ⊗ a'. Now, let us consider two cases:

Case I: V ⊂ ker(P ⊗ P).

Since V is generated by tensors with tensor rank 1, V is generated by ((a' ⊗ W) ∪ (W ⊗ a')) ∩ V. Thus, the following linear transformations are surjective:
P_1 : (a' ⊗ W) ∩ V → V_S,   P_1(a' ⊗ w) = a' ⊗ w + w ⊗ a',
P_2 : (a' ⊗ W) ∩ V → V_A,   P_2(a' ⊗ w) = a' ∧ w.
Notice that P_1 is also injective, since the characteristic of K is not 2. Thus, dim(V_S) = dim((a' ⊗ W) ∩ V) ≥ dim(V_A). Therefore, dim(V_A) = dim(V_S) and dim(W) = 2 dim(V_A)/dim(V_S) = 2. Hence, dim(V_A) = 1, V_A = (W ⊗ W)_A and dim(V_S) = 1 = dim(W) − 1.

Case II: V is not contained in ker(P ⊗ P).

Notice that P ⊗ P(V) ⊂ P(W) ⊗ P(W), P ⊗ P(V) is generated by tensors with tensor rank 1 and is invariant under the flip operator; then dim(P ⊗ P(V)_S) ≥ 1 and, by theorem 2.5,

2 dim(P ⊗ P(V)_A)/dim(P ⊗ P(V)_S) ≤ dim(P(W)) = dim(W) − 1.

Notice also that dim(ker(P ⊗ P) ∩ V_S) ≥ 1, since a' ⊗ b' + b' ⊗ a' ∈ ker(P ⊗ P) ∩ (V_S \ {0}). Next, since P ⊗ P(V_S) ⊂ P ⊗ P(V)_S, P ⊗ P(V_A) ⊂ P ⊗ P(V)_A and V = V_S ⊕ V_A, then P ⊗ P : V_S → P ⊗ P(V)_S and P ⊗ P : V_A → P ⊗ P(V)_A are surjective and

dim(V_S) = dim(P ⊗ P(V)_S) + dim(ker(P ⊗ P) ∩ V_S),
dim(V_A) = dim(P ⊗ P(V)_A) + dim(ker(P ⊗ P) ∩ V_A).

Therefore,

2 dim(V_A)/dim(V_S) = [dim(P ⊗ P(V)_S)/dim(V_S)] · [2 dim(P ⊗ P(V)_A)/dim(P ⊗ P(V)_S)] + [dim(V_S ∩ ker(P ⊗ P))/dim(V_S)] · [2 dim(V_A ∩ ker(P ⊗ P))/dim(V_S ∩ ker(P ⊗ P))]

and dim(P ⊗ P(V)_S)/dim(V_S) + dim(V_S ∩ ker(P ⊗ P))/dim(V_S) = 1. So 2 dim(V_A)/dim(V_S) is a non-trivial convex combination of 2 dim(P ⊗ P(V)_A)/dim(P ⊗ P(V)_S) ≤ dim(W) − 1 and 2 dim(V_A ∩ ker(P ⊗ P))/dim(V_S ∩ ker(P ⊗ P)).

Now, if dim(V_A ∩ ker(P ⊗ P)) ≤ dim(W)/2 then 2 dim(V_A ∩ ker(P ⊗ P))/dim(V_S ∩ ker(P ⊗ P)) ≤ dim(W) and

2 dim(V_A)/dim(V_S) < dim(W).

Hence, 2 dim(V_A)/dim(V_S) = dim(W) implies dim(V_A ∩ ker(P ⊗ P)) > dim(W)/2. Thus, dim(V_A ∩ (a' ⊗ W + W ⊗ a')) = dim((a' ⊗ W − W ⊗ a') ∩ V) > dim(W)/2, for every a' ⊗ b' ∈ V \ {0}.

By corollary 2.4, dim(V_S) ≥ dim(W) − 1. Finally,

dim((W ⊗ W)_A) ≥ dim(V_A) = (dim(W)/2) dim(V_S) ≥ dim(W)(dim(W) − 1)/2 = dim((W ⊗ W)_A).

Therefore, (W ⊗ W)_A = V_A and dim(V_S) = 2 dim(V_A)/dim(W) = dim(W) − 1.
3. Applications to Quantum Information Theory

In this section, we show that if ρ ∈ M_k ⊗ M_k ≃ M_{k^2} is separable and r is the marginal rank of ρ + FρF then

rank((Id + F)ρ(Id + F)) ≥ max{ (2/r) rank((Id − F)ρ(Id − F)), r/2 }

(corollary 3.5). Moreover, this inequality is sharp (corollary 3.7).

Let M_k denote the set of complex matrices of order k and C^k be the set of column vectors with k complex entries. We shall identify the tensor product space C^k ⊗ C^m with C^{km} and the tensor product space M_k ⊗ M_m with M_{km}, via the Kronecker product (i.e., if A = (a_{ij}) ∈ M_k and B ∈ M_m then A ⊗ B = (a_{ij}B) ∈ M_{km}; if v = (v_i) ∈ C^k and w ∈ C^m then v ⊗ w = (v_i w) ∈ C^{km}).
The identification of C^k ⊗ C^m with C^{km} and of M_k ⊗ M_m with M_{km}, via the Kronecker product, allows us to write (v ⊗ w)(r ⊗ s)^t = vr^t ⊗ ws^t, where v ⊗ w is a column vector, (v ⊗ w)^t is its transpose, v, r ∈ C^k and w, s ∈ C^m. Therefore, if x, y ∈ C^k ⊗ C^m ≃ C^{km} we have xy^t ∈ M_k ⊗ M_m ≃ M_{km}. The image (or the range) of a matrix ρ ∈ M_k ⊗ M_m ≃ M_{km} in C^k ⊗ C^m ≃ C^{km} shall be denoted by ℜ(ρ).

Definition 3.1. (Separable matrices) Let ρ ∈ M_k ⊗ M_m. We say that ρ is separable if ρ = Σ_{i=1}^n C_i ⊗ D_i, where C_i ∈ M_k and D_i ∈ M_m are positive semidefinite Hermitian matrices for every i. If ρ is not separable then ρ is entangled.

Definition 3.2. Let ρ = Σ_{i=1}^m A_i ⊗ B_i ∈ M_k ⊗ M_k. Define ρ^A = Σ_{i=1}^m A_i tr(B_i) ∈ M_k and ρ^B = Σ_{i=1}^m B_i tr(A_i) ∈ M_k. The matrices ρ^A, ρ^B are usually called the marginal or local matrices. The marginal ranks of ρ are the ranks of ρ^A and ρ^B. If they are equal, we shall call them the marginal rank of ρ.

Remark 3.3. It is well known that if ρ ∈ M_k ⊗ M_k ≃ M_{k^2} is a positive semidefinite Hermitian matrix then so are ρ^A ∈ M_k and ρ^B ∈ M_k. Furthermore, (FρF)^A = ρ^B, (FρF)^B = ρ^A and ℜ(ρ) ⊂ ℜ(ρ^A) ⊗ ℜ(ρ^B).
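The marginal matrices of definition 3.2 are partial traces and are easy to compute. A minimal sketch, assuming NumPy (the helper name is ours), computes ρ^A and ρ^B and checks the positivity claim of remark 3.3 on a random positive semidefinite ρ.

```python
import numpy as np

def marginals(rho, k):
    """Return (rho^A, rho^B) of definition 3.2, obtained by tracing out one factor."""
    r = rho.reshape(k, k, k, k)        # r[i, j, i2, j2] multiplies e_i e_{i2}^t (x) e_j e_{j2}^t
    rho_A = np.einsum('ijkj->ik', r)   # trace over the second factor
    rho_B = np.einsum('ijil->jl', r)   # trace over the first factor
    return rho_A, rho_B

k = 3
rng = np.random.default_rng(2)
G = rng.normal(size=(k * k, k * k)) + 1j * rng.normal(size=(k * k, k * k))
rho = G @ G.conj().T                   # a generic positive semidefinite Hermitian matrix

rho_A, rho_B = marginals(rho, k)
print(np.min(np.linalg.eigvalsh(rho_A)) >= -1e-10,
      np.min(np.linalg.eigvalsh(rho_B)) >= -1e-10)   # True True, as in remark 3.3
```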
Theorem 3.4. Let ρ ∈ M_k ⊗ M_k ≃ M_{k^2} be a positive semidefinite Hermitian matrix. If ℜ(ρ) is generated by tensors with tensor rank 1 and r is the marginal rank of ρ + FρF then

rank((Id + F)ρ(Id + F)) ≥ max{ (2/r) rank((Id − F)ρ(Id − F)), r/2 },

where F ∈ M_k ⊗ M_k is the flip operator and Id ∈ M_k ⊗ M_k is the identity.

Proof. Let σ = (ρ + FρF)^A. By remark 3.3, (ρ + FρF)^A = ρ^A + ρ^B = (ρ + FρF)^B. Furthermore, ℜ(ρ + FρF) ⊂ ℜ(σ) ⊗ ℜ(σ) and, by hypothesis, rank(σ) = r.

Second, the range of B = 2(ρ + FρF) = (Id + F)ρ(Id + F) + (Id − F)ρ(Id − F) is generated by tensors with tensor rank 1, is invariant under the flip operator and is a subset of ℜ(σ) ⊗ ℜ(σ). Moreover, dim(ℜ(B)_S) = rank((Id + F)ρ(Id + F)) and dim(ℜ(B)_A) = rank((Id − F)ρ(Id − F)). Therefore, by theorem 2.5, rank((Id + F)ρ(Id + F)) ≥ (2/r) rank((Id − F)ρ(Id − F)).

Now, let W = span{v | v ⊗ w ∈ ℜ(B) \ {0}}. Since ℜ(B) is generated by tensors with tensor rank 1 and tr(B(mm^t ⊗ Id)) = 2 tr(σ mm^t), then m ∈ W^⊥ if and only if m ∈ ker(σ). Thus, W = ℜ(σ). Finally, by theorem 2.7, rank((Id + F)ρ(Id + F)) ≥ r/2.

Corollary 3.5. If ρ ∈ M_k ⊗ M_k ≃ M_{k^2} is separable and r is the marginal rank of ρ + FρF then rank((Id + F)ρ(Id + F)) ≥ max{ (2/r) rank((Id − F)ρ(Id − F)), r/2 }.

Proof. By the range criterion ([2]), ℜ(ρ) has a generating subset formed by tensors with tensor rank 1. Now, use theorem 3.4.

Example 3.6. Let B ∈ M_k ⊗ M_k ≃ M_{k^2} be a positive semidefinite Hermitian matrix with rank smaller than k − 1. The matrix ρ = B + (Id − F) is not separable, since rank((Id + F)ρ(Id + F)) = rank((Id + F)B(Id + F)) < k − 1 = (2/k) rank((Id − F)ρ(Id − F)).

Corollary 3.7. For every k, there is a separable matrix ρ ∈ M_k ⊗ M_k such that the marginal rank of ρ + FρF is k and rank((Id + F)ρ(Id + F)) = k − 1 = (2/k) rank((Id − F)ρ(Id − F)). Therefore the inequality of theorem 3.4 is sharp.
Proof. Let V ⊂ C^k ⊗ C^k be the vector space described in theorem 2.6. Let {v_1 ⊗ w_1, ..., v_m ⊗ w_m} be a basis for V and consider the separable matrix ρ = Σ_{i=1}^m v_i v_i^t ⊗ w_i w_i^t.

Notice that (Id + F)ρ(Id + F) = Σ_{i=1}^m s_i s_i^t, where s_i = v_i ⊗ w_i + w_i ⊗ v_i. Furthermore, {s_1, ..., s_m} is a generating subset for V_S and for the image of (Id + F)ρ(Id + F). So rank((Id + F)ρ(Id + F)) = dim(V_S). Analogously, we have rank((Id − F)ρ(Id − F)) = dim(V_A). Thus, by theorem 2.6,

rank((Id + F)ρ(Id + F)) = k − 1 = (2/k) rank((Id − F)ρ(Id − F)).

Notice that (2/k) rank((Id − F)ρ(Id − F)) = rank((Id + F)ρ(Id + F)) ≥ (2/r) rank((Id − F)ρ(Id − F)), where r is the marginal rank of ρ + FρF, by theorem 3.4. Thus, r ≥ k. Since ρ + FρF ∈ M_k ⊗ M_k, then r ≤ k. Therefore, r = k.

4. A Gap for PPT Entanglement

We saw in corollary 3.5 that if ρ ∈ M_k ⊗ M_k is separable then

rank((Id + F)ρ(Id + F)) ≥ max{ (2/r) rank((Id − F)ρ(Id − F)), r/2 },

where r is the marginal rank of ρ + FρF. Since every separable matrix (definition 3.1) is positive under partial transposition (definition 4.1), it is natural to ask if the inequality above holds for PPT matrices. We cannot answer this question, but in this section we obtain the following partial results:

• This inequality holds for any PPT matrix ρ ∈ M_k ⊗ M_k such that r ≤ 3 (corollary 4.6).
• If ρ ∈ M_k ⊗ M_k ≃ M_{k^2} is PPT and rank((Id + F)ρ(Id + F)) = 1 then r ≤ 2 and ρ is separable (theorem 4.5).

The first result follows from the second. The second is a very technical result, which requires a theorem from Perron-Frobenius theory ([12, Theorem 2.5]).

There is a possibility that a PPT matrix ρ satisfying

1 < rank((Id + F)ρ(Id + F)) < (2/r) rank((Id − F)ρ(Id − F))

exists. In this case ρ is entangled. So this is a gap where we can look for PPT entanglement. In the next section, we provide several non-trivial examples of PPT matrices ρ ∈ M_k ⊗ M_k ≃ M_{k^2} such that rank((Id + F)ρ(Id + F)) ≥ r ≥ (2/(r−1)) rank((Id − F)ρ(Id − F)).

Definition 4.1. (PPT matrices) Let A = Σ_{i=1}^n A_i ⊗ B_i ∈ M_k ⊗ M_m ≃ M_{km} be a positive semidefinite Hermitian matrix. We say that A is positive under partial transposition, or simply PPT, if A^{t_2} = (Id ⊗ (·)^t)(A) = Σ_{i=1}^n A_i ⊗ B_i^t is positive semidefinite.

Definition 4.2. Let V : M_k → C^k ⊗ C^k be defined by V(Σ_{i=1}^n a_i b_i^t) = Σ_{i=1}^n a_i ⊗ b_i and let R : M_k ⊗ M_k → M_k ⊗ M_k be defined by R(Σ_{i=1}^n A_i ⊗ B_i) = Σ_{i=1}^n V(A_i) V(B_i)^t, where V(A_i) ∈ C^k ⊗ C^k is a column vector and V(B_i)^t is a row vector. This map R : M_k ⊗ M_k → M_k ⊗ M_k is usually called the "realignment map" (see [3–5]).
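In index form the realignment map simply regroups the four indices of ρ, since R(ab^t ⊗ cd^t) = (a ⊗ b)(c ⊗ d)^t. A minimal sketch, assuming NumPy (the helper name is ours), implements R this way and checks the two identities of remark 4.4 below, R(Id) = uu^t and R(uu^t) = Id.

```python
import numpy as np

def realign(rho, k):
    """Realignment map of definition 4.2: R(a b^t (x) c d^t) = (a (x) b)(c (x) d)^t."""
    r = rho.reshape(k, k, k, k)
    return r.transpose(0, 2, 1, 3).reshape(k * k, k * k)

k = 3
u = np.eye(k).reshape(k * k)                                   # u = sum_i e_i (x) e_i
print(np.allclose(realign(np.eye(k * k), k), np.outer(u, u)))  # R(Id) = u u^t
print(np.allclose(realign(np.outer(u, u), k), np.eye(k * k)))  # R(u u^t) = Id
```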
Lemma 4.3. (Properties of the realignment map) Let R : M_n ⊗ M_n → M_n ⊗ M_n be the realignment map of definition 4.2 and F ∈ M_n ⊗ M_n the flip operator of definition 2.1. Let v_i, w_i ∈ C^n ⊗ C^n and C ∈ M_n ⊗ M_n. Then,
(1) R(Σ_{i=1}^m v_i w_i^t) = Σ_{i=1}^m V^{-1}(v_i) ⊗ V^{-1}(w_i),
(2) R(CF)F = C^{t_2},
(3) R(CF) = R(C)^{t_2},
(4) R(C^{t_2}) = R(C)F,
(5) R(C^{t_2}) = (CF)^{t_2}.

Proof. See ([11, Lemma 23]) for items 1 to 4. For the last item, notice that it is sufficient to prove the formula for C = ab^t ⊗ cd^t, where {a, b, c, d} ⊂ C^k, and the proof is straightforward.

Remark 4.4. Let u = Σ_{i=1}^n e_i ⊗ e_i ∈ C^n ⊗ C^n, where {e_1, ..., e_n} is the canonical basis of C^n. Observe that Id = Σ_{i,j=1}^n e_i e_i^t ⊗ e_j e_j^t, uu^t = Σ_{i,j=1}^n e_i e_j^t ⊗ e_i e_j^t, R(Id) = Σ_{i,j=1}^n V(e_i e_i^t) V(e_j e_j^t)^t = Σ_{i,j=1}^n e_i e_j^t ⊗ e_i e_j^t = uu^t and R(uu^t) = Σ_{i,j=1}^n V(e_i e_j^t) V(e_i e_j^t)^t = Σ_{i,j=1}^n e_i e_i^t ⊗ e_j e_j^t = Id.

Theorem 4.5. Let ρ ∈ M_k ⊗ M_k ≃ M_{k^2} be a positive semidefinite Hermitian matrix, Id ∈ M_k ⊗ M_k ≃ M_{k^2} the identity and F ∈ M_k ⊗ M_k the flip operator. Suppose the rank of (Id + F)ρ(Id + F) is 1. If ρ is positive under partial transposition then the marginal rank of ρ + FρF is smaller than or equal to 2 and ρ is separable.

Proof. First, notice that (Id + F)ρ(Id + F) = ww*, for some w ∈ (C^k ⊗ C^k)_S \ {0}. Define B = 2(ρ + FρF). Since ρ is PPT then B is also PPT. Moreover,

B = 2(ρ + FρF) = (Id + F)ρ(Id + F) + (Id − F)ρ(Id − F) = ww* + Σ_{j=1}^m b_j b_j*,

where w ∈ (C^k ⊗ C^k)_S and b_j ∈ (C^k ⊗ C^k)_A, 1 ≤ j ≤ m.

First, let n be the tensor rank of w and let us show that n is smaller than or equal to 2. Since w ∈ (C^k ⊗ C^k)_S, there are linearly independent vectors s_1, ..., s_n in C^k such that w = Σ_{i=1}^n s_i ⊗ s_i. Let M_{n×k} be the set of complex matrices with n rows and k columns and let T ∈ M_{n×k} be such that Ts_i = e_i, where e_1, ..., e_n is the canonical basis of C^n.

Notice that C = (T ⊗ T)B(T* ⊗ T*) ∈ M_n ⊗ M_n is also PPT and C = uu^t + Σ_{j=1}^m a_j a_j*, where u = (T ⊗ T)w = Σ_{i=1}^n e_i ⊗ e_i and a_j = (T ⊗ T)b_j ∈ (C^n ⊗ C^n)_A, 1 ≤ j ≤ m.

Let R : M_n ⊗ M_n → M_n ⊗ M_n be the realignment map (definition 4.2). Notice that R(C) = Id + Σ_{j=1}^m A_j ⊗ Ā_j ∈ M_n ⊗ M_n, where Id = R(uu^t) and A_j ⊗ Ā_j = V^{-1}(a_j) ⊗ V^{-1}(ā_j), by remark 4.4 and by property 1 in lemma 4.3. Moreover, each A_j = V^{-1}(a_j) is a complex antisymmetric matrix, since a_j ∈ (C^n ⊗ C^n)_A. Thus, R(C)^{t_2} = Id − Σ_{j=1}^m A_j ⊗ Ā_j.

Let A_j = A'_j + iA''_j, where A'_j, A''_j are real antisymmetric matrices in M_n. By properties 2 and 3 in lemma 4.3, we have C^{t_2} = (R(C)^{t_2})F. Furthermore, since A_j ⊗ Ā_j = A'_j ⊗ A'_j + A''_j ⊗ A''_j + i(A''_j ⊗ A'_j − A'_j ⊗ A''_j), then

C^{t_2} = (R(C)^{t_2})F = (Id − Σ_{j=1}^m (A'_j ⊗ A'_j + A''_j ⊗ A''_j))F − i(Σ_{j=1}^m (A''_j ⊗ A'_j − A'_j ⊗ A''_j))F.

Notice that C^{t_2} = P + iQ, where

P = (Id − Σ_{j=1}^m (A'_j ⊗ A'_j + A''_j ⊗ A''_j))F   and   Q = −(Σ_{j=1}^m (A''_j ⊗ A'_j − A'_j ⊗ A''_j))F

are real matrices, since F is also a real matrix. Since C is PPT, C^{t_2} is a positive semidefinite Hermitian matrix. Therefore, P = (Id − Σ_{j=1}^m (A'_j ⊗ A'_j + A''_j ⊗ A''_j))F is also a positive semidefinite symmetric matrix.

Next, notice that L(X) = Σ_{j=1}^m (A'_j X A'_j{}^t + A''_j X A''_j{}^t) is a positive map acting on M_n. By ([12, Theorem 2.5]), there is a positive semidefinite Hermitian matrix Y, which is an eigenvector associated to the spectral radius of L. Write Y = S' + iA', where S' is a real symmetric matrix and A' is a real antisymmetric matrix.
Notice that S' ≠ 0 and that the sets of symmetric and antisymmetric matrices are left invariant by L. Thus, S' is also an eigenvector of L associated to the spectral radius.

Now, V ∘ L ∘ V^{-1}(v) = (Σ_{j=1}^m (A'_j ⊗ A'_j + A''_j ⊗ A''_j))v, for every v ∈ C^n ⊗ C^n, where V is defined in definition 4.2. Therefore, there exists a symmetric tensor V(S') = s' ∈ (C^n ⊗ C^n)_S such that

(Σ_{j=1}^m (A'_j ⊗ A'_j + A''_j ⊗ A''_j)) s' = λ s',

where λ is the spectral radius of this matrix. Notice that Σ_{j=1}^m (A'_j ⊗ A'_j + A''_j ⊗ A''_j) is a real symmetric matrix, so the spectral radius is the biggest eigenvalue. So

P s' = (Id − Σ_{j=1}^m (A'_j ⊗ A'_j + A''_j ⊗ A''_j))F s' = (1 − λ) s'.

Since P is a positive semidefinite symmetric matrix, 1 − λ ≥ 0. Therefore, the biggest eigenvalue of Σ_{j=1}^m (A'_j ⊗ A'_j + A''_j ⊗ A''_j) is smaller than or equal to 1 and

PF = Id − Σ_{j=1}^m (A'_j ⊗ A'_j + A''_j ⊗ A''_j)

is also a positive semidefinite symmetric matrix. Notice that, since P and PF are positive semidefinite, then P = PF. By properties 2 and 4 in lemma 4.3, we have (PF)^{t_2} = P^{t_2} = R(PF)F = R((PF)^{t_2}). Therefore,

Id + Σ_{j=1}^m (A'_j ⊗ A'_j + A''_j ⊗ A''_j) = (PF)^{t_2} = R((PF)^{t_2}) = uu^t + Σ_{j=1}^m ((a'_j)(a'_j)^t + (a''_j)(a''_j)^t),     (4.1)

where R(Id) = uu^t, R(A'_j ⊗ A'_j) = V(A'_j)V(A'_j)^t = (a'_j)(a'_j)^t and R(A''_j ⊗ A''_j) = V(A''_j)V(A''_j)^t = (a''_j)(a''_j)^t. From equation (4.1), we have

2Id − uu^t − Σ_{j=1}^m ((a'_j)(a'_j)^t + (a''_j)(a''_j)^t) = Id − Σ_{j=1}^m (A'_j ⊗ A'_j + A''_j ⊗ A''_j) = PF.

Notice that a'_j = V(A'_j) ∈ (C^n ⊗ C^n)_A and a''_j = V(A''_j) ∈ (C^n ⊗ C^n)_A, for every j. Therefore, (a'_j)^t u = (a''_j)^t u = 0 and PFu = (2 − u^t u)u. Now, u^t u = n and n is the tensor rank of w. Since PF is a positive semidefinite symmetric matrix, the tensor rank of w is smaller than or equal to 2.

Second, recall that B = ww* + Σ_{j=1}^m b_j b_j* is PPT, w ∈ (C^k ⊗ C^k)_S and b_j ∈ (C^k ⊗ C^k)_A, 1 ≤ j ≤ m. Let us consider two cases.

Case I: The tensor rank of w is 2.

Let v_1, v_2 be linearly independent vectors in C^k such that w = v_1 ⊗ v_1 + v_2 ⊗ v_2. Let M ∈ M_k be such that ker(M) = span{v_1 + iv_2}. Notice that (M ⊗ M)w = 0. Thus, (M ⊗ M)B(M* ⊗ M*) = Σ_{j=1}^m c_j c_j*, where c_j = (M ⊗ M)b_j ∈ (C^k ⊗ C^k)_A. Notice that (M ⊗ M)B(M* ⊗ M*) is PPT, therefore

0 ≤ tr(((M ⊗ M)B(M* ⊗ M*))^{t_2} vv^t) = tr((Σ_{j=1}^m c_j c_j*)^{t_2} vv^t) = tr(Σ_{j=1}^m c_j c_j* (vv^t)^{t_2}),
for every v ∈ C^k ⊗ C^k. If we choose v_0 = Σ_{i=1}^k f_i ⊗ f_i, where {f_1, ..., f_k} is the canonical basis of C^k, then (v_0 v_0^t)^{t_2} = F (definition 2.1). So

0 ≤ tr((Σ_{j=1}^m c_j c_j*)^{t_2} v_0 v_0^t) = tr(Σ_{j=1}^m c_j c_j* F) = −tr(Σ_{j=1}^m c_j c_j*) ≤ 0,
since c_j* F = −c_j* (since c_j ∈ (C^k ⊗ C^k)_A). Thus, tr(Σ_{j=1}^m c_j c_j*) = 0 and every c_j = 0. Thus, b_j ∈ ker(M ⊗ M) ∩ (C^k ⊗ C^k)_A = (v_1 + iv_2) ⊗ C^k − C^k ⊗ (v_1 + iv_2), for every j.

Now, let M' ∈ M_k be such that ker(M') = span{v_1 − iv_2}. Notice that (M' ⊗ M')w = 0. We can replace M by M' in the argument above to obtain

b_j ∈ ((v_1 + iv_2) ⊗ C^k − C^k ⊗ (v_1 + iv_2)) ∩ ((v_1 − iv_2) ⊗ C^k − C^k ⊗ (v_1 − iv_2)) = span{(v_1 + iv_2) ∧ (v_1 − iv_2)} = span{v_1 ∧ v_2},

for every j. Therefore, Σ_{j=1}^m b_j b_j* = ss*, where s ∈ span{v_1 ∧ v_2}, and B = ww* + ss*.

Case II: The tensor rank of w is 1.

Let w = v_1 ⊗ v_1. Let N ∈ M_k be such that ker(N) = span{v_1}. Notice that (N ⊗ N)w = 0. We can repeat the argument above to conclude that b_j ∈ ker(N ⊗ N) ∩ (C^k ⊗ C^k)_A = v_1 ⊗ C^k − C^k ⊗ v_1.

Next, if there is l ∈ {1, ..., m} such that b_l ≠ 0, then there exists v_3 ∈ C^k such that b_l = v_1 ∧ v_3 and v_1, v_3 are linearly independent. Let O ∈ M_{2×k} be such that Ov_1 = g_1 and Ov_3 = g_2, where {g_1, g_2} is the canonical basis of C^2. Thus, (O ⊗ O)w = g_1 ⊗ g_1, (O ⊗ O)b_l = g_1 ∧ g_2 and (O ⊗ O)b_j ∈ (C^2 ⊗ C^2)_A = span{g_1 ∧ g_2}, for 1 ≤ j ≤ m. Therefore, the image of (O ⊗ O)B(O* ⊗ O*) is generated by g_1 ⊗ g_1 and g_1 ∧ g_2. So the only tensors with tensor rank 1 in this image are the multiples of g_1 ⊗ g_1, and (O ⊗ O)B(O* ⊗ O*) ∈ M_2 ⊗ M_2 is not separable by the range criterion (see [2]). This is a contradiction, since (O ⊗ O)B(O* ⊗ O*) is PPT and every PPT matrix in M_2 ⊗ M_2 is separable (see [13]). Therefore, every b_j = 0 and B = ww*.

Finally, in both cases (B = ww* + ss* or B = ww*) the ranges of the marginal matrices of B = 2(ρ + FρF) are subspaces of span{v_1, v_2}. Thus, the marginal rank of ρ + FρF is smaller than or equal to 2. Hence, the marginal ranks of ρ are also smaller than or equal to 2. Since ρ is PPT, ρ is separable, by the Horodecki theorem (see [13]).

Corollary 4.6. Let ρ ∈ M_k ⊗ M_k ≃ M_{k^2} be a positive semidefinite Hermitian matrix, Id ∈ M_k ⊗ M_k ≃ M_{k^2} the identity and F ∈ M_k ⊗ M_k the flip operator. If ρ is positive under partial transposition, r is the marginal rank of ρ + FρF and r ≤ 3, then

rank((Id + F)ρ(Id + F)) ≥ max{ (2/r) rank((Id − F)ρ(Id − F)), r/2 }.

Proof. If r ≤ 2 then the marginal ranks of ρ are also smaller than or equal to 2. Since ρ is PPT, ρ is separable, by the Horodecki theorem (see [13]). The result follows by theorem 3.4.

Now, if r = 3 then rank((Id + F)ρ(Id + F)) ≥ 2, by theorem 4.5. Since rank((Id − F)ρ(Id − F)) = rank((Id − F)(ρ + FρF)(Id − F)) ≤ dim((C^3 ⊗ C^3)_A) = (3 × 2)/2 = 3, then rank((Id + F)ρ(Id + F)) ≥ max{ (2/3) rank((Id − F)ρ(Id − F)), 3/2 }.
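Whether a given matrix falls in the gap just described is a finite computation. The following sketch (assuming NumPy; all helper names are ours and are repeated here so the block is self-contained) computes r, the two ranks and the PPT property, and reports whether ρ satisfies 1 < rank((Id + F)ρ(Id + F)) < (2/r) rank((Id − F)ρ(Id − F)).

```python
import numpy as np

def flip(k):
    F = np.zeros((k * k, k * k))
    I = np.eye(k)
    for i in range(k):
        for j in range(k):
            F += np.kron(np.outer(I[i], I[j]), np.outer(I[j], I[i]))
    return F

def partial_transpose(rho, k):
    r = rho.reshape(k, k, k, k)
    return r.transpose(0, 3, 2, 1).reshape(k * k, k * k)

def marginal_A(rho, k):
    return np.einsum('ijkj->ik', rho.reshape(k, k, k, k))

def gap_report(rho, k, tol=1e-8):
    """Return (is_PPT, rank_sym, rank_anti, r, lies_in_gap) for rho in M_k (x) M_k."""
    F, Id = flip(k), np.eye(k * k)
    rank_sym = np.linalg.matrix_rank((Id + F) @ rho @ (Id + F), tol=tol)
    rank_anti = np.linalg.matrix_rank((Id - F) @ rho @ (Id - F), tol=tol)
    r = np.linalg.matrix_rank(marginal_A(rho + F @ rho @ F, k), tol=tol)
    is_ppt = np.min(np.linalg.eigvalsh(partial_transpose(rho, k))) > -tol
    in_gap = is_ppt and 1 < rank_sym < (2 / r) * rank_anti
    return is_ppt, rank_sym, rank_anti, r, in_gap

k = 3
print(gap_report(np.eye(k * k), k))   # the (separable) identity gives (True, 6, 3, 3, False): no gap
```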
5. SPC matrices

Let r be the marginal rank of ρ + FρF. Here, we provide several examples of PPT matrices ρ ∈ M_k ⊗ M_k such that rank((Id + F)ρ(Id + F)) ≥ r ≥ (2/(r−1)) rank((Id − F)ρ(Id − F)) (see corollary 5.6). So it is not trivial to find PPT entanglement in the gap discussed at the beginning of the last section. The main result of this section is the following: if ρ is PPT and ρ + FρF is symmetric with positive coefficients (definition 5.2) then rank((Id + F)ρ(Id + F)) ≥ r ≥ (2/(r−1)) rank((Id − F)ρ(Id − F)) (theorem 5.4).

Definition 5.1. A decomposition of a matrix A ∈ M_k ⊗ M_m, A = Σ_{i=1}^n λ_i γ_i ⊗ δ_i, is a Schmidt decomposition if {γ_i | 1 ≤ i ≤ n} ⊂ M_k and {δ_i | 1 ≤ i ≤ n} ⊂ M_m are orthonormal sets with respect to the trace inner product, λ_i ∈ R and λ_i > 0. Also, if γ_i and δ_i are Hermitian matrices for every i then Σ_{i=1}^n λ_i γ_i ⊗ δ_i is a Hermitian Schmidt decomposition of A.
Definition 5.2. (SPC matrices) Let A ∈ M_k ⊗ M_k ≃ M_{k^2} be a positive semidefinite Hermitian matrix. We say that A is symmetric with positive coefficients, or simply SPC, if A has a symmetric Hermitian Schmidt decomposition with positive coefficients: A = Σ_{i=1}^n λ_i γ_i ⊗ γ_i, with λ_i > 0, for every i.

Remark 5.3. The following description of SPC matrices can be found in ([11, Corollary 25]): A ∈ M_k ⊗ M_k ≃ M_{k^2} is SPC if and only if A and R(A^{t_2}) are positive semidefinite Hermitian matrices, where R is the realignment map (definition 4.2).
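Remark 5.3 makes the SPC property directly computable. Below is a minimal sketch, assuming NumPy and re-using the realignment and partial-transpose helpers from earlier (repeated so the block runs on its own); the example matrix is our own and is built so that both tests should pass.

```python
import numpy as np

def partial_transpose(a, k):
    r = a.reshape(k, k, k, k)
    return r.transpose(0, 3, 2, 1).reshape(k * k, k * k)

def realign(a, k):
    r = a.reshape(k, k, k, k)
    return r.transpose(0, 2, 1, 3).reshape(k * k, k * k)

def is_psd(a, tol=1e-10):
    return np.min(np.linalg.eigvalsh((a + a.conj().T) / 2)) > -tol

def is_spc(a, k):
    """Remark 5.3: A is SPC iff A and R(A^{t_2}) are positive semidefinite."""
    return is_psd(a) and is_psd(realign(partial_transpose(a, k), k))

k = 3
rng = np.random.default_rng(4)
A = np.zeros((k * k, k * k), dtype=complex)
for _ in range(4):                   # A = sum_i lambda_i gamma_i (x) gamma_i with PSD gamma_i
    h = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
    g = h @ h.conj().T               # a positive semidefinite Hermitian gamma_i
    A += rng.uniform(0.1, 1.0) * np.kron(g, g)
print(is_spc(A, k))                  # True
```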
Theorem 5.4. If ρ ∈ M_k ⊗ M_k ≃ M_{k^2} is a PPT matrix, ρ + FρF is a SPC matrix and r is the marginal rank of ρ + FρF, then (Id + F)ρ(Id + F) is also a PPT matrix and

rank((Id + F)ρ(Id + F)) ≥ r ≥ (2/(r−1)) rank((Id − F)ρ(Id − F)).

Proof. Let C = ρ + FρF, ξ = (1/2)(Id + F)ρ(Id + F) and η = (1/2)(Id − F)ρ(Id − F). Notice that ξ, η are positive semidefinite Hermitian matrices and C = ξ + η. In order to prove this theorem, we must show the following two facts:
(1) rank(ξ^A) = rank(C^A) = r,
(2) ξ is PPT.

By remark 3.3, C^A = ρ^A + ρ^B = C^B. So ℜ(C^A) = ℜ(C^B). Again, by remark 3.3, ℜ(C) ⊂ ℜ(C^A) ⊗ ℜ(C^B) = ℜ(C^A) ⊗ ℜ(C^A). Thus, ℜ(η) ⊂ ℜ(C) ∩ (C^k ⊗ C^k)_A ⊂ (ℜ(C^A) ⊗ ℜ(C^A))_A. So rank(η) ≤ r(r−1)/2.

By item (2), ξ is PPT. Therefore, by ([7, Theorem 1]), rank(ξ) ≥ rank(ξ^A). Finally, by item (1), rank(ξ) ≥ r ≥ (2/(r−1)) rank(η).

Proof of (1): Since ξ^A + η^A = C^A and ξ^A, η^A are positive semidefinite (remark 3.3), then rank(ξ^A) ≤ rank(C^A). Next, if v ∈ ker(ξ^A) then 0 = tr(ξ^A vv^t) = tr(ξ(vv^t ⊗ Id)). Thus, tr(ξ(vv^t ⊗ vv^t)) = 0, since ξ is positive semidefinite. Since v ⊗ v ∈ (C^k ⊗ C^k)_S ⊂ ker(η), tr(η(vv^t ⊗ vv^t)) = 0. So tr(C(vv^t ⊗ vv^t)) = 0.

By hypothesis, C is a SPC matrix, therefore C = Σ_{i=1}^m λ_i γ_i ⊗ γ_i, where γ_i is Hermitian and λ_i > 0, for 1 ≤ i ≤ m (see definition 5.2). Thus,

tr(C(vv^t ⊗ vv^t)) = Σ_{i=1}^m λ_i tr(γ_i vv^t)^2 = 0.

Hence, tr(γ_i vv^t) = 0, for 1 ≤ i ≤ m, since λ_i > 0, for 1 ≤ i ≤ m. Since C^A is positive semidefinite, C^A = Σ_{i=1}^m λ_i tr(γ_i)γ_i and tr(C^A vv^t) = 0, then v ∈ ker(C^A). Therefore, rank(C^A) ≤ rank(ξ^A) and rank(C^A) = rank(ξ^A).
Proof of (2): Since ξF = ξ and ηF = −η, then 2ξ = C + CF. Thus, 2ξ^{t_2} = C^{t_2} + (CF)^{t_2} = C^{t_2} + R(C^{t_2}), by item 5 in lemma 4.3. Now, C^{t_2} is positive semidefinite, since ρ and FρF are PPT, and R(C^{t_2}) is positive semidefinite, by remark 5.3. Therefore, ξ is PPT.

Remark 5.5. The next two examples show that the hypothesis that ρ + FρF is SPC cannot be dropped in theorem 5.4. The first example is the separable matrix ρ ∈ M_k ⊗ M_k of corollary 3.7, which satisfies rank((Id + F)ρ(Id + F)) = (2/k) rank((Id − F)ρ(Id − F)) < (2/(k−1)) rank((Id − F)ρ(Id − F)). The second example is the matrix ρ = B + C, where B = k(Σ_{i=1}^k e_i e_i^t ⊗ e_i e_i^t) − uu^t, u = Σ_{i=1}^k e_i ⊗ e_i, {e_1, ..., e_k} is the canonical basis of C^k and C = Id − F. This matrix ρ is a positive semidefinite Hermitian matrix and is invariant under partial transposition, since the partial transposition of F is uu^t, and vice versa; therefore ρ is PPT. Notice that rank((Id + F)ρ(Id + F)) = rank(B) = k − 1 and rank((Id − F)ρ(Id − F)) = rank(C) = k(k−1)/2. Thus, rank((Id + F)ρ(Id + F)) = (2/k) rank((Id − F)ρ(Id − F)) < (2/(k−1)) rank((Id − F)ρ(Id − F)).

Corollary 5.6. Let ρ ∈ M_k ⊗ M_k ≃ M_{k^2} be a PPT and SPC matrix. Then rank((Id + F)ρ(Id + F)) ≥ r ≥ (2/(r−1)) rank((Id − F)ρ(Id − F)), where r is the marginal rank of ρ + FρF.

Proof. Since ρ is SPC, ρ = Σ_{i=1}^m λ_i γ_i ⊗ γ_i, by definition 5.2. Thus, FρF = ρ and ρ + FρF = 2ρ is SPC. Now, use theorem 5.4.

Acknowledgements. The author was supported by CNPq-Brazil Grant 245277/2012-9. The author would like to thank Professor Otfried Gühne for useful discussions and the referee for providing constructive comments and helping in the improvement of this manuscript.

References

[1] A. Peres, Separability criterion for density matrices, Physical Review Letters 77 (1996), no. 8, 1413.
[2] P. Horodecki, Separability criterion and inseparable mixed states with positive partial transposition, Physics Letters A 232 (1997), no. 5, 333–339.
[3] K. Chen and L.-A. Wu, A matrix realignment method for recognizing entanglement, Quantum Information & Computation 3 (2003), no. 3, 193–202.
[4] O. Rudolph, Computable cross-norm criterion for separability, Letters in Mathematical Physics 70 (2004), no. 1, 57–64.
[5] O. Rudolph, Further results on the cross norm criterion for separability, Quantum Information Processing 4 (2005), no. 3, 219–239.
[6] C. H. Bennett, D. P. DiVincenzo, T. Mor, P. W. Shor, J. A. Smolin, and B. M. Terhal, Unextendible product bases and bound entanglement, Physical Review Letters 82 (1999), no. 26, 5385.
[7] P. Horodecki, J. A. Smolin, B. M. Terhal, and A. V. Thapliyal, Rank two bipartite bound entangled states do not exist, Theoretical Computer Science 292 (2003), no. 3, 589–596.
[8] G. Tóth and O. Gühne, Separability criteria and entanglement witnesses for symmetric quantum states, Applied Physics B 98 (2010), no. 4, 617–622.
[9] D. Cariello, Separability for weakly irreducible matrices, Quantum Information & Computation 14 (2014), no. 15-16, 1308–1337.
[10] D. Cariello, Does symmetry imply PPT property?, Quantum Information & Computation 15 (2015), no. 9-10, 812–824.
[11] D. Cariello, Completely reducible maps in Quantum Information Theory, IEEE Transactions on Information Theory 62 (2016), no. 4, 1721–1732.
[12] D. E. Evans and R. Høegh-Krohn, Spectral properties of positive maps on C*-algebras, Journal of the London Mathematical Society 2 (1978), no. 2, 345–355.
[13] M. Horodecki, P. Horodecki, and R. Horodecki, Separability of mixed states: necessary and sufficient conditions, Physics Letters A 223 (1996), no. 1, 1–8.
[14] J. M. Leinaas, J. Myrheim, and P. Ø. Sollid, Numerical studies of entangled positive-partial-transpose states in composite quantum systems, Physical Review A 81 (2010), no. 6, 062329.
[15] J. Tura, R. Augusiak, P. Hyllus, M. Kuś, J. Samsonowicz, and M. Lewenstein, Four-qubit entangled symmetric states with positive partial transpositions, Physical Review A 85 (2012), no. 6, 060302.
[16] R. Augusiak, J. Tura, J. Samsonowicz, and M. Lewenstein, Entangled symmetric states of N qubits with all positive partial transpositions, Physical Review A 86 (2012), no. 4, 042316.

Faculdade de Matemática, Universidade Federal de Uberlândia, 38.400-902 Uberlândia, Brazil.
E-mail address: [email protected]

Departamento de Análisis Matemático, Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, Plaza de Ciencias 3, 28040 Madrid, Spain.
E-mail address: [email protected]