Checking strict positivity of Kraus maps is NP-hard

Information Processing Letters 118 (2017) 35–43
www.elsevier.com/locate/ipl
Stéphane Gaubert (INRIA and CMAP UMR 7641 CNRS, École Polytechnique, 91128 Palaiseau Cedex, France)
Zheng Qu (Department of Mathematics, University of Hong Kong, Hong Kong)

Article history: Received 17 April 2014; Received in revised form 5 June 2016; Accepted 12 September 2016; Available online 14 September 2016; Communicated by A. Muscholl.
Keywords: Computational complexity; Algorithms; Analysis of algorithms

Abstract. Basic properties in Perron–Frobenius theory are positivity, primitivity, and irreducibility. Whereas these properties can be checked in polynomial time for stochastic matrices, we show that for Kraus maps – the noncommutative generalization of stochastic matrices – checking positivity is NP-hard. This is in contrast with irreducibility and primitivity, which we show to be checkable in strongly polynomial time for completely positive maps – the noncommutative generalization of nonnegative matrices. As an intermediate result, we get that the bilinear feasibility problem over Q is NP-hard. © 2016 Elsevier B.V. All rights reserved.

✩ This work was partially supported by the PGMO (Gaspard Monge) Program of FMJH (Fondation Mathématique Jacques Hadamard) and EDF. It was carried out when the second author was with CMAP, École Polytechnique and INRIA, supported by a doctoral fellowship of École Polytechnique. Corresponding author: Z. Qu. E-mail addresses: [email protected] (S. Gaubert), [email protected] (Z. Qu). http://dx.doi.org/10.1016/j.ipl.2016.09.008

1. Introduction

Irreducibility, primitivity, and strict positivity are basic structural notions of Perron–Frobenius theory [3]. Recall that a linear map A leaving invariant a (closed, convex, and pointed) cone C of a vector space is said to be strictly positive if it sends the cone to its interior; primitive if it has a power that is strictly positive; and irreducible if it does not leave invariant a nontrivial face of the cone. These notions help one to determine spectral or dynamical properties of the map. In particular, the strongest of the above notions, strict positivity, entails the strict contraction of A with respect to Hilbert's projective metric (see Birkhoff's theorem [2]), and so, the convergence of the rescaled iterates of A to a rank one linear map with a geometric rate. The latter property is of importance in a number of applications, including "consensus theory" for distributed systems or population dynamics [6,26].

It is natural to ask how properties of this nature can be checked for various classes of cones. If C is the standard positive cone of R^n, A can be identified with a nonnegative n-by-n matrix. Then, strict positivity, primitivity, and irreducibility can be easily checked. Indeed, a nonnegative matrix A is strictly positive if and only if all its entries are positive. Moreover, it is known that A is primitive if and only if A^{n^2 − 2n + 2} is strictly positive [16]. Finally, the matrix A is irreducible if and only if the associated directed graph is strongly connected. Note also that an efficient combinatorial algorithm is available to compute the period of an irreducible matrix, which allows one in particular to decide if it is primitive [7]. Therefore, primitivity and irreducibility for nonnegative matrices are equivalent to well known problems of graph theory that can be solved in polynomial time.

Another important class of maps arises when considering the cone of positive semidefinite matrices. Then, the noncommutative analogue of a stochastic matrix is a Kraus map, i.e., a completely positive and trace-preserving map on this cone. Kraus maps are fundamental objects in quantum control and information theory, as they represent quantum channels [19,25]. The notions of irreducibility, strict positivity and primitivity are of importance for Kraus maps, see in particular [11,30,28] and [31,23] for a discussion of the quantum analogue of consensus theory. It is natural to ask whether we can verify these properties for Kraus maps in polynomial time, as in the case of nonnegative matrices.

The irreducibility of a completely positive map can be checked in strongly polynomial time. This follows from a classical lemma of Burnside on matrix algebras combined with a result of Farenick [11]. Moreover, a characterization given by Sanz, Pérez-García, Wolf and Cirac [30] also implies that the primitivity of a Kraus map can be checked in strongly polynomial time. See Corollaries 3.1 and 3.2 below for the derivation of these two facts. Note that in each of these results, we assume that the input – which determines the Kraus map – consists of the Kraus operators.

Our main result, Theorem 5.1, asserts that checking the strict positivity of a Kraus map is NP-hard. It may come as a surprise that strict positivity, which is the simplest property in the case of nonnegative matrices, turns out to be the hardest one in the case of Kraus maps. We first show that the strict positivity of a completely positive map is equivalent to the non-feasibility of the bilinear system given by the associated Kraus operators, or equivalently the non-existence of a rank one matrix in the orthogonal complement of the subspace generated by the Kraus operators, see Lemma 4.1. Then, we prove that every 3SAT problem can be reduced in polynomial time to the problem of checking the feasibility of a bilinear system given by a set of Kraus operators, see Theorem 4.1. Finally, we show Theorem 5.1 by reducing in polynomial time the bilinear system obtained from a 3SAT instance to a unital bilinear system.

We note that several rank minimization problems have been extensively studied in the literature, see for example [12,29,13]. In particular, the problem of finding a matrix of minimal rank in an affine subspace is known to be NP-hard [1,27,8] and hard to approximate [24].
However, here the subspace of matrices is linear instead of affine, and rank minimization in a linear subspace is trivial. Note also that Hillar and Lim [17] showed the NP-hardness of the bilinear feasibility problem over R or C by reducing the graph 3-colorability problem to it. Theorem 4.2 shows that NP-hardness persists if the solution is required to be rational.

2. Irreducibility, primitivity and strict positivity of completely positive maps

Throughout the paper, F denotes a field, which will be either Q, R or C. Unless otherwise specified, vector (sub)spaces are spanned over the field C. The space of n-by-n matrices over the field F is denoted by M_n(F). For a matrix V ∈ M_n(C), V^* denotes the adjoint matrix (conjugate transpose) of V. The space of n-by-n Hermitian matrices is denoted by S_n; the (strict) Loewner order on S_n is denoted by ⪯ (resp. ≺), i.e., A ⪯ B (resp. A ≺ B) if and only if B − A is a positive semidefinite (resp. definite) matrix. The cone of positive semidefinite matrices is denoted by S_n^+. A completely positive map Φ : S_n → S_n is characterized by a family of matrices {V_1, . . . , V_m} ⊂ M_n(C) such that:

Φ(X) := Σ_{i=1}^{m} V_i X V_i^*,   ∀X ∈ S_n.   (1)

We refer the reader to [5] for more information on completely positive maps. In the context of quantum information theory, a completely positive map is said to be a Kraus map if

Σ_{i=1}^{m} V_i^* V_i = I_n.   (2)
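As a concrete illustration (ours, not from the paper), the qubit dephasing channel with Kraus operators V_1 = diag(1, 0) and V_2 = diag(0, 1) satisfies (2); a minimal NumPy sketch, in which the function and variable names are our own:

```python
import numpy as np

# Kraus operators of the qubit dephasing channel (our illustrative choice):
# V1 = diag(1, 0), V2 = diag(0, 1).
V = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

def apply_map(V, X):
    """Evaluate the completely positive map (1): Phi(X) = sum_i V_i X V_i^*."""
    return sum(Vi @ X @ Vi.conj().T for Vi in V)

# Condition (2): sum_i V_i^* V_i = I_n, i.e. Phi is trace preserving.
unital_sum = sum(Vi.conj().T @ Vi for Vi in V)
assert np.allclose(unital_sum, np.eye(2))

# Trace preservation on a sample Hermitian matrix.
X = np.array([[0.7, 0.2], [0.2, 0.3]])
assert np.isclose(np.trace(apply_map(V, X)), np.trace(X))
```

Trace preservation follows from (2) since tr Φ(X) = tr(X Σ_i V_i^* V_i) = tr(X); the assertions above only check this for one sample input.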

Kraus maps represent the effect of quantum operations on the density operators of quantum states, see [19,25]. The matrices V_1, . . . , V_m describing the Kraus map Φ are usually called Kraus operators [18]. By abuse of terminology, the matrices V_1, . . . , V_m defining a completely positive map Φ in (1) will still be called Kraus operators, even when the map is not trace preserving (meaning that condition (2) is not required).

We next recall the definitions of irreducibility, strict positivity and primitivity for completely positive maps. Recall that a face F of S_n^+ is a (closed, convex) cone contained in S_n^+ such that if P ∈ F then Q ∈ F for all 0 ⪯ Q ⪯ P [4,15]. The faces other than {0} and S_n^+ are called nontrivial faces of S_n^+. A face F of S_n^+ is invariant by a completely positive map Φ if Φ(X) ∈ F for every X ∈ F.

Definition 2.1 (Irreducibility [4,11]). The map Φ is irreducible if there is no nontrivial face of S_n^+ invariant by Φ.

Definition 2.2 (Strict positivity [10]). The map Φ is strictly positive if Φ(X) ≻ 0 holds as soon as X ⪰ 0 and X ≠ 0.

The following elementary result provides an alternative formulation which is useful to appreciate the notion of strict positivity. It is not needed in the sequel, however.

Proposition 2.1. The map Φ is strictly positive if and only if there is a positive constant α such that

Φ(X) ⪰ α tr(X) I,   ∀X ∈ S_n^+.   (3)

Proof. Let Δ := {X ∈ S_n^+ | tr(X) = 1}. The map which sends X to the minimal eigenvalue of Φ(X) is continuous. If Φ is strictly positive, this map does not vanish on Δ, and so, by the compactness of Δ, it must be bounded from below by a positive constant α, showing that (3) holds for all X ∈ Δ. By homogeneity, (3) holds for all X ∈ S_n^+. The converse implication is trivial. □

Definition 2.3 (Primitivity [30]). The map Φ is primitive if there is an integer p > 0 such that Φ^p is strictly positive.

We prove a simple lemma which shows that primitivity implies irreducibility.

Lemma 2.1. If a completely positive map Φ is primitive, then Φ is irreducible.

Proof. Suppose that Φ is primitive. By definition, there is an integer p > 0 such that Φ^p is strictly positive. Suppose in addition that Φ is not irreducible. Then, there is a nontrivial face F of S_n^+ invariant by Φ, and, a fortiori, by Φ^p. Using the definition of strict positivity, this implies that F has a nonempty intersection with the interior of S_n^+ and thus F = S_n^+, whence the contradiction. □

We shall study the following four problems.

Problem 2.1 (Irreducibility of completely positive maps). Input: integers n, m, and matrices {V_1, . . . , V_m} ⊂ M_n(C) with rational entries. Question: Is the map Φ defined by (1) irreducible?

Problem 2.2 (Primitivity of completely positive maps). Input: integers n, m, and matrices {V_1, . . . , V_m} ⊂ M_n(C) with rational entries. Question: Is the map Φ defined by (1) primitive?

Problem 2.3 (Strict positivity of completely positive maps). Input: integers n, m, and matrices {V_1, . . . , V_m} ⊂ M_n(C) with rational entries. Question: Is the map Φ defined by (1) strictly positive?

Problem 2.4 (Strict positivity of Kraus maps). Input: integers n, m, and matrices {V_1, . . . , V_m} ⊂ M_n(C) with rational entries, satisfying (2). Question: Is the Kraus map Φ defined by (1) strictly positive?

Recall that an algorithm is said to run in strongly polynomial time if it is a polynomial space algorithm in the standard Turing model and if it performs a number of elementary arithmetic operations polynomially bounded in the number of input numbers, independently of their bit length [14]. In the present setting, there are 2mn^2 input numbers, namely, the real and imaginary parts of the entries of the matrices V_1, . . . , V_m. So a strongly polynomial algorithm performs a number of arithmetic operations which can be bounded polynomially in terms of n and m only. We next show that the first two problems can be solved in strongly polynomial time, whereas the last two are NP-hard.

3. Checking irreducibility and primitivity is strongly polynomial

3.1. Preliminaries

For every integer k, we denote by S_k(V_1, . . . , V_m) the vector space spanned by all the products of exactly k Kraus operators from {V_1, . . . , V_m}:

S_k(V_1, . . . , V_m) := span{V_{i_k} · · · V_{i_1} : i_k, . . . , i_1 ∈ {1, . . . , m}}.

We also denote by D_k(V_1, . . . , V_m) the vector space spanned by all the products of at most k Kraus operators:

D_k(V_1, . . . , V_m) := Σ_{1 ≤ ℓ ≤ k} S_ℓ(V_1, . . . , V_m).

Finally, we denote by

A(V_1, . . . , V_m) = ∪_{k ≥ 1} D_k(V_1, . . . , V_m)

the algebra generated by the Kraus operators {V_1, . . . , V_m}.

Lemma 3.1. There is an integer p ≤ n^2 such that A(V_1, . . . , V_m) = D_p(V_1, . . . , V_m).

Proof. The sequence (D_k(V_1, . . . , V_m))_{k ≥ 1} constitutes a nondecreasing sequence of vector subspaces of M_n(C), which is of dimension n^2. Hence, there is an integer p ≤ n^2 such that D_p(V_1, . . . , V_m) = D_{p+1}(V_1, . . . , V_m). It follows that D_p(V_1, . . . , V_m) is stable by matrix product, and so A(V_1, . . . , V_m) = D_p(V_1, . . . , V_m). □

3.2. Detection of irreducibility

We shall need the following characterization of irreducibility.

Proposition 3.1. The completely positive map Φ given by (1) is irreducible if and only if A(V_1, . . . , V_m) = M_n(C).

Proof. Farenick showed in [11, Theorem 2] that the reducibility of Φ is equivalent to the existence of a nontrivial (other than {0} or C^n) common invariant subspace of all the {V_i}. By Burnside's theorem on matrix algebras (see [22]), the latter property holds if and only if the algebra A(V_1, . . . , V_m) is not the whole space M_n(C). □

Observation 3.1. Without loss of generality, we may assume in the rest of this section that the entries of the Kraus operators {V_1, . . . , V_m} are complex numbers with integer real and imaginary parts. Indeed, we can always multiply each Kraus operator by the product of the denominators of its entries. This transformation can be performed in strongly polynomial time, and it does not change the dimension of the algebra A(V_1, . . . , V_m).

By Proposition 3.1 and Lemma 3.1, to decide if a Kraus map Φ is irreducible, we shall compute the increasing sequence of subspaces D_k(V_1, . . . , V_m), k = 1, 2, . . . , n^2, and look for the first integer k ≤ n^2 such that D_k(V_1, . . . , V_m) = D_{k+1}(V_1, . . . , V_m). For this we shall use Gaussian elimination to compute the dimension of D_k(V_1, . . . , V_m).
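This stabilization procedure can be sketched numerically as follows (an illustrative sketch of ours: floating-point rank computations via NumPy's SVD stand in for the exact rational Gaussian elimination used in the paper, and all names are our own):

```python
import numpy as np

def algebra_dimension(V, n):
    """Dimension of the algebra A(V_1, ..., V_m), obtained by growing the
    nondecreasing chain D_1 <= D_2 <= ... until it stabilizes (Lemma 3.1)."""
    basis = np.array([Vi.flatten() for Vi in V])   # rows span D_1
    dim = np.linalg.matrix_rank(basis)
    for _ in range(n * n):
        # D_{k+1} = span({V_i M : M in D_k} union D_k)
        prods = [(Vi @ M.reshape(n, n)).flatten() for Vi in V for M in basis]
        new_basis = np.vstack([basis, np.array(prods)])
        new_dim = np.linalg.matrix_rank(new_basis)
        if new_dim == dim:                         # chain stabilized
            return dim
        basis, dim = new_basis, new_dim
    return dim

# Proposition 3.1: the map is irreducible iff the algebra is all of M_n(C),
# i.e. iff the dimension equals n^2.
n = 2
V_red = [np.array([[1.0, 1.0], [0.0, 2.0]])]   # triangular: span(e_1) invariant
V_irr = [np.array([[0.0, 1.0], [1.0, 0.0]]), np.diag([2.0, 1.0])]
print(algebra_dimension(V_red, n))             # -> 2 < 4: reducible
print(algebra_dimension(V_irr, n))             # -> 4 = n^2: irreducible
```

For the strongly polynomial bounds of Proposition 3.2 one would replace `matrix_rank` by exact fraction-based Gaussian elimination; the sketch only illustrates the stabilization argument.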
Recall that for any p-by-d complex rational matrix H, Gaussian elimination runs in O(pd min{p, d}) elementary arithmetic operations. Moreover, Edmonds showed that there is a representation scheme for rationals so that the number of digits of the numbers occurring during the run of Gaussian elimination for the matrix H is bounded by a polynomial in ⟨H⟩, the encoding length of H, see [14, Theorem 1.4.8].

Proposition 3.2. For any 1 ≤ k ≤ n^2, a rational basis of the vector space D_k(V_1, . . . , V_m) can be computed in O(kn^6 m) elementary arithmetic operations. The number of digits of the numbers occurring during the computation is bounded by a polynomial in the encoding length of the Kraus operators {V_1, . . . , V_m}.

Proof. For simplicity we write D_k instead of D_k(V_1, . . . , V_m). We proceed recursively as follows. For k ≥ 1, given a rational basis B_k of the space D_k, a rational basis B_{k+1} of the space

D_{k+1} = span{{V_i M : i ∈ {1, . . . , m}, M ∈ B_k} ∪ B_k},   (4)

can be extracted by applying Gaussian elimination to an (m + 1)|B_k|-by-n^2 matrix, each row of which corresponds to a matrix in the set {V_i M : i ∈ {1, . . . , m}, M ∈ B_k} ∪ B_k. The number of elementary arithmetic operations of the process is then bounded by O(|B_k|(m + 1) n^2 min{|B_k|(m + 1), n^2}), and thus by O(n^6 m), because |B_k| = dim(D_k) ≤ n^2. Now, by applying the bound for each k, we deduce the bound O(kn^6 m) on the number of elementary arithmetic operations for computing B_k.

We start from a basis B_1 of D_1 such that B_1 ⊂ {V_1, . . . , V_m}, and let

B_{k+1} ⊂ {V_i M : i ∈ {1, . . . , m}, M ∈ B_k} ∪ B_k,   ∀k ≥ 1.

Then necessarily we have

B_k ⊂ ∪_{j=1}^{k} {V_{i_1} · · · V_{i_j} : i_1, . . . , i_j ∈ {1, . . . , m}}.

It will be convenient to equip the space of matrices with the operator norm induced by the sup norm. This operator norm is defined, for each X = (X_{ij}) ∈ C^{n×n}, by

‖X‖ := sup_{z ∈ C^n \ {0}} ‖Xz‖_∞ / ‖z‖_∞ = max_{1 ≤ i ≤ n} Σ_{1 ≤ j ≤ n} |X_{ij}|,

where ‖z‖_∞ := max_{1 ≤ i ≤ n} |z_i| for z ∈ C^n. We have in particular that ‖XY‖ ≤ ‖X‖ ‖Y‖ holds for all X, Y ∈ C^{n×n}. Moreover, if X has integer entries, it can be checked that

⟨X⟩ = O(n log_2 ‖X‖) and log_2 ‖X‖ = O(⟨X⟩),

where ⟨X⟩ denotes the encoding length of X and the big-O notation hides constants independent of the data. Following Observation 3.1, we assume that the entries of the Kraus operators are integers. Therefore,

⟨V_{i_1} · · · V_{i_k}⟩ ≤ O(n log_2 ‖V_{i_1} · · · V_{i_k}‖) ≤ O(nk max_i log_2 ‖V_i‖) ≤ O(nk max_i ⟨V_i⟩).

It follows that the encoding length of B_k is upper bounded by a polynomial in the encoding length of all the Kraus operators. Then, it follows from [14, Theorem 1.4.8] that computing a basis B_{k+1} by Gaussian elimination can be done in polynomial space, for all 1 ≤ k ≤ n^2. □

The following corollary shows that Problem 2.1 is strongly polynomial.

Corollary 3.1. The irreducibility of a completely positive map (Problem 2.1) can be checked in O(n^8 m) elementary arithmetic operations and using a polynomial space.

Proof. The result follows directly from Proposition 3.2, Proposition 3.1 and Lemma 3.1. □

3.3. Detection of primitivity

We shall need the following characterization of primitivity of completely positive maps, which is a consequence of a "quantum version of Wielandt's inequality" established by Sanz, Pérez-García, Wolf and Cirac for Kraus maps.

Theorem 3.1 (Corollary of [30]). Assume that the completely positive map Φ is irreducible. Then, Φ is primitive if and only if there is q ≤ (n^2 − m + 1)n^2 such that the space S_q(V_1, . . . , V_m) coincides with M_n(C).

Proof. Theorem 1 of [30] shows that if Φ is a Kraus map, then it is primitive if and only if S_q(V_1, . . . , V_m) coincides with M_n(C), for some q ≤ (n^2 − m + 1)n^2. We next show that this implies that the same property holds for all irreducible completely positive maps (not necessarily unital). Since Φ is irreducible, the adjoint map Φ^* defined by

Φ^*(X) := Σ_{i=1}^{m} V_i^* X V_i,   ∀X ∈ S_n,

is also irreducible. This can be seen by applying Proposition 3.1 and using the fact that A(V_1, . . . , V_m) = M_n(C) if and only if A(V_1^*, . . . , V_m^*) = M_n(C). It follows from the Perron–Frobenius theorem ([10, Theorem 2.3]) that the adjoint map Φ^* has an eigenvector A in the cone of positive matrices such that the associated eigenvalue is the spectral radius of Φ, ρ(Φ), i.e.,

Σ_{i=1}^{m} V_i^* A V_i = ρ(Φ) A.   (5)

Now, for all invertible matrices U, define Γ_U(X) := U X U^*. Then, the map Ψ = ρ(Φ)^{−1} Γ_{A^{1/2}} ∘ Φ ∘ Γ_{A^{−1/2}} satisfies

Ψ(X) = Σ_{i=1}^{m} W_i X W_i^*,   with W_i = ρ(Φ)^{−1/2} A^{1/2} V_i A^{−1/2},

and it follows from (5) that it is a Kraus map. Moreover, since S_q(V_1, . . . , V_m) = A^{−1/2} S_q(W_1, . . . , W_m) A^{1/2}, the space S_q(V_1, . . . , V_m) coincides with M_n(C) if and only if S_q(W_1, . . . , W_m) does. □

Corollary 3.2. The primitivity of a completely positive map (Problem 2.2) can be checked in O(n^{10} m) elementary arithmetic operations and using a polynomial space.

Proof. From Lemma 2.1, to determine the primitivity of a completely positive map Φ, we first check irreducibility, which needs O(n^8 m) elementary arithmetic operations and a polynomial space. If Φ is not irreducible, then Φ is not primitive. Otherwise, from Theorem 3.1, we only need to compute the sequence of subspaces S_k(V_1, . . . , V_m) and check the existence of an integer k ≤ (n^2 − m + 1)n^2 such that S_k(V_1, . . . , V_m) = M_n(C). Note that S_k(V_1, . . . , V_m) can also be computed inductively. The conclusion is finally obtained along the lines of the proof of Proposition 3.2. □

Remark 3.1. We showed that Problems 2.1 and 2.2 can be solved in strongly polynomial time. The bottleneck consists in extracting a basis from a family of integer vectors. For this, we relied on Edmonds' modification of Gaussian elimination, which leads to a strongly polynomial bound. In practice, more sophisticated weakly polynomial algorithms may be more efficient. We refer the reader to the paper [9] and to the library LinBox (http://www.linalg.org/) for more information on these issues.

4. Checking strict positivity is NP-hard

In this section, we study the complexity of Problem 2.3: deciding if a completely positive map is strictly positive. First we show that the strict positivity of a completely positive map is equivalent to the non-feasibility of a bilinear system.

Lemma 4.1. The completely positive map Φ is strictly positive if and only if we cannot find two nonzero vectors x, y ∈ C^n such that

x^* V_i y = 0,   ∀i = 1, . . . , m.   (6)

Proof. By definition, the map Φ is strictly positive if and only if for all nonzero vectors y ∈ C^n, the matrix

Φ(yy^*) = Σ_{i=1}^{m} V_i y y^* V_i^*

is positive definite. This holds if and only if for all nonzero vectors x ∈ C^n,

Σ_{i=1}^{m} x^* V_i y y^* V_i^* x = Σ_{i=1}^{m} |x^* V_i y|^2 > 0.

Therefore Φ is not strictly positive if and only if we can find nonzero vectors x, y ∈ C^n such that (6) holds. □

We study the complexity of the following bilinear feasibility problem. Recall that F denotes a field, which is either Q, R or C.

Problem 4.1 (Bilinear feasibility). Input: integers n, m and matrices {V_1, . . . , V_m} ⊂ M_n(Q). Question: does the following bilinear system

x^T V_i y = 0,   ∀i = 1, . . . , m,   (7)

have a solution x, y ∈ F^n \ {0}?

Problem 4.1 is trivially equivalent to the following problem on the existence of a rank one matrix in the orthogonal complement of the subspace generated by the Kraus operators {V_1, . . . , V_m}.

Problem 4.2 (Existence of a rank one matrix). Input: integers n, m and matrices {V_1, . . . , V_m} ⊂ M_n(Q). Question: is there a rank one matrix in F^{n×n} in the orthogonal complement of the subspace spanned by {V_1, . . . , V_m}?

Theorem 4.1. Let F = Q, R or C. The 3SAT problem is reducible in polynomial time to Problem 4.1.

The proof is based on the following observation. An instance in conjunctive normal form of the 3SAT problem with N Boolean variables X_1, . . . , X_N and M clauses can be coded by the following system of polynomial equations in the N variables x_1, . . . , x_N:

(1 + p_i x_{k_i^1})(1 + q_i x_{k_i^2})(1 + r_i x_{k_i^3}) = 0,   i = 1, . . . , M,
x_i^2 = 1,   i = 1, . . . , N,   (8)

where for each i ∈ {1, . . . , M}, k_i^1, k_i^2 and k_i^3 are three distinct integers such that X_{k_i^1}, X_{k_i^2} and X_{k_i^3} are the three variables used in the i-th clause. In addition, for each i ∈ {1, . . . , M}, p_i ∈ {±1}, and p_i = −1 (resp. p_i = 1) if the literal corresponding to X_{k_i^1} in the i-th clause is a positive (resp. negative) literal. The same holds for every q_i and every r_i. For instance, the clause X_1 ∨ ¬X_2 ∨ X_4 is encoded by the polynomial (1 − x_1)(1 + x_2)(1 − x_4), and the clause ¬X_6 ∨ ¬X_1 ∨ X_2 by the polynomial (1 + x_6)(1 + x_1)(1 − x_2).

Lemma 4.2. Given an instance of the 3SAT problem, consider the corresponding system (8). Then the 3SAT problem is feasible if and only if there is a solution of (8) in {±1}^N.

Proof. The proof follows immediately from the construction of (8). If there is a solution of (8) in {±1}^N, then all the clauses can be made true by assigning true to the Boolean variable X_i if x_i = 1 and false if x_i = −1. Conversely, if all the clauses can be made true, then the system (8) is satisfied with x_i = 1 if the Boolean variable X_i is assigned true and x_i = −1 otherwise. □

The following lemma follows directly from the second row of (8).

Lemma 4.3. There is a solution of (8) in {±1}^N if and only if there is a solution of (8) in F^N, where F can be either Q, R or C.

Using Lemma 4.2 and Lemma 4.3 we obtain the following lemma.

Lemma 4.4. Given an instance of the 3SAT problem, consider the corresponding system (8). Then the 3SAT problem is feasible if and only if there is a solution of (8) in F^N, where F can be either Q, R or C.

Therefore, to prove Theorem 4.1, it is sufficient to construct, in a time polynomially bounded in N and M, a set of Kraus operators {V_1, . . . , V_m} ⊂ Q^{n×n} such that there is a solution of (8) in F^N if and only if there are two nonzero vectors x, y ∈ F^n \ {0} such that (7) holds.
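The encoding (8) can be exercised on a toy instance (an illustrative sketch of ours; the clause representation and helper names are not from the paper):

```python
# Each clause is a list of signed variable indices: +k for X_k, -k for not X_k.
# Following (8), the literal X_k contributes the factor (1 - x_k) and the
# literal not X_k contributes (1 + x_k), so a clause polynomial vanishes
# exactly when some literal is satisfied by the +/-1 assignment.

def clause_poly(clause, x):
    """Product over the clause's literals; x maps variable index -> +/-1."""
    value = 1
    for lit in clause:
        sign = -1 if lit > 0 else 1        # p_i = -1 for a positive literal
        value *= 1 + sign * x[abs(lit)]
    return value

def system_satisfied(clauses, x):
    """System (8): every clause polynomial is 0 and every x_i^2 = 1."""
    return all(clause_poly(c, x) == 0 for c in clauses) and \
           all(v * v == 1 for v in x.values())

# (X1 or not X2 or X4) and (not X6 or not X1 or X2), as in the text.
clauses = [[1, -2, 4], [-6, -1, 2]]
x = {1: 1, 2: 1, 4: -1, 6: -1}             # a satisfying +/-1 assignment
print(system_satisfied(clauses, x))        # -> True
```

Setting x_1 = −1 instead falsifies the first clause, so its polynomial no longer vanishes and the system check fails, matching Lemma 4.2.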


We begin by the following basic lemma.

Lemma 4.5. Let F = Q, R or C. Let a_k(·,·) : F^n × F^n → F, k ∈ {1, . . . , K}, be a finite set of bilinear forms. The system

a_k(x, x) = 0,   k = 1, . . . , K,

has a solution x = (x_i)_{1 ≤ i ≤ n} ∈ F^n with x_1 ≠ 0 if and only if there is a pair of vectors x = (x_i)_{1 ≤ i ≤ n}, y = (y_i)_{1 ≤ i ≤ n} ∈ F^n satisfying x_1 y_1 ≠ 0 and

a_k(x, y) = 0,   k = 1, . . . , K,
x_i y_j − x_j y_i = 0,   1 ≤ i < j ≤ n.   (9)

Proof. The equations in the second row of (9) require that y be proportional to x. □

The next lemma shows that system (8) can be transformed into a set of homogeneous equations.

Lemma 4.6. Let F = Q, R or C. Let N, M ∈ N. Let {k_i^1, k_i^2, k_i^3}_{i ∈ [M]} ⊂ {1, . . . , N} and {p_i, q_i, r_i}_{i ∈ [M]} ⊂ F. Consider the following system of equations in the variables (x_i)_{1 ≤ i ≤ N}:

(1 + p_i x_{k_i^1})(1 + q_i x_{k_i^2})(1 + r_i x_{k_i^3}) = 0,   i = 1, . . . , M,
x_i^2 = 1,   i = 1, . . . , N.   (10)

The system (10) has a solution x ∈ F^N if and only if there is a pair of nonzero vectors x = (x_i)_{0 ≤ i ≤ N+2M}, y = (y_i)_{0 ≤ i ≤ N+2M} ∈ F^{N+2M+1} \ {0} satisfying the following system:

(x_0 + p_i x_{k_i^1} + q_i x_{k_i^2} + p_i q_i x_{N+i}) y_{N+M+i} = 0,   i = 1, . . . , M,
x_{k_i^1} y_{k_i^2} − x_0 y_{N+i} = 0,   i = 1, . . . , M,
(x_0 + r_i x_{k_i^3} − x_{N+M+i}) y_j = 0,   i = 1, . . . , M, j = 0, . . . , N + 2M,
x_i y_i − x_0 y_0 = 0,   i = 1, . . . , N + M,
x_i y_j − x_j y_i = 0,   0 ≤ i < j ≤ N + 2M.   (11)

Proof. A simple rewriting of the system (10) is:

(1 + p_i x_{k_i^1} + q_i x_{k_i^2} + p_i q_i x_{k_i^1} x_{k_i^2})(1 + r_i x_{k_i^3}) = 0,   i = 1, . . . , M,
x_i^2 = 1,   i = 1, . . . , N.   (12)

By introducing 2M extra variables, denoted by {x_{N+i}}_{1 ≤ i ≤ 2M}, to replace the quantities {x_{k_i^1} x_{k_i^2}, 1 + r_i x_{k_i^3}}_{1 ≤ i ≤ M}, we rewrite the system (12) as:

(1 + p_i x_{k_i^1} + q_i x_{k_i^2} + p_i q_i x_{N+i}) x_{N+M+i} = 0,   i = 1, . . . , M,
x_{k_i^1} x_{k_i^2} − x_{N+i} = 0,   i = 1, . . . , M,
1 + r_i x_{k_i^3} − x_{N+M+i} = 0,   i = 1, . . . , M,
x_i^2 = 1,   i = 1, . . . , N + M.   (13)

We next add an extra variable x_0 to replace the affine term 1, so as to construct a system of homogeneous polynomial equations of degree 2:

(x_0 + p_i x_{k_i^1} + q_i x_{k_i^2} + p_i q_i x_{N+i}) x_{N+M+i} = 0,   i = 1, . . . , M,
x_{k_i^1} x_{k_i^2} − x_0 x_{N+i} = 0,   i = 1, . . . , M,
(x_0 + r_i x_{k_i^3} − x_{N+M+i}) x_j = 0,   i = 1, . . . , M, j = 0, . . . , N + 2M,
x_i^2 − x_0^2 = 0,   i = 1, . . . , N + M.   (14)

Then there is a solution to (13) if and only if there is a solution x = (x_i)_{0 ≤ i ≤ N+2M} to (14) such that x_0 ≠ 0. By Lemma 4.5, the system (14) has a solution x = (x_i)_{0 ≤ i ≤ N+2M} with x_0 ≠ 0 if and only if there is a pair of non-null vectors x = (x_i)_{0 ≤ i ≤ N+2M} and y = (y_i)_{0 ≤ i ≤ N+2M} with x_0 y_0 ≠ 0 satisfying (11).

So far, we proved that there is a solution to (10) if and only if there is a pair of nonzero vectors x, y ∈ F^{N+2M+1} \ {0} satisfying (11) such that x_0 y_0 ≠ 0. We next prove by contradiction that all nonzero pairs of solutions to (11) satisfy x_0 y_0 ≠ 0. Let x = (x_i)_{0 ≤ i ≤ N+2M} and y = (y_i)_{0 ≤ i ≤ N+2M} be a pair of nonzero solutions to (11) such that x_0 y_0 = 0. Since, by the last constraint in (11), x and y are proportional to each other, we know that x_0 = y_0 = 0. Suppose that there is 1 ≤ i_0 ≤ N + M such that x_{i_0} ≠ 0; then by the fourth equation of (11) we know that

x_{i_0} y_{i_0} = 0,

thus y_{i_0} = 0. This implies that y is a zero vector, because x and y are proportional to each other. Hence x_i = 0 for all 1 ≤ i ≤ N + M. Now we apply this condition to the third equation in (11) to obtain:

x_{N+M+i} y_j = 0,   i = 1, . . . , M, j = 0, . . . , N + 2M.

If x is a nonzero vector, necessarily there is 1 ≤ i_0 ≤ M such that x_{N+M+i_0} ≠ 0, in which case y is a zero vector. Therefore we deduce that for every nonzero solution of (11), it is necessary that x_0 y_0 ≠ 0. □

We now prove Theorem 4.1.

Proof. By Lemma 4.4 and Lemma 4.6, every 3SAT instance is equivalent to solving a bilinear system as described by (11). It is clear that we can construct in polynomial time (with respect to N and M) a sequence of n-by-n matrices {V_i}_{1 ≤ i ≤ m} with entries in {0, ±1} writing (11) in the form of (7). We therefore proved the polynomial reduction of the 3SAT problem to Problem 4.1. □

From Theorem 4.1 we readily deduce the following result.

Theorem 4.2 (Compare with [17]). The bilinear feasibility problem, as described in Problem 4.1, is NP-hard when the field F is Q, R or C.

Hillar and Lim showed in [17], with a different approach, that the bilinear feasibility problem is NP-hard when the field is R or C; the novelty here is to handle the case of solutions with rational entries, see Remark 5.1 for a comparison of the two approaches. We also derive the NP-hardness of Problem 4.2 and Problem 2.3.

Theorem 4.3. (a) The rank one matrix existence problem, as described in Problem 4.2, is NP-hard when the field F is either Q, R or C. (b) Checking strict positivity of completely positive maps, as described in Problem 2.3, is NP-hard.

Proof. To prove the first statement, we only need to use the equivalence between Problem 4.2 and Problem 4.1. Finally, using the characterization of strict positivity in Lemma 4.1, we obtain that the complement of Problem 2.3 reduces to Problem 4.1 with F = C, whence the NP-hardness of Problem 2.3. □

5. Checking strict positivity of Kraus maps is NP-hard

m 

41

V i∗ V i = (2N + 7M + 4)2 I n .

i =1

Proof. We denote by {e i }0i  N +2M the standard basis vectors in F N +2M +1 . We know from Lemma 4.6 that the system (10) admits a solution if and only if there is a pair of nonzero vectors x, y ∈ Fn \{0} satisfying

⎧  x (e 0 + p i ek1 + qi ek2 + p i qi e N +i )e  y = 0, ⎪ N + M +i ⎪ i i ⎪ ⎪ ⎪ ⎪ i = 1 , · · · , M ⎪ ⎪ ⎪    ⎪ ⎪ ⎪ x (ek1i ek2i − e 0 e N +i ) y = 0, i = 1, · · · , M ⎨ x (e 0 + r i ek3 − e N + M +i )e  i = 1, · · · , M , j y = 0, ⎪ i ⎪ ⎪ ⎪ j = 0, . . . , N + 2M ⎪ ⎪ ⎪ ⎪ ⎪ x (e i e  − e0 e i = 1, · · · , N + M ⎪ 0 ) y = 0, i ⎪ ⎪ ⎩ x (e e  − e e  ) y = 0, 0  i < j  N + 2M i j j i

(16) In this section, we study the complexity of Problem 2.4: deciding if a Kraus map is strictly positive. By Lemma 4.1, we shall only need to study the complexity of the following unital bilinear feasibility problem. Problem 5.1 (Unital bilinear feasibility). Input: integers n, m and matrices V 1 , . . . , V m ⊂ M n (Q), satisfying (2). Question: is the following bilinear system

x T V i y = 0, ∀ i = 1 , . . . , m ,

(15)

The system (16) has N + 3M + ( N + 2M + 1)(4M + N )/2 bilinear equations. Let m0 = N + 3M + ( N + 2M + 1)(4M + N )/2 and denote by { A i }1i m0 the matrices corresponding to the m0 bilinear forms in (16). Recall that ( p i )i , (qi )i , (r i )i are sequences of numbers in {1, −1}. Therefore we transformed the system (10) to the following bilinear system:

x T A i y = 0, i = 1 , . . . , m 0 ,

(17)

where each A i has entries in {0, 1, −1}. We check the five lines in (16) and obtain that

has a solution x, y ∈ Fn \{0}. Again, Problem 4.1 is trivially equivalent to the following problem on the existence of a rank one matrix in the orthogonal complement of the subspace generated by the Kraus operators { V 1 , . . . , V m } satisfying (2).

m0  i =1

A ∗i A i =

M  i =1

+

Problem 5.2. Input: integers n, m and matrices V 1 , . . . , V m ⊂ M n (Q) satisfying (2). Question: is there a rank one matrix in Fn×n in the orthogonal complement of the subspace spanned by { V 1 , . . . , V m }?

4e N + M +i e  N + M +i

M M N +2M   (ek2 ek2 + e N +i e  ) + 3e j e  N +i j i =1

+

N +M

i

i

i =1

 (e i e  i + e0 e0 ) +

i =1

Lemma 5.1. Let F = Q, R or C. Consider the system (10) in Lemma 4.6. We suppose in addition that k1i = k2i for all 1  i  M and that ( p i )i , (qi )i , (r i )i are sequences of numbers in {±1}. Let n = N + 2M + 1. There is a finite family of matrices { V i }1i m ⊂ Mn (Q) with entries in {0, ±1, ± 31 } such that the system (10) has a solution if and only if the bilinear system:

x T V i y = 0, i = 1 , . . . , m , has a nonzero solution x, y ∈ F \{0}. Besides, the integer m can be bounded by a polynomial in N and M and the matrices { V i }1i m satisfy: n



m0  i =1

⎜ ⎜ ⎝



k1

A ∗i A i = ⎜

(e j e j + e i e  i )

i< j

Therefore we have that In Section 4 we have shown how to reduce a 3SAT instance to a bilinear feasibility problem in polynomial time. The next lemma shows that by adding a polynomial number of redundant bilinear forms to system (11) one can construct a unital bilinear feasibility problem.



j =0

⎟ ⎟ ⎟ ⎠

k2

..

. kn

where ki  2N + 7M + 4 for all 1  i  n. Remark that due to the third line of equations in (16), for each 0  j  N + 2M, there is an integer 1  n j  m0 such that

A n∗ j A n j = 3e j e  j . By letting B j = A n j /3 we get that:

3B ∗j B j = e j e  j . For all  1  j  n let l j = (2N + 7M + 4)2 − n j . Let m = n m0 + 3 j =1 l j and { V i }1i m be the sequence of matrices

42

S. Gaubert, Z. Qu / Information Processing Letters 118 (2017) 35–43

containing { A i }1i m0 and 3l j times the matrix B j for all 1  j  n. Then we have m  i =1

V i∗ V i =

m0 

A ∗i A i +

i =1

n 

3l j B ∗j B j = (2N + 7M + 4)2 I n .

j =1
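The padding step can be sanity-checked numerically on a toy family (a sketch only: the dimension n = 4, the constant c standing in for 2N + 7M + 4, and the random counts below are illustrative stand-ins, not the gadget matrices A_i of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 4, 6  # toy dimension and target constant (c plays the role of 2N + 7M + 4)
e = np.eye(n)  # rows are the standard basis vectors e_0, ..., e_{n-1}

# A toy family whose Gram sum is diagonal: k_j copies of the rank-one
# matrix e_j e_j^T per coordinate j, with each k_j < c^2.
k = rng.integers(1, c**2, size=n)
A = [np.outer(e[j], e[j]) for j in range(n) for _ in range(k[j])]
gram_A = sum(M.conj().T @ M for M in A)
assert np.allclose(gram_A, np.diag(k))  # Gram sum is diag(k_0, ..., k_{n-1})

# Pad with l_j = c^2 - k_j further copies per coordinate, mimicking the
# 3*l_j copies of B_j in the proof (there, 3 B_j^* B_j = e_j e_j^T).
V = A + [np.outer(e[j], e[j]) for j in range(n) for _ in range(c**2 - k[j])]
gram_V = sum(M.conj().T @ M for M in V)
assert np.allclose(gram_V, c**2 * np.eye(n))  # unital constraint achieved
```

Each appended matrix is a multiple of one already in the family, so, as in the proof, the padded bilinear system has the same solutions while satisfying the normalization ∑ V_i^* V_i = c^2 I_n.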

Since for all 1 ≤ j ≤ n the matrix B_j is collinear to a matrix in {A_i}_{i≤m_0}, the feasibility of the system

  x^T V_i y = 0,  i = 1, ..., m        (18)

is equivalent to that of (17). Thus the system (10) admits a solution if and only if there is a nonzero solution to (18). □

We therefore deduce the NP-hardness of Problem 5.1, Problem 5.2 and Problem 2.4.

Theorem 5.1.
(a) The unital bilinear feasibility problem, as described in Problem 5.1, is NP-hard when the field F is either Q, R or C.
(b) The rank one matrix existence problem, as described in Problem 5.2, is NP-hard when the field F is either Q, R or C.
(c) Checking strict positivity of Kraus maps, as described in Problem 2.4, is NP-hard.

Remark 5.1. When F = R or C, Hillar and Lim obtained in [17, Th. 3.7] the NP-hardness of Problem 4.1 by reducing graph 3-colorability problems to it. Let {W_1, ..., W_m} ⊂ M_n(C) be arbitrary matrices and consider the bilinear system:

  x^T W_i y = 0,  ∀ i = 1, ..., m.

Let U ∈ M_n(C) be any matrix such that

  ∑_{i=1}^{m} W_i^* W_i = U^* U.        (19)

If U is not invertible, then the intersection of the null spaces of {W_1, ..., W_m} is nontrivial and the latter bilinear system is clearly feasible. If U is invertible, then the latter bilinear system is feasible if and only if the following bilinear system is feasible:

  x^T W_i U^{-1} y = 0,  ∀ i = 1, ..., m.

Hence every instance of Problem 4.1 can be "reduced" to an instance of Problem 5.1 by computing a matrix U ∈ M_n(C) satisfying (19). However, in general such a matrix U does not have rational entries. Therefore, it is not obvious how to deduce the complexity of Problem 5.1 in the bit model from the NP-hardness of bilinear feasibility over R or C. In this respect, the proof of Theorem 4.1 should be compared with the one of Hillar and Lim [17] proving the latter result. In order to reduce a 3-colorability problem to a bilinear system, they use cubic roots of unity to encode the three colors. Some auxiliary variables are also introduced in order to obtain a homogeneous system. However, their construction does not yield, in polynomial time, matrices satisfying the constraint (2). Moreover, our proof through 3SAT is insensitive to the restriction to rational variables: it entails that deciding the existence of a non-zero rational solution of a bilinear system is still NP-hard (Theorem 4.2).

Remark 5.2. Let us finally point out an alternative route, suggested by one referee, to establish the NP-hardness of bilinear feasibility over Q (Theorem 4.2). We only give the idea, leaving the details to the reader. This route relies on a lemma of De Loera et al. [20, Lemma 2.2], building on an approach of Lovász [21]. It shows that a graph G = (V, E) has stability number at least k if and only if the following system of equations

  x_i^2 − x_i = 0, ∀ i ∈ V;   x_i x_j = 0, ∀ (i, j) ∈ E;   ∑_{i=1}^{n} x_i = k

has a solution. By adding an auxiliary variable x_0, this is equivalent to the existence of a non-zero solution to the homogeneous quadratic system

  x_i (x_i − x_0) = 0, ∀ i ∈ V;   x_i x_j = 0, ∀ (i, j) ∈ E;   x_j (∑_{i=1}^{n} x_i − k x_0) = 0, ∀ j ∈ V.
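The encoding behind this lemma can be checked by brute force on a small instance (a sketch; the 5-cycle example and helper names below are illustrative, not from the paper):

```python
from itertools import product

def stability_number(n, edges):
    """Brute-force stability number (max size of a stable set) of a graph on {0,...,n-1}."""
    return max(
        sum(x)
        for x in product([0, 1], repeat=n)
        if all(x[i] * x[j] == 0 for (i, j) in edges)
    )

def system_feasible(n, edges, k):
    """Feasibility of: x_i^2 - x_i = 0 (i in V), x_i x_j = 0 ((i,j) in E),
    sum_i x_i = k. The first equations force x_i in {0, 1}, so we enumerate."""
    return any(
        all(x[i] * x[j] == 0 for (i, j) in edges) and sum(x) == k
        for x in product([0, 1], repeat=n)
    )

# 5-cycle: stability number 2, so the system is solvable exactly when k <= 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
alpha = stability_number(5, edges)
assert alpha == 2
assert all(system_feasible(5, edges, k) == (k <= alpha) for k in range(6))
```

Solutions of the system are exactly the 0/1 indicator vectors of stable sets of size k, and a stable set of size k exists precisely when the stability number is at least k.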

By using the “2 × 2 minors” gadget (Lemma 4.5), we can transform this into a bilinear system, and deduce Theorem 4.2 from the NP-hardness of computing the stability number.

References

[1] J.F. Buss, G.S. Frandsen, J.O. Shallit, The computational complexity of some problems of linear algebra, J. Comput. Syst. Sci. 58 (3) (1999) 572–596.
[2] G. Birkhoff, Extensions of Jentzsch’s theorem, Trans. Am. Math. Soc. 85 (1957) 219–227.
[3] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Class. Appl. Math., vol. 9, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1994.
[4] G.P. Barker, H. Schneider, Algebraic Perron–Frobenius theory, Linear Algebra Appl. 11 (3) (1975) 219–233.
[5] M.D. Choi, Completely positive linear maps on complex matrices, Linear Algebra Appl. 10 (1975) 285–290.
[6] M. Cao, D.A. Spielman, A.S. Morse, A lower bound on convergence of a distributed network consensus algorithm, in: Proc. of the Joint 44th IEEE Conference on Decision and Control and European Control Conference, IEEE, 2005, pp. 2356–2361.
[7] E.V. Denardo, Period of connected networks, Math. Oper. Res. 2 (1977) 20–24.
[8] D.L. Donoho, Y. Tsaig, I. Drori, J.-L. Starck, Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit, IEEE Trans. Inf. Theory 58 (2) (2012) 1094–1121.
[9] W. Eberly, M. Giesbrecht, P. Giorgi, A. Storjohann, G. Villard, Solving sparse rational linear systems, in: Proceedings of the 2006 International Symposium on Symbolic and Algebraic Computation, ISSAC ’06, ACM, New York, NY, USA, 2006, pp. 63–70.
[10] D.E. Evans, R. Høegh-Krohn, Spectral properties of positive maps on C*-algebras, J. Lond. Math. Soc. (2) 17 (2) (1978) 345–355.
[11] D.R. Farenick, Irreducible positive linear maps on operator algebras, Proc. Am. Math. Soc. 124 (11) (1996) 3381–3390.
[12] M. Fazel, H. Hindi, S. Boyd, Rank minimization and applications in system theory, in: Proc. of American Control Conference, 2004, pp. 3273–3278.
[13] J.-C. Faugère, M. Safey El Din, P.-J. Spaenlehauer, On the complexity of the generalized MinRank problem, J. Symb. Comput. 55 (2013) 30–58.
[14] M. Grötschel, L. Lovász, A. Schrijver, Geometric Algorithms and Combinatorial Optimization, second edition, Algorithms Comb., vol. 2, Springer-Verlag, Berlin, 1993.


[15] U. Groh, Some observations on the spectra of positive operators on finite-dimensional C*-algebras, Linear Algebra Appl. 42 (1982) 213–222.
[16] R.A. Horn, C.R. Johnson, Matrix Analysis, second edition, Cambridge University Press, Cambridge, 2013.
[17] C.J. Hillar, L.-H. Lim, Most tensor problems are NP-hard, J. ACM 60 (6) (2013) 45.
[18] M. Keyl, Fundamentals of quantum information theory, Phys. Rep. 369 (5) (2002) 431–548.
[19] K. Kraus, States, Effects, and Operations, Lect. Notes Phys., vol. 190, Springer-Verlag, Berlin, 1983.
[20] J.A. De Loera, J. Lee, S. Margulies, S. Onn, Expressing combinatorial optimization problems by systems of polynomial equations and the Nullstellensatz, Comb. Probab. Comput. 18 (4) (2009) 551–582.
[21] L. Lovász, Stable sets and polynomials, Discrete Math. 124 (1994) 137–153.
[22] V. Lomonosov, P. Rosenthal, The simplest proof of Burnside’s theorem on matrix algebras, Linear Algebra Appl. 383 (2004) 45–47.
[23] L. Mazzarella, A. Sarlette, F. Ticozzi, Consensus for quantum networks: symmetry from gossip interactions, IEEE Trans. Autom. Control 60 (1) (2015) 158–172.


[24] B.K. Natarajan, Sparse approximate solutions to linear systems, SIAM J. Comput. 24 (2) (1995) 227–234.
[25] M.A. Nielsen, I.L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, Cambridge, 2000.
[26] A. Olshevsky, J.N. Tsitsiklis, Convergence speed in distributed consensus and averaging, SIAM J. Control Optim. 48 (1) (2009) 33–55.
[27] B. Recht, M. Fazel, P.A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev. 52 (3) (2010) 471–501.
[28] D. Reeb, M.J. Kastoryano, M.M. Wolf, Hilbert’s projective metric in quantum information theory, J. Math. Phys. 52 (8) (2011) 082201.
[29] B. Recht, W. Xu, B. Hassibi, Null space conditions and thresholds for rank minimization, Math. Program., Ser. B 127 (1) (2011) 175–202.
[30] M. Sanz, D. Pérez-García, M.M. Wolf, J.I. Cirac, A quantum version of Wielandt’s inequality, IEEE Trans. Inf. Theory 56 (9) (2010) 4668–4673.
[31] R. Sepulchre, A. Sarlette, P. Rouchon, Consensus in noncommutative spaces, in: Proc. of the 49th IEEE Conference on Decision and Control, Atlanta, USA, Dec. 2010, pp. 6596–6601.