Iterative algorithm for solving a class of convex feasibility problem

Accepted Manuscript

PII: S0377-0427(18)30719-2
DOI: https://doi.org/10.1016/j.cam.2018.11.029
Reference: CAM 12037

To appear in: Journal of Computational and Applied Mathematics

Received date: 13 July 2016
Revised date: 6 August 2018

Please cite this article as: C. Li, X.-F. Duan, L. Lu et al., Iterative algorithm for solving a class of convex feasibility problem, Journal of Computational and Applied Mathematics (2018), https://doi.org/10.1016/j.cam.2018.11.029

Iterative algorithm for solving a class of convex feasibility problem*

Chunmei Li¹, Xuefeng Duan²†, Linzhang Lu¹, Qingwen Wang³, Shuqian Shen⁴

¹ School of Mathematical Science, Guizhou Normal University, Guiyang 550001, P.R. China
² College of Mathematics and Computational Science, Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guangxi Key Laboratory of Cryptography and Information Security, Guilin University of Electronic Technology, Guilin 541004, P.R. China
³ Department of Mathematics, Shanghai University, Shanghai 200444, P.R. China
⁴ College of Science, China University of Petroleum, Qingdao 257061, P.R. China

Abstract

In this paper, we consider a class of convex feasibility problem which arises in quantum computation. Based on the matrix equation theory, the feasible sets are characterized by exploiting the special structure of the linear constraints, and their analytic expressions are given. By making use of the nice structural properties and the KKT condition, we derive the projection formulas of a matrix onto the feasible sets. A relaxed alternating projection method is designed to solve the convex feasibility problem. Numerical experiments show that the new method is feasible and effective.

AMS classification: 15A48; 49M30; 81P68

Keywords: Convex feasibility problem; Projection formula; Relaxed alternating projection algorithm; Quantum computation

1. Introduction

Throughout this paper, we use C^{m×n} to denote the set of m × n complex matrices. We write B ≥ 0 if the matrix B is Hermitian positive semidefinite. For the m × n matrix B = (b1, b2, ..., bn) = (bij), (B)ij stands for the ij-th entry of B, that is, (B)ij = bij; vec(B) stands for the vector b = vec(B) = (b1^T, b2^T, ..., bn^T)^T, and vec⁻¹(b) stands for the inverse operator of vec(·), which transforms the vector b back into the matrix B. The symbols rank(B), tr(B), λi(B), B^T, B⁺ and B^* stand for the rank, trace, i-th eigenvalue, transpose, Moore-Penrose generalized inverse and conjugate transpose of the matrix B, respectively. The symbol ∥α∥2 stands for the l2-norm of the vector α, i.e. ∥α∥2 = (α^*α)^{1/2}. The symbol ∥B∥ stands for the Frobenius norm of the matrix B, which is induced by the inner product ⟨A, C⟩ = tr(C^*A) for A, C ∈ C^{n×m}. The symbols Im and Om stand for the m × m identity matrix and zero matrix, respectively. A positive semidefinite matrix with trace one is called a density matrix. For a differentiable function f, the notation ∇f(X) refers to the gradient of f at X.

* The work was supported by the National Natural Science Foundation of China (grant numbers 11561015; 11671105; 11761024), the Natural Science Foundation of Guangxi Province (grant numbers 2016GXNSFFA380009; 2017GXNSFBA198082; 2016GXNSFAA380074), Guangxi Key Laboratory of Cryptography and Information Security (grant number GCIS201616) and the talent program of Guilin University of Electronic Technology.
† Corresponding author. E-mail address: [email protected] (X.-F. Duan).

In this paper, we consider the following problem.

Problem I. Given density matrices A1, A2, ..., Ak ∈ C^{n×n} and B1, B2, ..., Bk ∈ C^{m×m}, find an mn × mn matrix X = (Xij)_{i,j=1,2,...,n} with block matrices Xij ∈ C^{m×m} such that

    ∑_{i,j=1}^{n} (Al)ij Xij = Bl,  l = 1, 2, ..., k,   (1.1)

    tr(Xij) = δij,  i, j = 1, 2, ..., n,   (1.2)

    X ≥ 0.   (1.3)
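As a side illustration of the vec(·) and vec⁻¹(·) operators from the notation above, here is a minimal NumPy sketch (with a made-up matrix; column-major stacking is assumed, matching the column-by-column definition of vec):

```python
import numpy as np

# vec(.) stacks the columns of B into one vector; vec^{-1} undoes it.
# In NumPy this is column-major ("F" order) reshaping.

B = np.array([[1, 2],
              [3, 4],
              [5, 6]])                  # a 3 x 2 matrix

b = B.flatten(order="F")                # vec(B) = (1, 3, 5, 2, 4, 6)^T
Binv = b.reshape(B.shape, order="F")    # vec^{-1}(b) recovers B

print(b.tolist())                       # [1, 3, 5, 2, 4, 6]
print(np.array_equal(Binv, B))          # True
```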

Set

    Ω1 = {X = (Xij) | ∑_{i,j=1}^{n} (Al)ij Xij = Bl, l = 1, 2, ..., k;  tr(Xij) = δij, i, j = 1, 2, ..., n},

    Ω2 = {X = (Xij) | X ≥ 0},

where δij is the Kronecker delta, i.e., δij = 1 if i = j and δij = 0 if i ≠ j. Then Problem I is equivalent to finding an mn × mn matrix X ∈ Ω1 ∩ Ω2. It is worth mentioning that the set Ω1 is an affine subspace, because it is determined by matrix equations and trace constraints, and Ω2 is a closed convex set, so Problem I is a convex feasibility problem.

Problem I arises in quantum computation, which can be stated as follows. Constructing a quantum channel reduces to the following interpolation problem (see [10,24,25] and the references therein). Given density matrices A1, A2, ..., Ak ∈ C^{n×n} and B1, B2, ..., Bk ∈ C^{m×m}, find a trace-preserving completely positive (TPCP) linear map φ such that

    φ(Al) = Bl,  l = 1, 2, ..., k.   (1.4)

As mentioned in [25], it is difficult to characterize the TPCP map φ directly. Fortunately, this problem can be reformulated as a convex feasibility problem. By Theorem 2 of Choi [9], the map φ is completely positive if and only if its Choi matrix

    X = (Xij)_{i,j=1,2,...,n},  Xij = φ(Eij) ∈ C^{m×m},   (1.5)

is positive semidefinite, where Eij is the n × n matrix whose ij-th entry is one and all other entries are zero. Each Al can be written as

    Al = ∑_{i,j=1}^{n} (Al)ij Eij,  l = 1, 2, ..., k,   (1.6)

where (Al)ij stands for the ij-th entry of the n × n matrix Al. Since the TPCP map φ satisfies (1.4), combining (1.6) and (1.5) we obtain

    φ(Al) = φ(∑_{i,j=1}^{n} (Al)ij Eij) = ∑_{i,j=1}^{n} (Al)ij φ(Eij) = ∑_{i,j=1}^{n} (Al)ij Xij = Bl,   (1.7)

    tr(φ(Al)) = tr(∑_{i,j=1}^{n} (Al)ij Xij) = ∑_{i,j=1}^{n} (Al)ij tr(Xij) = tr(Al).   (1.8)

Note that (1.8) with l = 1, 2, ..., k holds if and only if tr(Xij) = δij, i, j = 1, 2, ..., n. Therefore, combining (1.7), (1.8) and (1.5), finding a TPCP map φ satisfying (1.4) is equivalent to Problem I.

Although Problem I is a convex feasibility problem, it is difficult to solve. The main difficulty is how to characterize the set Ω1, because it has highly structured constraints, and it seems unwise to deal with these constraint conditions one by one. We therefore use the matrix equation theory and the special structure of the linear constraints to characterize the set Ω1, and derive the analytic expression of the matrices X in Ω1.

In the last few years there has been a constantly increasing interest in developing the theory and numerical methods for convex feasibility problems, due to their wide applications. The most common methods for solving them are subgradient projection methods [11,27,28,31,32], block-iterative methods [1,4,30], the string-averaging projection method [6], the viscosity method [33], the non-interior continuation algorithm [21], Mann-type iteration [23] and the Hansen-Sengupta method [29]. For a complete survey on convex feasibility problems see [2,15]. Although there are many numerical methods for convex feasibility problems, the main tools are projection-type algorithms, including simultaneous projection algorithms [2,19,20,26] and alternating projection algorithms [3,8,13,22]. Of course, these two classes of algorithms can also be used to solve Problem I; however, their convergence rates are very slow. In order to accelerate the convergence, the relaxed alternating projection algorithm is constructed in this paper to compute the solution of Problem I. The difficulty in realizing the relaxed alternating projection algorithm lies in how to compute the analytic formulas of the projections PΩ1(Z) and PΩ2(Z) of Z onto Ω1 and Ω2, respectively.
Fortunately, this difficulty is overcome by using the nice structural properties and the KKT condition. Numerical examples show that the new method is feasible and has faster convergence rates than the simultaneous projection algorithms and the alternating projection algorithms.

The paper is organized as follows. Some notations and preliminaries relevant to our study are briefly reviewed in Section 2. In Section 3, the set Ω1 is characterized by the matrix equation theory, and the projections PΩ1(K) and PΩ2(K) of K onto the sets Ω1 and Ω2 are given in Section 4. The relaxed alternating projection algorithm for solving Problem I is designed in Section 5. Finally, some numerical examples are used to illustrate that the new algorithm is feasible and effective.

2. Notations and preliminaries

In this section, we first give some preliminaries, and then review the alternating projection and simultaneous projection algorithms. We begin with some definitions and lemmas.

Definition 2.1. [8] Let M be a closed convex subset of a Hilbert space H and let u be a point in H. The point in M nearest to u is called the projection of u onto M and is denoted by PM(u);

that is to say, PM(u) is the unique solution of the minimization problem

    min_{x∈M} ∥x − u∥.   (2.1)

The induced map PM : H → M is called the projection operator. The projection PM(u) is also characterized by Kolmogorov's criterion:

    PM(u) ∈ M and ⟨y − PM(u), u − PM(u)⟩ ≤ 0 for all y ∈ M.

Definition 2.2. [2] Suppose C is a nonempty closed convex set and {xn} is a sequence in H. We say that {xn} is Fejér monotone with regard to C if ∥xn+1 − z∥ ≤ ∥xn − z∥ for all z ∈ C and every n ≥ 0.

Definition 2.3. [2] A mapping φ : D → H, where D is a nonempty closed convex subset of H, is called nonexpansive if ∥φ(x) − φ(y)∥ ≤ ∥x − y∥ for all x, y ∈ D, and firmly nonexpansive if ∥φ(x) − φ(y)∥² ≤ ⟨φ(x) − φ(y), x − y⟩ for all x, y ∈ D. The fixed point set of φ is denoted by Fix φ = {x ∈ D | x = φ(x)}. A mapping φ : D → H is called quasi-nonexpansive if ∥φ(x) − z∥ ≤ ∥x − z∥ for all x ∈ D and z ∈ Fix φ.

Lemma 2.1. [2] Let D be a closed convex subset of the Hilbert space H, let {xn} be a sequence in D and let x ∈ D.
(a) If the operator T : D → H is nonexpansive, xn converges weakly to x and ∥T(xn) − xn∥ → 0, then x ∈ Fix T.
(b) If the sequence {xn} is Fejér monotone with regard to D and all weak cluster points of {xn} are in D, then {xn} converges weakly to some point x* in D.

Alternating projection is a simple algorithm for computing a point in the intersection of convex sets, using a sequence of projections onto the sets. For example, suppose C and D are closed convex sets in C^n, and let PC and PD denote the projections onto C and D, respectively. The alternating projection method starts with any x0 ∈ C^n and then alternately projects onto C and D:

    yk = PC(xk),  xk+1 = PD(yk),  k = 0, 1, 2, ....   (2.2)

According to Theorem 2 of Cheney and Goldstein [8], if C ∩ D ≠ ∅, then the sequence {xk} generated by (2.2) converges to a point x* ∈ C ∩ D. That is to say, the alternating projections find a point in the intersection of the sets, provided they intersect. This is illustrated in Fig. 1 on the left.
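The iteration (2.2) is easy to prototype. The sketch below runs it on a toy instance of our own choosing (not from the paper): C is the unit disk in R² and D is the line x0 + x1 = 1, both of which have closed-form projections.

```python
import numpy as np

def proj_disk(x):
    """Projection onto C = {x : ||x||_2 <= 1}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1 else x / nrm

def proj_line(x):
    """Projection onto D = {x : x[0] + x[1] = 1}."""
    a = np.array([1.0, 1.0])
    return x - (a @ x - 1.0) / (a @ a) * a

x = np.array([3.0, -2.0])            # arbitrary starting point x0
for _ in range(200):                 # y_k = P_C(x_k), x_{k+1} = P_D(y_k)
    x = proj_line(proj_disk(x))
# x is now (numerically) a point of the intersection C ∩ D
```

Since the two toy sets intersect, the final iterate lies (to numerical precision) on the line and inside the disk, as Cheney and Goldstein's theorem predicts.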


Fig. 1: First few iterations of alternating projection algorithm and averaged projection algorithm.

Although the alternating projection method can find a point in the intersection of two closed convex sets, its convergence rate is very slow, especially when the angle between the two sets is small. In order to accelerate the convergence of the alternating projection method (2.2), the relaxed alternating projection methods

    ∀x0 ∈ D,  xk+1 = PD(xk + λk σk (PD PC(xk) − xk))   (2.3)

were developed in [5,7,14], where the relaxation parameter λk ∈ [0, 2] and the step size σk ≥ 0. Choosing different parameters λk and σk leads to different relaxed alternating projection methods, such as λk = 1, σk = ∥PD(xk) − xk∥² / ⟨PD(xk) − xk, PC PD(xk) − xk⟩ in [7], and λk ∈ [ν, 2 − ν] (ν > 0), σk = 1 + ∥PC PD(xk) − PD(xk)∥² / ∥PC PD(xk) − xk∥² in [5]. If λk = σk = 1, then the relaxed alternating projection method (2.3) reduces to (2.2).

Simultaneous projection is another iterative algorithm for finding a point in the intersection of two closed convex sets. Its iterative formula can be stated as follows (see [2,13,26] for more details):

    x0 ∈ C^n,  xk+1 = (1 − α)xk + (α/2)(PC(xk) + PD(xk)),  k = 0, 1, 2, ...,   (2.4)

where α ∈ [0, 2]. If α = 1, then (2.4) is called the averaged projection algorithm, and its first few iterations are illustrated in Fig. 1 on the right. By Corollary 3 of Pierro and Iusem [26], the sequence {xk} generated by (2.4) converges to a point in the intersection of C and D, provided C ∩ D ≠ ∅.
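For comparison, the averaged projection update (2.4) with α = 1 can be sketched on the same toy pair of sets (again an illustrative instance of our own, not the paper's data):

```python
import numpy as np

def proj_disk(x):
    """Projection onto C = {x : ||x||_2 <= 1}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1 else x / nrm

def proj_line(x):
    """Projection onto D = {x : x[0] + x[1] = 1}."""
    a = np.array([1.0, 1.0])
    return x - (a @ x - 1.0) / (a @ a) * a

alpha = 1.0                          # alpha = 1 gives the averaged projection algorithm
x = np.array([3.0, -2.0])
for _ in range(500):
    # x_{k+1} = (1 - alpha) x_k + (alpha/2)(P_C(x_k) + P_D(x_k))
    x = (1 - alpha) * x + (alpha / 2) * (proj_disk(x) + proj_line(x))
```

Unlike (2.2), the averaged iterates lie on neither set during the run; both constraint residuals only vanish in the limit.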

3. Characterization of the feasible set Ω1

In this section, based on the matrix equation theory, we first characterize (1.1) and (1.2) by exploiting the special structure of the linear constraints, and then derive the analytic expression of X ∈ Ω1. We begin with the constraint condition (1.1). Set

    X = (X11^T, X21^T, ..., Xn1^T, X12^T, ..., Xn2^T, ..., X1n^T, ..., Xnn^T)^T ∈ C^{mn²×m},   (3.1)

where Xij, i, j = 1, 2, ..., n, are the block matrices of X in Problem I. Then

    ∑_{i,j=1}^{n} (Al)ij Xij = Bl
    ⇔ ∑_{i,j=1}^{n} (Al)ij Im Xij = Bl
    ⇔ ((Al)11 Im, ..., (Al)n1 Im, (Al)12 Im, ..., (Al)n2 Im, ..., (Al)1n Im, ..., (Al)nn Im) X = Bl.

Therefore, for l = 1, 2, ..., k, the constraint condition (1.1) can be equivalently rewritten as

    AX = B,   (3.2)

where

    A = ( (A1)11 Im  ···  (A1)n1 Im  ···  (A1)1n Im  ···  (A1)nn Im
          (A2)11 Im  ···  (A2)n1 Im  ···  (A2)1n Im  ···  (A2)nn Im
              ⋮               ⋮               ⋮               ⋮
          (Ak)11 Im  ···  (Ak)n1 Im  ···  (Ak)1n Im  ···  (Ak)nn Im ),

    B = ( B1 ; B2 ; ... ; Bk ),

that is to say, any solution X of the matrix equation (3.2) satisfies the constraint condition (1.1). Now we compute the solution X.

Lemma 3.1. ([12]) Let W ∈ C^{m×n} and N ∈ C^{m×p}. Then the matrix equation WX = N has a solution if and only if WW⁺N = N, and its general solutions can be expressed as

    X = W⁺N + (In − W⁺W)Z,

where Z ∈ C^{n×p} is an arbitrary matrix.
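A quick numeric illustration of Lemma 3.1 (with random data of our own choosing), using numpy.linalg.pinv for the Moore-Penrose inverse:

```python
import numpy as np

# WX = N is solvable iff W W^+ N = N, and then X = W^+ N + (I - W^+ W) Z
# solves it for every Z.

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 6))          # a wide matrix, so W^+ W != I
Xtrue = rng.standard_normal((6, 3))
N = W @ Xtrue                            # consistent right-hand side by construction

Wp = np.linalg.pinv(W)                   # Moore-Penrose inverse W^+
print(np.allclose(W @ Wp @ N, N))        # solvability test: True

Z = rng.standard_normal((6, 3))          # arbitrary matrix
X = Wp @ N + (np.eye(6) - Wp @ W) @ Z    # a general solution from the lemma
print(np.allclose(W @ X, N))             # True: X solves WX = N
```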

By Lemma 3.1 we obtain that the matrix equation (3.2) has a solution if and only if AA⁺B = B, and its general solutions are

    X = A⁺B + (I_{mn²} − A⁺A)Z,   (3.3)

where Z ∈ C^{mn²×m} is an arbitrary matrix. In order to characterize the set Ω1, however, we need the following expression of the solution X.

Lemma 3.2. The general solutions X of the matrix equation (3.2) can be expressed as

    X = (Xij)_{i,j=1,2,...,n},  Xij = Ṽij Γ⁻¹ U1^* B + V̂ij Y,  i, j = 1, 2, ..., n,   (3.4)

where the matrix Y ∈ C^{(mn²−r)×m} is arbitrary, and the matrices Ṽij, V̂ij, Γ and U1 are defined in (3.9) and (3.5), respectively.

Proof. Let the singular value decomposition of the matrix A be

    A = U ( Γ 0 ; 0 0 ) V^*,   (3.5)

where U = (U1, U2) ∈ C^{km×km} and V = (V1, V2) ∈ C^{mn²×mn²} are unitary matrices, U1 ∈ C^{km×r}, V1 ∈ C^{mn²×r}, r = rank(A), and Γ = diag(t1, t2, ..., tr). Substituting (3.5) into (3.3), it is easy to obtain that the general solutions of (3.2) can be rewritten as

    X = V ( Γ⁻¹ U1^* B ; Y ),   (3.6)

where Y ∈ C^{(mn²−r)×m} is arbitrary. Partition the matrix V as

    V = (V11^T, V21^T, ..., Vnn^T)^T,   (3.7)

where Vij ∈ C^{m×mn²}, i, j = 1, 2, ..., n. Combining (3.1), (3.6) and (3.7), we have

    X = ( X11 ; X21 ; ... ; Xnn ) = ( V11 (Γ⁻¹U1^*B ; Y) ; V21 (Γ⁻¹U1^*B ; Y) ; ... ; Vnn (Γ⁻¹U1^*B ; Y) ),

which implies

    Xij = Vij ( Γ⁻¹ U1^* B ; Y ),  i, j = 1, 2, ..., n.   (3.8)

Set

    Vij = ( Ṽij, V̂ij ),   (3.9)

where Ṽij ∈ C^{m×r} and V̂ij ∈ C^{m×(mn²−r)}. Substituting (3.9) into (3.8) we have

    Xij = ( Ṽij, V̂ij ) ( Γ⁻¹ U1^* B ; Y ) = Ṽij Γ⁻¹ U1^* B + V̂ij Y,  i, j = 1, 2, ..., n.

Hence the solutions of the matrix equation (3.2) can be expressed as (3.4). □

In the following we characterize the constraint condition (1.2) under the constraint condition (1.1), that is, we find the analytic expression of X ∈ Ω1.

Theorem 3.1. Set

    M̂ = I_{(mn²−r)m} − M⁺M,  X̃ij = Ṽij Γ⁻¹ U1^* B + V̂ij vec⁻¹(M⁺d),  i, j = 1, 2, ..., n.

If

    AA⁺B = B  and  MM⁺d = d,

then the set Ω1 is nonempty, and the analytic expression of X ∈ Ω1 is

    X = (Xij)_{i,j=1,2,...,n},  Xij = X̃ij + V̂ij vec⁻¹(M̂z),  i, j = 1, 2, ..., n,   (3.10)

where the vector z ∈ C^{(mn²−r)m} is arbitrary, and the matrix M and the vector d are defined by (3.12) and (3.13), respectively.

Proof. By Lemmas 3.1 and 3.2 we know that if AA⁺B = B, then the block matrices Xij, i, j = 1, 2, ..., n, of X satisfying (1.1) can be expressed as (3.4). Substituting (3.4) into the constraint condition (1.2) we obtain

    δij = tr(Xij) = tr(Ṽij Γ⁻¹ U1^* B + V̂ij Y) = tr(Ṽij Γ⁻¹ U1^* B) + tr(V̂ij Y),

i.e.,

    tr(V̂ij Y) = δij − tr(Ṽij Γ⁻¹ U1^* B),  i, j = 1, 2, ..., n,

where the matrix Y = (yst) ∈ C^{(mn²−r)×m} is arbitrary. Set

    dij = δij − tr(Ṽij Γ⁻¹ U1^* B),  i, j = 1, 2, ..., n;

then

    tr(V̂ij Y) = tr(V̂ij ∑_{s=1}^{mn²−r} ∑_{t=1}^{m} yst Est) = ∑_{s=1}^{mn²−r} ∑_{t=1}^{m} yst tr(V̂ij Est) = dij,  i, j = 1, 2, ..., n,

which is equivalent to the matrix equation

    My = d,   (3.11)

where

    M = ( tr(V̂11 E11) ··· tr(V̂11 E(mn²−r)1) ··· tr(V̂11 E1m) ··· tr(V̂11 E(mn²−r)m)
          tr(V̂21 E11) ··· tr(V̂21 E(mn²−r)1) ··· tr(V̂21 E1m) ··· tr(V̂21 E(mn²−r)m)
               ⋮                 ⋮                   ⋮                  ⋮
          tr(V̂nn E11) ··· tr(V̂nn E(mn²−r)1) ··· tr(V̂nn E1m) ··· tr(V̂nn E(mn²−r)m) ),   (3.12)

    d = (d11, d21, ..., dn1, ..., d1n, ..., dnn)^T,   (3.13)

and the unknown vector is

    y = (y11, y21, ..., y(mn²−r)1, ..., y1m, ..., y(mn²−r)m)^T.

Here Est is the matrix whose st-th entry is one and all other entries are zero. By Lemma 3.1 we obtain that if MM⁺d = d, then the matrix equation (3.11) has a solution, whose analytic expression is

    y = M⁺d + (I_{(mn²−r)m} − M⁺M) z,   (3.14)

where the vector z ∈ C^{(mn²−r)m} is arbitrary. Applying the inverse operator vec⁻¹(·) to (3.14) leads to

    Y = vec⁻¹(y) = vec⁻¹(M⁺d) + vec⁻¹((I_{(mn²−r)m} − M⁺M) z).   (3.15)

Therefore, substituting (3.15) into (3.4), we obtain that the analytic expression of X ∈ Ω1 is given by (3.10). □

Remark 3.1. Since

    tr(V̂ij Est) = (V̂ij)ts,  i, j = 1, 2, ..., n,  t = 1, 2, ..., m,  s = 1, 2, ..., mn² − r,

the matrix M defined by (3.12) can also be rewritten as

    M = ( (V̂11)11 ··· (V̂11)1(mn²−r) ··· (V̂11)m1 ··· (V̂11)m(mn²−r)
          (V̂21)11 ··· (V̂21)1(mn²−r) ··· (V̂21)m1 ··· (V̂21)m(mn²−r)
              ⋮              ⋮                ⋮              ⋮
          (V̂nn)11 ··· (V̂nn)1(mn²−r) ··· (V̂nn)m1 ··· (V̂nn)m(mn²−r) ).   (3.16)

The form (3.16) is used in the numerical experiments; its computational cost is lower than that of (3.12).

Although we can characterize the set Ω1 by Theorem 3.1, we cannot derive a general analytic expression for a point in the intersection of Ω1 and Ω2. The biggest obstacle is how to describe the Hermitian positive semidefinite matrices X = (Xij) whose submatrices Xij, i, j = 1, 2, ..., n, are of the form (3.10). So we design the relaxed alternating projection method to find X ∈ Ω1 ∩ Ω2 in the following sections.

4. Computation of the projections PΩ1(K) and PΩ2(K)

The key problems in applying the relaxed alternating projection method (2.3) to Problem I are how to compute the projections PΩ1(K) and PΩ2(K) of K onto the sets Ω1 and Ω2, respectively. These problems are solved in this section by using the nice structural properties and the KKT condition.

Lemma 4.1. For the matrices M̂ and M defined in Theorem 3.1 and by (3.12), respectively, we have

    M̂^* = M̂ = M̂²,  M⁺MM⁺ = M⁺.

Proof. It is easy to verify these results by using the definition of M⁺, so the proof is omitted here. □

Lemma 4.2. If the mn² × mn² unitary matrix V is partitioned as in (3.7) and (3.9), i.e.,

    V = ( Ṽ11  V̂11
           ⋮     ⋮
          Ṽn1  V̂n1
           ⋮     ⋮
          Ṽ1n  V̂1n
           ⋮     ⋮
          Ṽnn  V̂nn ),

where Ṽij ∈ C^{m×r} and V̂ij ∈ C^{m×(mn²−r)}, then

    ∑_{i,j=1}^{n} Ṽij^* Ṽij = Ir,    ∑_{i,j=1}^{n} V̂ij^* V̂ij = I_{mn²−r},   (4.1)

    ∑_{i,j=1}^{n} Ṽij^* V̂ij = 0,    ∑_{i,j=1}^{n} V̂ij^* Ṽij = 0.   (4.2)

Proof. Since V is an mn² × mn² unitary matrix,

    I_{mn²} = V^*V = ( ∑_{i,j=1}^{n} Ṽij^* Ṽij    ∑_{i,j=1}^{n} Ṽij^* V̂ij
                       ∑_{i,j=1}^{n} V̂ij^* Ṽij    ∑_{i,j=1}^{n} V̂ij^* V̂ij ),

which implies (4.1) and (4.2). □
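The identities (4.1)-(4.2) are easy to confirm numerically. The sketch below (illustrative sizes of our own choosing, with N playing the role of mn²) builds a random unitary matrix via a QR factorization and checks two of them:

```python
import numpy as np

# For a unitary V split into row blocks and each block split column-wise into
# a "tilde" part (first r columns) and a "hat" part (the rest), V* V = I gives
# sum_i Vt_i^* Vt_i = I_r and sum_i Vt_i^* Vh_i = 0.

rng = np.random.default_rng(2)
N, m, r = 12, 3, 5                      # N = size of V, m = block height, r = split point
V, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))

blocks = [V[i*m:(i+1)*m, :] for i in range(N // m)]   # row blocks V_i (m x N)
Vt = [Bk[:, :r] for Bk in blocks]                     # "tilde" parts (m x r)
Vh = [Bk[:, r:] for Bk in blocks]                     # "hat" parts (m x (N - r))

S1 = sum(A.conj().T @ A for A in Vt)
S2 = sum(A.conj().T @ B for A, B in zip(Vt, Vh))
print(np.allclose(S1, np.eye(r)), np.allclose(S2, 0))   # True True
```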

Theorem 4.1. For a given mn × mn matrix K = (Kij)_{i,j=1,2,...,n} with block matrices Kij ∈ C^{m×m}, the projection of K onto Ω1 is

    PΩ1(K) = (Xij)_{i,j=1,2,...,n},  Xij = X̃ij + V̂ij vec⁻¹(M̂ vec(∑_{i,j=1}^{n} V̂ij^* Kij)),   (4.3)

where M̂, X̃ij and V̂ij are defined in Theorem 3.1.

Proof. From Definition 2.1 it follows that the projection PΩ1(K) is the solution of the following optimization problem:

    min_{X=(Xij)∈Ω1} ∥K − X∥² = ∑_{i,j=1}^{n} ∥Kij − Xij∥² = ∑_{i,j=1}^{n} ∥vec(Kij) − vec(Xij)∥2².   (4.4)

Since the objective function of (4.4) is convex and the feasible set Ω1 is convex, the KKT point is the solution of (4.4). Now we look for the KKT point of (4.4). By (3.10) the objective function of (4.4) can be written as

    F(z) = ∑_{i,j=1}^{n} ∥vec(Kij) − vec(Xij)∥2²
         = ∑_{i,j=1}^{n} ∥vec(Kij) − vec(X̃ij + V̂ij vec⁻¹(M̂z))∥2²
         = ∑_{i,j=1}^{n} ∥vec(Kij) − vec(Ṽij Γ⁻¹U1^*B) − vec(V̂ij vec⁻¹(M⁺d)) − vec(V̂ij vec⁻¹(M̂z))∥2²
         = ∑_{i,j=1}^{n} ∥vec(Kij) − vec(Ṽij Γ⁻¹U1^*B) − (Im ⊗ V̂ij)M⁺d − (Im ⊗ V̂ij)M̂z∥2².

Set

    Gij = (Im ⊗ V̂ij)M̂,   (4.5)

    rij = vec(Kij) − vec(Ṽij Γ⁻¹U1^*B) − (Im ⊗ V̂ij)M⁺d,  i, j = 1, 2, ..., n;   (4.6)

then

    F(z) = ∑_{i,j=1}^{n} ∥rij − Gij z∥2².

By the optimality condition

    ∇F(z) = ∑_{i,j=1}^{n} (2Gij^* Gij z − 2Gij^* rij) = 0,

we obtain that the KKT point of (4.4) satisfies

    ∑_{i,j=1}^{n} Gij^* Gij z = ∑_{i,j=1}^{n} Gij^* rij.   (4.7)

Now we simplify ∑_{i,j=1}^{n} Gij^* Gij and ∑_{i,j=1}^{n} Gij^* rij. By (4.5), (4.1) and Lemma 4.1 we have

    ∑_{i,j=1}^{n} Gij^* Gij = ∑_{i,j=1}^{n} M̂^*(Im ⊗ V̂ij^*)(Im ⊗ V̂ij)M̂
                            = M̂^* [∑_{i,j=1}^{n} (Im ⊗ V̂ij^* V̂ij)] M̂
                            = M̂^* [Im ⊗ ∑_{i,j=1}^{n} V̂ij^* V̂ij] M̂
                            = M̂^* M̂ = M̂.   (4.8)

By (4.5), (4.6), (4.1), (4.2) and Lemma 4.1 we have

    ∑_{i,j=1}^{n} Gij^* rij = ∑_{i,j=1}^{n} M̂^*(Im ⊗ V̂ij^*)[vec(Kij) − vec(Ṽij Γ⁻¹U1^*B) − (Im ⊗ V̂ij)M⁺d]
      = M̂^* vec(∑_{i,j=1}^{n} V̂ij^* Kij) − M̂^* vec((∑_{i,j=1}^{n} V̂ij^* Ṽij) Γ⁻¹U1^*B) − M̂^*(Im ⊗ ∑_{i,j=1}^{n} V̂ij^* V̂ij)M⁺d
      = M̂^* vec(∑_{i,j=1}^{n} V̂ij^* Kij) − M̂^* M⁺ d
      = M̂^* vec(∑_{i,j=1}^{n} V̂ij^* Kij) − (I_{(mn²−r)m} − M⁺M)M⁺d
      = M̂^* vec(∑_{i,j=1}^{n} V̂ij^* Kij) − (M⁺ − M⁺MM⁺)d
      = M̂^* vec(∑_{i,j=1}^{n} V̂ij^* Kij)
      = M̂ vec(∑_{i,j=1}^{n} V̂ij^* Kij),   (4.9)

where the second equality uses (Im ⊗ V̂ij^*)vec(W) = vec(V̂ij^* W), the third uses (4.1) and (4.2), and the last two use Lemma 4.1. Combining (4.7)-(4.9) we obtain that the KKT point of (4.4) satisfies

    M̂z = M̂ vec(∑_{i,j=1}^{n} V̂ij^* Kij).   (4.10)

Substituting (4.10) into (3.10), we obtain that the KKT point of (4.4) is

    X = (Xij),  Xij = X̃ij + V̂ij vec⁻¹(M̂ vec(∑_{i,j=1}^{n} V̂ij^* Kij)),  i, j = 1, 2, ..., n,

which is also the projection of K onto Ω1, i.e.

    PΩ1(K) = (Xij)_{i,j=1,2,...,n},  Xij = X̃ij + V̂ij vec⁻¹(M̂ vec(∑_{i,j=1}^{n} V̂ij^* Kij)). □

Now we compute the projection PΩ2(K) of K onto Ω2. For any K ∈ C^{mn×mn}, since N = (K^* + K)/2 is Hermitian, there exists a spectral decomposition

    N = U diag(λ1(N), λ2(N), ..., λmn(N)) U^*,  U U^* = I_{mn}.

Then by Theorem 2.1 of Higham [18], we have

Theorem 4.2. For a given mn × mn matrix K, we have

    PΩ2(K) = U diag(max{λ1(N), 0}, max{λ2(N), 0}, ..., max{λmn(N), 0}) U^*.   (4.11)
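A compact sketch of the projection (4.11) (our own illustrative code, not the paper's MATLAB implementation): symmetrize, diagonalize, and clip the negative eigenvalues.

```python
import numpy as np

def proj_psd(K):
    """Nearest Hermitian positive semidefinite matrix to K in the Frobenius norm."""
    N = (K + K.conj().T) / 2             # Hermitian part N = (K* + K)/2
    lam, U = np.linalg.eigh(N)           # spectral decomposition N = U diag(lam) U*
    return (U * np.maximum(lam, 0.0)) @ U.conj().T   # clip negative eigenvalues

K = np.array([[1.0, 2.0],
              [0.0, -3.0]])              # a made-up non-Hermitian test matrix
P = proj_psd(K)
```

The result P is Hermitian with all eigenvalues nonnegative, i.e. P ∈ Ω2.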

5. The relaxed alternating projection method for solving Problem I

In this section, we use the relaxed alternating projection method (2.3) to solve Problem I. We first define the alternating projection (AP) operator R : Ω1 → Ω1 by R = PΩ1 PΩ2, and the relaxed alternating projection (RAP) operator Rσ,λ : Ω1 → Ω1 by

    Rσ,λ(X) = PΩ1(X + λσ(X)(R(X) − X)),

where λ ∈ [0, 2] and

    σ(X) = 1 + ∥R(X) − PΩ2(X)∥² / ∥R(X) − X∥².   (5.1)

Noting that Ω1 is a closed affine subspace, the RAP operator can also be rewritten as

    Rσ,λ(X) = PΩ1(X + λσ(X)(R(X) − X)) = X + λσ(X)(R(X) − X),

because X + λσ(X)(R(X) − X) ∈ Ω1 for arbitrary X ∈ Ω1. According to (2.3) we design the following relaxed alternating projection method to solve Problem I:

    ∀X0 ∈ Ω1,  Xk+1 = Rσ,λ(Xk) = Xk + λσ(Xk)(R(Xk) − Xk),   (5.2)

where λ and σ(Xk) are defined by (5.1) and R(Xk) = PΩ1 PΩ2(Xk). This idea can also be seen in Cegielski and Suchocka [5]. In order to realize the iterative method (5.2), we establish the following numerical algorithm by making use of the projections PΩ1(K) and PΩ2(K) in Theorems 4.1 and 4.2.

Algorithm 5.1 (realization of the iterative method (5.2))

Step 0. Input the density matrices A1, A2, ..., Ak ∈ C^{n×n}, B1, B2, ..., Bk ∈ C^{m×m}, the initial matrix X0 ∈ Ω1 and the iterative parameter λ ∈ (0, 2).
Step 1. Compute the matrices A and B in (3.2).
Step 2. Compute the singular value decomposition of the matrix A and derive the matrices Ṽij, V̂ij, Γ, U1 in (3.9) and (3.5).
Step 3. Compute the matrix M and the vector d in (3.11).
Step 4. Compute the matrices M̂ and X̃ij in Theorem 3.1.
Step 5. Iterate:
  Step 5.1. Compute the projection PΩ2(Xk) by (4.11).
  Step 5.2. Compute the projection R(Xk) = PΩ1[PΩ2(Xk)] by (4.3).
  Step 5.3. Compute the step size σ(Xk) according to (5.1).
  Step 5.4. Compute Xk+1 according to (5.2).
Step 6. Set k ← k + 1, and go to Step 5.
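To make the structure of Algorithm 5.1 concrete, the sketch below runs the RAP iteration (5.2) on the toy pair of sets from Section 2 (unit disk and a line, standing in for Ω2 and the affine set Ω1; an illustrative instance of our own choosing, not Problem I itself):

```python
import numpy as np

def p2(x):
    """Stand-in for P_{Omega2}: projection onto the unit disk."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1 else x / nrm

def p1(x):
    """Stand-in for P_{Omega1}: projection onto the affine set x0 + x1 = 1."""
    a = np.array([1.0, 1.0])
    return x - (a @ x - 1.0) / (a @ a) * a

lam = 1.5                                  # relaxation parameter in (0, 2)
x = p1(np.array([5.0, -3.0]))              # X0 in the affine set (Step 0)
for _ in range(1000):
    r = p1(p2(x))                          # R(X_k) = P1(P2(X_k)), Steps 5.1-5.2
    gap = np.linalg.norm(r - x)
    if gap < 1e-13:                        # x is (numerically) a fixed point of R
        break
    sigma = 1.0 + np.linalg.norm(r - p2(x))**2 / gap**2   # step size (5.1), Step 5.3
    x = x + lam * sigma * (r - x)          # update (5.2), Step 5.4
```

Because the update moves along r − x with both r and x in the affine set, every iterate stays feasible for the affine constraint, mirroring the fact that Rσ,λ maps Ω1 into Ω1.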

The numerical analysis of the relaxed alternating projection Algorithm 5.1 is as follows.

1. Computational complexity

From Algorithm 5.1 we see that Steps 1-4 are computed only once and Step 5 is the loop that produces the solution of Problem I. The computational cost of each step is evaluated in Tables 1 and 2.

    Step 1: km³n²    Step 2: k²m³n²    Step 3: mn(mn² − r)    Step 4: 2m²n(mn² − r) + mr² + m²rk + m³k

Table 1: Computational cost of Steps 1-4.

    Step 5.1: m³n³    Step 5.2: m³n³ − 3mn²r + mn⁴ + r²    Step 5.3: 2m²n² + 2mn    Step 5.4: m²n²

Table 2: Computational cost of Step 5.

Therefore, the total computational cost of Algorithm 5.1 is

    (km³n² + k²m³n² + m²n³ − mnr + 2m³n³ − 2m²nr + mr² + m²rk + m³k) + (2m³n³ − 3mn²r + mn⁴ + r² + 3m²n² + 2mn)·IT,

where IT is the total number of iteration steps. In a word, the computational cost of Algorithm 5.1 is O(m³n³·IT).

2. Stopping criteria

We first define the residual error by

    Err(Xk) = ∥ A (X11^(k) ; X21^(k) ; ... ; Xnn^(k)) − B ∥ + ∑_{i,j=1}^{n} |tr(Xij^(k)) − δij|,

where the blocks are stacked as in (3.1) and Xij^(k) is the ij-th block matrix of the k-th iterate of (5.2), i.e., Xk = (Xij^(k))_{i,j=1,2,...,n}. Since the solutions of the matrix equation (3.2) satisfy the constraint condition (1.1), and noting (1.2) and (1.3), it is easy to obtain that X* is a solution of Problem I if and only if

    A (X11* ; X21* ; ... ; Xnn*) − B = 0,  tr(Xij*) − δij = 0,  i, j = 1, 2, ..., n,

and λi(X*) ≥ 0, i = 1, 2, ..., mn, where Xij* is the ij-th block matrix of the solution X* of Problem I. So we stop Algorithm 5.1 when either the residual error satisfies Err(Xk) ≤ 1.0 × 10⁻¹⁰ and λi(Xk) ≥ 0, i = 1, 2, ..., mn, or the iteration count k reaches the upper limit 800.

3. Evaluating the optimal iterative parameter λ

We use a trial method to choose the optimal iterative parameter λ. We first test the problem with different parameters, such as λ = 0.2, 0.5, 0.9, 1.4, 1.6, 1.9. Under the stopping criterion, we record the number of iterations and the computing time for each, and then select the parameter leading to the fewest iteration steps and the least computing time as the optimal iterative parameter.

4. Initial value

In order to guarantee that the initial iterative matrix satisfies X0 ∈ Ω1, we select X0 = PΩ1(rand(mn)), the projection onto the feasible set Ω1 of the random matrix generated by the Matlab function rand(·).

5. Convergence analysis

In order to guarantee the completeness of the numerical analysis, we use a method similar to that of Cegielski and Suchocka [5] to give the convergence theory of Algorithm 5.1. We first give some nice properties of the AP operator R and the RAP operator Rσ,λ, and then give the convergence theorem for Algorithm 5.1. By Lemma 3.1 in [2], the AP operator R is firmly nonexpansive, that is, for arbitrary X, Y ∈ Ω1 the inequality

    ⟨R(X) − R(Y), X − Y⟩ ≥ ∥R(X) − R(Y)∥²

holds. We first give two lemmas.

Lemma 5.1. For the AP operator R, we have

    ⟨R(X) − R(Y), X − Y⟩ ≥ ∥R(X) − R(Y)∥² + (∥R(X) − PΩ2(X)∥ − ∥R(Y) − PΩ2(Y)∥)²,  ∀ X, Y ∈ Ω1.   (5.3)

Proof. Since the projection operator PΩ1(·) is firmly nonexpansive (see [2, Fact 1.5]), for arbitrary K1 and K2 we have

    ⟨R(K1) − R(K2), PΩ2(K1) − PΩ2(K2)⟩ ≥ ∥R(K1) − R(K2)∥².   (5.4)

Noting that the set Ω2 is closed and convex and combining the Kolmogorov criterion, the inequality

    ⟨PΩ2(K1) − PΩ2(K2), K2 − PΩ2(K2)⟩ ≤ 0   (5.5)

holds. Further, for any K1, K2 ∈ Ω1, the affinity of Ω1 gives

    ⟨R(K1) − PΩ2(K1), K1 − PΩ2(K1)⟩ = ∥R(K1) − PΩ2(K1)∥².   (5.6)

Combining (5.4)-(5.6) and the Cauchy-Schwarz inequality, for arbitrary X, Y ∈ Ω1 we have

    ⟨R(X) − R(Y), X − Y⟩
    = ⟨R(X) − R(Y), PΩ2(X) − PΩ2(Y)⟩ + ⟨R(X) − R(Y), [X − PΩ2(X)] − [Y − PΩ2(Y)]⟩
    ≥ ∥R(X) − R(Y)∥² + ⟨[R(X) − PΩ2(X)] + [PΩ2(X) − PΩ2(Y)] + [PΩ2(Y) − R(Y)], [X − PΩ2(X)] − [Y − PΩ2(Y)]⟩
    ≥ ∥R(X) − R(Y)∥² + ∥R(X) − PΩ2(X)∥² + ∥R(Y) − PΩ2(Y)∥² − 2⟨R(X) − PΩ2(X), R(Y) − PΩ2(Y)⟩
    ≥ ∥R(X) − R(Y)∥² + ∥R(X) − PΩ2(X)∥² + ∥R(Y) − PΩ2(Y)∥² − 2∥R(X) − PΩ2(X)∥ ∥R(Y) − PΩ2(Y)∥
    = ∥R(X) − R(Y)∥² + (∥R(X) − PΩ2(X)∥ − ∥R(Y) − PΩ2(Y)∥)².

Here the third step uses (5.5), (5.6) and the affinity of Ω1, which gives ⟨R(K1) − PΩ2(K1), K2 − R(K2)⟩ = 0 for K1, K2 ∈ Ω1, and the fourth step uses the Cauchy-Schwarz inequality. □

Lemma 5.2. For the RAP operator Rσ,λ, we have

    ∥Rσ,λ(X) − Z∥² ≤ ∥X − Z∥² − λ(2 − λ)σ(X)²∥R(X) − X∥²,  ∀ X ∈ Ω1, Z ∈ Fix R, λ ≥ 0.   (5.7)

Moreover, the RAP operator Rσ,λ is quasi-nonexpansive for λ ∈ [0, 2].

Proof. If X ∈ Fix R, then Rσ,λ(X) = R(X) = X and (5.7) holds trivially. Otherwise, for any Z ∈ Fix R we have R(Z) = Z and R(Z) = PΩ2(Z). Combining Lemma 5.1 we have

    ∥Rσ,λ(X) − Z∥² = ∥X + λσ(X)(R(X) − X) − Z∥²
    = ∥X − Z∥² + λ²σ(X)²∥R(X) − X∥² − 2λσ(X)⟨Z − X, R(X) − X⟩
    = ∥X − Z∥² + λ²σ(X)²∥R(X) − X∥² − 2λσ(X)(∥R(X) − X∥² + ⟨Z − R(X), R(X) − X⟩)
    = ∥X − Z∥² + λ²σ(X)²∥R(X) − X∥² − 2λσ(X)(∥R(X) − X∥² + ⟨R(Z) − R(X), Z − X⟩ − ∥R(Z) − R(X)∥²)
    ≤ ∥X − Z∥² + λ²σ(X)²∥R(X) − X∥² − 2λσ(X)(∥R(X) − X∥² + (∥R(X) − PΩ2(X)∥ − ∥R(Z) − PΩ2(Z)∥)²)
    = ∥X − Z∥² + λ²σ(X)²∥R(X) − X∥² − 2λσ(X)(1 + ∥R(X) − PΩ2(X)∥²/∥R(X) − X∥²)∥R(X) − X∥²
    = ∥X − Z∥² + λ²σ(X)²∥R(X) − X∥² − 2λσ(X)²∥R(X) − X∥²
    = ∥X − Z∥² − λ(2 − λ)σ(X)²∥R(X) − X∥².

In particular, when λ ∈ [0, 2] we have λ(2 − λ) ≥ 0, and Fix Rσ,λ = Fix R, so by Definition 2.3 the RAP operator Rσ,λ is quasi-nonexpansive. □

Theorem 5.1. The sequence {Xk} generated by Algorithm 5.1 converges weakly to an element X* ∈ Fix R = Ω1 ∩ Ω2.

Proof. Setting X = Xk in (5.7), we obtain that for any Z ∈ Fix R,

∥Xk+1 − Z∥2 ≤ ∥Xk − Z∥2 − λ(2 − λ)σ(X)2 ∥R(Xk ) − Xk ∥2 .

(5.8)

Hence the sequence {∥Xk − Z∥} is monotone decreasing and convergent. Noting that FixRσ,λ = FixR, then the sequence {Xk } is Fejer monotone with regard to FixR. From λ ∈ [ε, 2 − ε](ε > 0) it follows that λ(2 − λ) ≥ ε2 > 0. Then combining (5.8) and σ(Xk ) ≥ 1 we have ∥R(Xk ) − Xk ∥ → 0.

(5.9)

16 Let {Xnk } be any weakly convergent subsequence of {Xk }, and let X∗ be the weak limit of {Xnk }. Note that such a subsequence exists since {Xnk } is bounded. Since the AP operator R is nonexpansive and (5.9), then by Lemma 2.1 (a) we obtain that X∗ ∈ FixR. By the arbitrariness of the subsequence {Xnk }, all weak cluster point of {Xnk } lie in FixR. Furthermore, FixR is closed and convex. Since the sequence {Xk } is Fejer monotone with respect to FixR, then by Lemma 2.1 (b) we know that it converges weakly to some point X∗ ∈ FixR. 
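The iteration analyzed above can be illustrated on a simple finite-dimensional instance. The following Python sketch (an illustration only, not the paper's MATLAB code) applies the relaxed alternating projection X_{k+1} = X_k + λσ(X_k)(R(X_k) − X_k), with R = PΩ1 ∘ PΩ2 and the step size σ(X) = 1 + ∥R(X) − PΩ2(X)∥²/∥R(X) − X∥² appearing in the proof of Lemma 5.2, to two hypothetical stand-in sets: Ω1 the nonnegative orthant and Ω2 a hyperplane; the vectors a, b and the starting point are made up for the example:

```python
import numpy as np

# Illustrative sketch of the relaxed alternating projection (RAP) iteration
#   X_{k+1} = X_k + lam * sigma(X_k) * (R(X_k) - X_k),  R = P_{Omega1} o P_{Omega2},
# on two simple convex sets in R^3 (hypothetical stand-ins for the paper's
# structured sets): Omega1 = {x : x >= 0}, Omega2 = {x : a.x = b}.

def proj_orthant(x):
    # projection onto Omega1 = nonnegative orthant
    return np.maximum(x, 0.0)

def proj_hyperplane(x, a, b):
    # projection onto Omega2 = hyperplane {x : a.x = b}
    return x - (a @ x - b) / (a @ a) * a

def rap(x, a, b, lam=1.6, tol=1e-12, max_iter=1000):
    for k in range(max_iter):
        p2 = proj_hyperplane(x, a, b)
        r = proj_orthant(p2)              # AP operator R(x) = P1(P2(x))
        d = r - x
        if d @ d < tol ** 2:              # R(x) ~= x: numerical fixed point
            return x, k
        # step size from the proof of Lemma 5.2; sigma(x) >= 1
        sigma = 1.0 + ((r - p2) @ (r - p2)) / (d @ d)
        x = x + lam * sigma * d
    return x, max_iter

a = np.array([1.0, 2.0, -1.0])
x_star, iters = rap(np.array([3.0, -2.0, 5.0]), a, b=1.0)
# x_star lies (numerically) in Omega1 ∩ Omega2: x_star >= 0 and a.x_star = 1
```

Per (5.8), with λ ∈ [ε, 2 − ε] the distance to every fixed point decreases monotonically and ∥R(X_k) − X_k∥ → 0, which is exactly the stopping test used above.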

6. Numerical experiments

In this section, we first present two simple numerical examples to illustrate that Algorithm 5.1 is feasible for solving Problem I, and then compare our algorithm with the alternating projection algorithm [8,13] and the simultaneous projection algorithm [2,15,26], which can also be used to solve Problem I. All experiments are performed in MATLAB R2010a on a PC with an Intel Core i7 processor at 3.40 GHz, with machine precision ε = 2.22 × 10⁻¹⁶.

6.1 The feasibility of Algorithm 5.1

Example 6.1. In quantum information science, a quantum channel is a communication channel which can transmit quantum information as well as classical information; the quantum information is the state of a qubit. In the mathematical setting, this problem is transformed into finding a TPCP map φ sending given quantum states to other quantum states (see Fig. 2).

Fig. 2: Quantum channel of Example 6.1.

Quantum states are mathematically represented as density matrices. For example, Alice sends a text document over the internet to Bob through a quantum channel (see [10,25]), the information being expressed by quantum states. This leads to finding a TPCP map φ such that φ(Ai) = Bi, i = 1, 2, · · · , k, which is equivalent to Problem I with the density matrices

A1 = [ 1/2  1/4 ]    A2 = [ 1/4  1/3 ]
     [ 1/4  1/2 ],        [ 1/3  3/4 ],

B1 = [ 2/5  1/8 ]    B2 = [ 29/50   21/100 ]
     [ 1/8  3/5 ],        [ 21/100  21/50  ].

We use Algorithm 5.1 to solve Problem I with the iterative parameter λ = 1.6. After 6 steps, we get the solution of Problem I as follows

           [ X11^(6)  X12^(6) ]   [  0.0463  −0.0444  −0.0378  −0.0037 ]
X ≈ X6 = [                    ] = [ −0.0444   0.9537  −0.0037   0.0378 ]    (6.1)
           [ X21^(6)  X22^(6) ]   [ −0.0378  −0.0037   0.7915   0.2981 ]
                                  [ −0.0037   0.0378   0.2981   0.2085 ]

and its residual error is Err(X6) ≈ 3.66 × 10⁻¹¹. It is easy to verify that X6 is the solution of Problem I, that is,

∑_{i,j=1}^{2} (A1)_ij X_ij^(6) − B1 = 1.0 × 10⁻¹⁰ × [ −0.1395  −0.0436 ]
                                                    [ −0.0436  −0.2092 ],

∑_{i,j=1}^{2} (A2)_ij X_ij^(6) − B2 = 1.0 × 10⁻¹⁰ × [ −0.2022  −0.0732 ]
                                                    [ −0.0732  −0.1464 ],

tr(X11^(6)) = 1, tr(X12^(6)) = 0, tr(X21^(6)) = 0, tr(X22^(6)) = 1,

λ1(X6) = 0.9612, λ2(X6) = 0.9145, λ3(X6) = 0.0855, λ4(X6) = 0.0388.

Hence the trace-preserving completely positive linear map sending the quantum states {A1, A2} to {B1, B2} is

φ(Al) = ∑_{i,j=1}^{2} (Al)_ij X_ij^(6),

where X_ij^(6) is defined by (6.1).

Example 6.2. We consider the problem of constructing quantum channels which transform a given set of quantum states {A1, A2} to {B1, B2}. This leads to finding a TPCP map φ such that φ(Ai) = Bi, i = 1, 2, which is equivalent to Problem I with the density matrices

A1 = [  1/3     −0.0764   0.0053 ]    A2 = [  1/3      0.2213  −0.2559 ]
     [ −0.0764   1/3     −0.0830 ],        [  0.2213   1/3     −0.1359 ],
     [  0.0053  −0.0830   1/3    ]         [ −0.2559  −0.1359   1/3    ]

B1 = [  1/3     −0.0412   0.0462 ]    B2 = [  1/3     −0.1023  −0.1211 ]
     [ −0.0412   1/3     −0.0575 ],        [ −0.1023   1/3     −0.1075 ].
     [  0.0462  −0.0575   1/3    ]         [ −0.1211  −0.1075   1/3    ]

We use Algorithm 5.1 to solve Problem I with the iterative parameter λ = 1.4. After 11 steps, we get the solution of Problem I as follows

            [ X11^(11)  X12^(11)  X13^(11) ]
X ≈ X11 = [ X21^(11)  X22^(11)  X23^(11) ],    (6.2)
            [ X31^(11)  X32^(11)  X33^(11) ]

where

X11^(11) = X22^(11) = [  1/3     −0.0005   0.0002 ]
                      [ −0.0005   1/3     −0.0006 ]
                      [  0.0002  −0.0006   1/3    ],

X12^(11) = X21^(11) = [  0       −0.0569  −0.1560 ]
                      [ −0.0569   0       −0.0465 ]
                      [ −0.1560  −0.0465   0      ],

X13^(11) = X31^(11) = [ 0       0.0500   0.1368 ]
                      [ 0.0500  0        0.0410 ]
                      [ 0.1368  0.0410   0      ],

X23^(11) = X32^(11) = [ 0       0.0102   0.0277 ]
                      [ 0.0102  0        0.0084 ]
                      [ 0.0277  0.0084   0      ],

X33^(11) = [  1/3     −0.1452   0.0760 ]
           [ −0.1452   1/3     −0.1897 ],
           [  0.0760  −0.1897   1/3    ]

and its residual error is Err(X11) ≈ 1.32 × 10⁻¹¹. Hence the trace-preserving completely positive linear map sending the quantum states {A1, A2} to {B1, B2} is

φ(Al) = ∑_{i,j=1}^{3} (Al)_ij X_ij^(11),

where X_ij^(11) is defined by (6.2).
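As a sanity check on Example 6.1, the reported solution (6.1) can be verified directly. The following Python sketch (NumPy in place of the authors' MATLAB environment; the helper names `block` and `phi` are ours, and the matrices are transcribed from the example) checks that φ(Al) = ∑_{i,j} (Al)_ij X_ij^(6) reproduces Bl up to the rounding of the printed entries, that X6 is positive semidefinite, and that the block traces are as stated:

```python
import numpy as np

# Data of Example 6.1, transcribed from the text.
A = [np.array([[1/2, 1/4], [1/4, 1/2]]),
     np.array([[1/4, 1/3], [1/3, 3/4]])]
B = [np.array([[2/5, 1/8], [1/8, 3/5]]),
     np.array([[29/50, 21/100], [21/100, 21/50]])]
X6 = np.array([[ 0.0463, -0.0444, -0.0378, -0.0037],
               [-0.0444,  0.9537, -0.0037,  0.0378],
               [-0.0378, -0.0037,  0.7915,  0.2981],
               [-0.0037,  0.0378,  0.2981,  0.2085]])

def block(X, i, j, m=2):
    # (i, j) block of the block matrix X = (X_ij), each block m x m
    return X[i*m:(i+1)*m, j*m:(j+1)*m]

def phi(Al, X, n=2):
    # phi(A_l) = sum_{i,j} (A_l)_ij X_ij  -- the map of Problem I
    return sum(Al[i, j] * block(X, i, j) for i in range(n) for j in range(n))

residuals = [np.abs(phi(Al, X6) - Bl).max() for Al, Bl in zip(A, B)]
psd_ok = np.linalg.eigvalsh(X6).min() > -1e-8          # X6 is PSD
traces = [np.trace(block(X6, i, j))                     # tr X11, X12, X21, X22
          for (i, j) in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

The residuals are at the 10⁻⁵ level here only because the printed entries of X6 are rounded to four decimals; the full-precision iterate achieves the reported Err(X6) ≈ 3.66 × 10⁻¹¹.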

Examples 6.1 and 6.2 show that Algorithm 5.1 is feasible for solving Problem I.

6.2. Comparison with the alternating projection algorithm and the simultaneous projection algorithm

Since the alternating projection algorithm (2.2) and the simultaneous projection algorithm (2.4) can also be used to find a point in the intersection of the closed convex sets Ω1 and Ω2, which is the solution of Problem I, we compare Algorithm 5.1 with them in this section. It is well known that these projection algorithms depend heavily on efficient computation of the projections PΩ1(K) and PΩ2(K) of K onto Ω1 and Ω2, respectively. Such problems are solved in Theorems 4.1 and 4.2.

Example 6.3. Set the mn × mn matrix

P = [ P11  Om   · · ·  Om  ]
    [ Om   P22  · · ·  Om  ]
    [  ⋮    ⋮    ⋱     ⋮   ]
    [ Om   Om   · · ·  Pnn ],

where Pii = (1/tr(Li Li*)) Li Li*, Li = rand(m), i = 1, 2, · · · , n. Here, rand(m) stands for an m × m random matrix generated by the MATLAB function rand(·). It is easy to verify that the matrix P satisfies (1.2) and (1.3). Consider Problem I with the n × n density matrices

Al = (1/tr(Rl Rl*)) Rl Rl*, Rl = rand(n), l = 1, 2, · · · , k,

and the m × m density matrices

Bl = ∑_{i,j=1}^{n} (Al)_ij Pij, l = 1, 2, · · · , k,

where Pij, i, j = 1, 2, · · · , n, are the blocks of P. The density matrices Al, Bl, l = 1, 2, · · · , k, are chosen in this way to guarantee that Problem I is solvable; in particular, P is a solution of Problem I.

Under the same initial value and stopping criterion, we use Algorithm 5.1, the alternating projection method (2.2) (denoted by "Algorithm AP") and the simultaneous projection algorithm (2.4) (denoted by "Algorithm SP") to solve Problem I for the above density matrices with different values of m, n, k. The experimental results are listed in Table 3, including the number of iterations (denoted by "IT"), the CPU time (denoted by "CPU") and the residual error (denoted by "Err").

                          Algorithm 5.1      Algorithm AP       Algorithm SP
m = 20, n = 16, k = 3
  IT                      42                 47                 94
  CPU                     12.4971            15.8442            38.0541
  Err                     5.8694 × 10⁻¹¹     1.4276 × 10⁻¹¹     2.0131 × 10⁻¹¹
m = 50, n = 47, k = 3
  IT                      53                 59                 137
  CPU                     16.0189            21.7305            54.9327
  Err                     2.4837 × 10⁻¹¹     5.0706 × 10⁻¹¹     8.6581 × 10⁻¹¹
m = 80, n = 80, k = 2
  IT                      69                 77                 206
  CPU                     24.9921            38.9005            122.7953
  Err                     1.1314 × 10⁻¹¹     9.7183 × 10⁻¹¹     4.9157 × 10⁻¹¹
m = 100, n = 100, k = 2
  IT                      83                 92                 346
  CPU                     33.7756            49.8921            267.1022
  Err                     4.8123 × 10⁻¹¹     2.3207 × 10⁻¹¹     4.0752 × 10⁻¹¹
m = 130, n = 120, k = 2
  IT                      101                121                −
  CPU                     45.6678            71.3387            −
  Err                     1.8718 × 10⁻¹¹     1.1329 × 10⁻¹¹     −

Table 3: Comparative results of Example 6.3 for different values of m, n, k

Example 6.4. Set |ψ⟩ = (1/√3)(|00⟩ + |11⟩ + |22⟩),

σ(γ) = (2/7)|ψ⟩⟨ψ| + (γ/21)(|01⟩⟨01| + |12⟩⟨12| + |20⟩⟨20|) + ((5 − γ)/21)(|10⟩⟨10| + |21⟩⟨21| + |02⟩⟨02|),

which is the well-known Horodecki state (see [16,17] for more details). Consider Problem I with the density matrices A1 = σ(γ1), A2 = σ(γ2), and

Bl = ∑_{i,j=1}^{3} (Al)_ij Pij, l = 1, 2.

Here Pij is generated in the same way as in Example 6.3, that is,

Pij = { (1/tr(Li Li*)) Li Li*, Li = rand(3),   1 ≤ i = j ≤ 3,
      { O3,                                    1 ≤ i ≠ j ≤ 3.

Under the same initial value and stopping criterion, we use Algorithm 5.1, the alternating projection method (2.2) and the simultaneous projection algorithm (2.4) to solve Problem I for the density matrices A1, A2, B1, B2 with different values of γ1, γ2. The experimental results are listed in Table 4.

                          Algorithm 5.1      Algorithm AP       Algorithm SP
γ1 = 1.9, γ2 = 1.1
  IT                      36                 42                 57
  CPU                     1.3750             1.9813             3.0226
  Err                     1.1921 × 10⁻¹¹     2.4195 × 10⁻¹¹     9.9483 × 10⁻¹¹
γ1 = 1.8, γ2 = 1.2
  IT                      41                 55                 63
  CPU                     1.5625             2.1586             3.3293
  Err                     1.9242 × 10⁻¹¹     6.5683 × 10⁻¹¹     5.7389 × 10⁻¹¹
γ1 = 1.6, γ2 = 1.0
  IT                      40                 52                 61
  CPU                     1.2313             2.0661             3.1797
  Err                     5.4471 × 10⁻¹¹     3.6046 × 10⁻¹¹     1.8677 × 10⁻¹¹
γ1 = 1.5, γ2 = 1.9
  IT                      45                 57                 69
  CPU                     1.4601             2.2153             3.8416
  Err                     4.8294 × 10⁻¹¹     2.1102 × 10⁻¹¹     4.8673 × 10⁻¹¹

Table 4: Comparative results of Example 6.4 for different values of γ1 and γ2

Several comments can be made on Tables 3 and 4.

(1) From Tables 3 and 4 we can see that Algorithms 5.1 and AP both work effectively on Problem I, while Algorithm 5.1 performs slightly better than Algorithm AP in terms of iteration steps and computing time. The reason is that the step size σ(Xk) accelerates the convergence of the relaxed alternating projection method (5.2) (i.e. Algorithm 5.1).

(2) In Table 3, the symbol '−' means that the iteration count reached the upper limit 800 without producing a solution. Compared with the other algorithms, Algorithm SP is less efficient, especially when the problem size is large, because its convergence rate is very slow.

(3) From Table 3, we also see that the CPU time of Algorithm 5.1 increases slowly as the problem size grows, so it is more suitable than Algorithms AP and SP for solving large-scale problems.
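For concreteness, the test data of Examples 6.3 and 6.4 can be generated as in the following Python sketch (NumPy stands in for the paper's MATLAB rand(·); since the Li are real here, Li* is a plain transpose, and the function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_density(m):
    # (1/tr(L L*)) L L*: positive semidefinite with unit trace, L = rand(m)
    L = rng.random((m, m))
    S = L @ L.T
    return S / np.trace(S)

def example_6_3_data(m, n, k):
    # Block matrix P with P_ii = rand_density(m) and zero off-diagonal blocks;
    # B_l = sum_ij (A_l)_ij P_ij, so P is by construction a solution of Problem I.
    P = [[rand_density(m) if i == j else np.zeros((m, m)) for j in range(n)]
         for i in range(n)]
    A = [rand_density(n) for _ in range(k)]
    B = [sum(Al[i, j] * P[i][j] for i in range(n) for j in range(n)) for Al in A]
    return P, A, B

def horodecki(gamma):
    # sigma(gamma) = (2/7)|psi><psi| + (gamma/21)(sum of |01>,|12>,|20> projectors)
    #              + ((5-gamma)/21)(sum of |10>,|21>,|02> projectors)
    psi = np.zeros(9)
    psi[[0, 4, 8]] = 1 / np.sqrt(3)          # |psi> = (|00>+|11>+|22>)/sqrt(3)
    def proj(i, j):                          # |ij><ij| on C^3 (x) C^3
        e = np.zeros(9)
        e[3 * i + j] = 1.0
        return np.outer(e, e)
    plus = proj(0, 1) + proj(1, 2) + proj(2, 0)
    minus = proj(1, 0) + proj(2, 1) + proj(0, 2)
    return (2/7) * np.outer(psi, psi) + (gamma/21) * plus + ((5 - gamma)/21) * minus
```

Every generated Al, Bl and σ(γ) with γ ∈ [0, 5] is a valid density matrix (positive semidefinite with unit trace), which the reader can check directly with eigvalsh and trace.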

Conclusion

In this paper, we considered a class of convex feasibility problems arising in the construction of a quantum channel. The main difficulty in solving this problem is how to characterize the feasible set, because it has highly structured constraints. Based on matrix equation theory, the feasible set is characterized and an analytic expression is derived. The relaxed alternating projection method is designed to solve the convex feasibility problem. Finally, some numerical examples illustrate that the new method is feasible and effective. When the problem size is very large, how to compute the minimal-rank solution of Problem I remains an open problem. Another open problem is how to design direct methods to characterize the intersection Ω1 ∩ Ω2, as in Section 3.

Acknowledgements

The authors wish to thank Professor Michael Ng and the anonymous referee for their very useful suggestions. We would also like to thank Prof. Chi-Kwong Li and Dr. Diane Christine Pelejo of the Department of Mathematics, College of William and Mary, for helpful discussions on constructing Problem I and on comparing our algorithm with the simultaneous projection algorithm and the alternating projection algorithm.

References

[1] A. Aleyner, S. Reich, Block-iterative algorithms for solving convex feasibility problems in Hilbert and in Banach spaces, J. Math. Anal. Appl., 343(2008) 427-435.
[2] H.H. Bauschke, J.M. Borwein, On projection algorithms for solving convex feasibility problems, SIAM Rev., 38(1996) 367-426.
[3] N. Buong, P.V. Son, An explicit iteration method for convex feasibility problems in Hilbert spaces, Appl. Math. Sci., 2(2008) 725-734.
[4] S.S. Chang, H.W. Joseph, L.C.K. Chan, A block hybrid method for solving generalized equilibrium problems and convex feasibility problem, Adv. Comput. Math., 38(2013) 563-580.
[5] A. Cegielski, A. Suchocka, Relaxed alternating projection method, SIAM J. Optim., 19(2008) 1093-1106.
[6] Y. Censor, A. Segal, On the string averaging method for sparse common fixed-point problems, Inter. Trans. Oper. Res., 16(2009) 481-494.
[7] A. Cegielski, R. Dylewski, Variable target value relaxed alternating projection method, Comput. Optim. Appl., 47(2010) 455-476.
[8] W. Cheney, A. Goldstein, Proximity maps for convex sets, Proc. Amer. Math. Soc., 10(1959) 448-450.
[9] M.D. Choi, Completely positive linear maps on complex matrices, Linear Algebra Appl., 10(1975) 285-290.
[10] G.M. D'Ariano, P.L. Presti, Tomography of quantum operations, Phys. Rev. Lett., 86(2001) 4195-4198.
[11] Y.Z. Dang, Y. Gao, Non-monotonous accelerated parallel subgradient projection algorithm for convex feasibility problem, Optim., 10(2012) 1-14.
[12] H. Dai, Matrix Theory, Science Press, Beijing, 2002.
[13] R. Escalante, M. Raydan, Alternating Projection Methods, Society for Industrial and Applied Mathematics, Philadelphia, 2011.
[14] L.G. Gubin, B.T. Polyak, E.V. Raik, The method of projections for finding the common point of convex sets, U.S.S.R. Comput. Math. Phys., 7(1967) 1-24.
[15] N.I.M. Gould, How good are projection methods for convex feasibility problems, Comput. Optim. Appl., 40(2008) 1-12.
[16] B.B. Hua, X.H. Gao, M.J. Zhao, S.M. Fei, A note on resistance of NPT to mixture of separable states, Inter. J. Theoret. Phys., 52(2013) 3007-3010.
[17] P. Horodecki, M. Horodecki, R. Horodecki, Bound entanglement can be activated, Phys. Rev. Lett., 82(1999) 1056-1058.
[18] N. Higham, Computing a nearest symmetric positive semidefinite matrix, Linear Algebra Appl., 103(1988) 103-118.
[19] A.N. Iusem, A.R.D. Pierro, Convergence results for an accelerated nonlinear Cimmino algorithm, Numer. Math., 49(1986) 367-378.
[20] K.C. Kiwiel, Generalized Bregman projections in convex feasibility problems, J. Optim. Theory Appl., 96(1998) 139-157.
[21] N. Lu, F. Ma, S.Y. Liu, A non-interior continuation algorithm for solving the convex feasibility problem, Appl. Math. Mod., 38(2014) 5421-5430.
[22] Y. Li, Iterative algorithm for a convex feasibility problem, An. St. Univ. Ovidius Constanta, 18(2010) 205-218.
[23] S. Maruster, C. Popirlan, On the Mann-type iteration and the convex feasibility problem, J. Comput. Appl. Math., 212(2008) 390-396.
[24] J.A. Miszczak, Singular value decomposition and matrix reorderings in quantum information theory, Int. J. Mod. Phys. C, 22(2011) 897-918.
[25] M.A. Nielsen, I.L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, Cambridge, 2000.
[26] A.R.D. Pierro, A.N. Iusem, A simultaneous projections method for linear inequalities, Linear Algebra Appl., 64(1985) 243-253.
[27] L.T. Santos, A parallel subgradient projection method for the convex feasibility problem, J. Comput. Appl. Math., 18(1987) 307-320.
[28] X.M. Wang, C. Li, J.C. Yao, Subgradient projection algorithms for convex feasibility on Riemannian manifolds with lower bounded curvatures, J. Optim. Theory Appl., 164(2015) 202-217.
[29] P.L. Xu, Numerical solution for bounding feasible point sets, J. Comput. Appl. Math., 156(2003) 201-219.
[30] S.S. Zhang, C.K. Chan, H.W.J. Lee, Modified block iterative method for solving convex feasibility problem, equilibrium problems and variational inequality problems, Acta Mathematica Sinica, 28(2012) 741-758.
[31] A.J. Zaslavski, Subgradient projection algorithms for convex feasibility problems in the presence of computational errors, J. Approx. Theory, 175(2013) 19-42.
[32] A.J. Zaslavski, Subgradient projection algorithms and approximate solutions of convex feasibility problems, J. Optim. Theory Appl., 157(2013) 803-819.
[33] Y.P. Zhang, Y.L. Li, A viscosity method for solving convex feasibility problems, J. Nonlinear Sci. Appl., 9(2016) 641-651.