Conjugate gradient-like methods for solving general tensor equation with Einstein product

Masoud Hajarian*

Abstract. Tensors have a wide range of applications in data mining, chemistry, information sciences, document analysis and medical engineering. In this work, we study the general tensor equation $\sum_{i=1}^{l} \mathcal{F}_i *_P \mathcal{X} *_Q \mathcal{G}_i = \mathcal{H}$ with Einstein product, where $\mathcal{F}_i$, $\mathcal{G}_i$, $\mathcal{H}$, for $i = 1, 2, \ldots, l$, are known tensors and $\mathcal{X}$ is an unknown tensor to be determined. The main motivation for this study is the investigation of conjugate gradient-like methods for solving this tensor equation. We show that the conjugate gradient-like methods converge to tensor solutions in a finite number of steps in the absence of round-off errors. Numerical examples confirm the theoretical results and demonstrate the accuracy and computational efficiency of the methods.

AMS classification 2010: 15A24; 65H10; 15A69; 65F10.

Keywords: Tensor equation; Conjugate gradient-like method; Biconjugate residual (BiCR) method; Einstein product.
1. Introduction and preliminary

We first cover the basic concepts of tensors and introduce the notation and definitions needed throughout the paper. Some important and basic facts regarding tensors can be found in [1–5]. We denote tensors by Euler script letters (e.g., $\mathcal{A}$). An order-$P$ tensor $\mathcal{A} = (a_{i_1 \ldots i_P})_{1 \le i_j \le N_j}$ $(j = 1, \ldots, P)$ is a multidimensional array with $N_1 \cdots N_P$ entries; such a tensor is also called a hypermatrix. Tensors are higher-order generalizations of vectors and matrices. They arise in a wide variety of application areas such as psychometrics [6], document analysis [7], chemometrics [8], medical engineering [9], higher-order statistics [10], formulating an $n$-person noncooperative game [11], and so on. Let $\mathbb{R}^{N_1 \times \cdots \times N_P}$ denote the space of order-$P$ tensors. For $\mathcal{A} \in \mathbb{R}^{M_1 \times \cdots \times M_P \times L_1 \times \cdots \times L_P}$ and $\mathcal{B} \in \mathbb{R}^{L_1 \times \cdots \times L_P \times N_1 \times \cdots \times N_Q}$, the Einstein product $\mathcal{A} *_P \mathcal{B} \in \mathbb{R}^{M_1 \times \cdots \times M_P \times N_1 \times \cdots \times N_Q}$ introduced in [1, 12] is defined by
\[
(\mathcal{A} *_P \mathcal{B})_{m_1 \ldots m_P n_1 \ldots n_Q} = \sum_{l_1, \ldots, l_P = 1}^{L_1, \ldots, L_P} a_{m_1 \ldots m_P l_1 \ldots l_P}\, b_{l_1 \ldots l_P n_1 \ldots n_Q}.
\]

* Faculty of Mathematical Sciences, Shahid Beheshti University, General Campus, Evin, Tehran 19839, Iran. E-mail: [email protected] (M. Hajarian).
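Since all later computations reduce to contractions of this form, it is convenient to have a concrete realization of the Einstein product. The following is a minimal MATLAB sketch for plain multidimensional arrays (the helper name einstein_product is ours, not from the paper); it exploits the fact that the product amounts to a single matrix multiplication after flattening, with MATLAB's column-major reshape playing the role of the map $\phi$ of Definition 1 below, up to a reordering of indices.

```matlab
% Einstein product A *_P B for plain MATLAB double arrays: the last P modes
% of A are contracted against the first P modes of B. A sketch under the
% assumption that size() reports all modes (no trailing singletons dropped).
function C = einstein_product(A, B, P)
    sa = size(A);  sb = size(B);
    M = prod(sa(1:end-P));       % product of the leading (free) modes of A
    L = prod(sa(end-P+1:end));   % product of the contracted modes
    N = prod(sb(P+1:end));       % product of the trailing (free) modes of B
    C = reshape(reshape(A, M, L) * reshape(B, L, N), [sa(1:end-P), sb(P+1:end)]);
end
```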
When $P = Q = 1$, the Einstein product reduces to the standard matrix multiplication. Let $\mathcal{A} \in \mathbb{R}^{M_1 \times \cdots \times M_P \times L_1 \times \cdots \times L_P}$ be a given tensor; its transpose is defined as
\[
(\mathcal{A}^T)_{m_1 \ldots m_P l_1 \ldots l_P} = (\mathcal{A})_{l_1 \ldots l_P m_1 \ldots m_P}.
\]
The trace of the tensor $\mathcal{A} \in \mathbb{R}^{M_1 \times \cdots \times M_P \times M_1 \times \cdots \times M_P}$ is defined as the summation of all the diagonal entries, i.e.,
\[
\mathrm{tr}(\mathcal{A}) = \sum_{m_1, \ldots, m_P} a_{m_1 \ldots m_P m_1 \ldots m_P}.
\]
The inner product of tensors $\mathcal{A}, \mathcal{B} \in \mathbb{R}^{M_1 \times \cdots \times M_P \times N_1 \times \cdots \times N_Q}$ is defined by $\langle \mathcal{A}, \mathcal{B} \rangle = \mathrm{tr}(\mathcal{B}^T *_P \mathcal{A})$. When $\langle \mathcal{A}, \mathcal{B} \rangle = 0$, we say that $\mathcal{A}$ and $\mathcal{B}$ are orthogonal. The Frobenius norm of $\mathcal{A} \in \mathbb{R}^{M_1 \times \cdots \times M_P \times L_1 \times \cdots \times L_P}$ is defined as
\[
\|\mathcal{A}\| = \sqrt{\sum_{m_1, \ldots, m_P, l_1, \ldots, l_P} (a_{m_1 \ldots m_P l_1 \ldots l_P})^2} = \sqrt{\langle \mathcal{A}, \mathcal{A} \rangle}.
\]
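In MATLAB, the inner product and the Frobenius norm above reduce to elementwise operations on the vectorized arrays; a small sketch with our own helper names:

```matlab
% <A,B> and ||A|| for same-sized tensors, via vectorization; these
% one-liners are our shorthand and reappear in the algorithm sketches below.
ip  = @(A, B) sum(A(:) .* B(:));   % inner product <A,B> = tr(B^T *_P A)
nrm = @(A) sqrt(sum(A(:).^2));     % Frobenius norm ||A|| = sqrt(<A,A>)
```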
We now present a few properties of the inner product and Einstein product of tensors [2, 13]. For any $\mathcal{A}, \mathcal{B}, \mathcal{X} \in \mathbb{R}^{M_1 \times \cdots \times M_P \times N_1 \times \cdots \times N_Q}$, $\mathcal{C} \in \mathbb{R}^{M_1 \times \cdots \times M_P \times L_1 \times \cdots \times L_P}$, $\mathcal{Y} \in \mathbb{R}^{L_1 \times \cdots \times L_P \times N_1 \times \cdots \times N_Q}$ and any scalar $\lambda \in \mathbb{R}$, we have
\begin{align}
&\langle \mathcal{A}, \mathcal{B} \rangle = \mathrm{tr}(\mathcal{B}^T *_P \mathcal{A}) = \mathrm{tr}(\mathcal{A} *_Q \mathcal{B}^T) = \mathrm{tr}(\mathcal{B} *_Q \mathcal{A}^T) = \mathrm{tr}(\mathcal{A}^T *_P \mathcal{B}) = \langle \mathcal{B}, \mathcal{A} \rangle, \tag{1.1}\\
&\langle \lambda \mathcal{A}, \mathcal{B} \rangle = \lambda \langle \mathcal{A}, \mathcal{B} \rangle, \tag{1.2}\\
&\langle \mathcal{X}, \mathcal{C} *_P \mathcal{Y} \rangle = \langle \mathcal{C}^T *_P \mathcal{X}, \mathcal{Y} \rangle, \tag{1.3}\\
&(\mathcal{C} *_P \mathcal{Y})^T = \mathcal{Y}^T *_P \mathcal{C}^T. \tag{1.4}
\end{align}
Definition 1 ([13]). Define the transformation $\phi_{ML} : \mathbb{R}^{M_1 \times \cdots \times M_P \times L_1 \times \cdots \times L_Q} \to \mathbb{R}^{M \times L}$ with $M = M_1 M_2 \cdots M_P$, $L = L_1 L_2 \cdots L_Q$ and $\phi_{ML}(\mathcal{A}) = A$ defined component-wise as
\[
(\mathcal{A})_{m_1 \ldots m_P l_1 \ldots l_Q} \to (A)_{st},
\]
where $\mathcal{A} \in \mathbb{R}^{M_1 \times \cdots \times M_P \times L_1 \times \cdots \times L_Q}$, $A \in \mathbb{R}^{M \times L}$ and
\[
s = m_P + \sum_{k=1}^{P-1} \Big( (m_k - 1) \prod_{r=k+1}^{P} M_r \Big), \qquad
t = l_Q + \sum_{k=1}^{Q-1} \Big( (l_k - 1) \prod_{r=k+1}^{Q} L_r \Big).
\]
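The index maps of Definition 1 follow a row-major convention (the last subscript varies fastest). A small sketch, with a hypothetical helper name, computing $s$ directly from the formula; $t$ is obtained the same way from $(l_1, \ldots, l_Q)$ and $(L_1, \ldots, L_Q)$. Note that MATLAB's built-in sub2ind uses the opposite, column-major convention.

```matlab
% Row-major linear index of Definition 1: s = m_P + sum_k (m_k-1) prod_r M_r.
% The function name is ours; m(k) is the k-th subscript and dims(k) = M_k.
function s = phi_index(m, dims)
    P = numel(dims);
    s = m(P);
    for k = 1:P-1
        s = s + (m(k) - 1) * prod(dims(k+1:P));
    end
end
```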
Lemma 1 ([13]). For $\mathcal{A} \in \mathbb{R}^{M_1 \times \cdots \times M_P \times L_1 \times \cdots \times L_P}$, $\mathcal{X} \in \mathbb{R}^{L_1 \times \cdots \times L_P \times N_1 \times \cdots \times N_Q}$ and $\mathcal{C} \in \mathbb{R}^{M_1 \times \cdots \times M_P \times N_1 \times \cdots \times N_Q}$ we have
\[
\mathcal{A} *_P \mathcal{X} = \mathcal{C} \iff \phi_{ML}(\mathcal{A})\, \phi_{LN}(\mathcal{X}) = \phi_{MN}(\mathcal{C}).
\]

This work focuses on the design of new algorithms based on the conjugate gradient-like methods for computing the solution $\mathcal{X} \in \mathbb{R}^{L_1 \times \cdots \times L_P \times T_1 \times \cdots \times T_Q}$ of the general tensor equation
\[
\sum_{i=1}^{l} \mathcal{F}_i *_P \mathcal{X} *_Q \mathcal{G}_i = \mathcal{H}, \tag{1.5}
\]
where $\mathcal{F}_i \in \mathbb{R}^{M_1 \times \cdots \times M_P \times L_1 \times \cdots \times L_P}$, $\mathcal{G}_i \in \mathbb{R}^{T_1 \times \cdots \times T_Q \times N_1 \times \cdots \times N_Q}$ and $\mathcal{H} \in \mathbb{R}^{M_1 \times \cdots \times M_P \times N_1 \times \cdots \times N_Q}$ for $i = 1, 2, \ldots, l$. It
is worth noting that many existing tensor (matrix) equations, such as the Stein, Lyapunov and Sylvester tensor (matrix) equations, can be considered as special cases of the general tensor equation (1.5). Tensor equations have many applications in applied mathematics, physics and engineering, such as the discretization of partial differential equations in high dimension, continuum physics, and isotropic and anisotropic elastic models [14–17]. As described in [18], a tensor equation describes the first-order optimality condition of the high-order Taylor polynomial approximation of a multivariable function. The Sylvester tensor equation has applications in solving a three-dimensional radiative transfer equation by a mixed-collocation finite difference discretization [19]. Another application of tensor equations is in solving 3D matrix equations [20]. The literature on tensor and matrix equations is large and still growing rapidly due to their applications [1, 4, 21–34]. He et al. proposed a Newton-type method to solve multilinear systems with M-tensors [35]. In [16], a projection method for the numerical solution of linear systems of equations in low-rank tensor format was introduced. Han [36] proposed a homotopy method for computing the unique positive solution of a multilinear system with a nonsingular M-tensor. In [37], some tensor splitting algorithms were presented to solve multilinear systems whose coefficient tensor is a strong M-tensor. Cui et al. established a preconditioned iterative method based on tensor splitting for solving multilinear systems [38]. In [39], an alternating iterative method was established for solving tensor equations. Recently, the Stein, Lyapunov and Sylvester tensor equations have attracted much attention [15, 40]. In [41], the biconjugate gradients (BiCG) and biconjugate residual (BiCR) methods were extended to solve the Stein tensor equation. Liang and Zheng investigated the sensitivity of the continuous Lyapunov tensor equation [42]. Shi et al. studied the backward error and perturbation bounds for the high order Sylvester tensor equation [43]. By extending the conjugate gradients (CG) method, Wang and Xu proposed a class of iterative algorithms to solve some tensor equations [13].

Outline and contributions of this work: We begin in Section 2 by recalling the conjugate gradient-like methods for solving the linear system $Ax = b$. We then formulate the conjugate gradient-like methods for the general tensor equation (1.5) and state our main results. In Section 3, we present our findings with computational results. Finally, we give our conclusions in Section 4.
2. Conjugate gradient-like methods for (1.5)

It is well known that conjugate gradient-like methods are commonly used for solving sparse nonsymmetric linear systems $Ax = b$ and linear matrix equations [44–51]. The biconjugate residual (BiCR) method, as one of the conjugate gradient-like methods, is a powerful technique for solving linear systems with nonsymmetric coefficient matrices [44, 52–54]. Two variants of the BiCR method proposed in [52, 53], named BiCR1 and BiCR2 here, are presented below.

Algorithm BiCR1 (variant 1 of the BiCR method)
Initial values: $x(1)$ and $p(1)$ arbitrary, $r(1) = Ax(1) - b$, $s(1) = r(1)$ and $t(1) = p(1)$.
Recursions:
\begin{align*}
& w(k) = A t(k), \quad z(k) = A^T s(k), \\
& \alpha(k) = \frac{\|r(k)\|^2}{w(k)^T s(k)}, \quad x(k+1) = x(k) - \alpha(k) t(k), \quad r(k+1) = r(k) - \alpha(k) w(k), \\
& \beta(k) = \frac{\|p(k)\|^2}{w(k)^T s(k)}, \quad p(k+1) = p(k) - \beta(k) z(k), \\
& \gamma(k+1) = \frac{\|p(k+1)\|^2}{\|p(k)\|^2}, \quad t(k+1) = p(k+1) + \gamma(k+1) t(k), \\
& \delta(k+1) = \frac{\|r(k+1)\|^2}{\|r(k)\|^2}, \quad s(k+1) = r(k+1) + \delta(k+1) s(k).
\end{align*}
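For reference, a minimal MATLAB sketch of these recursions for a square system $Ax = b$; the stopping rule, the iteration cap, and the choice $p(1) = r(1)$ are our additions.

```matlab
% BiCR1 for a square system A x = b, following the recursions above.
function x = bicr1(A, b, x, tol, maxit)
    r = A*x - b;                   % r(1)
    p = r;                         % p(1) arbitrary; p(1) = r(1) is one choice
    s = r;  t = p;                 % s(1) = r(1), t(1) = p(1)
    for k = 1:maxit
        if norm(r) <= tol, break; end
        w = A*t;  z = A'*s;
        ws = w'*s;                 % shared denominator w(k)^T s(k)
        alpha = (r'*r) / ws;
        beta  = (p'*p) / ws;
        x  = x - alpha*t;
        rn = r - alpha*w;          % r(k+1)
        pn = p - beta*z;           % p(k+1)
        t = pn + ((pn'*pn)/(p'*p))*t;   % gamma(k+1) = ||p(k+1)||^2/||p(k)||^2
        s = rn + ((rn'*rn)/(r'*r))*s;   % delta(k+1) = ||r(k+1)||^2/||r(k)||^2
        r = rn;  p = pn;
    end
end
```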
Algorithm BiCR2 (variant 2 of the BiCR method)
Initial values: $x(1)$ and $s(1)$ arbitrary, $r(1) = Ax(1) - b$, $u(1) = s(1)$, $v(1) = r(1)$, $w(1) = A u(1)$, and $z(1) = A^T v(1)$.
Recursions:
\begin{align*}
& \alpha(k) = \frac{w(k)^T r(k)}{\|w(k)\|^2}, \quad x(k+1) = x(k) - \alpha(k) u(k), \quad r(k+1) = r(k) - \alpha(k) w(k), \\
& \beta(k) = \frac{z(k)^T s(k)}{\|z(k)\|^2}, \quad s(k+1) = s(k) - \beta(k) z(k), \\
& \gamma(k) = \frac{w(k)^T A s(k+1)}{\|w(k)\|^2}, \quad u(k+1) = s(k+1) - \gamma(k) u(k), \\
& \eta(k) = \frac{z(k)^T A^T r(k+1)}{\|z(k)\|^2}, \quad v(k+1) = r(k+1) - \eta(k) v(k), \\
& w(k+1) = A u(k+1), \quad z(k+1) = A^T v(k+1).
\end{align*}
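A companion sketch of BiCR2 under the same assumptions (square $A$; $s(1) = r(1)$ as one arbitrary choice; stopping rule ours):

```matlab
% BiCR2 for a square system A x = b, following the recursions above.
function x = bicr2(A, b, x, tol, maxit)
    r = A*x - b;  s = r;           % s(1) arbitrary; s(1) = r(1) here
    u = s;  v = r;
    w = A*u;  z = A'*v;
    for k = 1:maxit
        if norm(r) <= tol, break; end
        alpha = (w'*r) / (w'*w);
        x = x - alpha*u;
        r = r - alpha*w;           % r(k+1)
        beta = (z'*s) / (z'*z);
        s = s - beta*z;            % s(k+1)
        gamma = (w'*(A*s)) / (w'*w);    % uses s(k+1)
        u = s - gamma*u;
        eta = (z'*(A'*r)) / (z'*z);     % uses r(k+1)
        v = r - eta*v;
        w = A*u;  z = A'*v;
    end
end
```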
By Lemma 1, we can now formulate the general tensor equation (1.5) as a linear system of equations.

Proposition 1. The general tensor equation (1.5) can be written in both of the following forms:
\[
\sum_{i=1}^{l} \phi_{ML}(\mathcal{F}_i)\, \phi_{LT}(\mathcal{X})\, \phi_{TN}(\mathcal{G}_i) = \phi_{MN}(\mathcal{H}), \tag{2.1}
\]
and
\[
\underbrace{\Big\{ \sum_{i=1}^{l} [\phi_{TN}(\mathcal{G}_i)]^T \otimes [\phi_{ML}(\mathcal{F}_i)] \Big\}}_{A}\, \underbrace{\mathrm{vec}(\phi_{LT}(\mathcal{X}))}_{x} = \underbrace{\mathrm{vec}(\phi_{MN}(\mathcal{H}))}_{b}. \tag{2.2}
\]
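For small dimensions, (2.2) can be assembled and solved directly, which is useful as a baseline when testing the iterative methods. A sketch (function name ours); we use MATLAB's column-major reshape as an internally consistent stand-in for the maps $\phi$ of Definition 1:

```matlab
% Direct solve of (1.5) via the vectorized form (2.2); feasible only when
% M*N and L*T are small. Fs and Gs are cell arrays of the coefficients.
function X = solve_tensor_eq_direct(Fs, Gs, H, P, Q)
    sf = size(Fs{1});  sg = size(Gs{1});
    M = prod(sf(1:P));      L = prod(sf(P+1:2*P));
    T = prod(sg(1:Q));      N = prod(sg(Q+1:2*Q));
    A = zeros(M*N, L*T);
    for i = 1:numel(Fs)
        Fi = reshape(Fs{i}, M, L);           % stand-in for phi_{ML}(F_i)
        Gi = reshape(Gs{i}, T, N);           % stand-in for phi_{TN}(G_i)
        A = A + kron(Gi.', Fi);              % (2.2): sum_i G_i^T (x) F_i
    end
    x = A \ reshape(H, M*N, 1);              % backslash: direct/least-squares
    X = reshape(x, [sf(P+1:2*P), sg(1:Q)]);  % fold back to L_1 x ... x T_Q
end
```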
In what follows, in view of the above algorithms and (1.5)-(2.2), tensor forms of the BiCR1 and BiCR2 methods are proposed for solving the general tensor equation (1.5).

Algorithm BiCR1 for (1.5)
Choose the initial tensor $\mathcal{X}(1) \in \mathbb{R}^{L_1 \times \cdots \times L_P \times T_1 \times \cdots \times T_Q}$ and an arbitrary nonzero tensor $\mathcal{P}(1) \in \mathbb{R}^{L_1 \times \cdots \times L_P \times T_1 \times \cdots \times T_Q}$, and set
$\mathcal{R}(1) = \sum_{i=1}^{l} \mathcal{F}_i *_P \mathcal{X}(1) *_Q \mathcal{G}_i - \mathcal{H}$, $\mathcal{S}(1) = \mathcal{R}(1)$, $\mathcal{T}(1) = \mathcal{P}(1)$.
For $k = 1, 2, \ldots$ do
\begin{align*}
& \mathcal{W}(k) = \sum_{i=1}^{l} \mathcal{F}_i *_P \mathcal{T}(k) *_Q \mathcal{G}_i, \quad \mathcal{Z}(k) = \sum_{i=1}^{l} \mathcal{F}_i^T *_P \mathcal{S}(k) *_Q \mathcal{G}_i^T, \\
& \alpha(k) = \frac{\|\mathcal{R}(k)\|^2}{\langle \mathcal{S}(k), \mathcal{W}(k) \rangle}, \quad \mathcal{X}(k+1) = \mathcal{X}(k) - \alpha(k)\mathcal{T}(k), \quad \mathcal{R}(k+1) = \mathcal{R}(k) - \alpha(k)\mathcal{W}(k), \\
& \beta(k) = \frac{\|\mathcal{P}(k)\|^2}{\langle \mathcal{S}(k), \mathcal{W}(k) \rangle}, \quad \mathcal{P}(k+1) = \mathcal{P}(k) - \beta(k)\mathcal{Z}(k), \\
& \gamma(k+1) = \frac{\|\mathcal{P}(k+1)\|^2}{\|\mathcal{P}(k)\|^2}, \quad \mathcal{T}(k+1) = \mathcal{P}(k+1) + \gamma(k+1)\mathcal{T}(k), \\
& \delta(k+1) = \frac{\|\mathcal{R}(k+1)\|^2}{\|\mathcal{R}(k)\|^2}, \quad \mathcal{S}(k+1) = \mathcal{R}(k+1) + \delta(k+1)\mathcal{S}(k),
\end{align*}
End.
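A hedged MATLAB sketch of this algorithm for plain arrays, built on the einstein_product helper from Section 1; the operator opsum and the particular choice of $\mathcal{P}(1)$ (which has the form of the initialization (2.14) in Theorem 2 below) are our additions.

```matlab
% Algorithm BiCR1 for (1.5). Fs, Gs are cell arrays of coefficient tensors;
% op applies Y -> sum_i F_i *_P Y *_Q G_i and opT applies its adjoint.
function X = tensor_bicr1(Fs, Gs, H, P, Q, X, tol, maxit)
    op  = @(Y) opsum(Fs, Gs, Y, P, Q, false);
    opT = @(Y) opsum(Fs, Gs, Y, P, Q, true);
    ip  = @(A, B) sum(A(:).*B(:));
    R  = op(X) - H;
    Pt = opT(R);              % P(1) arbitrary nonzero; this choice has the form (2.14)
    S  = R;  T = Pt;
    for k = 1:maxit
        if sqrt(ip(R, R)) <= tol, break; end
        W = op(T);  Z = opT(S);
        sw = ip(S, W);
        alpha = ip(R, R) / sw;
        beta  = ip(Pt, Pt) / sw;
        X  = X - alpha*T;
        Rn = R - alpha*W;  Pn = Pt - beta*Z;
        T  = Pn + (ip(Pn, Pn)/ip(Pt, Pt))*T;   % gamma(k+1)
        S  = Rn + (ip(Rn, Rn)/ip(R, R))*S;     % delta(k+1)
        R  = Rn;  Pt = Pn;
    end
end

function C = opsum(Fs, Gs, Y, P, Q, adjoint)
    % sum_i F_i *_P Y *_Q G_i, or sum_i F_i^T *_P Y *_Q G_i^T when adjoint.
    C = 0;
    for i = 1:numel(Fs)
        F = Fs{i};  G = Gs{i};
        if adjoint
            F = permute(F, [P+1:2*P, 1:P]);   % tensor transpose F_i^T
            G = permute(G, [Q+1:2*Q, 1:Q]);   % tensor transpose G_i^T
        end
        C = C + einstein_product(einstein_product(F, Y, P), G, Q);
    end
end
```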
Algorithm BiCR2 for (1.5)
Choose the initial tensor $\mathcal{X}(1) \in \mathbb{R}^{L_1 \times \cdots \times L_P \times T_1 \times \cdots \times T_Q}$ and an arbitrary nonzero tensor $\mathcal{S}(1) \in \mathbb{R}^{L_1 \times \cdots \times L_P \times T_1 \times \cdots \times T_Q}$, and set
$\mathcal{R}(1) = \sum_{i=1}^{l} \mathcal{F}_i *_P \mathcal{X}(1) *_Q \mathcal{G}_i - \mathcal{H}$, $\mathcal{U}(1) = \mathcal{S}(1)$, $\mathcal{V}(1) = \mathcal{R}(1)$,
$\mathcal{W}(1) = \sum_{i=1}^{l} \mathcal{F}_i *_P \mathcal{U}(1) *_Q \mathcal{G}_i$, $\mathcal{Z}(1) = \sum_{i=1}^{l} \mathcal{F}_i^T *_P \mathcal{V}(1) *_Q \mathcal{G}_i^T$.
For $k = 1, 2, \ldots$ do
\begin{align*}
& \alpha(k) = \frac{\langle \mathcal{R}(k), \mathcal{W}(k) \rangle}{\|\mathcal{W}(k)\|^2}, \quad \mathcal{X}(k+1) = \mathcal{X}(k) - \alpha(k)\mathcal{U}(k), \quad \mathcal{R}(k+1) = \mathcal{R}(k) - \alpha(k)\mathcal{W}(k), \\
& \beta(k) = \frac{\langle \mathcal{S}(k), \mathcal{Z}(k) \rangle}{\|\mathcal{Z}(k)\|^2}, \quad \mathcal{S}(k+1) = \mathcal{S}(k) - \beta(k)\mathcal{Z}(k), \\
& \gamma(k) = \frac{\langle \sum_{i=1}^{l} \mathcal{F}_i *_P \mathcal{S}(k+1) *_Q \mathcal{G}_i, \mathcal{W}(k) \rangle}{\|\mathcal{W}(k)\|^2}, \quad \mathcal{U}(k+1) = \mathcal{S}(k+1) - \gamma(k)\mathcal{U}(k), \\
& \eta(k) = \frac{\langle \sum_{i=1}^{l} \mathcal{F}_i^T *_P \mathcal{R}(k+1) *_Q \mathcal{G}_i^T, \mathcal{Z}(k) \rangle}{\|\mathcal{Z}(k)\|^2}, \quad \mathcal{V}(k+1) = \mathcal{R}(k+1) - \eta(k)\mathcal{V}(k), \\
& \mathcal{W}(k+1) = \sum_{i=1}^{l} \mathcal{F}_i *_P \mathcal{U}(k+1) *_Q \mathcal{G}_i, \quad \mathcal{Z}(k+1) = \sum_{i=1}^{l} \mathcal{F}_i^T *_P \mathcal{V}(k+1) *_Q \mathcal{G}_i^T,
\end{align*}
End.
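The corresponding sketch of the tensor BiCR2 iteration, reusing op, opT and opsum from the BiCR1 sketch above; $\mathcal{S}(1) = \mathrm{opT}(\mathcal{R}(1))$ is one convenient nonzero choice and has the form of the initialization (2.18) in Theorem 3 below.

```matlab
% Algorithm BiCR2 for (1.5), in the same style as tensor_bicr1 above.
function X = tensor_bicr2(Fs, Gs, H, P, Q, X, tol, maxit)
    op  = @(Y) opsum(Fs, Gs, Y, P, Q, false);
    opT = @(Y) opsum(Fs, Gs, Y, P, Q, true);
    ip  = @(A, B) sum(A(:).*B(:));
    R = op(X) - H;
    S = opT(R);               % S(1) arbitrary nonzero; this choice has the form (2.18)
    U = S;  V = R;
    W = op(U);  Z = opT(V);
    for k = 1:maxit
        if sqrt(ip(R, R)) <= tol, break; end
        alpha = ip(R, W) / ip(W, W);
        X = X - alpha*U;
        R = R - alpha*W;                       % R(k+1)
        beta = ip(S, Z) / ip(Z, Z);
        S = S - beta*Z;                        % S(k+1)
        U = S - (ip(op(S), W)/ip(W, W))*U;     % gamma(k), using S(k+1)
        V = R - (ip(opT(R), Z)/ip(Z, Z))*V;    % eta(k), using R(k+1)
        W = op(U);  Z = opT(V);
    end
end
```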
Remark 1. For $P = Q = 1$, the above algorithms can be used to solve the generalized Sylvester matrix equation
\[
\sum_{i=1}^{l} F_i X G_i = H.
\]
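For instance (our synthetic data, purely illustrative), with $P = Q = 1$ the routines sketched above operate directly on matrices:

```matlab
% Remark 1 in action: solving F1*X*G1 + F2*X*G2 = H with the tensor routine
% at P = Q = 1. The data below is synthetic and only illustrates the call.
rng(0);
F1 = randn(6,5);  G1 = randn(4,7);
F2 = randn(6,5);  G2 = randn(4,7);
Xtrue = randn(5,4);
H = F1*Xtrue*G1 + F2*Xtrue*G2;
X = tensor_bicr1({F1, F2}, {G1, G2}, H, 1, 1, zeros(5,4), 1e-10, 1000);
disp(norm(X - Xtrue, 'fro'))   % expected to be small for this consistent system
```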
The following lemmas and theorems describe important properties of the two algorithms above.

Lemma 2. For Algorithm BiCR1, assume that there exists a positive integer $t$ such that $\alpha(k) \neq 0$ and $\alpha(k) \neq \infty$ for all $k = 1, 2, \ldots, t$. Then the following hold for Algorithm BiCR1:
\begin{align}
&\langle \mathcal{R}(u), \mathcal{R}(v) \rangle = 0, \quad u, v = 1, 2, \ldots, t, \quad u \neq v; \tag{2.3}\\
&\langle \mathcal{P}(u), \mathcal{P}(v) \rangle = 0, \quad u, v = 1, 2, \ldots, t, \quad u \neq v; \tag{2.4}\\
&\langle \mathcal{S}(u), \mathcal{W}(v) \rangle = 0, \quad u, v = 1, 2, \ldots, t, \quad v > u; \tag{2.5}\\
&\langle \mathcal{W}(u), \mathcal{S}(v) \rangle = 0, \quad u, v = 1, 2, \ldots, t, \quad v > u; \tag{2.6}\\
&\langle \mathcal{T}(u), \mathcal{P}(v) \rangle = 0, \quad u, v = 1, 2, \ldots, t, \quad v > u; \tag{2.7}\\
&\langle \mathcal{S}(u), \mathcal{R}(v) \rangle = 0, \quad u, v = 1, 2, \ldots, t, \quad v > u. \tag{2.8}
\end{align}
The proof of Lemma 2 is long and can be found in Appendix A.
Lemma 3. For Algorithm BiCR2, assume that there exists a positive integer $t$ such that $\alpha(k) \neq 0$, $\alpha(k) \neq \infty$ and $\|\mathcal{R}(k)\| \neq 0$ for all $k = 1, 2, \ldots, t$. Then the following hold for Algorithm BiCR2:
\begin{align}
&\langle \mathcal{W}(u), \mathcal{R}(v) \rangle = 0, \quad u, v = 1, 2, \ldots, t, \quad v > u; \tag{2.9}\\
&\langle \mathcal{Z}(u), \mathcal{S}(v) \rangle = 0, \quad u, v = 1, 2, \ldots, t, \quad v > u; \tag{2.10}\\
&\langle \mathcal{Z}(u), \mathcal{Z}(v) \rangle = 0, \quad u, v = 1, 2, \ldots, t, \quad v \neq u; \tag{2.11}\\
&\langle \mathcal{W}(u), \mathcal{W}(v) \rangle = 0, \quad u, v = 1, 2, \ldots, t, \quad v \neq u. \tag{2.12}
\end{align}
The proof of this lemma follows in the same way as that of the previous lemma. The convergence of the BiCR1 and BiCR2 algorithms follows immediately from Lemmas 2 and 3, respectively.

Theorem 1. Let the general tensor equation (1.5) be consistent. The BiCR1 and BiCR2 algorithms compute the solution of (1.5) within a finite number of iterations in the absence of round-off errors.

In the next theorems, we show that the BiCR1 and BiCR2 algorithms can also obtain the least Frobenius norm solution of the general tensor equation (1.5).

Theorem 2. Let the general tensor equation (1.5) be consistent. Suppose $\mathcal{X}(1)$ and $\mathcal{P}(1)$ in the BiCR1 algorithm are chosen as
\[
\mathcal{X}(1) = \sum_{i=1}^{l} \mathcal{F}_i^T *_P \mathcal{M}(1) *_Q \mathcal{G}_i^T, \tag{2.13}
\]
\[
\mathcal{P}(1) = \sum_{i=1}^{l} \mathcal{F}_i^T *_P \mathcal{N}(1) *_Q \mathcal{G}_i^T, \tag{2.14}
\]
where $\mathcal{M}(1), \mathcal{N}(1) \in \mathbb{R}^{M_1 \times \cdots \times M_P \times N_1 \times \cdots \times N_Q}$ are arbitrary. Then the solution $\mathcal{X}^*$ generated by the BiCR1 algorithm is the least Frobenius norm solution of the general tensor equation (1.5).
Proof. Using the BiCR1 algorithm with (2.13) and (2.14), the tensor sequences $\{\mathcal{X}(k)\}$ and $\{\mathcal{P}(k)\}$ generated by the BiCR1 algorithm can be expressed as
\[
\mathcal{X}(k) = \sum_{i=1}^{l} \mathcal{F}_i^T *_P \mathcal{M}(k) *_Q \mathcal{G}_i^T, \tag{2.15}
\]
\[
\mathcal{P}(k) = \sum_{i=1}^{l} \mathcal{F}_i^T *_P \mathcal{N}(k) *_Q \mathcal{G}_i^T, \tag{2.16}
\]
for some tensors $\mathcal{M}(k), \mathcal{N}(k) \in \mathbb{R}^{M_1 \times \cdots \times M_P \times N_1 \times \cdots \times N_Q}$. From this observation, it is easy to see that
\[
\mathrm{vec}(\phi_{LT}(\mathcal{X}(k))) = \Big\{ \sum_{i=1}^{l} [\phi_{TN}(\mathcal{G}_i)] \otimes [\phi_{ML}(\mathcal{F}_i)]^T \Big\} \mathrm{vec}(\phi_{MN}(\mathcal{M}(k)))
= \underbrace{\Big[ \sum_{i=1}^{l} [\phi_{TN}(\mathcal{G}_i)]^T \otimes [\phi_{ML}(\mathcal{F}_i)] \Big]^T}_{A^T} \underbrace{\mathrm{vec}(\phi_{MN}(\mathcal{M}(k)))}_{z} = A^T z.
\]
Hence every iterate, and in particular the solution $\mathcal{X}^*$, corresponds to a vector in the range of $A^T$; since the minimum 2-norm solution of the consistent system $Ax = b$ is the unique solution lying in the range of $A^T$, and the Frobenius norm of $\mathcal{X}$ equals the 2-norm of $\mathrm{vec}(\phi_{LT}(\mathcal{X}))$, the theorem follows.
Similarly, the following theorem holds.

Theorem 3. Let the general tensor equation (1.5) be consistent. Suppose $\mathcal{X}(1)$ and $\mathcal{S}(1)$ in the BiCR2 algorithm are chosen as
\[
\mathcal{X}(1) = \sum_{i=1}^{l} \mathcal{F}_i^T *_P \mathcal{M}(1) *_Q \mathcal{G}_i^T, \tag{2.17}
\]
\[
\mathcal{S}(1) = \sum_{i=1}^{l} \mathcal{F}_i^T *_P \mathcal{L}(1) *_Q \mathcal{G}_i^T, \tag{2.18}
\]
where $\mathcal{M}(1), \mathcal{L}(1) \in \mathbb{R}^{M_1 \times \cdots \times M_P \times N_1 \times \cdots \times N_Q}$ are arbitrary. Then the solution $\mathcal{X}^*$ generated by the BiCR2 algorithm is the least Frobenius norm solution of the general tensor equation (1.5).
3. Numerical experiments

In this section, we present numerical examples to illustrate the behavior of the BiCR1 and BiCR2 algorithms. The numerical solutions are compared with exact solutions, and good agreement is observed. All the examples were conducted in MATLAB [55].

Example 1. As the first example, we consider the tensor equation
\[
\mathcal{F} *_2 \mathcal{X} *_2 \mathcal{G} = \mathcal{H}, \tag{3.1}
\]
where $\mathcal{F} = -12*\mathrm{tenrand}([7\ 8\ 4\ 4]) \in \mathbb{R}^{7 \times 8 \times 4 \times 4}$ and $\mathcal{G} = 15*\mathrm{tenrand}([3\ 3\ 7\ 8]) \in \mathbb{R}^{3 \times 3 \times 7 \times 8}$. By choosing the exact solution $\mathcal{X} = -4*\mathrm{tensor}(\mathrm{ones}(4,4,3,3)) \in \mathbb{R}^{4 \times 4 \times 3 \times 3}$, we generate the right-hand side tensor $\mathcal{H} \in \mathbb{R}^{7 \times 8 \times 7 \times 8}$ by (3.1). We then apply the BiCR1 and BiCR2 algorithms to this tensor equation with the initial tensor $\mathcal{X}(1) = \mathrm{tensor}(\mathrm{zeros}(4,4,3,3)) \in \mathbb{R}^{4 \times 4 \times 3 \times 3}$. In Figure 1, we depict the generated error and residual with
\[
e(k) = \log \|\mathcal{X} - \mathcal{X}(k)\|, \qquad r(k) = \log \|\mathcal{R}(k)\| = \log \|\mathcal{H} - \mathcal{F} *_2 \mathcal{X}(k) *_2 \mathcal{G}\|.
\]
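Under the stated setup, Example 1 can be reproduced along the following lines; here we substitute plain arrays and rand for the Tensor Toolbox calls tenrand/tensor, so the particular random draws differ from the paper's, and we read log as log10.

```matlab
% Example 1 with plain arrays and the sketches above (random data differs
% from the paper's tenrand draws; the qualitative behavior is the point).
rng(1);
F  = -12*rand(7,8,4,4);          % F in R^{7x8x4x4}
G  =  15*rand(3,3,7,8);          % G in R^{3x3x7x8}
Xe = -4*ones(4,4,3,3);           % exact solution
H  = einstein_product(einstein_product(F, Xe, 2), G, 2);   % RHS via (3.1)
X  = tensor_bicr1({F}, {G}, H, 2, 2, zeros(4,4,3,3), 1e-10, 500);
e  = log10(norm(X(:) - Xe(:)));  % final error measure, cf. e(k) above
```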
[Figure 1: The results for Example 1. Plots of the error e(k) and the residual r(k) against the iteration number k for the BiCR1 and BiCR2 algorithms.]
[Figure 2: The results for Example 2. Plots of the error e(k) and the residual r(k) against the iteration number k for the BiCR1 and BiCR2 algorithms.]

Example 2. As the second example, we study the following tensor equation
\[
\mathcal{F}_1 *_2 \mathcal{X} *_2 \mathcal{G}_1 + \mathcal{F}_2 *_2 \mathcal{X} *_2 \mathcal{G}_2 = \mathcal{H}, \tag{3.2}
\]
with $\mathcal{F}_1 = 2*\mathrm{tenrand}([8\ 6\ 5\ 5]) \in \mathbb{R}^{8 \times 6 \times 5 \times 5}$, $\mathcal{G}_1 = 3*\mathrm{tenrand}([4\ 4\ 8\ 9]) \in \mathbb{R}^{4 \times 4 \times 8 \times 9}$, $\mathcal{F}_2 = -4*\mathrm{tenrand}([8\ 6\ 5\ 5]) \in \mathbb{R}^{8 \times 6 \times 5 \times 5}$, $\mathcal{G}_2 = 5*\mathrm{tenrand}([4\ 4\ 8\ 9]) \in \mathbb{R}^{4 \times 4 \times 8 \times 9}$. The right-hand side tensor $\mathcal{H} \in \mathbb{R}^{8 \times 6 \times 8 \times 9}$ is generated by choosing the tensor $\mathcal{X} = \mathrm{tenrand}([5\ 5\ 4\ 4]) \in \mathbb{R}^{5 \times 5 \times 4 \times 4}$ as the solution of (3.2). For the initial tensors $\mathcal{X}(1) = \mathrm{tensor}(\mathrm{zeros}(5,5,4,4)) \in \mathbb{R}^{5 \times 5 \times 4 \times 4}$ and $\mathcal{X}(1) = \mathrm{tensor}(\mathrm{ones}(5,5,4,4)) \in \mathbb{R}^{5 \times 5 \times 4 \times 4}$, Figures 2 and 3, respectively, portray the results obtained using the BiCR1 and BiCR2 algorithms. The numerical results confirm the accuracy and performance of the BiCR1 and BiCR2 algorithms.
[Figure 3: The results for Example 2. Plots of the error e(k) and the residual r(k) against the iteration number k for the BiCR1 and BiCR2 algorithms.]
4. Conclusions

We have proposed, for the first time, conjugate gradient-like methods to compute the solution of the general tensor equation (1.5). We have shown that the proposed algorithms solve this general tensor equation within a finite number of iterations in the absence of round-off errors. To test the effectiveness of the BiCR1 and BiCR2 algorithms, we have presented two numerical examples. The results demonstrate the accuracy of the proposed algorithms.
Acknowledgments

The author wishes to express his gratitude to the referees for many corrections and helpful suggestions. I also thank Hassan Bozorgmanesh for his help in developing the MATLAB code.
Appendix A. Proof of Lemma 2

To prove the desired results, we proceed by induction on $u$ and $v$. Throughout the proof, we make heavy use of properties (1.1)-(1.4); for the sake of brevity, some routine details are omitted. Starting with $u = 1$ and $v = 2$, we observe that
\begin{align*}
\langle \mathcal{R}(1), \mathcal{R}(2) \rangle &= \langle \mathcal{R}(1), \mathcal{R}(1) - \alpha(1)\mathcal{W}(1) \rangle = 0, \\
\langle \mathcal{P}(1), \mathcal{P}(2) \rangle &= \langle \mathcal{P}(1), \mathcal{P}(1) - \beta(1)\mathcal{Z}(1) \rangle = \|\mathcal{P}(1)\|^2 - \beta(1)\langle \mathcal{P}(1), \mathcal{Z}(1) \rangle \\
&= \|\mathcal{P}(1)\|^2 - \beta(1)\Big\langle \mathcal{T}(1), \sum_{i=1}^{l} \mathcal{F}_i^T *_P \mathcal{S}(1) *_Q \mathcal{G}_i^T \Big\rangle \\
&= \|\mathcal{P}(1)\|^2 - \beta(1)\Big\langle \sum_{i=1}^{l} \mathcal{F}_i *_P \mathcal{T}(1) *_Q \mathcal{G}_i, \mathcal{S}(1) \Big\rangle = \|\mathcal{P}(1)\|^2 - \beta(1)\langle \mathcal{W}(1), \mathcal{S}(1) \rangle = 0, \\
\langle \mathcal{S}(1), \mathcal{W}(2) \rangle &= \Big\langle \mathcal{S}(1), \sum_{i=1}^{l} \mathcal{F}_i *_P \mathcal{T}(2) *_Q \mathcal{G}_i \Big\rangle = \Big\langle \sum_{i=1}^{l} \mathcal{F}_i^T *_P \mathcal{S}(1) *_Q \mathcal{G}_i^T, \mathcal{T}(2) \Big\rangle \\
&= \langle \mathcal{Z}(1), \mathcal{P}(2) + \gamma(2)\mathcal{T}(1) \rangle = \langle \mathcal{Z}(1), \mathcal{P}(2) \rangle + \gamma(2)\langle \mathcal{Z}(1), \mathcal{T}(1) \rangle \\
&= \frac{1}{\beta(1)}\langle \mathcal{P}(1) - \mathcal{P}(2), \mathcal{P}(2) \rangle + \gamma(2)\langle \mathcal{S}(1), \mathcal{W}(1) \rangle = 0, \\
\langle \mathcal{W}(1), \mathcal{S}(2) \rangle &= \langle \mathcal{W}(1), \mathcal{R}(2) + \delta(2)\mathcal{S}(1) \rangle = \langle \mathcal{W}(1), \mathcal{R}(2) \rangle + \delta(2)\langle \mathcal{W}(1), \mathcal{S}(1) \rangle \\
&= \frac{1}{\alpha(1)}\langle \mathcal{R}(1) - \mathcal{R}(2), \mathcal{R}(2) \rangle + \delta(2)\langle \mathcal{W}(1), \mathcal{S}(1) \rangle = 0, \\
\langle \mathcal{T}(1), \mathcal{P}(2) \rangle &= \langle \mathcal{P}(1), \mathcal{P}(2) \rangle = 0, \quad \text{and} \quad \langle \mathcal{S}(1), \mathcal{R}(2) \rangle = \langle \mathcal{R}(1), \mathcal{R}(2) \rangle = 0.
\end{align*}
Next, let us assume that
\[
\langle \mathcal{R}(u), \mathcal{R}(w) \rangle = 0, \quad \langle \mathcal{P}(u), \mathcal{P}(w) \rangle = 0, \quad \langle \mathcal{S}(u), \mathcal{W}(w) \rangle = 0, \tag{4.1}
\]
\[
\langle \mathcal{W}(u), \mathcal{S}(w) \rangle = 0, \quad \langle \mathcal{T}(u), \mathcal{P}(w) \rangle = 0, \quad \langle \mathcal{S}(u), \mathcal{R}(w) \rangle = 0, \qquad \text{for } u < w < t. \tag{4.2}
\]
We conclude from (4.1) and (4.2) that
\begin{align}
\langle \mathcal{R}(u), \mathcal{R}(w+1) \rangle &= \langle \mathcal{R}(u), \mathcal{R}(w) - \alpha(w)\mathcal{W}(w) \rangle = 0, \nonumber\\
\langle \mathcal{P}(u), \mathcal{P}(w+1) \rangle &= \langle \mathcal{P}(u), \mathcal{P}(w) - \beta(w)\mathcal{Z}(w) \rangle = -\beta(w)\langle \mathcal{P}(u), \mathcal{Z}(w) \rangle \nonumber\\
&= -\beta(w)\Big\langle \mathcal{T}(u) - \gamma(u)\mathcal{T}(u-1), \sum_{i=1}^{l} \mathcal{F}_i^T *_P \mathcal{S}(w) *_Q \mathcal{G}_i^T \Big\rangle \nonumber\\
&= -\beta(w)\langle \mathcal{W}(u) - \gamma(u)\mathcal{W}(u-1), \mathcal{S}(w) \rangle = 0, \nonumber\\
\langle \mathcal{S}(u), \mathcal{W}(w+1) \rangle &= \Big\langle \mathcal{S}(u), \sum_{i=1}^{l} \mathcal{F}_i *_P \mathcal{T}(w+1) *_Q \mathcal{G}_i \Big\rangle = \Big\langle \sum_{i=1}^{l} \mathcal{F}_i^T *_P \mathcal{S}(u) *_Q \mathcal{G}_i^T, \mathcal{T}(w+1) \Big\rangle \nonumber\\
&= \langle \mathcal{Z}(u), \mathcal{P}(w+1) + \gamma(w+1)\mathcal{T}(w) \rangle = \langle \mathcal{Z}(u), \mathcal{P}(w+1) \rangle + \gamma(w+1)\langle \mathcal{Z}(u), \mathcal{T}(w) \rangle \nonumber\\
&= \frac{1}{\beta(u)}\langle \mathcal{P}(u) - \mathcal{P}(u+1), \mathcal{P}(w+1) \rangle + \gamma(w+1)\langle \mathcal{Z}(u), \mathcal{T}(w) \rangle = -\frac{1}{\beta(u)}\langle \mathcal{P}(u+1), \mathcal{P}(w+1) \rangle, \tag{4.3}\\
\langle \mathcal{W}(u), \mathcal{S}(w+1) \rangle &= \langle \mathcal{W}(u), \mathcal{R}(w+1) + \delta(w+1)\mathcal{S}(w) \rangle = \frac{1}{\alpha(u)}\langle \mathcal{R}(u) - \mathcal{R}(u+1), \mathcal{R}(w+1) \rangle \nonumber\\
&= -\frac{1}{\alpha(u)}\langle \mathcal{R}(u+1), \mathcal{R}(w+1) \rangle, \tag{4.4}\\
\langle \mathcal{T}(u), \mathcal{P}(w+1) \rangle &= \langle \mathcal{T}(u), \mathcal{P}(w) - \beta(w)\mathcal{Z}(w) \rangle = -\beta(w)\langle \mathcal{W}(u), \mathcal{S}(w) \rangle = 0, \nonumber\\
\text{and} \quad \langle \mathcal{S}(u), \mathcal{R}(w+1) \rangle &= \langle \mathcal{S}(u), \mathcal{R}(w) - \alpha(w)\mathcal{W}(w) \rangle = 0. \nonumber
\end{align}
In addition, for $u = w$ it can be verified that
\begin{align*}
\langle \mathcal{R}(w), \mathcal{R}(w+1) \rangle &= \langle \mathcal{R}(w), \mathcal{R}(w) - \alpha(w)\mathcal{W}(w) \rangle = \|\mathcal{R}(w)\|^2 - \alpha(w)\langle \mathcal{S}(w), \mathcal{W}(w) \rangle = 0, \\
\langle \mathcal{P}(w), \mathcal{P}(w+1) \rangle &= \|\mathcal{P}(w)\|^2 - \beta(w)\langle \mathcal{W}(w) - \gamma(w)\mathcal{W}(w-1), \mathcal{S}(w) \rangle = 0, \\
\langle \mathcal{S}(w), \mathcal{W}(w+1) \rangle &= \frac{1}{\beta(w)}\langle \mathcal{P}(w) - \mathcal{P}(w+1), \mathcal{P}(w+1) \rangle + \gamma(w+1)\langle \mathcal{S}(w), \mathcal{W}(w) \rangle = 0, \\
\langle \mathcal{W}(w), \mathcal{S}(w+1) \rangle &= \frac{1}{\alpha(w)}\langle \mathcal{R}(w) - \mathcal{R}(w+1), \mathcal{R}(w+1) \rangle + \delta(w+1)\langle \mathcal{W}(w), \mathcal{S}(w) \rangle = 0, \\
\langle \mathcal{T}(w), \mathcal{P}(w+1) \rangle &= \|\mathcal{P}(w)\|^2 - \beta(w)\langle \mathcal{W}(w), \mathcal{S}(w) \rangle = 0, \quad \text{and} \\
\langle \mathcal{S}(w), \mathcal{R}(w+1) \rangle &= \|\mathcal{R}(w)\|^2 - \alpha(w)\langle \mathcal{W}(w), \mathcal{S}(w) \rangle = 0.
\end{align*}
Combining $\langle \mathcal{S}(u), \mathcal{W}(w) \rangle = 0$, $\langle \mathcal{P}(w), \mathcal{P}(w+1) \rangle = 0$ and (4.3) directly yields $\langle \mathcal{S}(u), \mathcal{W}(w+1) \rangle = 0$. Also, using $\langle \mathcal{W}(u), \mathcal{S}(w) \rangle = 0$, $\langle \mathcal{R}(w), \mathcal{R}(w+1) \rangle = 0$ and (4.4), we obtain $\langle \mathcal{W}(u), \mathcal{S}(w+1) \rangle = 0$. Therefore, by the principle of induction, the lemma holds.
References

[1] M. Brazell, N. Li, C. Navasca, and C. Tamon, "Solving multilinear systems via tensor inversion," SIAM Journal on Matrix Analysis and Applications, vol. 34, pp. 542–570, 2013.
[2] L. Sun, B. Zheng, C. Bu, and Y. Wei, "Moore-Penrose inverse of tensors via Einstein product," Linear and Multilinear Algebra, vol. 64, pp. 686–698, 2016.
[3] J. Ji and Y. Wei, "Weighted Moore-Penrose inverses and fundamental theorem of even-order tensors with Einstein product," Frontiers of Mathematics in China, vol. 12, pp. 1319–1337, 2017.
[4] T. G. Kolda and B. W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51, pp. 455–500, 2009.
[5] L. Qi and Z. Luo, Tensor Analysis: Spectral Theory and Special Tensors. SIAM, 2017.
[6] P. M. Kroonenberg, Three-Mode Principal Component Analysis: Theory and Applications. DSWO Press, Leiden, The Netherlands, 1983.
[7] D. Cai, X. He, and J. Han, "Tensor space model for document analysis," in Proc. 29th Annu. ACM SIGIR Int. Conf. Research and Development in Information Retrieval, Washington, pp. 625–626, August 2006.
[8] A. Smilde, R. Bro, and P. Geladi, Multi-way Analysis: Applications in the Chemical Sciences. Wiley & Sons, New York, 2004.
[9] L. Qi, Y. Wang, and E. X. Wu, "D-eigenvalues of diffusion kurtosis tensors," Journal of Computational and Applied Mathematics, vol. 221, pp. 150–157, 2008.
[10] P. McCullagh, Tensor Methods in Statistics. Dover Publications, New York, second ed., 2018.
[11] Z.-H. Huang and L. Qi, "Formulating an n-person noncooperative game as a tensor complementarity problem," Computational Optimization and Applications, vol. 66, pp. 557–576, 2017.
[12] A. Einstein, "The foundation of the general theory of relativity," in The Collected Papers of Albert Einstein (A. Kox, M. Klein, and R. Schulmann, eds.), vol. 6, Princeton University Press, Princeton, pp. 146–200, 2007.
[13] Q.-W. Wang and X. Xu, "Iterative algorithms for solving some tensor equations," Linear and Multilinear Algebra, vol. 67, pp. 1325–1349, 2019.
[14] A. Malek and S. H. Momeni-Masuleh, "A mixed collocation-finite difference method for 3D microscopic heat transport problems," Journal of Computational and Applied Mathematics, vol. 217, pp. 137–147, 2008.
[15] Z. Chen and L. Lu, "A projection method and Kronecker product preconditioner for solving Sylvester tensor equations," Science China Mathematics, vol. 55, pp. 1281–1292, 2012.
[16] J. Ballani and L. Grasedyck, "A projection method to solve linear systems in tensor format," Numerical Linear Algebra with Applications, vol. 20, pp. 27–43, 2013.
[17] Y. Matsuno, "Exact solutions for the nonlinear Klein-Gordon and Liouville equations in four-dimensional Euclidean space," Journal of Mathematical Physics, vol. 28, pp. 2317–2322, 1987.
[18] Z. Li, Y.-H. Dai, and H. Gao, "Alternating projection method for a class of tensor equations," Journal of Computational and Applied Mathematics, vol. 346, pp. 490–504, 2019.
[19] A. Malek, Z. K. Bojdi, and P. N. N. Golbarg, "Solving fully three-dimensional microscale dual phase lag problem using mixed-collocation finite difference discretization," Journal of Heat Transfer, vol. 134, 094504, 2012.
[20] B.-W. Li, S. Tian, Y.-S. Sun, and Z.-M. Hu, "Schur-decomposition for 3D matrix equations and its application in solving radiative discrete ordinates equations discretized by Chebyshev collocation spectral method," Journal of Computational Physics, vol. 229, pp. 1198–1212, 2010.
[21] V. de Silva and L.-H. Lim, "Tensor rank and the ill-posedness of the best low-rank approximation problem," SIAM Journal on Matrix Analysis and Applications, vol. 30, pp. 1084–1127, 2008.
[22] W. Ding and Y. Wei, "Solving multi-linear systems with M-tensors," Journal of Scientific Computing, vol. 68, pp. 689–715, 2016.
[23] H. Bozorgmanesh and M. Hajarian, "Convergence of a transition probability tensor of a higher-order Markov chain to the stationary probability vector," Numerical Linear Algebra with Applications, vol. 23, pp. 972–988, 2016.
[24] H. Bozorgmanesh, M. Hajarian, and A. T. Chronopoulos, "Interval tensors and their application in solving multi-linear systems of equations," Computers and Mathematics with Applications, vol. 79, pp. 697–715, 2020.
[25] H. Bozorgmanesh and M. Hajarian, "Solving tensor E-eigenvalue problem faster," Applied Mathematics Letters, vol. 100, 106020, 2020.
[26] J. Yao, J.-e Feng, and M. Meng, "On solutions of the matrix equation AX = B with respect to semi-tensor product," Journal of the Franklin Institute, vol. 353, pp. 1109–1131, 2016.
[27] Y.-L. Jiang, H.-B. Chen, and Z.-Z. Qi, "Nonlinear model order reduction based on tensor Kronecker product expansion with Arnoldi process," Journal of the Franklin Institute, vol. 353, pp. 3641–3655, 2016.
[28] G. Zhao, D. Wang, and Z. Song, "A novel tensor product model transformation-based adaptive variable universe of discourse controller," Journal of the Franklin Institute, vol. 353, pp. 4471–4499, 2016.
[29] H. Zhang and F. Ding, "A property of the eigenvalues of the symmetric positive definite matrix and the iterative algorithm for coupled Sylvester matrix equations," Journal of the Franklin Institute, vol. 351, pp. 340–357, 2014.
[30] H. Zhang and F. Ding, "Iterative algorithms for $X + A^T X^{-1} A = I$ by using the hierarchical identification principle," Journal of the Franklin Institute, vol. 353, pp. 1132–1146, 2016.
[31] F. Ding and H. Zhang, "Gradient-based iterative algorithm for a class of the coupled matrix equations related to control systems," IET Control Theory and Applications, vol. 8, pp. 1588–1595, 2015.
[32] F. Ding and T. Chen, "Iterative least-squares solutions of coupled Sylvester matrix equations," Systems and Control Letters, vol. 54, pp. 95–107, 2005.
[33] F. Ding and T. Chen, "Gradient based iterative algorithms for solving a class of matrix equations," IEEE Transactions on Automatic Control, vol. 50, pp. 1216–1221, 2005.
[34] F. Ding and T. Chen, "On iterative solutions of general coupled matrix equations," SIAM Journal on Control and Optimization, vol. 44, pp. 2269–2284, 2006.
[35] H. He, C. Ling, L. Qi, and G. Zhou, "A globally and quadratically convergent algorithm for solving multilinear systems with M-tensors," Journal of Scientific Computing, vol. 76, pp. 1718–1741, 2018.
[36] L. Han, "A homotopy method for solving multilinear systems with M-tensors," Applied Mathematics Letters, vol. 69, pp. 49–54, 2017.
[37] D. Liu, W. Li, and S.-W. Vong, "The tensor splitting with application to solve multi-linear systems," Journal of Computational and Applied Mathematics, vol. 330, pp. 75–94, 2018.
[38] L.-B. Cui, M.-H. Li, and Y. Song, "Preconditioned tensor splitting iterations method for solving multi-linear systems," Applied Mathematics Letters, vol. 96, pp. 89–94, 2019.
[39] M. Liang, B. Zheng, and R. Zhao, "Alternating iterative methods for solving tensor equations with applications," Numerical Algorithms, vol. 80, pp. 1437–1465, 2019.
[40] F. P. A. Beik, F. S. Movahed, and S. Ahmadi-Asl, "On the Krylov subspace methods based on tensor format for positive definite Sylvester tensor equations," Numerical Linear Algebra with Applications, vol. 23, pp. 444–466, 2016.
[41] X. Xu and Q.-W. Wang, "Extending BiCG and BiCR methods to solve the Stein tensor equation," Computers and Mathematics with Applications, vol. 77, pp. 3117–3127, 2019.
[42] L. Liang and B. Zheng, "Sensitivity analysis of the Lyapunov tensor equation," Linear and Multilinear Algebra, vol. 67, pp. 555–572, 2019.
[43] X. Shi, Y. Wei, and S. Ling, "Backward error and perturbation bounds for high order Sylvester tensor equation," Linear and Multilinear Algebra, vol. 61, pp. 1436–1446, 2013.
[44] C. G. Broyden and M. T. Vespucci, Krylov Solvers for Linear Algebraic Systems. Elsevier, 2004.
[45] A. T. Chronopoulos and A. B. Kucherov, "Block s-step Krylov iterative methods," Numerical Linear Algebra with Applications, vol. 17, pp. 3–15, 2010.
[46] A. T. Chronopoulos and D. Kincaid, "On the Odir iterative method for non-symmetric indefinite linear systems," Numerical Linear Algebra with Applications, vol. 8, pp. 71–82, 2001.
[47] A. T. Chronopoulos, "s-step iterative methods for (non)symmetric (in)definite linear systems," SIAM Journal on Numerical Analysis, vol. 28, pp. 1776–1789, 1991.
[48] A. T. Chronopoulos, "On the squared unsymmetric Lanczos method," Journal of Computational and Applied Mathematics, vol. 54, pp. 65–78, 1994.
[49] M. A. Ramadan and A. M. E. Bayoumi, "A modified gradient-based algorithm for solving extended Sylvester-conjugate matrix equations," Asian Journal of Control, vol. 20, pp. 228–235, 2018.
[50] M. Hajarian, "Matrix iterative methods for solving the Sylvester-transpose and periodic Sylvester matrix equations," Journal of the Franklin Institute, vol. 350, pp. 3328–3341, 2013.
[51] M. Hajarian, "Developing BiCOR and CORS methods for coupled Sylvester-transpose and periodic Sylvester matrix equations," Applied Mathematical Modelling, vol. 39, pp. 6073–6084, 2015.
[52] M. Vespucci and C. Broyden, "Implementation of different computational variations of biconjugate residual methods," Computers and Mathematics with Applications, vol. 42, pp. 1239–1253, 2001.
[53] C. Hegedűs, "Generating conjugate directions for arbitrary matrices by matrix equations I," Computers and Mathematics with Applications, vol. 21, pp. 71–85, 1991.
[54] C. Hegedűs, "Generating conjugate directions for arbitrary matrices by matrix equations II," Computers and Mathematics with Applications, vol. 21, pp. 87–94, 1991.
[55] B. W. Bader, T. G. Kolda, et al., "MATLAB Tensor Toolbox, version 2.6," available online at http://www.sandia.gov/~tgkolda/TensorToolbox/, February 2015.