Numerical algorithms for solving discrete Lyapunov tensor equation*

Tao Li^a, Qing-Wen Wang^{a,1)}, Xue-Feng Duan^b

^a Department of Mathematics, Shanghai University, Shanghai, P.R. China
^b School of Mathematics and Computing Science, Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guilin University of Electronic Technology, Guilin, P.R. China
Abstract
In this paper, we present several numerical algorithms to solve the discrete Lyapunov tensor equation, which is a generalization of the discrete Lyapunov matrix equation. Based on the structure of the tensor equation, we first propose a simple iterative (SI) method for computing its numerical solution. We then extend the gradient based iterative (GBI) method and the residual norm steepest descent (RNSD) iterative method to this tensor equation. Furthermore, we develop a residual norm conjugate gradient (RNCG) iterative method to accelerate the convergence of the RNSD method. Convergence analysis shows that the proposed methods converge to the exact solution for any initial value. Finally, some numerical examples are provided to illustrate the efficiency and validity of the proposed methods.

Key words: discrete Lyapunov tensor equation; iterative method; convergence; residual norm conjugate gradient
Mathematics subject classification: 15A69; 65F10
1. Introduction
Tensors are denoted by boldface Euler script letters, e.g., X. The order of a tensor is the number of dimensions, also known as modes or ways. Matrices are denoted by bold capital letters, e.g., A. Vectors are denoted by boldface lowercase letters, e.g., x. Fibers are the higher-order analogues of matrix rows and columns; a fiber is defined by fixing every index except one. For example, a third-order tensor X ∈ R^{I×J×K} has column, row and tube fibers, e.g., a_{:jk}, a_{i:k} and a_{ij:}, where R denotes the real number field. In this article, we consider the numerical solution of the following discrete Lyapunov tensor equation [1, 2]:
X − X ×1 A ×2 ··· ×m A = B,    (1.1)
where the mth-order n-dimensional tensor B ∈ R^{n×n×···×n} and the matrix A ∈ R^{n×n} are given, and X ∈ R^{n×n×···×n} is the unknown tensor to be determined. The definition of the n-mode product ×_n (n = 1, ..., m) is given in Section 2. When X is a second-order tensor, (1.1) reduces to the following discrete Lyapunov matrix equation:
X − AᵀXA = B.    (1.2)
* This research is supported by the National Natural Science Foundation of China [grant numbers 11971294 and 11571220].
1) Corresponding author. Email: [email protected], [email protected]
The discrete Lyapunov matrix equation has been applied not only to system reduction [3, 4], image processing [5], computer simulation and control theory [6–11], and functionally graded plates [12–16], but also extensively to the optimal or robust stabilization of stochastic linear control systems [17, 18]. For example, Zhou [19] designed a robust regulator to solve the vibration suppression problem of flexible spacecraft, which can be regarded as obtaining a parametric solution of Eq. (1.2). Furthermore, Shaker [20] derived a Gramian-based model reduction method for bilinear systems in which the associated generalised Gramians are solutions of Eq. (1.2). Recently, theoretical results and algorithms related to (1.2) have been well developed. For instance, Kadry [21] proposed an SQR iterative method to investigate the numerical solution of Eq. (1.2). By splitting the coefficient matrix, Gu [22] proposed a hierarchical identification method for solving the Lyapunov matrix equation. Gui [23] devised a DLF algorithm for the case where the matrices A, B are in Frobenius form. Moreover, the Sylvester tensor equation, as a generalization of Eq. (1.1), has also drawn attention from a great number of researchers. Chen [24] derived a gradient based iterative (GBI) method based on the hierarchical identification principle to solve the Sylvester tensor equation. Then Chen [25] proposed a projection method and a Kronecker product preconditioner to obtain the iterative solution. Fatemeh [26] used a residual norm steepest descent (RNSD) algorithm to accelerate the convergence of the GBI method. Beik [27] proposed tensor forms of CG and NCG for computing the solution of the Sylvester tensor equation when the coefficient matrices are symmetric and nonsymmetric positive definite, respectively. To the best knowledge of the authors, so far no iterative algorithms have been applied to Eq. (1.1). Motivated by these issues, we present several numerical algorithms for the tensor equation (1.1).

The rest of this paper is organized as follows. In Section 2, we briefly review some definitions and properties of tensors. In Section 3, we first give a simple iterative method to solve the discrete Lyapunov tensor equation. According to the hierarchical identification principle [28–30], we transform Eq. (1.1) into two unconstrained subproblems which can be solved by the gradient based iterative method. To improve these methods, we propose a residual norm conjugate gradient method for solving the discrete Lyapunov tensor equation (1.1). The convergence analysis of the proposed methods is provided in Section 4. In Section 5, we give some numerical examples to illustrate that our methods are feasible and effective. Finally, we conclude the paper with some remarks.
2. Preliminaries
In this section, we briefly review some results on tensors [31–35] which will be used in the sequel. The mode-n matricization of a tensor X ∈ R^{J1×J2×···×JM}, denoted by X_(n), arranges the mode-n fibers to be the columns of the resulting matrix. The tensor element (j_1, ..., j_M) corresponds to the matrix element (j_n, i), where
i = 1 + Σ_{k=1, k≠n}^{M} (j_k − 1) Π_{l=1, l≠n}^{k−1} J_l.
The inner product of two same-sized tensors X, Y ∈ R^{J1×J2×···×JM} is defined as
⟨X, Y⟩ = Σ_{j1=1}^{J1} Σ_{j2=1}^{J2} ··· Σ_{jM=1}^{JM} x_{j1 j2 ··· jM} y_{j1 j2 ··· jM}.    (2.1)
When X = Y, the Frobenius norm of a tensor is given by
‖X‖ = √⟨X, X⟩ = ( Σ_{j1=1}^{J1} Σ_{j2=1}^{J2} ··· Σ_{jM=1}^{JM} x²_{j1 j2 ··· jM} )^{1/2}.    (2.2)
An important operation on tensors is the tensor-matrix multiplication [31, 32]. The n-mode product of a tensor X ∈ R^{J1×J2×···×JN} with a matrix A ∈ R^{I×Jn}, denoted by X ×n A, is a J1 × ··· × Jn−1 × I × Jn+1 × ··· × JN tensor whose entries are given by
(X ×n A)_{j1 ··· jn−1 i jn+1 ··· jN} = Σ_{jn=1}^{Jn} x_{j1 j2 ··· jN} a_{i jn}.    (2.3)
Equivalently, the matrix A multiplies each mode-n fiber; expressed in terms of unfolded tensors,
Y = X ×n A  ⟺  Y_(n) = A X_(n),    (2.4)
where Y_(n) is a matrix. For distinct modes in a series of multiplications, the order of the multiplications is irrelevant, i.e.,
X ×n A ×m B = X ×m B ×n A  (m ≠ n).    (2.5)
If the modes are the same, then
X ×n A ×n B = X ×n (BA).    (2.6)
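To make these operations concrete, the following MATLAB function is a minimal sketch of the n-mode product (2.3), computed through the unfolding identity (2.4). The function name nmode_product is ours (hypothetical), not part of the paper; the MATLAB Tensor Toolbox provides the same operation as ttm. This helper is reused by the algorithm sketches later in the paper.

```matlab
function Y = nmode_product(X, A, n)
% n-mode product Y = X x_n A, computed via the unfolding identity Y_(n) = A * X_(n).
dims  = size(X);
order = [n, 1:n-1, n+1:numel(dims)];           % bring mode n to the front
Xn = reshape(permute(X, order), dims(n), []);  % mode-n unfolding (up to fiber order)
Yn = A * Xn;                                   % multiply every mode-n fiber by A
Y  = ipermute(reshape(Yn, [size(A, 1), dims(order(2:end))]), order);
end
```

The particular column ordering of the unfolding is immaterial here, since all mode-n fibers are multiplied and the result is folded back the same way.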
For two tensors X and Y, the norm of their difference satisfies [36]
‖X − Y‖² = ‖X‖² − 2⟨X, Y⟩ + ‖Y‖².    (2.7)
Moreover, the n-mode product commutes with the inner product, namely,
⟨X, Y ×n A⟩ = ⟨X ×n Aᵀ, Y⟩.    (2.8)
Finally, the following inequality for the tensor norm of an n-mode product holds:
‖X ×n A‖ ≤ ‖X‖‖A‖₂,    (2.9)
where ‖·‖₂ denotes the 2-norm of a matrix.
3. Numerical algorithms
In this section, we first propose a simple iterative (SI) method to study the discrete Lyapunov tensor equation. Then we present two iterative methods, the gradient based iterative (GBI) method and the residual norm steepest descent (RNSD) iterative method, to find the numerical solution of Eq. (1.1). On this basis, we develop a residual norm conjugate gradient (RNCG) iterative method to improve the aforementioned methods. First, we give a necessary and sufficient condition for the existence and uniqueness of the solution of Eq. (1.1).

Lemma 3.1. The discrete Lyapunov tensor equation has a unique solution if and only if λ_{i_1}λ_{i_2}···λ_{i_m} ≠ 1 and λ_{i_j} ≠ ±1, where λ_{i_j}, j = 1, ..., m, are eigenvalues of the matrix A.
Proof. By using (2.4) and the Kronecker product, Eq. (1.1) can be transformed into the following matrix-vector equation:
T vec(X) = vec(B),    (3.1)
where T = I ⊗ I ⊗ ··· ⊗ I − A ⊗ A ⊗ ··· ⊗ A, I is the identity matrix, and the operator vec(·) stacks the columns of its argument into a single vector. If λ_{i_1}λ_{i_2}···λ_{i_m} = 1 or λ_{i_j} = ±1, then T is singular, which contradicts the condition. Conversely, if Eq. (1.1) has a unique solution, then λ_{i_1}λ_{i_2}···λ_{i_m} ≠ 1 and λ_{i_j} ≠ ±1. □
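To illustrate the condition of Lemma 3.1, the following MATLAB fragment is a small sketch under our own illustrative assumptions (m = 3 and a small random A; not the authors' code). It forms T explicitly and compares its rank against the eigenvalue-product criterion.

```matlab
% Solvability check of Lemma 3.1 for a third-order equation (m = 3).
n = 4; A = 0.4 * randn(n);                    % hypothetical test matrix
T = eye(n^3) - kron(kron(A, A), A);           % T = I - A (x) A (x) A
lam = eig(A);
[L1, L2, L3] = ndgrid(lam, lam, lam);
gap = min(abs(1 - L1(:) .* L2(:) .* L3(:)));  % distance of eigenvalue products from 1
fprintf('min |1 - prod lambda| = %.2e, rank(T) = %d of %d\n', gap, rank(T), n^3);
```

Forming T is only feasible for small n, since T is n^m × n^m; the iterative methods below avoid it entirely.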
3.1. The simple iterative algorithm

In this subsection, we derive a simple and easily programmed method for the tensor equation (1.1). A natural iterative scheme is suggested by the basic structure of Eq. (1.1):
X_{k+1} = B + X_k ×1 A ×2 ··· ×m A,
R_{k+1} = B + X_{k+1} ×1 A ×2 ··· ×m A − X_{k+1},    (3.2)
where the 2-norm of A must satisfy
‖A‖₂ < 1    (3.3)
to guarantee convergence; the proof is given in the following section. In practice, the 2-norm of A does not always satisfy (3.3), so we also need to design other effective algorithms for solving Eq. (1.1).

Algorithm 3.1. The simple iterative algorithm
1. Given initial values X_0, A, B; set ε and k = 0.
2. while k < kmax do,
3. Update. Set X_{k+1} = B + X_k ×1 A ×2 ··· ×m A.
4. Stopping check. If ‖R_{k+1}‖ < ε, then stop; otherwise, k ← k + 1 and continue.
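The following MATLAB function is a minimal sketch of Algorithm 3.1 for a third-order equation (m = 3); the function name si_lyap is ours, and it assumes ‖A‖₂ < 1 and the hypothetical nmode_product helper from Section 2.

```matlab
function [X, k] = si_lyap(A, B, tol, kmax)
% Simple iteration X_{k+1} = B + X_k x1 A x2 A x3 A for a third-order B.
prodA = @(Z) nmode_product(nmode_product(nmode_product(Z, A, 1), A, 2), A, 3);
X = zeros(size(B));
for k = 1:kmax
    X = B + prodA(X);                  % update step of (3.2)
    R = B + prodA(X) - X;              % residual R_{k+1} of (3.2)
    if norm(R(:)) < tol, return; end   % Frobenius norm test ||R|| < epsilon
end
end
```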
3.2. The gradient based iterative algorithm
In this subsection, we consider the numerical solution of the tensor equation (1.1). By utilizing the hierarchical identification principle, (1.1) can be transformed into the following two subproblems:
X_k^1 = arg min_{X ∈ R^{n×···×n}} G_1(X),  X_k^2 = arg min_{X ∈ R^{n×···×n}} G_2(X),    (3.4)
where
G_1(X) = ‖X − U_1‖²,  G_2(X) = ‖X ×1 A ×2 ··· ×m A − U_2‖²
with
U_1 = B + X ×1 A ×2 ··· ×m A,  U_2 = X − B.    (3.5)
Taking the derivative with respect to X [35, 36], we have
∂G_1(X)/∂X = G_1'(X) = X − U_1,
∂G_2(X)/∂X = G_2'(X) = (X ×1 A ×2 ··· ×m A − U_2) ×m Aᵀ ×m−1 ··· ×1 Aᵀ.    (3.6)
As in the steepest descent method for matrices, a natural idea is to take (3.6) as the descent directions. The iterative formulas can then be written as
X_{k+1}^1 = X_k + α(U_1 − X_k),
X_{k+1}^2 = X_k + α(U_2 − X_k ×1 A ×2 ··· ×m A) ×m Aᵀ ×m−1 ··· ×1 Aᵀ,
X_{k+1} = (X_{k+1}^1 + X_{k+1}^2)/2,    (3.7)
where α is the step length. Let R_k = B + X_k ×1 A ×2 ··· ×m A − X_k. Then (3.7) can be rewritten as
X_{k+1}^1 = X_k + αR_k,
X_{k+1}^2 = X_k − αR_k ×m Aᵀ ×m−1 ··· ×1 Aᵀ,
X_{k+1} = (X_{k+1}^1 + X_{k+1}^2)/2.    (3.8)
Therefore, we present the gradient based iterative (GBI) algorithm to solve (1.1).
Algorithm 3.2. The gradient based iterative algorithm.
1. Given X_0, B, A, ε, kmax.
2. while k < kmax do,
3. Update X_{k+1} as in (3.8), where α must satisfy (4.3).
4. Stopping check. If ‖R_k‖ < ε, then stop; otherwise, k ← k + 1 and continue.
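A minimal MATLAB sketch of Algorithm 3.2 for m = 3 follows; the function name gbi_lyap and the caller-supplied fixed step length alpha are our assumptions, with α expected to satisfy (4.3).

```matlab
function [X, k] = gbi_lyap(A, B, alpha, tol, kmax)
% GBI iteration (3.8) for a third-order equation, reusing nmode_product.
prodA = @(Z, M) nmode_product(nmode_product(nmode_product(Z, M, 1), M, 2), M, 3);
X = zeros(size(B));
for k = 1:kmax
    R = B + prodA(X, A) - X;            % residual R_k
    if norm(R(:)) < tol, return; end
    X1 = X + alpha * R;                 % first half-step of (3.8)
    X2 = X - alpha * prodA(R, A');      % second half-step of (3.8)
    X  = (X1 + X2) / 2;                 % averaged update
end
end
```

Applying Aᵀ in the order of modes 1, 2, 3 is legitimate by (2.5), since the modes are distinct.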
In order to make full use of the information generated in the first half-iteration step, we give a modified gradient based iterative algorithm to solve the tensor equation (1.1).
Algorithm 3.3. The modified gradient based iterative (MGBI) algorithm.
1. Given X_0, B, A, ε, kmax.
2. while k < kmax do,
3. Update as:
X_{k+1}^1 = X_k + α(U_1 − X_k),
X̄_k = (X_{k+1}^1 + X_k)/2,
X_{k+1}^2 = X̄_k + α(U_2 − X̄_k ×1 A ×2 ··· ×m A) ×m Aᵀ ×m−1 ··· ×1 Aᵀ,
X_{k+1} = (X_{k+1}^1 + X_{k+1}^2)/2,    (3.9)
where α must satisfy (4.7).
4. Stopping check. If ‖R_k‖ < ε, then stop; otherwise, k ← k + 1 and continue.
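The sketch below mirrors Algorithm 3.3 for m = 3 under our reconstruction of (3.9): the only difference from GBI is that the second half-step starts from the averaged intermediate iterate.

```matlab
function [X, k] = mgbi_lyap(A, B, alpha, tol, kmax)
% MGBI iteration (3.9) for a third-order equation (hypothetical names).
prodA = @(Z, M) nmode_product(nmode_product(nmode_product(Z, M, 1), M, 2), M, 3);
X = zeros(size(B));
for k = 1:kmax
    R = B + prodA(X, A) - X;
    if norm(R(:)) < tol, return; end
    X1 = X + alpha * R;                 % first half-step, as in GBI
    Xb = (X1 + X) / 2;                  % intermediate averaged iterate
    Rb = B + prodA(Xb, A) - Xb;         % residual at the intermediate point
    X2 = Xb - alpha * prodA(Rb, A');    % second half-step starts from Xb
    X  = (X1 + X2) / 2;
end
end
```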
It is remarkable that for the GBI and MGBI methods, Theorems 4.2 and 4.3 only give a range for the step length, respectively; its optimal value is not available. To overcome this drawback, we propose two iterative methods based on a projection technique [37–39] to solve Eq. (1.1). In the remainder of this paper, unless otherwise specified, we always assume that Eq. (1.1) has a unique solution.

3.3. The residual norm conjugate gradient iterative algorithm
In this subsection, we extend the residual norm steepest descent (RNSD) method proposed by Fatemeh [26] to the discrete Lyapunov tensor equation, and then develop a residual norm conjugate gradient method to improve the RNSD method. At the k-th iteration, we choose the parameter so that the norm of the residual tensor
R_k = B − X_k + X_k ×1 A ×2 ··· ×m A
corresponding to the new approximation X_{k+1} is minimized over the feasible set
Ω_k = {X_{k+1} | X_{k+1} = X_k + γ_k S_k},
where
S_k = R_k − R_k ×1 Aᵀ ×2 ··· ×m Aᵀ.
Using the basic concepts of projection techniques for solving linear systems, we seek the step length γ_k associated with X_{k+1} such that
‖R_{k+1}‖ = min_{X_{k+1} ∈ Ω_k} ‖B − X_{k+1} + X_{k+1} ×1 A ×2 ··· ×m A‖.    (3.10)
Therefore, we have the following theorem.
Theorem 3.1. Suppose that X_{k+1} is the minimizer of Eq. (3.10) at the k-th iteration. Then
γ_k = ⟨R_k, 𝒜(S_k)⟩ / ⟨𝒜(S_k), 𝒜(S_k)⟩,    (3.11)
where
𝒜(S_k) = S_k − S_k ×1 A ×2 ··· ×m A,  ⟨𝒜(S_k), 𝒜(S_k)⟩ ≠ 0.    (3.12)
Proof. Substituting X_{k+1} = X_k + γ_k S_k into Eq. (3.10), it follows that
‖R_{k+1}‖ = min_{X_{k+1} ∈ Ω_k} ‖B − X_k + X_k ×1 A ×2 ··· ×m A − γ_k(S_k − S_k ×1 A ×2 ··· ×m A)‖
= min_{X_{k+1} ∈ Ω_k} ‖R_k − γ_k 𝒜(S_k)‖.    (3.13)
Setting the derivative of (3.13) with respect to γ_k equal to zero gives
⟨𝒜(S_k), γ_k 𝒜(S_k) − R_k⟩ = γ_k⟨𝒜(S_k), 𝒜(S_k)⟩ − ⟨𝒜(S_k), R_k⟩ = 0.    (3.14)
If ⟨𝒜(S_k), 𝒜(S_k)⟩ = 0, then X_k is an exact solution of Eq. (1.1). Otherwise, (3.11) holds. □

In what follows, we present the residual norm steepest descent algorithm to solve (1.1).
Algorithm 3.4. The residual norm steepest descent algorithm.
1. Given X_0, B, A, set k = 0, R_0 = B − X_0 + X_0 ×1 A ×2 ··· ×m A.
2. while k < kmax do,
3. k = k + 1, S_{k−1} = R_{k−1} − R_{k−1} ×1 Aᵀ ×2 ··· ×m Aᵀ,
γ_{k−1} = ⟨R_{k−1}, 𝒜(S_{k−1})⟩ / ⟨𝒜(S_{k−1}), 𝒜(S_{k−1})⟩,
X_k = X_{k−1} + γ_{k−1}S_{k−1},
R_k = B − X_k + X_k ×1 A ×2 ··· ×m A.
4. Stopping check. If ‖R_k‖ < ε, then stop; otherwise continue.
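A minimal MATLAB sketch of Algorithm 3.4 for m = 3 follows. The helper applyA and the residual-update shortcut R_{k+1} = R_k − γ_k 𝒜(S_k) are our implementation choices, consistent with (3.13).

```matlab
function [X, k] = rnsd_lyap(A, B, tol, kmax)
% RNSD iteration for a third-order equation (hypothetical names).
prodA  = @(Z, M) nmode_product(nmode_product(nmode_product(Z, M, 1), M, 2), M, 3);
applyA = @(Z) Z - prodA(Z, A);          % the operator A(Z) = Z - Z x1 A x2 A x3 A
X = zeros(size(B));
R = B - applyA(X);                      % R_0
for k = 1:kmax
    if norm(R(:)) < tol, return; end
    S  = R - prodA(R, A');              % steepest descent direction S_{k-1}
    AS = applyA(S);
    gamma = (R(:)' * AS(:)) / (AS(:)' * AS(:));   % optimal step length (3.11)
    X = X + gamma * S;
    R = R - gamma * AS;                 % R_k = R_{k-1} - gamma * A(S_{k-1})
end
end
```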
From the above process, it is easy to see that S_k is merely a locally optimal descent direction. Consequently, we redefine a better descent direction S_k and thereby obtain the residual norm conjugate gradient (RNCG) algorithm. The specific process is presented below. The first step (k = 1) still chooses S_0 as the descent direction, namely
γ_0 = ⟨R_0, 𝒜(S_0)⟩ / ⟨𝒜(S_0), 𝒜(S_0)⟩,  X_1 = X_0 + γ_0 S_0,  W_1 = R_1 − R_1 ×1 Aᵀ ×2 ··· ×m Aᵀ.
For the (k+1)-st step (k ≥ 1), we reconsider the feasible set
Ω_k = {X_{k+1} | X_{k+1} = X_k + γ_k W_k + η_k S_{k−1}},    (3.15)
such that Eq. (3.10) can be rewritten as
‖R_{k+1}‖_{(γk, ηk)} = min ‖B − X_{k+1} + X_{k+1} ×1 A ×2 ··· ×m A‖ = min ‖R_k − γ_k 𝒜(W_k) − η_k 𝒜(S_{k−1})‖,    (3.16)
where 𝒜(W_k) = W_k − W_k ×1 A ×2 ··· ×m A. Hence, we have
∂‖R_{k+1}‖²/∂γ_k = 2(γ_k⟨𝒜(W_k), 𝒜(W_k)⟩ + η_k⟨𝒜(W_k), 𝒜(S_{k−1})⟩ − ⟨W_k, W_k⟩),
∂‖R_{k+1}‖²/∂η_k = 2(η_k⟨𝒜(S_{k−1}), 𝒜(S_{k−1})⟩ + γ_k⟨𝒜(W_k), 𝒜(S_{k−1})⟩ − ⟨W_k, S_{k−1}⟩),
and we set ∂‖R_{k+1}‖²/∂γ_k = ∂‖R_{k+1}‖²/∂η_k = 0, where ⟨W_k, S_{k−1}⟩ = 0.
Then the unique minimizer of ‖R_{k+1}‖ is X_{k+1} = X_k + γ_k W_k + η_k S_{k−1}, where the parameters γ_k and η_k satisfy
γ_k⟨𝒜(W_k), 𝒜(W_k)⟩ + η_k⟨𝒜(W_k), 𝒜(S_{k−1})⟩ = ⟨W_k, W_k⟩,
γ_k⟨𝒜(W_k), 𝒜(S_{k−1})⟩ + η_k⟨𝒜(S_{k−1}), 𝒜(S_{k−1})⟩ = 0.    (3.17)
It can be shown that γ_k ≠ 0 if W_k ≠ 0. Then we take
S_k = W_k + (η_k/γ_k) S_{k−1}.    (3.18)
Let θ_{k−1} = η_k/γ_k. Then
θ_{k−1} = − ⟨𝒜(W_k), 𝒜(S_{k−1})⟩ / ⟨𝒜(S_{k−1}), 𝒜(S_{k−1})⟩.    (3.19)
Combining (3.11) with (3.19), the iterative scheme of the RNCG algorithm is given by
R_k = B − X_k + X_k ×1 A ×2 ··· ×m A,
W_k = R_k − R_k ×1 Aᵀ ×2 ··· ×m Aᵀ,
S_k = W_k + θ_{k−1}S_{k−1},
X_{k+1} = X_k + γ_k S_k.    (3.20)
In what follows, we present the residual norm conjugate gradient algorithm to solve (1.1).
Algorithm 3.5. The residual norm conjugate gradient algorithm
1. Given X_0, B, A, set k = 0, R_0 = B − X_0 + X_0 ×1 A ×2 ··· ×m A.
2. while k < kmax do,
3. k = k + 1, W_{k−1} = R_{k−1} − R_{k−1} ×1 Aᵀ ×2 ··· ×m Aᵀ.
If k = 1, set S_0 = R_0; else
θ_{k−2} = − ⟨𝒜(W_{k−1}), 𝒜(S_{k−2})⟩ / ⟨𝒜(S_{k−2}), 𝒜(S_{k−2})⟩,  S_{k−1} = W_{k−1} + θ_{k−2}S_{k−2}. end
γ_{k−1} = ⟨R_{k−1}, 𝒜(S_{k−1})⟩ / ⟨𝒜(S_{k−1}), 𝒜(S_{k−1})⟩,
X_k = X_{k−1} + γ_{k−1}S_{k−1},
R_k = B − X_k + X_k ×1 A ×2 ··· ×m A.
4. Stopping check. If ‖R_k‖ < ε, then stop; otherwise continue.
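The following MATLAB function sketches Algorithm 3.5 for m = 3, following the listing's choice S_0 = R_0 and updating 𝒜(S_k) by linearity to save one operator application per step; the function name rncg_lyap is hypothetical.

```matlab
function [X, k] = rncg_lyap(A, B, tol, kmax)
% RNCG iteration (3.20) for a third-order equation (hypothetical names).
prodA  = @(Z, M) nmode_product(nmode_product(nmode_product(Z, M, 1), M, 2), M, 3);
applyA = @(Z) Z - prodA(Z, A);
X = zeros(size(B));
R = B - applyA(X);
S = R;                                   % S_0 = R_0, as in the listing
AS = applyA(S);
for k = 1:kmax
    gamma = (R(:)' * AS(:)) / (AS(:)' * AS(:));   % step length (3.11)
    X = X + gamma * S;
    R = R - gamma * AS;
    if norm(R(:)) < tol, return; end
    W  = R - prodA(R, A');               % W_k = R_k - R_k x1 A' x2 A' x3 A'
    AW = applyA(W);
    theta = -(AW(:)' * AS(:)) / (AS(:)' * AS(:)); % conjugation parameter (3.19)
    S  = W + theta * S;                  % new direction S_k of (3.18)
    AS = AW + theta * AS;                % A(S_k) updated by linearity
end
end
```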
Lemma 3.2. If the discrete Lyapunov tensor equation (1.1) has a unique solution, then the tensor sequences {W_i} and {S_j} generated by Algorithm 3.5 have the following properties:
⟨W_k, S_{k−1}⟩ = ⟨R_k, 𝒜(S_{k−1})⟩ = 0,  ⟨𝒜(S_k), 𝒜(S_{k−1})⟩ = 0.    (3.21)
Proof. For the first equality, it follows from (2.8) and (3.20) that
⟨W_k, S_{k−1}⟩ = ⟨R_k − R_k ×1 Aᵀ ×2 ··· ×m Aᵀ, S_{k−1}⟩ = ⟨R_k, 𝒜(S_{k−1})⟩
= ⟨R_{k−1} − γ_{k−1}𝒜(S_{k−1}), 𝒜(S_{k−1})⟩
= ⟨R_{k−1}, 𝒜(S_{k−1})⟩ − γ_{k−1}⟨𝒜(S_{k−1}), 𝒜(S_{k−1})⟩
= 0,
where the last step uses the definition (3.11) of γ_{k−1}.
For the second equality, we have
⟨𝒜(S_k), 𝒜(S_{k−1})⟩ = ⟨𝒜(W_k) + θ_{k−1}𝒜(S_{k−1}), 𝒜(S_{k−1})⟩
= ⟨𝒜(W_k), 𝒜(S_{k−1})⟩ − (⟨𝒜(W_k), 𝒜(S_{k−1})⟩ / ⟨𝒜(S_{k−1}), 𝒜(S_{k−1})⟩) ⟨𝒜(S_{k−1}), 𝒜(S_{k−1})⟩
= 0. □
4. Convergence analysis

In this section, we present the convergence properties of the aforementioned methods. We first give the following theorem for Algorithm 3.1.

Theorem 4.1. Let X∗ be the unique solution of Eq. (1.1). Then the iterative sequence {X_k} generated by Algorithm 3.1 converges to X∗.
Proof. Let X̄_k = X_k − X∗ be the error tensor at the k-th iteration. Since X_{k+1} = B + X_k ×1 A ×2 ··· ×m A and X∗ = B + X∗ ×1 A ×2 ··· ×m A, we have X_{k+1} − X∗ = (X_k − X∗) ×1 A ×2 ··· ×m A, that is,
X̄_{k+1} = X̄_k ×1 A ×2 ··· ×m A.    (4.1)
By taking norms on both sides of (4.1) and using (2.9), we have
‖X̄_{k+1}‖ ≤ ‖X̄_k‖‖A‖₂^m ≤ ‖X̄_{k−1}‖‖A‖₂^{2m} ≤ ··· ≤ ‖X̄_0‖‖A‖₂^{(k+1)m}.    (4.2)
Since ‖A‖₂ < 1, it follows that ‖X̄_{k+1}‖ → 0 as k → ∞. □
Theorem 4.2. Let X∗ be the unique solution of Eq. (1.1). If the step length α satisfies
0 < α < 1/(1 + ‖A‖₂^{2m}),    (4.3)
then the iterative sequence {X_k} generated by Algorithm 3.2 converges to X∗ for any initial value.

Proof. Let X̄_k^j = X_k^j − X∗, j = 1, 2, and X̄_k = X_k − X∗. According to (3.8), it follows that
X̄_{k+1}^1 = X̄_k + αL̄_{k+1},
X̄_{k+1}^2 = X̄_k − αL̄_{k+1} ×1 Aᵀ ×2 ··· ×m Aᵀ,    (4.4)
where L̄_{k+1} = X̄_k ×1 A ×2 ··· ×m A − X̄_k. By taking norms on both sides of (4.4), we have
‖X̄_{k+1}^1‖² ≤ ‖X̄_k‖² + α²‖L̄_{k+1}‖² + 2α⟨X̄_k, L̄_{k+1}⟩,
‖X̄_{k+1}^2‖² ≤ ‖X̄_k‖² + α²‖L̄_{k+1}‖²‖A‖₂^{2m} − 2α⟨X̄_k ×1 A ×2 ··· ×m A, L̄_{k+1}⟩.    (4.5)
Consequently,
‖X̄_{k+1}‖² ≤ (‖X̄_{k+1}^1‖² + ‖X̄_{k+1}^2‖²)/2
≤ ‖X̄_k‖² + α²‖L̄_{k+1}‖²(1 + ‖A‖₂^{2m}) − α‖L̄_{k+1}‖²
≤ ‖X̄_k‖² + α‖L̄_{k+1}‖²(α + α‖A‖₂^{2m} − 1)
≤ ‖X̄_0‖² − α(1 − α − α‖A‖₂^{2m}) Σ_{i=1}^{k+1} ‖L̄_i‖².    (4.6)
When 0 < α < 1/(1 + ‖A‖₂^{2m}), we have α(1 − α − α‖A‖₂^{2m}) > 0, and hence
Σ_{i=1}^{∞} ‖L̄_i‖² ≤ ‖X̄_0‖² / (α(1 − α − α‖A‖₂^{2m})) < ∞.
This implies that L̄_k → 0 as k → ∞. Since L̄_{k+1} = X̄_k ×1 A ×2 ··· ×m A − X̄_k and Eq. (1.1) has a unique solution, it follows that X̄_k → 0 as k → ∞. □
Theorem 4.3. Let X∗ be the unique solution of Eq. (1.1). If the step length α satisfies
0 < α < min{1, 1/‖A‖₂^{2m}},    (4.7)
then the iterative sequence {X_k} generated by Algorithm 3.3 converges to X∗ for any initial value.

Proof. The proof is similar to that of Theorem 4.2 and is therefore omitted. □
Theorem 4.4. For the iterates generated by Algorithm 3.5, one of the following two cases occurs:
(1) if ⟨𝒜(S_k), 𝒜(S_k)⟩ = 0, then X_k is a solution of Eq. (1.1);
(2) if ⟨𝒜(S_k), 𝒜(S_k)⟩ ≠ 0, then ‖R_k‖ ≤ ‖R_{k−1}‖.

Proof. If ⟨𝒜(S_k), 𝒜(S_k)⟩ = 0, then 𝒜(S_k) = 0. Moreover,
⟨R_k, 𝒜(S_k)⟩ = ⟨W_k, S_k⟩ = ⟨W_k, W_k + θ_{k−1}S_{k−1}⟩ = ⟨W_k, W_k⟩ + θ_{k−1}⟨W_k, S_{k−1}⟩ = 0.
By Lemma 3.2, it follows that ⟨W_k, W_k⟩ = 0, i.e., W_k = 0. Since Eq. (1.1) has a unique solution, this implies R_k = 0, so that X_k is a solution of Eq. (1.1). The rest of the proof is similar to Theorem 2 in [26], hence we omit it. □
5. Numerical experiments
The purpose of the examples in this section is to illustrate the effectiveness of the proposed methods, in particular SI and RNCG. We implemented our methods in Matlab R2016a and ran the codes on a PC with a 2.20 GHz CPU and 2.0 GB RAM. For convenience, we use "IT" and "CPU" to denote the number of iteration steps and the elapsed CPU time in seconds, respectively, and RES = ‖R_k‖, where R_k is the residual tensor at the k-th iteration; κ(A) denotes the 2-norm of the matrix A. Moreover, the stopping criterion for all methods is RES < ε with ε = 1 × 10⁻⁷. We tested the proposed methods for different values of κ(A) and compared the numerical results.
Example 5.1. In this example, we first consider the following convection-diffusion equation [27]:
−v∆u + cᵀ∇u = f  in Ω = [0, 1] × [0, 1] × [0, 1],
u = 0  on ∂Ω.
Taking a standard finite difference discretization on a uniform grid for the diffusion term and Fromm's scheme for the convection term, with mesh size h = 1/(n+1), we obtain the coefficient matrix of Eq. (3.1) as
A = (v/h²) tridiag(−1, 2, −1) + (c/(4h)) F ∈ R^{n×n},
where F is the banded matrix with 3 on the main diagonal, 1 on the first superdiagonal, −5 on the first subdiagonal and 1 on the second subdiagonal; that is, the nonzero diagonals of A are 2vh⁻² + (3/4)ch⁻¹ (main), −vh⁻² + (1/4)ch⁻¹ (first super), −vh⁻² − (5/4)ch⁻¹ (first sub) and (1/4)ch⁻¹ (second sub). Thus we solve the discrete Lyapunov tensor equation (1.1) with
B = tenrand(n, n, n),  X_0 = tenzeros(n, n, n),  κ(A) < 1.
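A possible MATLAB construction of this coefficient matrix follows (our reading of the discretization above; the exact parameter scaling used by the authors is an assumption):

```matlab
% Hypothetical reconstruction of the Example 5.1 matrix for given n, v, c.
n = 10; h = 1/(n+1); v = 1/961; c = 1/15376;   % one parameter set from Table 5.1
e = ones(n, 1);
D = (v/h^2)   * full(spdiags([-e, 2*e, -e], -1:1, n, n));       % diffusion part
C = (c/(4*h)) * full(spdiags([e, -5*e, 3*e, e], -2:1, n, n));   % Fromm convection
A = D + C;
fprintf('kappa(A) = %.4f\n', norm(A, 2));      % contraction check for (3.3)
```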
The tensor B and the parameters n, h, c, v are generated for different system dimensions. We find the solution of the tensor equation (1.1) by the five methods mentioned above. The numerical results are reported in Table 5.1. They show that our methods are feasible, and that the iteration count and elapsed CPU time increase as the dimension of the tensor increases. In addition, the table shows that the CPU time of the SI method is less than that of the others. Figure 5.1 shows the performance of each method on the problem with n = 2 and n = 20, respectively. It follows from Figure 5.1 that the SI method performs best, which coincides with the analysis presented in Section 4.

Table 5.1: IT, CPU(s), RES.

n, v, c, κ(A)                | SI                      | GBI                        | MGBI                     | RNSD                     | RNCG
2, 1/144, 1/2304, 0.6799     | 8, 0.0167, 3.7096e-08   | 46, 0.3343, 8.9032e-08     | 34, 0.2350, 7.2629e-08   | 12, 0.0872, 5.4184e-08   | 6, 0.0541, 1.1444e-16
5, 1/400, 1/6400, 0.7834     | 14, 0.0355, 6.8657e-08  | 100, 0.7140, 9.1219e-08    | 45, 0.3017, 9.6359e-08   | 25, 0.1739, 7.0398e-08   | 12, 0.1048, 9.8458e-08
10, 1/961, 1/15376, 0.8508   | 26, 0.0674, 4.5741e-08  | 225, 1.8646, 9.8147e-08    | 72, 0.7024, 8.3748e-08   | 45, 0.3657, 9.2018e-08   | 20, 0.2283, 5.9033e-08
15, 1/1764, 1/28224, 0.8614  | 33, 0.1646, 7.4725e-08  | 283, 2.8707, 9.5454e-08    | 89, 1.1332, 9.6160e-08   | 53, 0.5633, 8.8554e-08   | 24, 0.3663, 4.8178e-08
20, 1/2601, 1/41616, 0.9307  | 61, 0.2098, 8.4290e-08  | 1282, 11.6530, 9.9051e-08  | 193, 2.2793, 9.9749e-08  | 196, 1.5892, 9.2517e-08  | 42, 0.5336, 9.1730e-08
[Figure omitted in this text version.]
Fig. 5.1: SI, GBI, MGBI, RNSD and RNCG methods for Eq. (1.1) with n = 2 (left) and n = 20 (right); vertical axis ‖R(X)‖, horizontal axis iteration number.
Example 5.2. We reconsider Example 4.1 of [40], except that B is an identity tensor. Let
A = ( A11  A12 ; 0  −A11ᵀ ),  A11 = ( 1−α  α ; α  (1−α)/2 ),
X_0 = tenzeros(4, 4, 4),  B = tenones(4, 4, 4),
where A12 ∈ R^{2×2} is a random matrix.
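Under our reconstruction of the block matrix above (the layout of A11 is inferred from the damaged display and is an assumption), Example 5.2's data can be generated by the following hypothetical MATLAB fragment, with plain arrays standing in for the Tensor Toolbox objects tenzeros/tenones:

```matlab
% Hypothetical construction of the Example 5.2 data.
alpha = 0.7;
A11 = [1 - alpha, alpha; alpha, (1 - alpha)/2];
A12 = rand(2);                              % random 2 x 2 block
A   = [A11, A12; zeros(2), -A11'];
X0  = zeros(4, 4, 4); B = ones(4, 4, 4);
fprintf('kappa(A) = %.4f\n', norm(A, 2));   % kappa(A) > 1, so SI is ruled out by (3.3)
```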
Table 5.2: IT, CPU(s), RES.

α, κ(A)      | GBI                       | MGBI                     | RNSD                     | RNCG
0.7, 1.1201  | 604, 4.0333, 9.7503e-08   | 298, 2.5993, 9.8840e-08  | 189, 1.4201, 9.4686e-08  | 24, 0.2417, 5.6358e-08
0.8, 1.2238  | 788, 5.2185, 9.9093e-08   | 233, 3.5621, 9.6778e-08  | 571, 1.6723, 9.6569e-08  | 33, 0.3385, 4.4610e-08
0.9, 1.2987  | 1437, 9.7987, 9.9884e-08  | 865, 6.9527, 9.8268e-08  | 391, 2.9500, 9.5892e-08  | 32, 0.3578, 7.2781e-08
For this example, the SI algorithm is not applicable, since κ(A) > 1. Therefore, we only used the GBI, MGBI, RNSD and RNCG algorithms to solve the tensor equation (1.1). The numerical results are listed in Table 5.2. It shows that the iteration steps, the elapsed CPU time and the residual of the RNCG algorithm are commonly smaller than those of the other methods. From Figure 5.2, we see that the RNCG algorithm may have superlinear convergence.
6. Concluding remarks
We have extended some well-known iterative methods to solve the discrete Lyapunov tensor equation. On this basis, we proposed the SI and RNCG methods for solving the tensor
[Figure omitted in this text version.]
Fig. 5.2: GBI, MGBI, RNSD and RNCG methods for solving Eq. (1.1) with α = 0.7 (left) and α = 0.9 (right); vertical axis ‖R(X)‖, horizontal axis iteration number.
equation (1.1). The convergence analysis shows that the iterative sequences generated by the proposed methods converge to the unique solution for any initial value. Our limited numerical results show that the proposed methods are feasible and efficient, with the SI and RNCG methods performing much better than the others.
References
[1] L. Li, B.-D. Zheng, Sensitivity analysis of the Lyapunov tensor equation, Linear and Multilinear Algebra. 67 (3) (2019) 555-572. [2] L. Li, B.-D. Zheng, Algebraic Lyapunov and Stein stability results for tensors, Linear and Multilinear Algebra. 66 (4) (2018) 731-741. [3] D.-C. Sorensen, A.-C. Antoulas, The Sylvester equation and approximate balanced reduction, Linear Alg. Appl. 351 (2002) 671-700. [4] U. Baur, P. Benner, Cross-Gramian based model reduction for data-sparse systems, Electr. Trans. Num. Anal. 31 (27) (2008) 256-270. [5] D. Calvetti, R. Lothar, Application of ADI iterative methods to the restoration of noisy images, SIAM J. Matrix Anal. Appl. 17 (1) (1996) 165-186. [6] R.-E. Kalman, J.-E. Bertram, Control system analysis and design via the ’second method’ of Lyapunov: I-Continuous-time systems, J. Basic Eng. 82 (2) (1960) 371-393. [7] G.-H. Golub, S. Nash, A hessenberg-schur method for the problem AX + XB = C, IEEE Trans. Autom. Control. 24 (6) (1979) 909-913. [8] I. Troch, Solving the discrete Lyapunov equation using the solution of the corresponding continuous Lyapunov equation and vice versa, IEEE Trans. Autom. Control. 33 (10) (1988) 944-946. [9] T. Mori, N. Fukuma, On the discrete Lyapunov matrix equation, IEEE Trans. Autom. Control. 27 (2) (1982), 463-464. [10] Q. Wang, J. Lam, Y.-Y. Wei, T.-W. Chen, Iterative solutions of coupled discrete Markovian jump Lyapunov equations, Comput. Math. Appl. 55 (4) (2008) 843-850. [11] J. Heinen, A technique for solving the extended discrete Lyapunov matrix equation, IEEE Trans. Autom. Control. 17 (1) (1972) 156-157. [12] H. Bellifa, A. Bakora, A. Tounsi, An efficient and simple four variable refined plate theory for buckling analysis of functionally graded plates, Steel Compos. Struct. 25 (3) (2017) 257-270. [13] H. Hebali, A. Tounsi, M.-S. Houari, New quasi-3D hyperbolic shear deformation theory for the
static and free vibration analysis of functionally graded plates, J. Eng. Mech. 140 (2) (2014) 374-383.
[14] A. Kaci, M.-S. Houar, A.-A. Bousahla, Post-buckling analysis of shear-deformable composite beams using a novel simple two-unknown beam theory, Struct. Eng. Mech. 65 (5) (2018) 621-631.
[15] H.-H. Abdelaziz, M.-A. Meziane, A.-A. Bousahla, An efficient hyperbolic shear deformation theory for bending, buckling and free vibration of FGM sandwich plates with various boundary conditions, Steel Compos. Struct. 25 (6) (2017) 693-704.
[16] A.-A. Bousahla, S. Benyoucef, A. Tounsi, On thermal stability of plates with functionally graded coefficient of thermal expansion, Struct. Eng. Mech. 60 (2) (2016) 313-335.
[17] A. Halanay, V. Ionescu, Time-Varying Discrete Linear Systems, Birkhäuser, 1994.
[18] Y. Saito, T. Mitsui, Stability analysis of numerical schemes for stochastic differential equations, SIAM J. Num. Anal. 33 (6) (1996) 2254-2267.
[19] K.-M. Zhou, J.-C. Doyle, Robust and Optimal Control, Prentice Hall, New Jersey, 1996.
[20] H.-R. Shaker, M. Tahavori, Time-interval model reduction of bilinear systems, Int. J. Control. 87 (8) (2014) 1487-1495.
[21] S. Kadry, Z. Woznicki, On discussion of SQR method for solving Sylvester equation, Int. J. Soft. Comput. (2) (2007) 236-242.
[22] C.-Q. Gu, H. Xue, A shift-splitting hierarchical identification method for solving Lyapunov matrix equations, Linear Alg. Appl. 430 (5-6) (2009) 1517-1530.
[23] B. Gui, M.-L. Yan, Transform DLF algorithm to solve discrete Lyapunov matrix equations, 2012 Fifth Int. Conf. Inf. Comput. Sci. (2012) 146-149.
[24] Z. Chen, L.-Z. Lu, A projection method and Kronecker product preconditioner for solving Sylvester tensor equations, Sci. China Math. 55 (6) (2012) 1281-1292.
[25] Z. Chen, L.-Z. Lu, A gradient based iterative solutions for Sylvester tensor equations, Math. Prob. Eng. 2013 (2013).
[26] F. Panjeh, A.-A. Salman, Residual norm steepest descent based iterative algorithms for Sylvester tensor equations, J. Math. Model. 2 (2) (2015) 115-131.
[27] A. Beik, F. Panjeh, On the Krylov subspace methods based on tensor format for positive definite Sylvester tensor equations, Numer. Linear Alg. Appl. 23 (3) (2016) 444-466.
[28] Z.-Z. Bai, J.-F. Yin, Y.-F. Su, A shift-splitting preconditioner for non-Hermitian positive definite matrices, J. Comput. Math. 24 (2006) 539-552.
[29] F. Ding, T.-W. Chen, Gradient based iterative algorithms for solving a class of matrix equations, IEEE Trans. Autom. Control. 50 (8) (2005) 1216-1221.
[30] F. Ding, P.-X. Liu, J. Ding, Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle, Appl. Math. Comp. 197 (1) (2008) 41-50.
[31] T.-G. Kolda, B.-W. Bader, Tensor decompositions and applications, SIAM Rev. 51 (3) (2009) 455-500.
[32] T.-G. Kolda, Multilinear operators for higher-order decompositions, Sandia National Laboratory, 2006.
[33] L.-Q. Qi, Z.-Y. Luo, Tensor Analysis: Spectral Theory and Special Tensors, SIAM, 2017.
[34] L.-P. Zhang, L.-Q. Qi, G. Zhou, M-tensors and some applications, SIAM J. Matrix Anal. Appl. 35 (2) (2014) 437-452.
[35] Y. Song, L.-Q. Qi, Properties of some classes of structured tensors, J. Optim. Theory Appl. 165 (3) (2015) 854-873.
[36] A. Cichocki, R. Zdunek, Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation, John Wiley & Sons, 2009.
[37] Z.-Z. Bai, M. Rozloznik, On the numerical behavior of matrix splitting iteration methods for solving linear systems, SIAM J. Numer. Anal. 53 (2015) 1716-1737.
[38] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, 2003.
[39] J. Ballani, L. Grasedyck, A projection method to solve linear systems in tensor format, Numer. Linear Alg. Appl. 20 (1) (2013) 27-43. [40] P. Benner, E.-S. Quintana-Orti, Numerical solution of discrete stable linear matrix equations on multicomputers, Parallel Algorithms Appl. 17 (2) (2002) 127-146.