
Iterative criteria for identifying strong H-tensors∗

Baohua Huang, Changfeng Ma†

College of Mathematics and Informatics & FJKLMAA, Fujian Normal University, Fuzhou 350117, P.R. China.

∗ This research is supported by National Science Foundation of China (41725017) and National Basic Research Program of China under grant number 2014CB845906. It is also partially supported by the CAS/CAFEA international partnership Program for creative research teams (No. KZZD-EW-TZ-19 and KZZD-EW-TZ-15) and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB18010202).
† Corresponding author. Email address: [email protected] (C.-F. Ma).

Abstract: Strong H-tensors play an important role in the theory and applications of numerical linear algebra, so it is necessary to identify whether a given tensor is a strong H-tensor or not. In this paper, we establish some iterative criteria for identifying strong H-tensors. These criteria depend only on the elements of the tensors and are therefore easy to verify. The results obtained in this paper extend the corresponding conclusions for strictly generalized diagonally dominant matrices. As an application, some sufficient conditions for the positive definiteness of an even-order real symmetric tensor are presented. Numerical experiments show the feasibility and efficiency of the results obtained in this paper.

Keywords: Strong H-tensors; Strictly generalized diagonally dominant; Iterative algorithm; Positive definiteness; Numerical experiments.

MSC (2010): 15A69; 15A18; 65F15; 65H17

1 Introduction

A complex (real) $m$th order $n$-dimensional tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ consists of $n^m$ complex (real) entries $a_{i_1 i_2 \cdots i_m} \in \mathbb{C}\,(\mathbb{R})$, where $i_j = 1, 2, \ldots, n$ for $j = 1, 2, \ldots, m$ [1-3]. It is obvious that a matrix is a 2nd-order tensor. Moreover, a tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ is called symmetric [4, 5] if
$$a_{i_1 i_2 \cdots i_m} = a_{\pi(i_1 i_2 \cdots i_m)}, \quad \forall\, \pi \in \Pi_m,$$
where $\Pi_m$ is the permutation group of $m$ indices. As is well known, the $m$th degree homogeneous polynomial of $n$ variables $f(x)$ is defined as
$$f(x) = \sum_{i_1, i_2, \ldots, i_m \in N} a_{i_1 i_2 \cdots i_m} x_{i_1} x_{i_2} \cdots x_{i_m}, \qquad (1.1)$$
where $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$ and $N = \{1, 2, \ldots, n\}$. When $m$ is even, $f(x)$ is called positive definite if $f(x) > 0$ for all $x \in \mathbb{R}^n$, $x \neq 0$. In tensor notation, this form can be written as $f(x) = \mathcal{A}x^m$, where $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ is an $m$th order $n$-dimensional tensor.

Positive semidefinite polynomials (nonnegative polynomials) are important in the field of optimization. The positive definiteness of multivariate polynomials plays an important role in the stability study of nonlinear autonomous systems [3, 6], for instance in multivariate network realizability theory [7], in a test for Lyapunov stability of multivariate filters [8], in output feedback stabilization problems [9] and in a test for the existence of periodic oscillations using Bendixson's theorem [10]. Owing to this importance, the positive definiteness of tensors has received much attention from researchers in the recent decade [11-14]. For example, based on the Sturm theorem, the positive definiteness of a multivariate polynomial form can be checked for $n \le 3$ [15]. Chen and Qi [16] established the positive definiteness and semi-definiteness of even-order symmetric Cauchy tensors. Based on the relationship between the positive definiteness of a tensor and the signs of its eigenvalues [4], Ni and Qi [12] presented an eigenvalue method for testing the positive definiteness of a multivariate form. However, this method requires all the eigenvalues of the tensor, so it is infeasible when the order or dimension of the tensor is large. On the other hand, Li et al. [17] provided a practical sufficient condition for identifying the positive definiteness of an even-order symmetric tensor. They pointed out that strong H-tensors form a special class of tensors and that an even-order symmetric strong H-tensor with positive diagonal entries is positive definite. Consequently, we may identify the positive definiteness of a tensor by identifying whether it is a strong H-tensor.

A typical feature of a strong H-tensor is that it is strictly generalized diagonally dominant [17, 18]. Based on this characterization, various criteria for strong H-tensors have been established

[19-26]. For a diagonally dominant tensor with at least one index being strictly diagonally dominant, it is shown to be a strong H-tensor under an irreducibility condition [17]. Recently, Wang et al. [22] gave a relaxed criterion for strong H-tensors by means of S-diagonal product dominant tensors. Different from [19-26], the method in [17] is an implementable iterative algorithm for identifying strong H-tensors, and it was recently improved in [27]. However, that algorithm is a parameter-involved iterative method, and an inappropriate iterative parameter can result in a large amount of computation. Later, Zhang and Wang [28] proposed a non-parameter-involved scheme for identifying strong H-tensors. Inspired by the previous work, in this paper we establish some iterative criteria for strong H-tensors. The obtained results extend the corresponding conclusions for strictly generalized diagonally dominant matrices [29, 30]. The validity of our proposed iterative algorithms is theoretically guaranteed, and numerical experiments show their efficiency.

The rest of this paper is organized as follows. Section 2 gives some preliminaries and related lemmas which will be used in this paper. Section 3 provides three implementable iterative algorithms for identifying strong H-tensors; moreover, we prove the validity of these algorithms in that section. As an application, some new sufficient conditions for the positive definiteness of an even-order real symmetric tensor are presented in Section 4. In Section 5, numerical experiments are given to show the efficiency of the proposed algorithms. The paper ends with some conclusions in Section 6.

To end this section, we give some notation used in this paper. We use small letters $x, y, v, \alpha, \ldots$ for scalars, small bold letters $\mathbf{x}, \mathbf{y}, \ldots$ for vectors, capital letters $A, B, \ldots$ for matrices and calligraphic letters $\mathcal{A}, \mathcal{B}, \ldots$ for tensors. Let $\mathcal{O}$ and $O$ be the zero tensor and the zero matrix, respectively. Denote the set of all $m$th order $n$-dimensional tensors by $T_{m,n}$. For a set $\alpha$, $|\alpha|$ denotes the number of elements in $\alpha$. Let $N = \{1, 2, \ldots, n\}$. We use $\mathcal{I}$ to denote the $m$th order $n$-dimensional unit tensor [31] with entries
$$\mathcal{I}_{i_1 i_2 \cdots i_m} = \begin{cases} 1, & \text{if } i_1 = i_2 = \cdots = i_m, \\ 0, & \text{otherwise,} \end{cases}$$
and we use the Kronecker delta function
$$\delta_{i_1 i_2 \cdots i_m} = \begin{cases} 1, & \text{if } i_1 = i_2 = \cdots = i_m, \\ 0, & \text{otherwise.} \end{cases}$$

For a vector $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{C}^n$ and a tensor $\mathcal{A} = (a_{i_1 \cdots i_m}) \in T_{m,n}$, $\mathcal{A}x^{m-1}$ is the $n$-dimensional vector whose $i$th component is defined by
$$(\mathcal{A}x^{m-1})_i := \sum_{i_2, \ldots, i_m = 1}^{n} a_{i i_2 \cdots i_m} x_{i_2} \cdots x_{i_m}.$$
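For concreteness, the following MATLAB sketch (our own helper, not from the paper, with the order fixed at $m = 4$) computes the components of $\mathcal{A}x^{m-1}$ for a tensor stored as an $n \times n \times n \times n$ array; the homogeneous form of (1.1) is then obtained as x' * Axm1(A, x).

```matlab
% Minimal sketch (assumes m = 4): i-th component of A x^{m-1} for a
% tensor A stored as an n x n x n x n array. The name Axm1 is ours.
function y = Axm1(A, x)
    n = numel(x);
    y = zeros(n, 1);
    for i = 1:n
        for i2 = 1:n
            for i3 = 1:n
                for i4 = 1:n
                    y(i) = y(i) + A(i, i2, i3, i4) * x(i2) * x(i3) * x(i4);
                end
            end
        end
    end
end
```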

The mode-$k$ product [32, 33] of a tensor $\mathcal{A} \in T_{m,n}$ by a matrix $B \in \mathbb{C}^{n \times n}$, denoted by $\mathcal{A} \times_k B$, is the tensor in $T_{m,n}$ with entries
$$(\mathcal{A} \times_k B)_{i_1 \cdots i_{k-1} i_k i_{k+1} \cdots i_m} = \sum_{j_k = 1}^{n} a_{i_1 \cdots i_{k-1} j_k i_{k+1} \cdots i_m} b_{i_k j_k},$$
where $i_1, \ldots, i_m = 1, 2, \ldots, n$. Moreover, we denote $\mathcal{A}B^{m-1} = \mathcal{A} \times_2 B \cdots \times_m B$ and $B_1^T \mathcal{A} B_2^{m-1} = \mathcal{A} \times_1 B_1 \times_2 B_2 \cdots \times_m B_2$. In particular, for a diagonal matrix $X = \mathrm{diag}(x_1, x_2, \ldots, x_n)$, the product of the tensor $\mathcal{A}$ and the matrix $X$ is given by
$$(\mathcal{A}X^{m-1})_{i_1 i_2 \cdots i_m} = a_{i_1 i_2 \cdots i_m} x_{i_2} x_{i_3} \cdots x_{i_m}. \qquad (1.2)$$
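The scaling (1.2) is the basic update used by all the algorithms of Section 3, where $\mathcal{A}^{(k)} = \mathcal{A}^{(k-1)}[X^{(k-1)}]^{m-1}$. A minimal MATLAB sketch (again with $m = 4$; the helper name scale_m1 is ours) might look as follows.

```matlab
% A sketch of (1.2) for m = 4: B = A X^{m-1} with X = diag(x).
function B = scale_m1(A, x)
    n = numel(x);
    B = zeros(n, n, n, n);
    for i2 = 1:n
        for i3 = 1:n
            for i4 = 1:n
                % b_{i1 i2 i3 i4} = a_{i1 i2 i3 i4} * x_{i2} x_{i3} x_{i4}
                B(:, i2, i3, i4) = A(:, i2, i3, i4) * (x(i2) * x(i3) * x(i4));
            end
        end
    end
end
```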

The composite of a diagonal tensor $\mathcal{D}$ and another tensor $\mathcal{A}$ [18] is defined as $(\mathcal{D}\mathcal{A})_{i_1 i_2 \cdots i_m} = d_{i_1 i_1 \cdots i_1} a_{i_1 i_2 \cdots i_m}$. Given a tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in T_{m,n}$, we denote
$$R_i(\mathcal{A}) = \sum_{\substack{i_2, i_3, \ldots, i_m \in N \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| = \sum_{i_2, i_3, \ldots, i_m \in N} |a_{i i_2 \cdots i_m}| - |a_{i i \cdots i}|,$$
$$N_1(\mathcal{A}) = \{i \in N : |a_{i i \cdots i}| \le R_i(\mathcal{A})\}, \qquad N_2(\mathcal{A}) = \{i \in N : |a_{i i \cdots i}| > R_i(\mathcal{A})\}.$$
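These quantities are cheap to form elementwise. A hedged MATLAB sketch for $m = 4$ (the helper name row_sums is ours) is:

```matlab
% A sketch computing R_i(A), N_1(A), N_2(A) and |a_{ii...i}| for a
% 4th-order tensor A stored as an n x n x n x n array.
function [R, N1, N2, d] = row_sums(A)
    n = size(A, 1);
    R = zeros(n, 1);
    d = zeros(n, 1);
    for i = 1:n
        d(i) = abs(A(i, i, i, i));        % |a_{ii...i}|
        % A(i, :) linearly indexes all entries with first index i
        R(i) = sum(abs(A(i, :))) - d(i);  % off-diagonal row sum R_i(A)
    end
    N1 = find(d <= R);                    % N_1(A)
    N2 = find(d > R);                     % N_2(A)
end
```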

2 Preliminaries

In this section, we first give some preliminaries on tensor analysis and computation [4, 31, 34], and then discuss some properties of strong H-tensors. A complex number $\lambda$ is said to be an eigenvalue of $\mathcal{A}$ if there exists a nonzero vector $x \in \mathbb{C}^n$ such that
$$\mathcal{A}x^{m-1} = \lambda x^{[m-1]},$$
where $x^{[m-1]}$ is the vector in $\mathbb{C}^n$ with $i$th component $x_i^{m-1}$; see [4]. Define the spectrum $\sigma(\mathcal{A})$ to be the set of all eigenvalues of $\mathcal{A}$. Then the spectral radius of $\mathcal{A}$ is defined by $\rho(\mathcal{A}) := \max\{|\lambda| : \lambda \in \sigma(\mathcal{A})\}$.

Analogous to M-matrices, comparison matrices and H-matrices, the definitions of M-tensors, comparison tensors and strong H-tensors are given as follows.

Definition 2.1 [3] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in T_{m,n}$ be a real tensor. $\mathcal{A}$ is called an M-tensor if there exist a nonnegative tensor $\mathcal{B}$ and a positive real number $s \ge \rho(\mathcal{B})$ such that $\mathcal{A} = s\mathcal{I} - \mathcal{B}$. If $s > \rho(\mathcal{B})$, then $\mathcal{A}$ is a strong M-tensor.

Definition 2.2 [3] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in T_{m,n}$ be a complex tensor. We call the tensor $m(\mathcal{A}) = (m_{i_1 i_2 \cdots i_m})$ the comparison tensor of $\mathcal{A}$ if
$$m_{i_1 i_2 \cdots i_m} = \begin{cases} |a_{i_1 i_2 \cdots i_m}|, & \text{if } (i_2, i_3, \ldots, i_m) = (i_1, i_1, \ldots, i_1), \\ -|a_{i_1 i_2 \cdots i_m}|, & \text{if } (i_2, i_3, \ldots, i_m) \neq (i_1, i_1, \ldots, i_1). \end{cases}$$

Definition 2.3 [3] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in T_{m,n}$ be a complex tensor. $\mathcal{A}$ is called an H-tensor if its comparison tensor $m(\mathcal{A})$ is an M-tensor; $\mathcal{A}$ is called a strong H-tensor if $m(\mathcal{A})$ is a strong M-tensor.

Moreover, Li et al. [17] provided the following definition of a strong H-tensor, which is equivalent to Definition 2.3.

Definition 2.4 [17] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in T_{m,n}$ be a complex tensor. $\mathcal{A}$ is called a strong H-tensor if there is an entrywise positive vector $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$ such that for all $i \in N$,
$$|a_{i \cdots i}| x_i^{m-1} > \sum_{\substack{i_2, i_3, \ldots, i_m \in N \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}| x_{i_2} \cdots x_{i_m}.$$

Definition 2.5 [3] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in T_{m,n}$ be a complex tensor. $\mathcal{A}$ is called a diagonally dominant tensor if for all $i \in N$,
$$|a_{i i \cdots i}| \ge \sum_{\substack{i_2, i_3, \ldots, i_m \in N \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}|. \qquad (2.1)$$
$\mathcal{A}$ is strictly diagonally dominant if the strict inequality holds in (2.1) for all $i \in N$.

Finally, we give the following lemmas, which will be used in the sequel.

Lemma 2.1 [18] The following conditions are equivalent:
(i) a tensor $\mathcal{A}$ is a strong H-tensor;

(ii) there exists a positive diagonal matrix $D$ such that $\mathcal{A}D^{m-1}$ is strictly diagonally dominant;

(iii) there exist two positive diagonal matrices $D_1$ and $D_2$ such that $D_1 \mathcal{A} D_2^{m-1}$ is strictly diagonally dominant.

Lemma 2.2 [17] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in T_{m,n}$ be a complex tensor. If $\mathcal{A}$ is a strictly diagonally dominant tensor, then $\mathcal{A}$ is a strong H-tensor.

From Lemmas 2.1 and 2.2, we have the following result.

Lemma 2.3 [18] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in T_{m,n}$ be a strong H-tensor, and let $X = \mathrm{diag}(x_1, x_2, \ldots, x_n)$ be a positive diagonal matrix. Then $\mathcal{A}X^{m-1}$ is a strong H-tensor.

At the end of this section, we review a property of strong H-tensors via principal subtensors. Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in T_{m,n}$, and let $\alpha$ be a subset of the index set $N$. A principal subtensor $\mathcal{A}[\alpha]$ of the tensor $\mathcal{A}$ is the $m$th order $|\alpha|$-dimensional subtensor of $\mathcal{A}$ composed of the following $|\alpha|^m$ elements:
$$\mathcal{A}[\alpha] = (a_{i_1 i_2 \cdots i_m}), \quad i_1, i_2, \ldots, i_m \in \alpha.$$

Lemma 2.4 [35] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in T_{m,n}$ be a strong H-tensor, and let $\alpha$ be a nonempty subset of $N$. Then $\mathcal{A}[\alpha]$ is a strong H-tensor.

3 Some iterative criteria for identifying strong H-tensors

In this section, we give iterative algorithms for identifying strong H-tensors. First, we present the following algorithm proposed in [17], which may be the first iterative scheme for identifying strong H-tensors.

Algorithm 3.1

INPUT: a tensor $\mathcal{A} \in T_{m,n}$ and $\varepsilon > 0$.
OUTPUT: a positive diagonal matrix $X^{\langle k \rangle} = X^{(1)} X^{(2)} \cdots X^{(k)}$ if $\mathcal{A}$ is a strong H-tensor.
Step 1. If $N_2(\mathcal{A}) = \emptyset$ or $a_{i i \cdots i} = 0$ for some $i \in N$, then $\mathcal{A}$ is not a strong H-tensor; stop. Otherwise, go to the next step.
Step 2. Set $\mathcal{A}^{(0)} = \mathcal{A}$, $X^{(0)} = I$, $X^{\langle 0 \rangle} = I$ and $k = 1$.
Step 3. Compute $\mathcal{A}^{(k)} = \mathcal{A}^{(k-1)} [X^{(k-1)}]^{m-1}$.
Step 4. If $N_2(\mathcal{A}^{(k)}) = N$, then $\mathcal{A}$ is a strong H-tensor; stop. Otherwise, go to the next step.
Step 5. Set the diagonal matrix $X^{(k)}$ with diagonal entries
$$x_i^{(k)} = \begin{cases} \bigg(1 - \dfrac{|a^{(k)}_{i i \cdots i}| - R_i(\mathcal{A}^{(k)})}{|a^{(k)}_{i i \cdots i}| + \varepsilon}\bigg)^{\frac{1}{m-1}}, & \text{if } i \in N_2(\mathcal{A}^{(k)}), \\ 1, & \text{otherwise.} \end{cases}$$
Step 6. Set $X^{\langle k \rangle} = X^{\langle k-1 \rangle} X^{(k)}$, $k = k + 1$, go to Step 3.

For Algorithm 3.1, an inappropriate choice of the parameter $\varepsilon$ may result in a large number of iterations. In the following, we establish some new iterative algorithms for identifying strong H-tensors; first, for reference, a minimal MATLAB sketch of Algorithm 3.1 is given below.
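The sketch below fixes $m = 4$ (so $m - 1 = 3$); row_sums and scale_m1 are the hypothetical helpers sketched in Sections 1 and 2, and the iteration cap maxit is our own safeguard, not part of the algorithm.

```matlab
% A hedged MATLAB sketch of Algorithm 3.1 for a 4th-order tensor.
function [isH, X] = algorithm31(A, epsilon, maxit)
    n = size(A, 1);
    X = eye(n);                               % accumulates X^{<k>}
    for k = 1:maxit
        [R, ~, N2, d] = row_sums(A);          % data of A^{(k)}
        if isempty(N2) || any(d == 0)
            isH = false; return;              % Step 1 test
        end
        if numel(N2) == n
            isH = true; return;               % Step 4: N2(A^{(k)}) = N
        end
        x = ones(n, 1);                       % Step 5
        x(N2) = (1 - (d(N2) - R(N2)) ./ (d(N2) + epsilon)).^(1/3);
        A = scale_m1(A, x);                   % A^{(k+1)} = A^{(k)} X^{m-1}
        X = X * diag(x);                      % Step 6
    end
    isH = false;                              % undetermined within maxit
end
```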

3.1 Iterative algorithm for identifying strong H-tensors

To establish new implementable iterative algorithms for identifying strong H-tensors, we use the following notation. Let $S$ be a nonempty subset of $N$ and let $N \setminus S$ be the complement of $S$ in $N$. Given a tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in T_{m,n}$, we denote
$$N^{m-1} = \{i_2 i_3 \cdots i_m : i_j \in N,\ j = 2, 3, \ldots, m\};$$
$$S^{m-1} = \{i_2 i_3 \cdots i_m : i_j \in S,\ j = 2, 3, \ldots, m\};$$
$$N^{m-1} \setminus S^{m-1} = \{i_2 i_3 \cdots i_m : i_2 i_3 \cdots i_m \in N^{m-1} \text{ and } i_2 i_3 \cdots i_m \notin S^{m-1}\}.$$
Moreover, we set
$$\Phi_i(\mathcal{A}) = \begin{cases} 0, & \text{if } N_2(\mathcal{A}) = \emptyset \text{ or } N_2(\mathcal{A}) = \{i\}, \\ \displaystyle\sum_{\substack{i_2 \cdots i_m \in N_2^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a_{i i_2 \cdots i_m}|, & \text{otherwise,} \end{cases}$$
and
$$\Psi_i(\mathcal{A}) = R_i(\mathcal{A}) - \Phi_i(\mathcal{A}) = \sum_{i_2 i_3 \cdots i_m \in N^{m-1} \setminus N_2^{m-1}} |a_{i i_2 \cdots i_m}|.$$
A small MATLAB sketch of $\Phi_i$ and $\Psi_i$ is given below.
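As before, the sketch assumes $m = 4$; R and N2 come from the row_sums helper above, and phi_psi is our own name.

```matlab
% A sketch of Phi_i(A) and Psi_i(A) for a 4th-order tensor.
function [Phi, Psi] = phi_psi(A, R, N2)
    n = size(A, 1);
    Phi = zeros(n, 1);
    for i = 1:n
        if isempty(N2) || isequal(N2(:), i)   % Phi_i = 0 if N2 is empty
            continue;                         % or N2 = {i}
        end
        Sub = A(i, N2, N2, N2);               % indices i2, i3, i4 in N2
        Phi(i) = sum(abs(Sub(:)));
        if ismember(i, N2)
            Phi(i) = Phi(i) - abs(A(i, i, i, i));  % drop the delta term
        end
    end
    Psi = R - Phi;                            % Psi_i = R_i - Phi_i
end
```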

Now, we give some iterative algorithms for identifying strong H-tensors.

Algorithm 3.2

INPUT: a tensor $\mathcal{A} \in T_{m,n}$, an inner iteration number $l$ and a constant $s \ge 1$.
OUTPUT: a positive diagonal matrix $X^{\langle k \rangle} = X^{(1)} X^{(2)} \cdots X^{(k)}$ if $\mathcal{A}$ is a strong H-tensor.
Step 1. If $N_2(\mathcal{A}) = \emptyset$ or $a_{i i \cdots i} = 0$ for some $i \in N$, then $\mathcal{A}$ is not a strong H-tensor; stop. Otherwise, go to Step 2.
Step 2. Set $\mathcal{A}^{(0)} = \mathcal{A}$, $X^{(0)} = I$, $X^{\langle 0 \rangle} = I$ and $k = 1$.
Step 3. Compute $\mathcal{A}^{(k)} = \mathcal{A}^{(k-1)} [X^{(k-1)}]^{m-1}$.
Step 4. If $N_2(\mathcal{A}^{(k)}) = N$, then $\mathcal{A}$ is a strong H-tensor; stop. Otherwise, go to Step 5.
Step 5. Compute
$$r_0^{(k)} = 1, \qquad \theta_i^{(k)} = \frac{R_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}|},\ i \in N_2(\mathcal{A}^{(k)}); \qquad r_1^{(k)} = \max_{i \in N_2(\mathcal{A}^{(k)})} \theta_i^{(k)};$$
$$\delta_{p+1,i}^{(k)} = \frac{\Psi_i(\mathcal{A}^{(k)}) + r_p^{(k)} \Phi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}|},\ i \in N_2(\mathcal{A}^{(k)}); \qquad r_{p+1}^{(k)} = \max_{i \in N_2(\mathcal{A}^{(k)})} \delta_{p+1,i}^{(k)}, \quad p = 1, 2, \ldots, l.$$
Step 6. Set the vector $x^{(k)} \in \mathbb{R}^n$ with entries
$$x_i^{(k)} = \begin{cases} \big(\delta_{l+1,i}^{(k)} + \varepsilon^{(k)}\big)^{\frac{1}{m-1}}, & \text{if } i \in N_2(\mathcal{A}^{(k)}), \\ 1, & \text{otherwise,} \end{cases}$$
where
$$\varepsilon^{(k)} = \begin{cases} \dfrac{1}{s} \min_{i \in N_2(\mathcal{A}^{(k)})} \big(\theta_i^{(k)} - \delta_{l+1,i}^{(k)}\big), & \text{if } \theta_i^{(k)} > \delta_{l+1,i}^{(k)} \text{ for all } i \in N_2(\mathcal{A}^{(k)}), \\ \sigma_k, & \text{if } \theta_j^{(k)} = \delta_{l+1,j}^{(k)} \text{ for some } j \in N_2(\mathcal{A}^{(k)}), \end{cases}$$
with $\sigma_k$ a sufficiently small positive number such that $0 < \delta_{l+1,i}^{(k)} + \sigma_k \le 1$.
Step 7. Set $X^{(k)} = \mathrm{diag}(x_1^{(k)}, \ldots, x_n^{(k)})$, $X^{\langle k \rangle} = X^{\langle k-1 \rangle} X^{(k)}$, $k = k + 1$, go to Step 3.

A MATLAB sketch of Steps 5 and 6 is given below.
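The snippet fixes $m = 4$; row_sums, phi_psi and scale_m1 are the earlier hypothetical helpers, and A, l, s and sigma_k are assumed to be defined in the workspace (with $l \ge 1$).

```matlab
% One outer pass of Steps 5-7 of Algorithm 3.2 (script-style sketch).
n = size(A, 1);
[R, ~, N2, d] = row_sums(A);
[Phi, Psi]    = phi_psi(A, R, N2);
theta = R(N2) ./ d(N2);                   % theta_i^{(k)}, i in N2
r = max(theta);                           % r_1^{(k)}
for p = 1:l                               % delta_{p+1,i} and r_{p+1}
    delta = (Psi(N2) + r * Phi(N2)) ./ d(N2);
    r = max(delta);
end
if all(theta > delta)                     % epsilon^{(k)} of Step 6
    epsk = min(theta - delta) / s;
else
    epsk = sigma_k;                       % small; 0 < delta + sigma_k <= 1
end
x = ones(n, 1);
x(N2) = (delta + epsk).^(1/3);            % exponent 1/(m-1)
A = scale_m1(A, x);                       % Step 7, then back to Step 3
```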

Algorithm 3.3

INPUT: a tensor $\mathcal{A} \in T_{m,n}$, an inner iteration number $l$ and a constant $s \ge 1$.
OUTPUT: a positive diagonal matrix $X^{\langle k \rangle} = X^{(1)} X^{(2)} \cdots X^{(k)}$ if $\mathcal{A}$ is a strong H-tensor.
Step 1. If $N_2(\mathcal{A}) = \emptyset$ or $a_{i i \cdots i} = 0$ for some $i \in N$, then $\mathcal{A}$ is not a strong H-tensor; stop. Otherwise, go to Step 2.
Step 2. Set $\mathcal{A}^{(0)} = \mathcal{A}$, $X^{(0)} = I$, $X^{\langle 0 \rangle} = I$ and $k = 1$.
Step 3. Compute $\mathcal{A}^{(k)} = \mathcal{A}^{(k-1)} [X^{(k-1)}]^{m-1}$.
Step 4. If $N_2(\mathcal{A}^{(k)}) = N$, then $\mathcal{A}$ is a strong H-tensor; stop. Otherwise, go to Step 5.
Step 5. Compute
$$\lambda_0^{(k)} = 1, \qquad \lambda_1^{(k)} = \max_{i \in N_2(\mathcal{A}^{(k)})} \frac{\Psi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)})};$$
$$\mu_{p+1,i}^{(k)} = \frac{\Psi_i(\mathcal{A}^{(k)}) + \lambda_p^{(k)} \Phi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}|},\ i \in N_2(\mathcal{A}^{(k)}); \qquad \lambda_{p+1}^{(k)} = \max_{i \in N_2(\mathcal{A}^{(k)})} \mu_{p+1,i}^{(k)}, \quad p = 1, 2, \ldots, l.$$
Step 6. Set the vector $x^{(k)} \in \mathbb{R}^n$ with entries
$$x_i^{(k)} = \begin{cases} \big(\mu_{l+1,i}^{(k)} + \varepsilon^{(k)}\big)^{\frac{1}{m-1}}, & \text{if } i \in N_2(\mathcal{A}^{(k)}), \\ 1, & \text{otherwise,} \end{cases}$$
where
$$\varepsilon^{(k)} = \begin{cases} \dfrac{1}{s} \min_{i \in N_2(\mathcal{A}^{(k)})} \big(\theta_i^{(k)} - \mu_{l+1,i}^{(k)}\big), & \text{if } \theta_i^{(k)} > \mu_{l+1,i}^{(k)} \text{ for all } i \in N_2(\mathcal{A}^{(k)}), \\ \sigma_k, & \text{if } \theta_j^{(k)} = \mu_{l+1,j}^{(k)} \text{ for some } j \in N_2(\mathcal{A}^{(k)}), \end{cases}$$
with $\theta_i^{(k)} = R_i(\mathcal{A}^{(k)})/|a^{(k)}_{ii\cdots i}|$ and $\sigma_k$ a sufficiently small positive number such that $0 < \mu_{l+1,i}^{(k)} + \sigma_k \le 1$.
Step 7. Set $X^{(k)} = \mathrm{diag}(x_1^{(k)}, \ldots, x_n^{(k)})$, $X^{\langle k \rangle} = X^{\langle k-1 \rangle} X^{(k)}$, $k = k + 1$, go to Step 3.

Algorithm 3.4

INPUT: a tensor $\mathcal{A} \in T_{m,n}$.
OUTPUT: a positive diagonal matrix $X^{\langle k \rangle} = X^{(1)} X^{(2)} \cdots X^{(k)}$ if $\mathcal{A}$ is a strong H-tensor.
Step 1. If $N_2(\mathcal{A}) = \emptyset$ or $a_{i i \cdots i} = 0$ for some $i \in N$, then $\mathcal{A}$ is not a strong H-tensor; stop. Otherwise, go to Step 2.
Step 2. Set $\mathcal{A}^{(0)} = \mathcal{A}$, $X^{(0)} = I$, $X^{\langle 0 \rangle} = I$ and $k = 1$.
Step 3. Compute $\mathcal{A}^{(k)} = \mathcal{A}^{(k-1)} [X^{(k-1)}]^{m-1}$.
Step 4. If $N_2(\mathcal{A}^{(k)}) = N$, then $\mathcal{A}$ is a strong H-tensor; stop. Otherwise, go to Step 5.
Step 5. Set the vector $x^{(k)} \in \mathbb{R}^n$ with entries
$$x_i^{(k)} = \begin{cases} \bigg(\dfrac{1}{2}\Big(1 + \max_{i \in N_2(\mathcal{A}^{(k)})} \dfrac{\Psi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)})}\Big)\bigg)^{\frac{1}{m-1}}, & \text{if } i \in N_2(\mathcal{A}^{(k)}), \\ 1, & \text{otherwise.} \end{cases}$$
Step 6. Set $X^{(k)} = \mathrm{diag}(x_1^{(k)}, \ldots, x_n^{(k)})$, $X^{\langle k \rangle} = X^{\langle k-1 \rangle} X^{(k)}$, $k = k + 1$, go to Step 3.

A MATLAB sketch of this parameter-free scheme is given below.
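Again the sketch fixes $m = 4$; row_sums, phi_psi and scale_m1 are the earlier hypothetical helpers, and maxit is our own safeguard.

```matlab
% A hedged sketch of the parameter-free Algorithm 3.4 (4th order).
function [isH, X] = algorithm34(A, maxit)
    n = size(A, 1);
    X = eye(n);
    for k = 1:maxit
        [R, ~, N2, d] = row_sums(A);
        if isempty(N2) || any(d == 0)
            isH = false; return;          % Step 1
        end
        if numel(N2) == n
            isH = true; return;           % Step 4
        end
        [Phi, Psi] = phi_psi(A, R, N2);
        t = max(Psi(N2) ./ (d(N2) - Phi(N2)));
        x = ones(n, 1);
        x(N2) = (0.5 * (1 + t))^(1/3);    % Step 5: same value on N2
        A = scale_m1(A, x);
        X = X * diag(x);                  % Step 6
    end
    isH = false;                          % undetermined within maxit
end
```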

3.2 Convergence analysis of Algorithm 3.2

In order to show that Algorithm 3.2 is well defined, we first derive the following lemmas.

Lemma 3.1 Let $\mathcal{A}^{(k)} = (a^{(k)}_{i_1 i_2 \cdots i_m}) \in T_{m,n}$. Then, for all $i \in N_2(\mathcal{A}^{(k)})$ and $p = 1, 2, \ldots$, we have
$$0 \le \delta_{p+1,i}^{(k)} \le r_{p+1}^{(k)} \le r_p^{(k)} < 1. \qquad (3.1)$$

Proof. We prove this result by induction. For $p = 1$, since $i \in N_2(\mathcal{A}^{(k)})$, we have $0 \le r_1^{(k)} < 1$. Moreover, for $i \in N_2(\mathcal{A}^{(k)})$, one has
$$\Psi_i(\mathcal{A}^{(k)}) + r_1^{(k)} \Phi_i(\mathcal{A}^{(k)}) \le \Psi_i(\mathcal{A}^{(k)}) + \Phi_i(\mathcal{A}^{(k)}) = R_i(\mathcal{A}^{(k)}),$$
which implies that
$$0 \le \delta_{2,i}^{(k)} \le r_2^{(k)} = \max_{i \in N_2(\mathcal{A}^{(k)})} \delta_{2,i}^{(k)} \le \max_{i \in N_2(\mathcal{A}^{(k)})} \frac{R_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}|} = r_1^{(k)} < 1, \quad \forall\, i \in N_2(\mathcal{A}^{(k)}).$$
Therefore, (3.1) holds for $p = 1$.

Suppose that (3.1) holds for $p = q$; that is, $0 \le \delta_{q+1,i}^{(k)} \le r_{q+1}^{(k)} \le r_q^{(k)} < 1$. Then, for $p = q + 1$, one has
$$\Psi_i(\mathcal{A}^{(k)}) + r_{q+1}^{(k)} \Phi_i(\mathcal{A}^{(k)}) \le \Psi_i(\mathcal{A}^{(k)}) + r_q^{(k)} \Phi_i(\mathcal{A}^{(k)}).$$
Dividing both sides of the above inequality by $|a^{(k)}_{ii\cdots i}|$, we have
$$\delta_{q+2,i}^{(k)} \le \delta_{q+1,i}^{(k)}, \quad \forall\, i \in N_2(\mathcal{A}^{(k)}),$$
which implies that
$$0 \le \delta_{q+2,i}^{(k)} \le r_{q+2}^{(k)} = \max_{i \in N_2(\mathcal{A}^{(k)})} \delta_{q+2,i}^{(k)} \le \max_{i \in N_2(\mathcal{A}^{(k)})} \delta_{q+1,i}^{(k)} = r_{q+1}^{(k)} < 1, \quad \forall\, i \in N_2(\mathcal{A}^{(k)}).$$
So inequality (3.1) holds for $p = q + 1$, which completes the proof. □

Lemma 3.2 Let $X^{(k)} = \mathrm{diag}(x_1^{(k)}, \ldots, x_n^{(k)})$ be the diagonal matrix generated by Algorithm 3.2. Then $0 < x_i^{(k)} \le 1$ for all $i \in N$.

Proof. First, by Lemma 3.1, one has
$$\delta_{l+1,i}^{(k)} = \frac{\Psi_i(\mathcal{A}^{(k)}) + r_l^{(k)} \Phi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}|} \le \frac{\Psi_i(\mathcal{A}^{(k)}) + \Phi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}|} = \frac{R_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}|} = \theta_i^{(k)}. \qquad (3.2)$$
If $\theta_i^{(k)} > \delta_{l+1,i}^{(k)}$ for all $i \in N_2(\mathcal{A}^{(k)})$, it is easy to see that $x_i^{(k)} > 0$. Since $s \ge 1$, it then follows from (3.2) that
$$[x_i^{(k)}]^{m-1} = \delta_{l+1,i}^{(k)} + \frac{1}{s} \min_{i \in N_2(\mathcal{A}^{(k)})} \big(\theta_i^{(k)} - \delta_{l+1,i}^{(k)}\big) \le \delta_{l+1,i}^{(k)} + \frac{1}{s}\big(\theta_i^{(k)} - \delta_{l+1,i}^{(k)}\big) = \frac{1}{s}\theta_i^{(k)} + \Big(1 - \frac{1}{s}\Big)\delta_{l+1,i}^{(k)} \le \frac{1}{s}\theta_i^{(k)} + \Big(1 - \frac{1}{s}\Big)\theta_i^{(k)} = \theta_i^{(k)} < 1. \qquad (3.3)$$
If there exists some $j \in N_2(\mathcal{A}^{(k)})$ such that $\theta_j^{(k)} = \delta_{l+1,j}^{(k)}$, then we can take $\sigma_k = 1 - \theta_i^{(k)} > 0$. It is clear that $x_i^{(k)} > 0$, and hence
$$[x_i^{(k)}]^{m-1} = \delta_{l+1,i}^{(k)} + \varepsilon^{(k)} \le \theta_i^{(k)} + \sigma_k \le 1. \qquad (3.4)$$
If $i \notin N_2(\mathcal{A}^{(k)})$, the result is obvious. From (3.3) and (3.4), the proof is completed. □

Lemma 3.3 Algorithm 3.2 either terminates in finitely many iterations or generates an infinite sequence of distinct tensors $\{\mathcal{A}^{(k)} = (a^{(k)}_{i_1 i_2 \cdots i_m})\}$ such that $\lim_{k \to +\infty} |a^{(k)}_{i_1 i_2 \cdots i_m}|$ exists for all $i_1, i_2, \ldots, i_m \in N$.

Proof. Suppose that Algorithm 3.2 does not terminate within finitely many steps. Then it generates an infinite sequence of distinct tensors, which means that $|N_2(\mathcal{A}^{(k)})| \ge 1$ and $a^{(k)}_{ii\cdots i} \neq 0$ for all $i \in N$. By Algorithm 3.2 and Lemma 3.1, one has
$$r_l^{(k)} + \varepsilon^{(k)} \ge \delta_{l+1,i}^{(k)} + \varepsilon^{(k)} = [x_i^{(k)}]^{m-1}, \quad \forall\, i \in N_2(\mathcal{A}^{(k)}),$$
which implies that
$$r_l^{(k)} + \varepsilon^{(k)} \ge \max_{i \in N_2(\mathcal{A}^{(k)})} [x_i^{(k)}]^{m-1}. \qquad (3.5)$$
Since $i \in N_2(\mathcal{A}^{(k)})$, we have
$$|a^{(k)}_{ii\cdots i}| > R_i(\mathcal{A}^{(k)}) \ge \Phi_i(\mathcal{A}^{(k)}). \qquad (3.6)$$
It then follows from (3.5), (3.6) and Lemma 3.2 that
$$\begin{aligned} |a^{(k+1)}_{ii\cdots i}| &= |a^{(k)}_{ii\cdots i}| [x_i^{(k)}]^{m-1} = |a^{(k)}_{ii\cdots i}| \big[\delta_{l+1,i}^{(k)} + \varepsilon^{(k)}\big] = |a^{(k)}_{ii\cdots i}| \delta_{l+1,i}^{(k)} + |a^{(k)}_{ii\cdots i}| \varepsilon^{(k)} \\ &= \Psi_i(\mathcal{A}^{(k)}) + r_l^{(k)} \Phi_i(\mathcal{A}^{(k)}) + \varepsilon^{(k)} |a^{(k)}_{ii\cdots i}| > \Psi_i(\mathcal{A}^{(k)}) + \big(r_l^{(k)} + \varepsilon^{(k)}\big) \Phi_i(\mathcal{A}^{(k)}) \\ &\ge \sum_{i_2 i_3 \cdots i_m \in N^{m-1} \setminus N_2^{m-1}} |a^{(k)}_{i i_2 \cdots i_m}| + \max_{i \in N_2(\mathcal{A}^{(k)})} [x_i^{(k)}]^{m-1} \sum_{\substack{i_2 i_3 \cdots i_m \in N_2^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a^{(k)}_{i i_2 \cdots i_m}| \\ &\ge \sum_{i_2 i_3 \cdots i_m \in N^{m-1} \setminus N_2^{m-1}} |a^{(k)}_{i i_2 \cdots i_m}| x^{(k)}_{i_2} \cdots x^{(k)}_{i_m} + \sum_{\substack{i_2 i_3 \cdots i_m \in N_2^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a^{(k)}_{i i_2 \cdots i_m}| x^{(k)}_{i_2} \cdots x^{(k)}_{i_m} = R_i(\mathcal{A}^{(k+1)}), \end{aligned}$$
which implies that
$$N_2(\mathcal{A}) = N_2(\mathcal{A}^{(1)}) \subseteq N_2(\mathcal{A}^{(2)}) \subseteq \cdots \subseteq N_2(\mathcal{A}^{(k+1)}) \subseteq \cdots.$$
Hence, there exists a least positive integer $k_0$ such that
$$N_2(\mathcal{A}^{(k_0)}) = N_2(\mathcal{A}^{(k_0+q)}), \quad \forall\, q = 1, 2, \ldots.$$
Without loss of generality, suppose that $k_0 = 1$. Let $N_2(\mathcal{A}) = N_2(\mathcal{A}^{(1)}) = \{j_1, j_2, \ldots, j_t\}$, where $1 \le t < n$. (Otherwise, if $t = n$, then Algorithm 3.2 stops.) Moreover, from Algorithm 3.2 and Lemma 3.2, we know that $0 < x_j^{(k)} \le 1$ for all $j \in \{j_1, j_2, \ldots, j_t\}$ and $x_j^{(k)} = 1$ for all $j \notin \{j_1, j_2, \ldots, j_t\}$. Since
$$|a^{(k+1)}_{i i_2 \cdots i_m}| = |a^{(k)}_{i i_2 \cdots i_m}| x^{(k)}_{i_2} \cdots x^{(k)}_{i_m} \quad \text{for all } k = 1, 2, \ldots,$$
we conclude that $\{|a^{(k)}_{i_1 i_2 \cdots i_m}|\}$ is a nonincreasing and bounded sequence for all $i_1, i_2, \ldots, i_m \in N$. Therefore, $\lim_{k \to +\infty} |a^{(k)}_{i_1 i_2 \cdots i_m}|$ exists for all $i_1, i_2, \ldots, i_m \in N$. The proof is completed. □

Lemma 3.4 If Algorithm 3.2 generates an infinite sequence of distinct tensors $\{\mathcal{A}^{(k)} = (a^{(k)}_{i_1 i_2 \cdots i_m})\}$, then for all $i \in N_2(\mathcal{A}^{(k)})$, $\lim_{k \to +\infty} \big[|a^{(k)}_{i\cdots i}| - R_i(\mathcal{A}^{(k)})\big] = 0$.

Proof. From Lemma 3.3, we know that both $\{|a^{(k)}_{i\cdots i}|\}$ and $\{R_i(\mathcal{A}^{(k)})\}$ converge. Suppose, on the contrary, that for some $i_0 \in N_2(\mathcal{A}^{(k)})$,
$$\lim_{k \to +\infty} \big[|a^{(k)}_{i_0\cdots i_0}| - R_{i_0}(\mathcal{A}^{(k)})\big] \neq 0.$$
Then there exists $\varepsilon_0 > 0$ such that
$$|a^{(k)}_{i_0\cdots i_0}| - R_{i_0}(\mathcal{A}^{(k)}) > \varepsilon_0, \quad k = 1, 2, \ldots. \qquad (3.7)$$
If $\theta_i^{(k)} > \delta_{l+1,i}^{(k)}$ for all $i \in N_2(\mathcal{A}^{(k)})$, it then follows from Step 6 of Algorithm 3.2 and inequalities (3.3) and (3.7) that
$$\begin{aligned} |a^{(k+1)}_{i_0\cdots i_0}| &= |a^{(k)}_{i_0\cdots i_0}| [x_{i_0}^{(k)}]^{m-1} = |a^{(k)}_{i_0\cdots i_0}| \Big[\delta_{l+1,i_0}^{(k)} + \frac{1}{s} \min_{i \in N_2(\mathcal{A}^{(k)})} \big(\theta_i^{(k)} - \delta_{l+1,i}^{(k)}\big)\Big] \\ &\le |a^{(k)}_{i_0\cdots i_0}| \Big[\delta_{l+1,i_0}^{(k)} + \frac{1}{s}\big(\theta_{i_0}^{(k)} - \delta_{l+1,i_0}^{(k)}\big)\Big] = |a^{(k)}_{i_0\cdots i_0}| \Big[\frac{1}{s}\theta_{i_0}^{(k)} + \Big(1 - \frac{1}{s}\Big)\delta_{l+1,i_0}^{(k)}\Big] \\ &\le |a^{(k)}_{i_0\cdots i_0}| \Big[\frac{1}{s}\theta_{i_0}^{(k)} + \Big(1 - \frac{1}{s}\Big)\theta_{i_0}^{(k)}\Big] = |a^{(k)}_{i_0\cdots i_0}| \theta_{i_0}^{(k)} = R_{i_0}(\mathcal{A}^{(k)}) < |a^{(k)}_{i_0\cdots i_0}| - \varepsilon_0. \qquad (3.8) \end{aligned}$$
If $\delta_{l+1,j}^{(k)} = \theta_j^{(k)}$ for some $j \in N_2(\mathcal{A}^{(k)})$, we can take
$$0 < \sigma_k < \frac{|a^{(k)}_{i_0\cdots i_0}| - R_{i_0}(\mathcal{A}^{(k)}) - \varepsilon_0}{|a^{(k)}_{i_0\cdots i_0}|}.$$
Then, by inequality (3.3), one has
$$|a^{(k+1)}_{i_0\cdots i_0}| = |a^{(k)}_{i_0\cdots i_0}| [x_{i_0}^{(k)}]^{m-1} = |a^{(k)}_{i_0\cdots i_0}| \big[\delta_{l+1,i_0}^{(k)} + \sigma_k\big] \le |a^{(k)}_{i_0\cdots i_0}| \big[\theta_{i_0}^{(k)} + \sigma_k\big] \le R_{i_0}(\mathcal{A}^{(k)}) + |a^{(k)}_{i_0\cdots i_0}| \sigma_k < |a^{(k)}_{i_0\cdots i_0}| - \varepsilon_0. \qquad (3.9)$$
Combining (3.8) with (3.9) yields
$$|a_{i_0\cdots i_0}| = |a^{(1)}_{i_0\cdots i_0}| > |a^{(2)}_{i_0\cdots i_0}| + \varepsilon_0 > \cdots > |a^{(k)}_{i_0\cdots i_0}| + (k-1)\varepsilon_0.$$
Letting $k \to +\infty$, we obtain a contradiction with the fact that the sequence $\{|a^{(k)}_{i_0\cdots i_0}|\}$ converges. The proof is completed. □

Theorem 3.1 The tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ is a strong H-tensor if and only if Algorithm 3.2 terminates within finitely many steps and there exists a positive diagonal matrix $X = X^{(1)} X^{(2)} \cdots X^{(k)}$ such that $\mathcal{A}X^{m-1}$ is a strictly diagonally dominant tensor.

Proof. If Algorithm 3.2 terminates within finitely many steps, then we obtain a positive diagonal matrix $X = X^{(1)} X^{(2)} \cdots X^{(k)}$ such that $\mathcal{A}^{(k)} = \mathcal{A}X^{m-1}$ and $N_2(\mathcal{A}^{(k)}) = N$. It then follows from Lemmas 2.1 and 2.2 that $\mathcal{A}$ is a strong H-tensor.

Conversely, let $\mathcal{A}$ be a strong H-tensor. If Algorithm 3.2 does not terminate within finitely many steps, then it generates the infinite sequences $\{\mathcal{A}^{(k)}\}$, $\{|a^{(k)}_{i\cdots i}|\}$, $\{R_i(\mathcal{A}^{(k)})\}$, $\{N_2(\mathcal{A}^{(k)})\}$. By the same argument as in the proof of Lemma 3.3, without loss of generality, assume that $N_2(\mathcal{A}) = N_2(\mathcal{A}^{(k)}) = \{j_1, j_2, \ldots, j_t\}$ for all $k = 1, 2, \ldots$, where $1 \le t < n$. Denote
$$X^{\langle k \rangle} = X^{(1)} X^{(2)} \cdots X^{(k)} = \mathrm{diag}\big(x_1^{\langle k \rangle}, x_2^{\langle k \rangle}, \ldots, x_n^{\langle k \rangle}\big).$$
Then $x_i^{\langle k \rangle} = x_i^{(1)} x_i^{(2)} \cdots x_i^{(k)}$ for all $i \in N$, and
$$a^{(k+1)}_{i i_2 \cdots i_m} = a^{(k)}_{i i_2 \cdots i_m} x_{i_2}^{(k)} \cdots x_{i_m}^{(k)} = a^{(k-1)}_{i i_2 \cdots i_m} x_{i_2}^{(k)} x_{i_2}^{(k-1)} \cdots x_{i_m}^{(k)} x_{i_m}^{(k-1)} = \cdots = a_{i i_2 \cdots i_m} x_{i_2}^{\langle k \rangle} \cdots x_{i_m}^{\langle k \rangle}, \quad \forall\, i \in N.$$
Noticing that
$$x_j^{\langle k \rangle} = \begin{cases} x_j^{(1)} x_j^{(2)} \cdots x_j^{(k)} \le 1, & \text{if } j \in \{j_1, j_2, \ldots, j_t\}, \\ 1, & \text{if } j \notin \{j_1, j_2, \ldots, j_t\}, \end{cases}$$
it follows from Lemma 3.3 that both $\{\mathcal{A}^{(k)}\}$ and $\{x^{\langle k \rangle}\}$ have limits. Let
$$\lim_{k \to +\infty} \mathcal{A}^{(k)} = \mathcal{B} = (b_{i_1 i_2 \cdots i_m}) \quad \text{and} \quad \lim_{k \to +\infty} X^{\langle k \rangle} = X = \mathrm{diag}(x_1, x_2, \ldots, x_n),$$
respectively, where $x_j \le 1$ for all $j \in \{j_1, j_2, \ldots, j_t\}$ and $x_j = 1$ otherwise. Obviously, $\mathcal{B} = \mathcal{A}X^{m-1}$. By Lemma 3.4, one has
$$|b_{j\cdots j}| = R_j(\mathcal{B}), \quad \forall\, j \in \{j_1, j_2, \ldots, j_t\}, \qquad \text{and} \qquad |b_{j\cdots j}| = |a_{j\cdots j}| \le R_j(\mathcal{B}), \quad \forall\, j \notin \{j_1, j_2, \ldots, j_t\}.$$
Hence $N_2(\mathcal{B}) = \emptyset$, and so $\mathcal{B}$ is not a strong H-tensor.

Now, we claim that $x_j = 0$ for some $j \in \{j_1, j_2, \ldots, j_t\}$. In fact, if $x_j > 0$ for all $j \in \{j_1, j_2, \ldots, j_t\}$, then, by Lemma 2.3, $\mathcal{B}$ would be a strong H-tensor, a contradiction. So at least one of $x_{j_1}, x_{j_2}, \ldots, x_{j_t}$ is zero. Without loss of generality, assume that $x_{j_1} = x_{j_2} = \cdots = x_{j_c} = 0$ for some $c < t$ and that $x_{j_d} > 0$ for all $d = c+1, c+2, \ldots, t$. Let $\alpha = N - \{j_1, j_2, \ldots, j_c\}$. Then, by Lemmas 2.3 and 2.4, we conclude that $\mathcal{B}[\alpha]$ is a strong H-tensor. This contradicts the fact that
$$|b_{j\cdots j}| \le R_j(\mathcal{B}) = R_j(\mathcal{B}[\alpha]), \quad \forall\, j \in \alpha.$$
The proof is completed. □

3.3 Convergence analysis of Algorithm 3.3

Similarly, we first derive the following lemmas.

Lemma 3.5 Let $\mathcal{A}^{(k)} = (a^{(k)}_{i_1 i_2 \cdots i_m}) \in T_{m,n}$. Then, for all $i \in N_2(\mathcal{A}^{(k)})$ and $p = 1, 2, \ldots$, we have
$$0 \le \mu_{p+1,i}^{(k)} \le \lambda_{p+1}^{(k)} \le \lambda_p^{(k)} < 1. \qquad (3.10)$$

Proof. We prove this result by induction. For $p = 1$, since $i \in N_2(\mathcal{A}^{(k)})$, we have
$$|a^{(k)}_{i\cdots i}| > \Psi_i(\mathcal{A}^{(k)}) + \Phi_i(\mathcal{A}^{(k)}).$$
Then
$$\lambda_1^{(k)} = \max_{i \in N_2(\mathcal{A}^{(k)})} \frac{\Psi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{i\cdots i}| - \Phi_i(\mathcal{A}^{(k)})} < 1,$$
and hence
$$\lambda_1^{(k)} \ge \frac{\Psi_i(\mathcal{A}^{(k)}) + \lambda_1^{(k)} \Phi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{i\cdots i}|} = \mu_{2,i}^{(k)}.$$
By the above inequality, one has
$$0 \le \mu_{2,i}^{(k)} \le \lambda_2^{(k)} = \max_{i \in N_2(\mathcal{A}^{(k)})} \mu_{2,i}^{(k)} \le \lambda_1^{(k)} < 1, \quad \forall\, i \in N_2(\mathcal{A}^{(k)}).$$
Therefore, (3.10) holds for $p = 1$.

Suppose that (3.10) holds for $p = q$; that is, $0 \le \mu_{q+1,i}^{(k)} \le \lambda_{q+1}^{(k)} \le \lambda_q^{(k)} < 1$. Then, for $p = q + 1$, one has
$$\Psi_i(\mathcal{A}^{(k)}) + \lambda_{q+1}^{(k)} \Phi_i(\mathcal{A}^{(k)}) \le \Psi_i(\mathcal{A}^{(k)}) + \lambda_q^{(k)} \Phi_i(\mathcal{A}^{(k)}).$$
Dividing both sides of the above inequality by $|a^{(k)}_{ii\cdots i}|$, we have
$$\mu_{q+2,i}^{(k)} \le \mu_{q+1,i}^{(k)}, \quad \forall\, i \in N_2(\mathcal{A}^{(k)}),$$
which implies that
$$0 \le \mu_{q+2,i}^{(k)} \le \lambda_{q+2}^{(k)} = \max_{i \in N_2(\mathcal{A}^{(k)})} \mu_{q+2,i}^{(k)} \le \max_{i \in N_2(\mathcal{A}^{(k)})} \mu_{q+1,i}^{(k)} = \lambda_{q+1}^{(k)} < 1, \quad \forall\, i \in N_2(\mathcal{A}^{(k)}).$$
So inequality (3.10) holds for $p = q + 1$, which completes the proof. □

Lemma 3.6 Let $X^{(k)} = \mathrm{diag}(x_1^{(k)}, \ldots, x_n^{(k)})$ be the diagonal matrix generated by Algorithm 3.3. Then $0 < x_i^{(k)} \le 1$ for all $i \in N$.

Proof. First, from Lemma 3.5, we have
$$\mu_{l+1,i}^{(k)} = \frac{\Psi_i(\mathcal{A}^{(k)}) + \lambda_l^{(k)} \Phi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}|} \le \frac{\Psi_i(\mathcal{A}^{(k)}) + \Phi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}|} = \frac{R_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}|} = \theta_i^{(k)}. \qquad (3.11)$$
If $\theta_i^{(k)} > \mu_{l+1,i}^{(k)}$ for all $i \in N_2(\mathcal{A}^{(k)})$, it is easy to see that $x_i^{(k)} > 0$. Since $s \ge 1$, it then follows from (3.11) that
$$[x_i^{(k)}]^{m-1} = \mu_{l+1,i}^{(k)} + \frac{1}{s} \min_{i \in N_2(\mathcal{A}^{(k)})} \big(\theta_i^{(k)} - \mu_{l+1,i}^{(k)}\big) \le \mu_{l+1,i}^{(k)} + \frac{1}{s}\big(\theta_i^{(k)} - \mu_{l+1,i}^{(k)}\big) = \frac{1}{s}\theta_i^{(k)} + \Big(1 - \frac{1}{s}\Big)\mu_{l+1,i}^{(k)} \le \frac{1}{s}\theta_i^{(k)} + \Big(1 - \frac{1}{s}\Big)\theta_i^{(k)} = \theta_i^{(k)} < 1. \qquad (3.12)$$
If there exists some $j \in N_2(\mathcal{A}^{(k)})$ such that $\theta_j^{(k)} = \mu_{l+1,j}^{(k)}$, then we can take $\sigma_k = 1 - \theta_i^{(k)} > 0$. It is clear that $x_i^{(k)} > 0$, and hence
$$[x_i^{(k)}]^{m-1} = \mu_{l+1,i}^{(k)} + \varepsilon^{(k)} \le \theta_i^{(k)} + \sigma_k \le 1. \qquad (3.13)$$
If $i \notin N_2(\mathcal{A}^{(k)})$, the result is obvious. From (3.12) and (3.13), the proof is completed. □

Theorem 3.2 The tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in T_{m,n}$ is a strong H-tensor if and only if Algorithm 3.3 terminates within finitely many steps and there exists a positive diagonal matrix $X = X^{(1)} X^{(2)} \cdots X^{(k)}$ such that $\mathcal{A}X^{m-1}$ is a strictly diagonally dominant tensor.

Proof. This result can be shown by the same arguments as those of Lemmas 3.3 and 3.4 and Theorem 3.1, so we omit it here. □

Remark 3.1 Since for all $i \in N_2(\mathcal{A}^{(k)})$ we have
$$\frac{\Psi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)})} \le \frac{R_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}|},$$
it follows that
$$\lambda_1^{(k)} = \max_{i \in N_2(\mathcal{A}^{(k)})} \frac{\Psi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)})} \le \max_{i \in N_2(\mathcal{A}^{(k)})} \frac{R_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}|} = r_1^{(k)}.$$
Furthermore, it is easy to prove that $\mu_{l+1,i}^{(k)} \le \delta_{l+1,i}^{(k)}$. Therefore, we can conclude that the number of iterations $k$ of Algorithm 3.3 is less than that of Algorithm 3.2 if we choose the same inner iteration number $l$.

3.4 Convergence analysis of Algorithm 3.4

Lemma 3.7 Algorithm 3.4 either terminates in finitely many iterations or generates an infinite sequence of distinct tensors $\{\mathcal{A}^{(k)} = (a^{(k)}_{i_1 i_2 \cdots i_m})\}$ such that $\lim_{k \to +\infty} |a^{(k)}_{i_1 i_2 \cdots i_m}|$ exists for all $i_1, i_2, \ldots, i_m \in N$.

Proof. Suppose that Algorithm 3.4 does not terminate within finitely many steps. Then it generates an infinite sequence of distinct tensors, which means that $|N_2(\mathcal{A}^{(k)})| \ge 1$ and $a^{(k)}_{ii\cdots i} \neq 0$ for all $i \in N$. By some calculations, it is easy to see that for all $i \in N_2(\mathcal{A}^{(k)})$ and all $k = 1, 2, \ldots$,
$$|a^{(k)}_{ii\cdots i}| - \Psi_i(\mathcal{A}^{(k)}) - \Phi_i(\mathcal{A}^{(k)}) > 0. \qquad (3.14)$$
Then
$$\frac{\Psi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)})} < 1,$$
which implies that
$$[x_i^{(k)}]^{m-1} = \frac{1}{2}\Big(1 + \max_{i \in N_2(\mathcal{A}^{(k)})} \frac{\Psi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)})}\Big) < 1, \quad \forall\, i \in N_2(\mathcal{A}^{(k)}),\ \forall\, k = 1, 2, \ldots. \qquad (3.15)$$
On the other hand, by Step 5 of Algorithm 3.4, one has
$$[x_i^{(k)}]^{m-1} \ge \frac{1}{2}\Big(1 + \frac{\Psi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)})}\Big) = \frac{1}{2} \cdot \frac{|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)}) + \Psi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)})} = \frac{1}{2} \cdot \frac{\big(|a^{(k)}_{ii\cdots i}| - R_i(\mathcal{A}^{(k)})\big) + 2\Psi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)})} > 0, \quad \forall\, i \in N_2(\mathcal{A}^{(k)}),\ \forall\, k = 1, 2, \ldots, \qquad (3.16)$$
which, together with (3.15), yields
$$0 < x_i^{(k)} < 1, \quad \forall\, i \in N_2(\mathcal{A}^{(k)}),\ \forall\, k = 1, 2, \ldots. \qquad (3.17)$$
Moreover, by inequality (3.16), one has
$$\begin{aligned} |a^{(k+1)}_{i\cdots i}| - R_i(\mathcal{A}^{(k+1)}) &= |a^{(k)}_{i\cdots i}| [x_i^{(k)}]^{m-1} - \sum_{i_2 i_3 \cdots i_m \in N^{m-1} \setminus N_2^{m-1}} |a^{(k)}_{i i_2 \cdots i_m}| x^{(k)}_{i_2} \cdots x^{(k)}_{i_m} - \sum_{\substack{i_2 i_3 \cdots i_m \in N_2^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a^{(k)}_{i i_2 \cdots i_m}| x^{(k)}_{i_2} \cdots x^{(k)}_{i_m} \\ &\ge |a^{(k)}_{i\cdots i}| [x_i^{(k)}]^{m-1} - \sum_{i_2 i_3 \cdots i_m \in N^{m-1} \setminus N_2^{m-1}} |a^{(k)}_{i i_2 \cdots i_m}| - \sum_{\substack{i_2 i_3 \cdots i_m \in N_2^{m-1} \\ \delta_{i i_2 \cdots i_m} = 0}} |a^{(k)}_{i i_2 \cdots i_m}| [x_i^{(k)}]^{m-1} \\ &= [x_i^{(k)}]^{m-1} \big(|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)})\big) - \Psi_i(\mathcal{A}^{(k)}) \\ &\ge \frac{1}{2}\big(|a^{(k)}_{ii\cdots i}| - R_i(\mathcal{A}^{(k)}) + 2\Psi_i(\mathcal{A}^{(k)})\big) - \Psi_i(\mathcal{A}^{(k)}) = \frac{1}{2}\big(|a^{(k)}_{ii\cdots i}| - R_i(\mathcal{A}^{(k)})\big) > 0, \end{aligned}$$
which implies that
$$N_2(\mathcal{A}) = N_2(\mathcal{A}^{(1)}) \subseteq N_2(\mathcal{A}^{(2)}) \subseteq \cdots \subseteq N_2(\mathcal{A}^{(k+1)}) \subseteq \cdots.$$
Hence, there exists a least positive integer $k_0$ such that $N_2(\mathcal{A}^{(k_0)}) = N_2(\mathcal{A}^{(k_0+q)})$ for all $q = 1, 2, \ldots$. Without loss of generality, suppose that $k_0 = 1$. Let $N_2(\mathcal{A}) = N_2(\mathcal{A}^{(1)}) = \{j_1, j_2, \ldots, j_t\}$, where $1 \le t < n$. (Otherwise, if $t = n$, then Algorithm 3.4 stops.) Moreover, from Algorithm 3.4 and inequality (3.17), we know that $0 < x_j^{(k)} \le 1$ for all $j \in \{j_1, j_2, \ldots, j_t\}$ and $x_j^{(k)} = 1$ for all $j \notin \{j_1, j_2, \ldots, j_t\}$. Since
$$|a^{(k+1)}_{i i_2 \cdots i_m}| = |a^{(k)}_{i i_2 \cdots i_m}| x^{(k)}_{i_2} \cdots x^{(k)}_{i_m} \quad \text{for all } k = 1, 2, \ldots,$$
we conclude that $\{|a^{(k)}_{i_1 i_2 \cdots i_m}|\}$ is a nonincreasing and bounded sequence. Therefore, $\lim_{k \to +\infty} |a^{(k)}_{i_1 i_2 \cdots i_m}|$ exists for all $i_1, i_2, \ldots, i_m \in N$. The proof is completed. □

Theorem 3.3 The tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ is a strong H-tensor if and only if Algorithm 3.4 terminates within finitely many steps and there exists a positive diagonal matrix $X = X^{(1)} X^{(2)} \cdots X^{(k)}$ such that $\mathcal{A}X^{m-1}$ is a strictly diagonally dominant tensor.

Proof. If Algorithm 3.4 terminates within finitely many steps, then we obtain a positive diagonal matrix $X = X^{(1)} X^{(2)} \cdots X^{(k)}$ such that $\mathcal{A}^{(k)} = \mathcal{A}X^{m-1}$ and $N_2(\mathcal{A}^{(k)}) = N$. It then follows from Lemmas 2.1 and 2.2 that $\mathcal{A}$ is a strong H-tensor.

Conversely, let $\mathcal{A}$ be a strong H-tensor. Without loss of generality, we assume that $\mathcal{A}$ is a nonnegative tensor. If Algorithm 3.4 does not terminate within finitely many steps, then it generates the infinite sequences $\{\mathcal{A}^{(k)}\}$, $\{N_2(\mathcal{A}^{(k)})\}$, $\{X^{(k)}\}$, where
$$\mathcal{A}^{(k)} = \mathcal{A}(X^{(1)})^{m-1}(X^{(2)})^{m-1}\cdots(X^{(k-1)})^{m-1}$$
and $X^{(k)} = \mathrm{diag}(x_1^{(k)}, x_2^{(k)}, \ldots, x_n^{(k)})$ with $0 < x_i^{(k)} \le 1$ for all $i \in N$. Denote $X^{\langle k \rangle} = X^{(1)} X^{(2)} \cdots X^{(k)}$. Then, since each diagonal entry of the positive diagonal matrix $X^{(k)}$ is not larger than 1, one has
$$\mathcal{A} = \mathcal{A}^{(0)} = \mathcal{A}^{(1)} \ge \cdots \ge \mathcal{A}^{(k)} \ge \cdots \ge \mathcal{O}, \qquad X^{\langle 1 \rangle} \ge \cdots \ge X^{\langle k-1 \rangle} \ge X^{\langle k \rangle} \ge \cdots > O.$$
That is, $\{\mathcal{A}^{(k)}\}$ is a nonincreasing and bounded tensor sequence and $\{X^{\langle k \rangle}\}$ is a nonincreasing and bounded matrix sequence. Hence, both $\{\mathcal{A}^{(k)}\}$ and $\{X^{\langle k \rangle}\}$ have limits. Let
$$\lim_{k \to +\infty} \mathcal{A}^{(k)} = \mathcal{B} \ge \mathcal{O}, \qquad \lim_{k \to +\infty} X^{\langle k \rangle} = X.$$
Obviously, $\mathcal{B} = \mathcal{A}X^{m-1}$, where $X = \prod_{k=1}^{\infty} X^{(k)}$ is a positive diagonal matrix.

Now, we claim that
$$\lim_{k \to +\infty} N_2(\mathcal{A}^{(k)}) = N_2(\mathcal{B}) = \emptyset.$$
Suppose not, i.e., $\lim_{k \to +\infty} N_2(\mathcal{A}^{(k)}) \neq \emptyset$. Then
$$\max_{i \in N_2(\mathcal{A}^{(k)})} \frac{\Psi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)})} < 1,$$
and hence
$$\frac{1}{2}\Big(1 + \max_{i \in N_2(\mathcal{A}^{(k)})} \frac{\Psi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)})}\Big) < 1. \qquad (3.18)$$
Moreover, since $\lim_{k \to +\infty} N_2(\mathcal{A}^{(k)}) \neq \emptyset$, there exists $i_0 \in N_2(\mathcal{A}^{(k)})$ such that
$$\lim_{k \to +\infty} \big[a^{(k)}_{i_0 i_0 \cdots i_0} - R_{i_0}(\mathcal{A}^{(k)})\big] \neq 0.$$
Then there exists $\xi_0 > 0$ such that
$$a^{(k)}_{i_0 i_0 \cdots i_0} > R_{i_0}(\mathcal{A}^{(k)}) + \xi_0 \ge \xi_0,$$
which, together with (3.18), implies that there exists $\varepsilon_0 > 0$ such that
$$a^{(k)}_{i_0 i_0 \cdots i_0} \bigg[1 - \frac{1}{2}\Big(1 + \max_{i \in N_2(\mathcal{A}^{(k)})} \frac{\Psi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)})}\Big)\bigg] \ge \varepsilon_0, \quad k = 1, 2, \ldots. \qquad (3.19)$$
It then follows from Step 5 of Algorithm 3.4 that
$$0 < a^{(k+1)}_{i_0 i_0 \cdots i_0} = a^{(k)}_{i_0 i_0 \cdots i_0} [x^{(k)}_{i_0}]^{m-1} = a^{(k)}_{i_0 i_0 \cdots i_0} \cdot \frac{1}{2}\Big(1 + \max_{i \in N_2(\mathcal{A}^{(k)})} \frac{\Psi_i(\mathcal{A}^{(k)})}{|a^{(k)}_{ii\cdots i}| - \Phi_i(\mathcal{A}^{(k)})}\Big) \le a^{(k)}_{i_0 i_0 \cdots i_0} - \varepsilon_0.$$
Hence
$$a^{(0)}_{i_0 i_0 \cdots i_0} = a^{(1)}_{i_0 i_0 \cdots i_0} \ge a^{(2)}_{i_0 i_0 \cdots i_0} + \varepsilon_0 \ge \cdots \ge a^{(k)}_{i_0 i_0 \cdots i_0} + (k-1)\varepsilon_0.$$
Letting $k \to +\infty$, we obtain a contradiction with the fact that the sequence $\{a^{(k)}_{i_0 i_0 \cdots i_0}\}$ converges, which means that $\lim_{k \to +\infty} N_2(\mathcal{A}^{(k)}) = N_2(\mathcal{B}) = \emptyset$. Hence, $\mathcal{B}$ is not a strong H-tensor.

On the other hand, since $\mathcal{A}$ is a strong H-tensor, there exists a positive diagonal matrix $D$ such that $\mathcal{A}D^{m-1} = \mathcal{B}(X^{-1}D)^{m-1}$ is a strictly diagonally dominant tensor. It then follows from Lemma 2.1 that $\mathcal{B}$ is a strong H-tensor. We again obtain a contradiction. This contradiction means that our hypothesis does not hold, and we are done. □



An application: the positive definiteness of an even-order real symmetric tensor

In this section, based on the iterative criteria for identifying strong H-tensors in Section 3, we present new conditions for identifying the positive definiteness of an even-order real symmetric tensor. First, we present the following lemma.

Lemma 4.1 [17] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in T_{m,n}$ be an even-order real symmetric tensor with $a_{i\cdots i} > 0$ for all $i \in N$. If $\mathcal{A}$ is a strong H-tensor, then $\mathcal{A}$ is positive definite.

Based on Lemma 4.1 and Theorems 3.1, 3.2 and 3.3, we obtain the following result.

Theorem 4.1 Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in T_{m,n}$ be an even-order real symmetric tensor with $a_{i\cdots i} > 0$ for all $i \in N$. If one of the following conditions is satisfied:
(i) Algorithm 3.2 terminates within finitely many steps and there exists a positive diagonal matrix $X = X^{(1)} X^{(2)} \cdots X^{(k)}$ such that $\mathcal{A}X^{m-1}$ is a strictly diagonally dominant tensor,
(ii) Algorithm 3.3 terminates within finitely many steps and there exists a positive diagonal matrix $X = X^{(1)} X^{(2)} \cdots X^{(k)}$ such that $\mathcal{A}X^{m-1}$ is a strictly diagonally dominant tensor,
(iii) Algorithm 3.4 terminates within finitely many steps and there exists a positive diagonal matrix $X = X^{(1)} X^{(2)} \cdots X^{(k)}$ such that $\mathcal{A}X^{m-1}$ is a strictly diagonally dominant tensor,
then $\mathcal{A}$ is a positive definite tensor.
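For instance, a test based on condition (iii) might be sketched as follows; algorithm34 is the hypothetical sketch given after Algorithm 3.4, and the iteration cap 100 is an arbitrary assumption.

```matlab
% Hedged usage sketch of Theorem 4.1 (iii) for an even-order symmetric
% tensor A (4th order here).
n = size(A, 1);
d = zeros(n, 1);
for i = 1:n, d(i) = A(i, i, i, i); end
if all(d > 0) && algorithm34(A, 100)   % positive diagonal and strong H
    disp('A is positive definite by Theorem 4.1');
end
```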

Now, we give some examples to demonstrate that the proposed algorithms are feasible and efficient for identifying the positive definiteness of an even-order real symmetric tensor.

Example 4.1 Consider the 4th-order 4-dimensional tensor $\mathcal{A} = (a_{i_1 i_2 i_3 i_4})$ defined as follows:
A(:, :, 1, 1) = [ 2  0  0 -1;  0  0  0  0;  0  0  0  0; -1  0  0  0],
A(:, :, 2, 2) = [ 0  0  0  0;  0  3 -1  0;  0 -1  0  0;  0  0  0  0],
A(:, :, 3, 3) = [ 0  0  1  0;  0  0 -2  0;  1 -2 18  0;  0  0  0  0],
A(:, :, 2, 1) = A(:, :, 1, 2) = A(:, :, 4, 2) = A(:, :, 2, 4) = A(:, :, 4, 3) = A(:, :, 3, 4) = zeros(4, 4),
A(:, :, 3, 1) = A(:, :, 1, 3) = diag{0, 0, 1, 0},
A(:, :, 4, 1) = A(:, :, 1, 4) = diag{-1, 0, 0, 0},
A(:, :, 3, 2) = A(:, :, 2, 3) = diag{0, -1, -2, 0},
A(:, :, 4, 4) = diag{0, 0, 0, 10}.
Obviously, $N_2(\mathcal{A}) = \{3, 4\}$ and $N_1(\mathcal{A}) = \{1, 2\}$. By Step 5 of Algorithm 3.2, one has
$$\mathcal{A}^{(2)} = \mathcal{A}\,\mathrm{diag}\{1.0000, 1.0000, 0.8221, 0.4642\}.$$
By some calculations, we know that $N_2(\mathcal{A}^{(2)}) = \{1, 3, 4\}$ and $N_1(\mathcal{A}^{(2)}) = \{2\}$. It then follows from Step 5 of Algorithm 3.2 that
$$\mathcal{A}^{(3)} = \mathcal{A}^{(2)}\,\mathrm{diag}\{0.8516, 1.0000, 0.8591, 0.8591\}.$$
Obviously, $N_2(\mathcal{A}^{(3)}) = \{1, 2, 3, 4\}$. By Step 4 of Algorithm 3.2, we conclude that $\mathcal{A}$ is a strong H-tensor. Furthermore, from Theorem 4.1, $\mathcal{A}$ is positive definite.

Example 4.2 Consider the 4th-order 4-dimensional tensor $\mathcal{A} = (a_{i_1 i_2 i_3 i_4})$ defined as follows:
A(:, :, 1, 1) = [ 2  0  0 -1;  0  0  0  0;  0  0  0  0; -1  0  0  0],
A(:, :, 2, 2) = [ 0  0  0  0;  0  4 -1  0;  0 -1  0  0;  0  0  0  0],
A(:, :, 3, 3) = [ 0  0 -1  0;  0  0 -2  0; -1 -2 15  0;  0  0  0  0],
A(:, :, 2, 1) = A(:, :, 1, 2) = A(:, :, 4, 2) = A(:, :, 2, 4) = A(:, :, 4, 3) = A(:, :, 3, 4) = zeros(4, 4),
A(:, :, 3, 1) = A(:, :, 1, 3) = diag{0, 0, -1, 0},
A(:, :, 4, 1) = A(:, :, 1, 4) = diag{-1, 0, 0, 0},
A(:, :, 3, 2) = A(:, :, 2, 3) = diag{0, -1, -2, 0},
A(:, :, 4, 4) = diag{0, 0, 0, 25}.
Obviously, $N_2(\mathcal{A}) = \{3, 4\}$ and $N_1(\mathcal{A}) = \{1, 2\}$. According to Step 5 of Algorithm 3.3, we have
$$\mathcal{A}^{(2)} = \mathcal{A}\,\mathrm{diag}\{1.0000, 1.0000, 0.8736, 0.3420\}.$$
It is obvious that $N_2(\mathcal{A}^{(2)}) = \{1, 2, 3, 4\}$. By Step 4 of Algorithm 3.3, we conclude that $\mathcal{A}$ is a strong H-tensor. Furthermore, from Theorem 4.1, $\mathcal{A}$ is positive definite.

Example 4.3 Consider the 4th-order 4-dimensional tensor $\mathcal{A} = (a_{i_1 i_2 i_3 i_4})$ defined as follows:
A(:, :, 1, 1) = [ 2  0  0 -1;  0  0  0  0;  0  0  0  0; -1  0  0  0],
A(:, :, 2, 2) = [ 0  0  0  0;  0  6 -1  0;  0 -1  0  0;  0  0  0  0],
A(:, :, 3, 3) = [ 0  0  2  0;  0  0 -2  0;  2 -2 24  0;  0  0  0  0],
A(:, :, 2, 1) = A(:, :, 1, 2) = A(:, :, 4, 2) = A(:, :, 2, 4) = A(:, :, 4, 3) = A(:, :, 3, 4) = zeros(4, 4),
A(:, :, 3, 1) = A(:, :, 1, 3) = diag{0, 0, 2, 0},
A(:, :, 4, 1) = A(:, :, 1, 4) = diag{-1, 0, 0, 0},
A(:, :, 3, 2) = A(:, :, 2, 3) = diag{0, -1, -2, 0},
A(:, :, 4, 4) = diag{0, 0, 0, 10}.
Obviously, $N_2(\mathcal{A}) = \{2, 3, 4\}$ and $N_1(\mathcal{A}) = \{1\}$. According to Step 5 of Algorithm 3.4, we have
$$\mathcal{A}^{(2)} = \mathcal{A}\,\mathrm{diag}\{1.0000, 0.6765, 0.6765, 0.6765\}.$$
By some calculations, we know that $N_2(\mathcal{A}^{(2)}) = \{2, 3, 4\}$ and $N_1(\mathcal{A}^{(2)}) = \{1\}$. It then follows from Step 5 of Algorithm 3.4 that
$$\mathcal{A}^{(3)} = \mathcal{A}^{(2)}\,\mathrm{diag}\{1.0000, 0.7609, 0.7609, 0.7609\}.$$
It is obvious that $N_2(\mathcal{A}^{(3)}) = \{1, 2, 3, 4\}$. By Step 4 of Algorithm 3.4, we conclude that $\mathcal{A}$ is a strong H-tensor. Furthermore, from Theorem 4.1, $\mathcal{A}$ is positive definite.
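As an illustration, the tensor of Example 4.1 can be assembled in MATLAB roughly as follows (the slice assignments mirror the example's own notation; the earlier algorithm sketches can then be run on A).

```matlab
% Assembling the 4th-order 4-dimensional tensor of Example 4.1.
A = zeros(4, 4, 4, 4);
A(:, :, 1, 1) = [ 2 0 0 -1;  0 0  0 0;  0  0  0 0; -1 0 0 0];
A(:, :, 2, 2) = [ 0 0 0  0;  0 3 -1 0;  0 -1  0 0;  0 0 0 0];
A(:, :, 3, 3) = [ 0 0 1  0;  0 0 -2 0;  1 -2 18 0;  0 0 0 0];
A(:, :, 3, 1) = diag([ 0  0  1 0]);  A(:, :, 1, 3) = A(:, :, 3, 1);
A(:, :, 4, 1) = diag([-1  0  0 0]);  A(:, :, 1, 4) = A(:, :, 4, 1);
A(:, :, 3, 2) = diag([ 0 -1 -2 0]);  A(:, :, 2, 3) = A(:, :, 3, 2);
A(:, :, 4, 4) = diag([0 0 0 10]);
% e.g. apply the Algorithm 3.2 pass sketched in Section 3 to this A
```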

5 Numerical experiments

In this section, we report some numerical results to support Algorithms 3.2, 3.3 and 3.4 for identifying strong H-tensors. All of the tests were run on an Intel(R) Core(TM) machine with a 2.40 GHz CPU and 8.0 GB of memory; the programming language is MATLAB R2015a with the Tensor Toolbox Version 2.5 [36]. We now compare Algorithms 3.2, 3.3 and 3.4 with Algorithm 3.1 under different choices of the parameter $\varepsilon$ [17].

Example 5.1 [17, 28] Consider the polynomial $f(x) = \mathcal{A}x^4$, where $\mathcal{A}$ is the 4th-order 4-dimensional real symmetric tensor with entries
$$a_{1111} = 2,\quad a_{2222} = a,\quad a_{3333} = b,\quad a_{4444} = 5,$$
$$a_{1114} = a_{1141} = a_{1411} = a_{4111} = -1,\qquad a_{1333} = a_{3133} = a_{3313} = a_{3331} = -1,$$
$$a_{2223} = a_{2232} = a_{2322} = a_{3222} = -1,\qquad a_{2333} = a_{3233} = a_{3323} = a_{3332} = -2,$$
and all other $a_{i_1 i_2 i_3 i_4} = 0$. Here $a, b \in \mathbb{R}$ can take different values.

In this example, we take $l = 2$, $s = 5$ and $\sigma_k = 3 \times 10^{-16}$ in Algorithms 3.2 and 3.3. Numerical results for Example 5.1 are listed in Table 1, where "k" denotes the number of iteration steps and "X" denotes the output positive diagonal matrix when the algorithm terminates. From Table 1, we find that the number of iterations of Algorithm 3.1 depends on the parameter $\varepsilon$, and that Algorithms 3.2, 3.3 and 3.4 are more efficient than Algorithm 3.1. Table 1 also shows that the number of iterations of Algorithm 3.3 is smaller than that of Algorithm 3.2 when the same inner iteration number $l$ is taken.

Example 5.2 In this example, we consider a 4th-order 6-dimensional tensor whose entries are randomly generated from $[-1, 1]$. We modify the tensor as follows: first we symmetrize it, then we replace each diagonal entry with its absolute value, and finally we amplify all the diagonal entries to a certain degree so that the tensor becomes a strong H-tensor.

Table 1. Numerical results for Example 5.1.

Algorithm           a   b    ε   k    X
Algorithm 3.1 [17]  3   18   1   9    diag(1.0000, 0.8903, 0.6124, 0.5854)
                    4   15   1   12   diag(1.0000, 0.7571, 0.6233, 0.5849)
                    5   20   1   5    diag(1.0000, 0.7439, 0.5861, 0.5949)
                    3   18   2   11   diag(1.0000, 0.8841, 0.6067, 0.5875)
                    4   15   2   15   diag(1.0000, 0.7490, 0.6151, 0.5853)
                    5   20   2   7    diag(1.0000, 0.6887, 0.5390, 0.5991)
Algorithm 3.2       3   18   -   7    diag(0.9927, 0.8153, 0.5695, 0.5848)
                    4   15   -   7    diag(0.9980, 0.7131, 0.5966, 0.5848)
                    5   20   -   4    diag(0.9946, 0.6385, 0.5256, 0.5848)
Algorithm 3.3       3   18   -   5    diag(0.9955, 0.8200, 0.5751, 0.5848)
                    4   15   -   5    diag(0.9954, 0.6848, 0.5778, 0.5848)
                    5   20   -   4    diag(0.9880, 0.5828, 0.4877, 0.5848)
Algorithm 3.4       3   18   -   4    diag(1.0000, 0.8576, 0.5873, 0.5873)
                    4   15   -   5    diag(0.9808, 0.6813, 0.5678, 0.5789)
                    5   20   -   5    diag(0.9292, 0.6849, 0.5137, 0.5528)

In this example, we take $l = 4$, $s = 10$ and $\sigma_k = 3 \times 10^{-16}$ in Algorithms 3.2 and 3.3. The detailed numerical results are shown in Table 2, where "CPU" denotes the running time when the algorithm terminates, "k" denotes the number of iteration steps, and "X" denotes the output positive diagonal matrix when the algorithm terminates. The numerical results show that Algorithms 3.2, 3.3 and 3.4 can indeed identify strong H-tensors, and that the CPU times and iteration counts of Algorithms 3.2, 3.3 and 3.4 are smaller than those of Algorithm 3.1.

Table 2. Numerical results for Example 5.2.

Algorithm           CPU     ε   k    X
Algorithm 3.1 [17]  0.0829  1   36   diag(0.9502, 1.0000, 0.5560, 0.8293)
Algorithm 3.2       0.0517  -   14   diag(0.9503, 1.0000, 0.5560, 0.8291)
Algorithm 3.3       0.0447  -   14   diag(0.9503, 1.0000, 0.5560, 0.8291)
Algorithm 3.4       0.0760  -   28   diag(0.8853, 0.9319, 0.5180, 0.7724)
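A construction in the spirit of Example 5.2 might be sketched in MATLAB as follows; the symmetrization-by-averaging loop and the amplification factor c are our own assumptions, not the paper's exact procedure.

```matlab
% Sketch: random 4th-order 6-dimensional test tensor as in Example 5.2.
n = 6; m = 4; c = 100;                  % c: assumed amplification factor
A = 2 * rand(n, n, n, n) - 1;           % entries in [-1, 1]
S = zeros(n, n, n, n);
P = perms(1:m);                          % symmetrize: average over all
for p = 1:size(P, 1)                     % index permutations
    S = S + permute(A, P(p, :));
end
A = S / size(P, 1);
for i = 1:n                              % positive, amplified diagonal
    A(i, i, i, i) = c * abs(A(i, i, i, i));
end
```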

Example 5.3 In this example, we test Algorithm 3.3. We randomly generate a set of 6th-order 10-dimensional tensors whose elements satisfy
$$a_{i_1 i_2 \cdots i_m} \in \begin{cases} (-c \times 0.6,\ c \times 0.6), & \text{if } i_1 = i_2 = \cdots = i_m, \\ (-1, 1), & \text{otherwise.} \end{cases}$$
In this example, we take $l = 2$, $s = 15$ and $\sigma_k = 3 \times 10^{-16}$ in Algorithm 3.3. The numerical results are reported in Table 3, where "c" denotes the amplification factor of the diagonal entries, "CPU" denotes the running time when the algorithm terminates, and "k" denotes the number of iteration steps. The symbols "Y", "N" and "-" denote, respectively, that the input tensor is a strong H-tensor, is not a strong H-tensor, or is undetermined. The results reported in Table 3 show that Algorithm 3.3 can identify whether some tensors are strong H-tensors or not.

Table 3. Numerical results for Example 5.3.

c      k    CPU      A strong H-tensor
30000  8    0.5744   Y
30000  2    0.1378   Y
28000  6    0.4404   Y
28000  3    0.2110   Y
25000  6    0.4200   Y
25000  3    0.2245   Y
23000  7    0.5046   Y
23000  500  34.4063  -
20000  500  34.9769  -

Example 5.4 In this example, we test Algorithm 3.4. We consider a set of 6th-order 15-dimensional tensors whose entries are randomly generated from $[-1, 1]$. For each tensor, we amplify all the diagonal entries to a certain degree so that some of these tensors become strong H-tensors. The numerical results are reported in Table 4, where "Amplification" denotes the amplification factor of the diagonal entries, "CPU" denotes the running time when the algorithm terminates, and "k" denotes the number of iteration steps. The symbols "Y", "N" and "-" denote, respectively, that the input tensor is a strong H-tensor, is not a strong H-tensor, or is undetermined. The results reported in Table 4 show that Algorithm 3.4 can identify whether some tensors are strong H-tensors or not.

Table 4. Numerical results for Example 5.4.

Amplification  k    CPU     A strong H-tensor
9^4            500  2.6180  -
9^4            500  2.5669  -
10^4           45   0.2381  Y
10^4           4    0.0153  Y
11 × 10^3      46   0.2721  Y
11 × 10^3      71   0.3753  Y
12 × 10^3      45   0.2429  Y
12 × 10^3      3    0.0130  Y
13 × 10^3      3    0.0229  Y
13 × 10^3      2    0.0077  Y
11^4           2    0.0081  Y
11^4           3    0.0143  Y

6 Conclusions

In this paper, we have established some iterative criteria for identifying strong H-tensors, which lead to implementable identification algorithms. The results obtained in this paper extend the corresponding conclusions for strictly generalized diagonally dominant matrices.

Acknowledgment

The authors deeply thank the anonymous referee for valuable suggestions that helped improve the original manuscript.

References

[1] Y. Liu, G. Zhou, N.F. Ibrahim, An always convergent algorithm for the largest eigenvalue of an irreducible nonnegative tensor, J. Comput. Appl. Math., (2010) 235:286-292.
[2] M. Ng, L. Qi, G. Zhou, Finding the largest eigenvalue of a nonnegative tensor, SIAM J. Matrix Anal. Appl., (2009) 31:1090-1099.
[3] L. Zhang, L. Qi, G. Zhou, M-tensors and some applications, SIAM J. Matrix Anal. Appl., (2014) 35:437-452.
[4] L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symbolic Comput., (2005) 40:1302-1324.
[5] E. Kofidis, P.A. Regalia, On the best rank-1 approximation of higher-order supersymmetric tensors, SIAM J. Matrix Anal. Appl., (2002) 23:863-884.
[6] G. Ni, L. Qi, F. Wang, Y. Wang, The degree of the E-characteristic polynomial of an even order tensor, J. Math. Anal. Appl., (2007) 329:1218-1229.


[7] N.K. Bose, R.W. Newcomb, Tellegen's theorem and multivariable realizability theory, Int. J. Electron., (1974) 36:417-425.
[8] N. Bose, P. Kamat, Algorithm for stability test of multidimensional filters, IEEE Trans. Acoust. Speech Signal Process., (1974) 22:307-314.
[9] B. Anderson, N. Bose, E. Jury, Output feedback stabilization and related problems-solutions via decision methods, IEEE Trans. Automat. Control, (1975) 20:53-66.
[10] J.C. Hsu, A.U. Meyer, Modern control principles and applications, McGraw-Hill, New York, (1968).
[11] M.A. Hasan, A.A. Hasan, A procedure for the positive definiteness of forms of even-order, IEEE Trans. Automat. Control, (1996) 41:615-617.
[12] Q. Ni, L. Qi, F. Wang, An eigenvalue method for testing positive definiteness of a multivariate form, IEEE Trans. Automat. Control, (2008) 53:1096-1107.
[13] Y. Song, L. Qi, Necessary and sufficient conditions for copositive tensors, Linear Multilinear Algebra, (2015) 63:120-131.
[14] F. Wang, L. Qi, Comments on explicit criterion for the positive definiteness of a general quartic form, IEEE Trans. Automat. Control, (2005) 50:416-418.
[15] N. Bose, A. Modarressi, General procedure for multivariable polynomial positivity with control applications, IEEE Trans. Automat. Control, (1976) 21:696-701.
[16] H. Chen, L. Qi, Positive definiteness and semi-definiteness of even order symmetric Cauchy tensors, J. Ind. Manag. Optim., (2015) 11:1263-1274.
[17] C. Li, F. Wang, J. Zhao, Y. Zhu, Y. Li, Criterions for the positive definiteness of real supersymmetric tensors, J. Comput. Appl. Math., (2014) 255:1-14.
[18] W. Ding, L. Qi, Y. Wei, M-tensors and nonsingular M-tensors, Linear Algebra Appl., (2013) 439:3264-3278.
[19] Y. Li, Q. Liu, L. Qi, Programmable criteria for strong H-tensors, Numer. Algorithms, (2017) 74:199-221.
[20] K.L. Zhang, Y.J. Wang, An H-tensor based iterative scheme for identifying the positive definiteness of multivariate homogeneous forms, J. Comput. Appl. Math., (2016) 305:1-10.
[21] Y.J. Wang, K.L. Zhang, H.C. Sun, Criteria for strong H-tensor, Front. Math. China, (2016) 11:577-592.
[22] Y.J. Wang, G. Zhou, L. Caccetta, Nonsingular H-tensor and its criteria, J. Ind. Manag. Optim., (2016) 12:1173-1186.
[23] F. Wang, D. Sun, New iterative codes for H-tensors and an application, Open Math., (2016) 14:212-220.
[24] J. Cui, G. Peng, Q. Lu, Z. Huang, New iterative criteria for strong H-tensors and an application, J. Inequal. Appl., (2017) 2017:49.
[25] F. Wang, D. Sun, New criteria for H-tensors and an application, J. Inequal. Appl., (2016) 2016:96.

[26] F. Wang, D. Sun, J. Zhao, C. Li, New practical criteria for H-tensors and its application, Linear Multilinear Algebra, (2017) 65:269-283.

[27] Q. Liu, C. Li, Y. Li, On the iterative criterion for strong H-tensors, Comput. Appl. Math., (2017) 36:1623-1635. [28] K. Zhang, Y. Wang, An H-tensor based iterative scheme for identifying the positive definiteness of multivariate homogeneous forms, Comput. Math. Appl., (2016) 305:1-10. [29] W. Zhang, Z. Xu, Q. Lu, X. Zhang, Iterative criteria for generalized strictly diagonally dominant matrices, Numerical Mathematics: A Journal of Chinese Universities, Chinese Series, (2016) 38:301-312.


[30] Z. Xu, Q. Lu, K. Zhang, X. An, The theory and application of H-matrix, Science Press, Beijing, (2013). (In Chinese)
[31] Y. Yang, Q. Yang, Further results for Perron-Frobenius theorem for nonnegative tensors, SIAM J. Matrix Anal. Appl., (2010) 31:2517-2530.
[32] A. Cichocki, R. Zdunek, A.H. Phan, S.I. Amari, Nonnegative matrix and tensor factorizations: applications to exploratory multi-way data analysis and blind source separation, John Wiley & Sons, Chichester, (2009).
[33] T.G. Kolda, B.W. Bader, Tensor decompositions and applications, SIAM Rev., (2009) 51:455-500.
[34] K.C. Chang, K. Pearson, T. Zhang, Perron-Frobenius theorem for nonnegative tensors, Commun. Math. Sci., (2008) 6:507-520.
[35] M.R. Kannan, N. Shaked-Monderer, A. Berman, Some properties of strong H-tensors and general H-tensors, Linear Algebra Appl., (2015) 476:42-55.

[36] B.W. Bader, T.G. Kolda, et al., MATLAB Tensor Toolbox Version 2.5, http://www.sandia.gov/~tgkolda/TensorToolbox/ (2012).
