
A Greedy Block Kaczmarz Algorithm for Solving Large-scale Linear Systems


Yu-qi Niu, Bing Zheng∗

School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, PR China

∗Corresponding author. Email addresses: [email protected] (Yu-qi Niu), [email protected] (Bing Zheng)

Abstract

The block Kaczmarz algorithm is an iterative scheme for solving large-scale linear systems. At each iteration, it orthogonally projects the current iterate onto the solution space of a subset of the constraints. In this paper, we propose a greedy block Kaczmarz (GBK) algorithm that uses a greedy strategy to construct the control index set and choose the row submatrix at each iteration. The algorithm has a linear rate of convergence that can be expressed in terms of the geometric properties of the matrix and its row submatrices. Theoretical analysis and numerical results show that the developed algorithm can be more efficient than the greedy randomized Kaczmarz algorithm if the parameter $\eta$ is chosen appropriately.

Keywords: Greedy Kaczmarz Algorithm, Block Kaczmarz Algorithm, Greedy Strategy, Convergence.

1. Introduction


Consider solving large-scale consistent linear systems

$$Ax = b, \quad (1.1)$$

where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. One of the most popular solvers for the consistent linear system (1.1) is the Kaczmarz algorithm [1]. It was first discovered by the Polish mathematician Stefan Kaczmarz in 1937 and was rediscovered in the field of image reconstruction from projections by Richard Gordon et al. in 1970, where it is called the Algebraic Reconstruction Technique (ART) [2]. It has many applications ranging from computed tomography (CT) to image reconstruction [3, 4].


1.1. The simple Kaczmarz algorithm

At each iteration, the simple Kaczmarz algorithm makes progress by enforcing a single constraint. It is an iterative algorithm that produces an approximation to the solution $x_*$ of the linear system (1.1). At the $(k+1)$th iteration of the Kaczmarz algorithm, we select a row index $i_k$ of the matrix $A$ and orthogonally project the current iterate $x_k$ onto the solution space of the equation $\{x \in \mathbb{R}^n \mid A_{(i_k)} x = b_{i_k}\}$. Specifically, the simple Kaczmarz algorithm can be formulated as

$$x_{k+1} = x_k + \frac{b_{i_k} - A_{(i_k)} x_k}{\|A_{(i_k)}\|_2^2} \, A_{(i_k)}^T, \quad k = 0, 1, 2, \ldots,$$

where $A_{(1)}, \ldots, A_{(m)}$ denote the rows of $A$. Kaczmarz-type algorithms can be divided into two main categories, deterministic and randomized, according to how the index $i_k$ is selected. In deterministic Kaczmarz algorithms, the row index $i_k$ is chosen cyclically or according to some greedy strategy. For the cyclic index rule, Kaczmarz proved that the process converges to the unique solution for square nonsingular matrices [1]. Bounds on the rate of convergence of the Kaczmarz algorithm are given in [5, 6]. In randomized Kaczmarz algorithms, however, the row index $i_k$ is chosen randomly according to some probability distribution. In 2009, Strohmer and Vershynin [7] proposed the randomized Kaczmarz algorithm with an exponential rate of convergence. Soon afterwards, the randomized Kaczmarz algorithm was further extended and modified; see [8, 9, 10, 11] for various variants. Recently, a greedy randomized Kaczmarz algorithm was proposed by Z. Bai and W. Wu [12], which achieves better performance than the existing randomized algorithms by means of an effective probability criterion.
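As an illustration, the following is a minimal NumPy sketch of the simple Kaczmarz iteration with a cyclic row rule; the function name, iteration cap, and stopping rule are our own illustrative choices rather than part of the original algorithm statement.

```python
import numpy as np

def kaczmarz(A, b, max_iter=10000, tol=1e-10):
    """Simple Kaczmarz iteration with a cyclic row selection rule (a sketch)."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms2 = np.einsum('ij,ij->i', A, A)  # squared 2-norms of the rows
    for k in range(max_iter):
        i = k % m  # cyclic rule; greedy/randomized rules would replace this line
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x
```

Replacing the cyclic choice of the row index with a greedy or randomized rule yields the deterministic-greedy and randomized variants discussed above.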


1.2. The block Kaczmarz algorithm



The block Kaczmarz algorithm is an extension of the classical Kaczmarz algorithm which, at each iteration, enforces many constraints at once [13, 14, 15]. At the $(k+1)$th iteration, we select a row index subset $J_k$ of the matrix $A$ and orthogonally project the current iterate $x_k$ onto the solution space of $\{x \in \mathbb{R}^n \mid A_{J_k} x = b_{J_k}\}$. Therefore, the block Kaczmarz algorithm can be formulated as

$$x_{k+1} = x_k + A_{J_k}^{\dagger}(b_{J_k} - A_{J_k} x_k), \quad (1.2)$$

where $A_{J_k}$ and $A_{J_k}^{\dagger}$ stand for the row submatrix of $A$ indexed by $J_k$, i.e., $A_{J_k} = A(J_k, :)$, and its Moore-Penrose pseudoinverse, respectively, while $b_{J_k}$ is the subvector of $b$ with components listed in $J_k$. To specify a block Kaczmarz algorithm, one must decide which blocks of indices are permissible, as well as the mechanism for selecting a block at each iteration. This paper describes a greedy block Kaczmarz algorithm that uses a greedy strategy to construct the control index set and choose the row submatrix at each iteration.

The organization of this paper is as follows. In Section 2 we describe the greedy block Kaczmarz algorithm and establish its convergence theory. In Section 3 we present numerical examples showing that our proposed GBK algorithm can outperform the GRK algorithm of [12]. Finally, in Section 4, we end the paper with conclusions.


2. A Greedy Block Kaczmarz Algorithm


In this section, we first consider greedy selection rules and then present a greedy block Kaczmarz algorithm based on a greedy strategy. There are very few results in the literature that explore the use of greedy selection rules for Kaczmarz-type algorithms. Nutini et al. proposed the maximum residual (MR) and maximum distance (MD) rules for the Kaczmarz algorithm in [16]. However, in many applications computing the exact MR or MD rule is too expensive due to their complex expressions, but one can approximate it by a cheaper greedy rule, as in the method of [17]. In this section, we consider methods that compute the greedy rules up to a multiplicative error. Suppose we approximate the MD rule with a parameter $\eta \in (0, 1]$, selecting an index $i_k$ such that

$$\frac{|b_{i_k} - A_{(i_k)} x_k|^2}{\|A_{(i_k)}\|_2^2} \geq \eta \max_{1 \leq i \leq m} \left\{ \frac{|b_i - A_{(i)} x_k|^2}{\|A_{(i)}\|_2^2} \right\}.$$

Since the index $i_k$ satisfying the above inequality may not be unique, we let

$$J_k = \left\{ i_k : |b_{i_k} - A_{(i_k)} x_k|^2 \geq \varepsilon_k \|A_{(i_k)}\|_2^2 \right\}, \quad \text{where } \varepsilon_k = \eta \max_{1 \leq i \leq m} \left\{ \frac{|b_i - A_{(i)} x_k|^2}{\|A_{(i)}\|_2^2} \right\}.$$

We then select the row index subset $J_k$ of the matrix $A$ and project the current iterate $x_k$ orthogonally onto the solution space of $\{x \mid A_{J_k} x = b_{J_k}\}$. According to this greedy strategy, we can describe the following greedy block Kaczmarz algorithm framework, named Algorithm 1. From Algorithm 1, we see that the main difference between the GBK algorithm and the block Kaczmarz (BK) algorithm is the selection of the control index set $J_k$: in each iteration of the GBK algorithm, the indices of the equations in (1.1) are partitioned into $J_k$ and its complement, whereas the partition into blocks is predetermined in the BK algorithm.

Remark 1. The control index set $J_k \subset \{1, 2, \ldots, m\}$ is nonempty. Indeed, if

$$\frac{|b_j - A_{(j)} x_k|^2}{\|A_{(j)}\|_2^2} = \max_{1 \leq i \leq m} \left\{ \frac{|b_i - A_{(i)} x_k|^2}{\|A_{(i)}\|_2^2} \right\},$$

then $j \in J_k$.



Algorithm 1 A Greedy Block Kaczmarz Algorithm (GBK)
Input: $A$, $b$, $x_0$, and parameter $\eta \in (0, 1]$.
1. for $k = 0, 1, \ldots$ do until the termination criterion is satisfied
2. Compute $\varepsilon_k = \eta \max_{1 \leq i \leq m} \left\{ |b_i - A_{(i)} x_k|^2 / \|A_{(i)}\|_2^2 \right\}$;
3. Determine the control index set
   $$J_k = \left\{ i_k : |b_{i_k} - A_{(i_k)} x_k|^2 \geq \varepsilon_k \|A_{(i_k)}\|_2^2 \right\}; \quad (2.1)$$
4. Update $x_{k+1} = x_k + A_{J_k}^{\dagger}(b_{J_k} - A_{J_k} x_k)$.
5. end for
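For concreteness, here is a hedged NumPy sketch of Algorithm 1 for dense matrices. It uses numpy.linalg.lstsq, whose minimum-norm least-squares solution equals $A_{J_k}^{\dagger}(b_{J_k} - A_{J_k} x_k)$, in place of the CGLS solver used in the experiments of Section 3; the iteration cap and tolerance are illustrative choices, not part of the algorithm statement.

```python
import numpy as np

def gbk(A, b, eta=0.75, max_iter=1000, tol=1e-10):
    """Greedy block Kaczmarz (Algorithm 1): a dense NumPy sketch."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms2 = np.einsum('ij,ij->i', A, A)   # ||A_(i)||_2^2 for each row
    for _ in range(max_iter):
        r = b - A @ x                          # current residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        eps_k = eta * (r**2 / row_norms2).max()            # step 2
        Jk = np.flatnonzero(r**2 >= eps_k * row_norms2)    # step 3, (2.1)
        # Step 4: x <- x + A_Jk^+ (b_Jk - A_Jk x); lstsq returns the
        # minimum-norm least-squares solution, i.e. A_Jk^+ r_Jk.
        x += np.linalg.lstsq(A[Jk], r[Jk], rcond=None)[0]
    return x
```

In exact arithmetic the update leaves the equations indexed by $J_k$ exactly satisfied, which is the property used in the residual bound within the proof of Theorem 2.1 below.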

For the convergence theory of the greedy block Kaczmarz algorithm, we can establish the following theorem.

Theorem 2.1. Let the linear system (1.1) be consistent. Then the iteration sequence $\{x_k\}$ generated by Algorithm 1 converges to the minimal norm solution $x_* = A^{\dagger} b$. Moreover, for any $k \geq 0$ we have

$$\|x_{k+1} - x_*\|_2^2 \leq \left(1 - \gamma_k(\eta) \, \frac{\sigma_{\min}^2(A)}{\|A\|_F^2}\right) \|x_k - x_*\|_2^2, \quad (2.2)$$

where

$$\gamma_k(\eta) = \eta \, \frac{\|A\|_F^2}{\|A\|_F^2 - \|A_{J_{k-1}}\|_F^2} \cdot \frac{\|A_{J_k}\|_F^2}{\sigma_{\max}^2(A_{J_k})} \quad \left(\text{defining } \gamma_0(\eta) \equiv \eta \, \frac{\|A_{J_0}\|_F^2}{\sigma_{\max}^2(A_{J_0})}\right),$$

$\eta \in (0, 1]$, and $\sigma_{\min}(A)$ and $\sigma_{\max}(A)$ are the minimum and maximum singular values of $A$, respectively.

Proof. Let $e_k = x_k - x_*$. According to the update rule (1.2), we have

$$x_{k+1} = x_k + A_{J_k}^{\dagger}(A_{J_k} x_* - A_{J_k} x_k).$$

Subtracting $x_*$ from both sides of the above identity, we obtain

$$e_{k+1} = (I - A_{J_k}^{\dagger} A_{J_k}) e_k.$$

Since $A_{J_k}^{\dagger} A_{J_k}$ is an orthogonal projector, the Pythagorean theorem yields

$$\|e_{k+1}\|_2^2 = \|(I - A_{J_k}^{\dagger} A_{J_k}) e_k\|_2^2 = \|e_k\|_2^2 - \|A_{J_k}^{\dagger} A_{J_k} e_k\|_2^2. \quad (2.3)$$

We note that

$$\|A_{J_k}^{\dagger} A_{J_k} e_k\|_2^2 \;\geq\; \sigma_{\min}^2(A_{J_k}^{\dagger}) \|A_{J_k} e_k\|_2^2 \;=\; \sigma_{\min}^2(A_{J_k}^{\dagger}) \sum_{i_k \in J_k} |A_{(i_k)} e_k|^2 \;\geq\; \sigma_{\min}^2(A_{J_k}^{\dagger}) \sum_{i_k \in J_k} \varepsilon_k \|A_{(i_k)}\|_2^2 \;=\; \frac{\|A_{J_k}\|_F^2}{\sigma_{\max}^2(A_{J_k})} \, \varepsilon_k,$$

where the second inequality depends on (2.1) and the last equality holds because of the fact $\sigma_{\min}^2(A_{J_k}^{\dagger}) = \sigma_{\max}^{-2}(A_{J_k})$. Furthermore, from Algorithm 1 we have

$$b - Ax_k = b - A\left(x_{k-1} + A_{J_{k-1}}^{\dagger}(b_{J_{k-1}} - A_{J_{k-1}} x_{k-1})\right) = (b - Ax_{k-1}) - A A_{J_{k-1}}^{\dagger}(b_{J_{k-1}} - A_{J_{k-1}} x_{k-1}), \quad k = 1, 2, \ldots.$$


Thus

$$b_{J_{k-1}} - A_{J_{k-1}} x_k = (b_{J_{k-1}} - A_{J_{k-1}} x_{k-1}) - A_{J_{k-1}} A_{J_{k-1}}^{\dagger}(b_{J_{k-1}} - A_{J_{k-1}} x_{k-1}) = 0.$$

For $k = 1, 2, \ldots$, we therefore know that

$$\|b - Ax_k\|_2^2 = \sum_{i \in [m]\setminus J_{k-1}} |b_i - A_{(i)} x_k|^2 = \sum_{i \in [m]\setminus J_{k-1}} \frac{|b_i - A_{(i)} x_k|^2}{\|A_{(i)}\|_2^2} \, \|A_{(i)}\|_2^2 \leq \max_{1 \leq i \leq m} \left\{ \frac{|b_i - A_{(i)} x_k|^2}{\|A_{(i)}\|_2^2} \right\} \left( \|A\|_F^2 - \|A_{J_{k-1}}\|_F^2 \right).$$

Hence

$$\varepsilon_k = \eta \max_{1 \leq i \leq m} \left\{ \frac{|b_i - A_{(i)} x_k|^2}{\|A_{(i)}\|_2^2} \right\} \geq \eta \, \frac{\|b - Ax_k\|_2^2}{\|A\|_F^2 - \|A_{J_{k-1}}\|_F^2} \geq \eta \, \frac{\|A\|_F^2}{\|A\|_F^2 - \|A_{J_{k-1}}\|_F^2} \cdot \frac{\sigma_{\min}^2(A)}{\|A\|_F^2} \, \|e_k\|_2^2,$$

where the second inequality holds because $\|A e_k\|_2^2 \geq \sigma_{\min}^2(A) \|e_k\|_2^2$. Hence

$$\|A_{J_k}^{\dagger} A_{J_k} e_k\|_2^2 \geq \eta \, \frac{\|A\|_F^2}{\|A\|_F^2 - \|A_{J_{k-1}}\|_F^2} \cdot \frac{\|A_{J_k}\|_F^2}{\sigma_{\max}^2(A_{J_k})} \cdot \frac{\sigma_{\min}^2(A)}{\|A\|_F^2} \, \|e_k\|_2^2. \quad (2.4)$$

Thus, combining (2.3) and (2.4), we finally obtain the inequality (2.2). □

Remark 2. In Algorithm 1, when the parameter is taken as

$$\eta = \hat{\eta} \triangleq \frac{1}{2} + \frac{1}{2} \, \frac{\|b - Ax_k\|_2^2}{\|A\|_F^2} \left( \max_{1 \leq i \leq m} \left\{ \frac{|b_i - A_{(i)} x_k|^2}{\|A_{(i)}\|_2^2} \right\} \right)^{-1},$$

it is easy to calculate

$$\varepsilon_k = \hat{\varepsilon}_k = \frac{1}{2} \left( \max_{1 \leq i \leq m} \left\{ \frac{|b_i - A_{(i)} x_k|^2}{\|A_{(i)}\|_2^2} \right\} + \frac{\|b - Ax_k\|_2^2}{\|A\|_F^2} \right)$$

and $\hat{J}_k = \{ i_k : |b_{i_k} - A_{(i_k)} x_k|^2 \geq \hat{\varepsilon}_k \|A_{(i_k)}\|_2^2 \}$. In this case, we have the following error estimate:

$$\|x_{k+1} - x_*\|_2^2 \leq \left( 1 - \gamma_k \, \frac{\|A_{\hat{J}_k}\|_F^2}{\sigma_{\max}^2(A_{\hat{J}_k})} \cdot \frac{\sigma_{\min}^2(A)}{\|A\|_F^2} \right) \|x_k - x_*\|_2^2, \quad (2.5)$$

where $\gamma_k = \frac{1}{2} \left( \frac{\|A\|_F^2}{\|A\|_F^2 - \|A_{\hat{J}_{k-1}}\|_F^2} + 1 \right)$, defining $\gamma_0 \equiv 1$. We mention that the greedy randomized Kaczmarz (GRK) algorithm in [12] for solving the linear system (1.1) is based on the control index set $\hat{J}_k$, and its error estimate in expectation reads

$$\mathbb{E}\|x_{k+1} - x_*\|_2^2 \leq \left( 1 - \gamma \, \frac{\sigma_{\min}^2(A)}{\|A\|_F^2} \right) \mathbb{E}\|x_k - x_*\|_2^2, \quad (2.6)$$

where $\gamma = \frac{1}{2} \left( \frac{\|A\|_F^2}{\|A\|_F^2 - \min_{1 \leq i \leq m} \|A_{(i)}\|_2^2} + 1 \right)$. The comparison of the convergence factors of these two algorithms will be given in the following Remark 3.
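As a small illustration of Remark 2, the threshold $\hat{\varepsilon}_k$ can be computed directly from the residual. The following NumPy helper (with names of our own choosing) averages the largest scaled residual component with $\|b - Ax_k\|_2^2 / \|A\|_F^2$:

```python
import numpy as np

def eps_hat(A, b, x, row_norms2):
    """Threshold from Remark 2: the average of the largest scaled
    residual component and ||b - A x||_2^2 / ||A||_F^2."""
    r = b - A @ x
    fro2 = row_norms2.sum()                  # ||A||_F^2
    return 0.5 * ((r**2 / row_norms2).max() + (r @ r) / fro2)
```

Passing this threshold to the selection step (2.1) reproduces the control index set $\hat{J}_k$, which is exactly the set used by the GRK algorithm of [12].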

The error estimate (2.2) exhibits the linear rate of convergence of the proposed greedy block Kaczmarz method. Obviously, the geometric properties of the matrix and its row submatrices control the rate of convergence. Our GBK algorithm can be used to solve any consistent linear system, ill-conditioned or not, but the larger the smallest singular value $\sigma_{\min}(A)$ of $A$ is, the faster its convergence rate is.

Remark 3. We further remark that the convergence factor of our greedy block Kaczmarz algorithm is smaller, uniformly with respect to the iteration step $k$, than that of the greedy randomized Kaczmarz algorithm in [12], which shows theoretically that the GBK algorithm is superior to the GRK algorithm in convergence rate, although the GRK method has many other advantages. In fact, since $\|A_{\hat{J}_k}\|_F^2 \geq \sigma_{\max}^2(A_{\hat{J}_k})$ and $\min_{1 \leq j \leq m} \|A_{(j)}\|_2^2 \leq \|A_{\hat{J}_{k-1}}\|_F^2$, it holds that

$$\frac{\|A\|_F^2}{\|A\|_F^2 - \min_{1 \leq j \leq m} \|A_{(j)}\|_2^2} \leq \frac{\|A\|_F^2}{\|A\|_F^2 - \|A_{\hat{J}_{k-1}}\|_F^2},$$

i.e., $\gamma_k \geq \gamma$. In addition, the inequality $\|A_{\hat{J}_k}\|_F^2 / \sigma_{\max}^2(A_{\hat{J}_k}) \geq 1$ always holds. Hence,

$$1 - \gamma_k \, \frac{\|A_{\hat{J}_k}\|_F^2}{\sigma_{\max}^2(A_{\hat{J}_k})} \cdot \frac{\sigma_{\min}^2(A)}{\|A\|_F^2} \leq 1 - \gamma \, \frac{\sigma_{\min}^2(A)}{\|A\|_F^2}.$$

3. Numerical results

In this section, we perform numerical experiments for our GBK algorithm and compare the computed results with those of the known GRK algorithm [12]. We implement our proposed GBK algorithm with the same parameter $\eta = \hat{\eta}$, i.e., $\varepsilon_k = \hat{\varepsilon}_k$, as in the GRK algorithm. To avoid calculating the Moore-Penrose inverse $A_{J_k}^{\dagger}$ when implementing the block Kaczmarz update rule (1.2), we use the CGLS algorithm [18] to solve an underdetermined least-squares problem. Each iteration of the CGLS algorithm requires about $2\,\mathrm{nz}(A_{\hat{J}_k}) + 3n + 2\tau$ complex flops, where $\mathrm{nz}(A_{\hat{J}_k})$ is the number of nonzero elements in $A_{\hat{J}_k} \in \mathbb{R}^{\tau \times n}$ and $\tau$ is the cardinality of the set $\hat{J}_k$. So the cost of the block Kaczmarz method to perform the update (1.2) is $(2\,\mathrm{nz}(A_{\hat{J}_k}) + 3n + 2\tau)\,\mathrm{IT} + n + 2n\tau$ complex flops, where IT is the number of CGLS iterations within each iteration of the greedy block Kaczmarz algorithm. The greedy randomized Kaczmarz algorithm loads the required row from memory without doing any extra computation, so the cost of its simple update rule is just $4n + 1$ complex flops.

We consider real coefficient matrices $A \in \mathbb{R}^{m \times n}$ that are either chosen from the Florida sparse matrix collection [19] or randomly generated dense matrices produced by the MATLAB function randn. The right-hand side $b \in \mathbb{R}^m$ is taken to be $Ax$, where $x \in \mathbb{R}^n$ is randomly generated by the MATLAB function randn. All experiments are carried out using MATLAB (version R2017b) on a personal computer with a 1.60 GHz central processing unit (Intel(R) Core(TM) i5-8265U CPU), 8.00 GB memory, and a Windows operating system (64-bit Windows 10). All computations are started from the initial vector $x_0 = 0$ and terminated once the relative error (RE), defined by $\mathrm{RE} = \|x_k - x_*\|_2^2 / \|x_*\|_2^2$, satisfies $\mathrm{RE} < 10^{-10}$.

name          | m × n      | Full Rank | Density | Condition Number
ash958        | 958 × 292  | Yes       | 0.68%   | 3.20
relat6        | 2340 × 157 | No        | 2.21%   | inf
cari          | 400 × 1200 | Yes       | 31.83%  | 3.13
GL7d26        | 305 × 2798 | No        | 0.87%   | 6.02×10^18
Trefethen_700 | 700 × 700  | Yes       | 2.58%   | 4.71×10^3
GD00_a        | 352 × 352  | No        | 0.37%   | inf

Table 1: The properties of different sparse matrices.
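To illustrate the implementation choice described above, the update (1.2) can be carried out without forming $A_{J_k}^{\dagger}$ by solving the consistent underdetermined system iteratively. The sketch below uses SciPy's lsqr, a Krylov solver closely related to CGLS; started from zero on a consistent system, it returns the minimum-norm solution, which is what (1.2) requires. The function name and tolerances are our own illustrative choices.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def block_update(A_Jk, r_Jk, x):
    """One GBK update (1.2) without an explicit pseudoinverse:
    solve A_Jk y = r_Jk for the minimum-norm y, then set x <- x + y."""
    y = lsqr(A_Jk, r_Jk, atol=1e-12, btol=1e-12)[0]
    return x + y
```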


First, we test the GBK and GRK algorithms using the Florida sparse matrices in Table 1. Figure 1 displays the curves of the relative error (RE) in base-10 logarithm versus the number of complex flops (left panel of each group) and CPU time (right panel of each group) for both the GRK and GBK algorithms. We see that the GBK algorithm needs about 1 to 5 times the number of complex flops of the GRK algorithm. More interestingly, we find that the GBK algorithm is about 1 to 10 times faster than the GRK algorithm.

[Figure 1: RE versus the number of complex flops and CPU times of both GRK and GBK algorithms for Florida sparse linear systems. Panels: (a) ash958, (b) relat6, (c) cari, (d) GL7d26, (e) Trefethen_700, (f) GD00_a.]

Finally, we test these algorithms using the randomly generated dense matrices. Figure 2 shows the results of this experiment. Note that the trend in Figure 2 is similar to that in Figure 1.

[Figure 2: RE versus the number of complex flops and CPU times of both GRK and GBK algorithms for dense linear systems. Panels: (a) 5000 × 1000, (b) 1000 × 5000.]

The results in these figures show that the GBK algorithm always approximates the minimal norm solution of the linear system (1.1) faster than the GRK algorithm. Although the GBK algorithm requires more complex flops per iteration than the GRK algorithm for both the Florida sparse linear systems and the dense linear systems, it requires far fewer iterations, and hence less CPU time, than the GRK algorithm. The GBK algorithm can thus be implemented more efficiently than the GRK algorithm on many computer architectures. We regard these experiments as strong evidence of the computational advantages of the GBK algorithm.

4. Conclusions

We describe a greedy block Kaczmarz algorithm for solving large-scale consistent linear systems and present a convergence analysis for it. The theoretical analysis and numerical results show that our proposed algorithm converges faster and is more efficient than the known greedy randomized Kaczmarz (GRK) algorithm when the parameter $\eta$ is chosen appropriately.


Acknowledgment


This work was supported by the National Natural Science Foundation of China (Grant No. 11571004).

References

[1] S. Kaczmarz, Angenäherte Auflösung von Systemen linearer Gleichungen, Bull. Int. Acad. Polon. Sci. Lett. A 35 (1937) 355–357.
[2] R. Gordon, R. Bender, G. Herman, Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography, J. Theor. Biol. 29 (3) (1970) 471–481.
[3] C. Popa, R. Zdunek, Kaczmarz extended algorithm for tomographic image reconstruction from limited data, Math. Comput. Simulat. 65 (6) (2004) 579–598.
[4] P. Eggermont, G. Herman, A. Lent, Iterative algorithms for large partitioned linear systems, with applications to image reconstruction, Linear Algebra Appl. 40 (1981) 37–67.
[5] S. F. McCormick, An iterative procedure for the solution of constrained nonlinear equations with application to optimization problems, Numer. Math. 23 (5) (1975) 371–385.
[6] R. Ansorge, Connections between the Cimmino-method and the Kaczmarz-method for the solution of singular and regular systems of equations, Computing 33 (3) (1984) 367–375.
[7] T. Strohmer, R. Vershynin, A randomized Kaczmarz algorithm with exponential convergence, J. Fourier Anal. Appl. 15 (2) (2009) 262–278.


[8] D. Leventhal, A. S. Lewis, Randomized methods for linear constraints: convergence rates and conditioning, Math. Oper. Res. 35 (3) (2010) 641–654.
[9] A. Zouzias, N. Freris, Randomized extended Kaczmarz for solving least squares, SIAM J. Matrix Anal. Appl. 34 (2) (2013) 773–793.
[10] A. Ma, D. Needell, A. Ramdas, Convergence properties of the randomized extended Gauss–Seidel and Kaczmarz methods, SIAM J. Matrix Anal. Appl. 36 (4) (2015) 1590–1604.
[11] K. Du, Tight upper bounds for the convergence of the randomized extended Kaczmarz and Gauss–Seidel algorithms, Numer. Linear Algebra Appl. 26 (3) (2019) e2233.


[12] Z. Bai, W. Wu, On greedy randomized Kaczmarz method for solving large sparse linear systems, SIAM J. Sci. Comput. 40 (1) (2018) A592–A606.
[13] T. Elfving, Block-iterative methods for consistent and inconsistent linear equations, Numer. Math. 35 (1980) 1–12.
[14] D. Needell, J. A. Tropp, Paved with good intentions: analysis of a randomized block Kaczmarz method, Linear Algebra Appl. 441 (2014) 199–221.
[15] D. Needell, R. Zhao, A. Zouzias, Randomized block Kaczmarz method with projection for solving least squares, Linear Algebra Appl. 484 (2015) 322–343.
[16] J. Nutini, B. Sepehry, I. Laradji, M. Schmidt, H. Koepke, A. Virani, Convergence rates for greedy Kaczmarz algorithms, and faster randomized Kaczmarz rules using the orthogonality graph, in: Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI), Jersey City, New Jersey, USA, 2016, pp. 547–556.
[17] Y. Eldar, D. Needell, Acceleration of randomized Kaczmarz method via the Johnson–Lindenstrauss lemma, Numer. Algor. 58 (2011) 163–177.
[18] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, 1996, pp. 269–316.
[19] T. A. Davis, Y. Hu, The University of Florida sparse matrix collection, ACM Trans. Math. Softw. 38 (1) (2011) 1–25.

Credit Author Statement

The work in this paper was done mainly by Yuqi Niu under Prof. Bing Zheng's supervision, including data curation, formal analysis, and validation. Yuqi Niu wrote the original draft, and Bing Zheng reviewed and edited it. This work was supported by the National Natural Science Foundation of China grant received by Prof. Bing Zheng. Yuqi Niu is now a graduate student of Prof. Bing Zheng.