Linear Algebra and its Applications 571 (2019) 41–57
Resistance matrices of graphs with matrix weights

Fouzul Atik^a, R.B. Bapat^b, M. Rajesh Kannan^c,*

^a Department of Mathematics, SRM University - AP, Amaravati, Andhra Pradesh 522502, India
^b Theoretical Statistics and Mathematics Unit, Indian Statistical Institute, New Delhi 110016, India
^c Department of Mathematics, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
* Corresponding author.
Article history: Received 9 November 2018; accepted 11 February 2019; available online 21 February 2019. Submitted by R. Brualdi.

MSC: 05C50

Keywords: Resistance matrix; Laplacian matrix; Matrix weighted graph; Inverse; Inertia; Moore–Penrose inverse
Abstract. The resistance matrix of a simple connected graph $G$ is denoted by $R$ and is defined by $R = (r_{ij})$, where $r_{ij}$ is the resistance distance between the vertices $i$ and $j$ of $G$. In this paper, we consider the resistance matrix of a weighted graph whose edge weights are positive definite matrices of the same size. We derive formulae for the determinant and the inverse of the resistance matrix. We then establish an interlacing inequality between the eigenvalues of the resistance and Laplacian matrices of a tree, and use it to obtain the inertia of the resistance matrix of a tree.
1. Introduction

Consider a connected graph $G = (V, E)$ on $n$ vertices, where $V = \{1, 2, \ldots, n\}$ is the vertex set and $E = \{e_1, e_2, \ldots, e_m\}$ is the edge set. If two vertices $i$ and $j$ of $G$ are
[email protected] (F. Atik),
[email protected] (R.B. Bapat),
[email protected],
[email protected] (M. Rajesh Kannan). https://doi.org/10.1016/j.laa.2019.02.011 0024-3795/© 2019 Elsevier Inc. All rights reserved.
adjacent, we denote it by $i \sim j$. A (positive scalar) weighted graph is a graph in which every edge is assigned a positive number as its weight. If the weight of each edge is 1, the graph is called an unweighted graph. The weight of a path is the sum of the weights of the edges lying on the path. The distance matrix $D(G)$ of a weighted graph $G$ is the $n \times n$ matrix $(d_{ij})$, where $d_{ij}$ is the minimum of the weights of all paths from vertex $i$ to vertex $j$. The Laplacian matrix $L$ of a weighted graph $G$ is the $n \times n$ matrix defined as follows: for $i, j \in [n] = \{1, \ldots, n\}$ with $i \neq j$, the $(i,j)$th entry of $L$ is $-\frac{1}{w_{ij}}$ if the vertices $i$ and $j$ are adjacent, where $w_{ij}$ denotes the weight of the edge joining the vertices $i$ and $j$, and it is zero otherwise. For $i \in [n]$, the $(i,i)$th entry of $L$ is $\sum_{j \sim i} \frac{1}{w_{ij}}$.

For $i \neq j$, let $e_{ij}$ denote the $n \times 1$ vector with $1$ at the $i$th place, $-1$ at the $j$th place, and zeros elsewhere. A matrix $H$ is called a generalized inverse, or g-inverse, of $L$ if $LHL = L$. It is known [4] that for a graph with positive edge weights, the scalar $e_{ij}^\top H e_{ij}$ is the same for every g-inverse $H$ of $L$, where $e_{ij}^\top$ is the transpose of $e_{ij}$. The resistance distance between distinct vertices $i$ and $j$, denoted by $r(i,j)$, is defined as $r(i,j) = e_{ij}^\top H e_{ij} = h_{ii} + h_{jj} - h_{ij} - h_{ji}$, where $H$ is a g-inverse of $L$. If $i = j$, we set $r(i,j) = 0$. In particular, if $L^\dagger = (k_{ij})$ is the Moore–Penrose inverse (defined in Section 2) of $L$, then $r(i,j) = k_{ii} + k_{jj} - 2k_{ij}$; this formula is illustrated in the numerical sketch at the end of this section. The resistance matrix $R$ of a graph is the $n \times n$ matrix whose $(i,j)$th entry is $r_{ij} = r(i,j)$. For a tree, the resistance distance coincides with the classical distance [4, Theorem 10.6].

The distance matrix of a connected graph, in particular of a tree, has been studied for many years. One of the remarkable results about the distance matrix of a tree states that if $T$ is a tree on $n$ vertices, then the determinant of the distance matrix $D$ of $T$ is $(-1)^{n-1}(n-1)2^{n-2}$, which is independent of the structure of the underlying tree [7]. In [6], a formula for the inverse of the distance matrix of a tree is obtained. An extension of these results to weighted trees, where the weights are positive scalars, was considered in [5]. In [3], formulae for the determinant and inertia of the distance matrix of a weighted tree whose edge weights are square matrices of the same order were established.

Resistance matrices form one of the important classes of matrices associated with graphs in the chemical literature. The notion of a Wiener index based on the resistance distance was proposed in [9]. Formulae for the determinant and the inverse of the resistance matrix of a scalar weighted graph are obtained in [2]. Inequalities for the eigenvalues of the distance matrix of a tree and for the eigenvalues of the resistance matrix of a connected graph are established in [12] and [13], respectively. We refer to [10,11,14–16] for more information on the resistance distance.

In this paper, we consider the resistance matrix of a weighted graph whose edge weights are positive definite matrices of the same size. We derive a formula for the determinant (Theorem 4.2) and the inverse (Theorem 4.1) of the resistance matrix. We establish an interlacing inequality for the eigenvalues of the resistance and Laplacian matrices of a tree (Theorem 4.3). Using this, we obtain the inertia of the resistance matrix of a tree (Theorem 4.4).
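Before moving on, the following small numerical sketch (ours, not part of the original paper) illustrates the formula $r(i,j) = k_{ii} + k_{jj} - 2k_{ij}$ on an arbitrarily chosen scalar-weighted graph; the graph, its weights and all variable names are illustrative assumptions.

```python
import numpy as np

# Toy connected graph on 4 vertices with (assumed) positive edge weights w_ij.
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 2, 5.0)]
n = 4

# Laplacian: entry (i,j) is -1/w_ij for i ~ j; entry (i,i) is sum_{j~i} 1/w_ij.
L = np.zeros((n, n))
for i, j, w in edges:
    L[i, j] -= 1.0 / w
    L[j, i] -= 1.0 / w
    L[i, i] += 1.0 / w
    L[j, j] += 1.0 / w

K = np.linalg.pinv(L)   # Moore-Penrose inverse L^dagger = (k_ij)

# Resistance matrix: r(i,j) = k_ii + k_jj - 2 k_ij, so r(i,i) = 0.
R = np.diag(K)[:, None] + np.diag(K)[None, :] - 2.0 * K
print(np.round(R, 4))
```

The same computation with any g-inverse of $L$ in place of $K$ returns the same matrix, as noted above.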
This article is organized as follows. In Section 2, we first recall some definitions and results that will be used in the subsequent parts of the paper. Then we establish two important properties, one related to principal submatrices of the Moore–Penrose inverse of a positive semidefinite matrix (Lemma 2.1), the other about cofactors of block matrices (Lemma 2.2). In Section 3, we derive some properties of the resistance matrix with positive definite matrix edge weights. Using these properties, in Section 4, we obtain formulae for the determinant, inverse and inertia of the resistance matrix with positive definite matrix edge weights.

2. Some useful results

Let $B$ be an $n \times n$ matrix and let $S, K \subseteq [n]$. Let $B[S, K]$ denote the matrix obtained by selecting the rows of $B$ indexed by $S$ and the columns of $B$ indexed by $K$. On the other hand, $B(S, K)$ is the matrix obtained by deleting the rows of $B$ indexed by $S$ and the columns of $B$ indexed by $K$. Similarly, if $x$ is an $n \times 1$ vector, then $x[S]$ denotes the subvector of $x$ indexed by the indices in $S$.

The Kronecker product of matrices $A = (a_{ij})$ of size $m \times n$ and $B$ of size $p \times q$, denoted by $A \otimes B$, is defined to be the $mp \times nq$ block matrix $(a_{ij}B)$. It is known that for matrices $M$, $N$, $P$ and $Q$ of suitable sizes, $MN \otimes PQ = (M \otimes P)(N \otimes Q)$ [8].

Let $A$ be an $n \times n$ matrix partitioned as
$$A = \begin{bmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{bmatrix} \tag{1}$$
such that the diagonal blocks $A_{1,1}$ and $A_{2,2}$ are square matrices and the matrix $A_{1,1}$ is invertible. Then the Schur complement of $A_{1,1}$ in $A$ is defined to be the matrix $A_{2,2} - A_{2,1}A_{1,1}^{-1}A_{1,2}$. The determinant of $A$ and the determinant of the Schur complement of $A_{1,1}$ in $A$ are related by
$$\det A = \det A_{1,1} \det(A_{2,2} - A_{2,1}A_{1,1}^{-1}A_{1,2}).$$
This is called the Schur formula for the determinant.

Let $B^\top$ denote the transpose of the matrix $B$. Let $A$ be an $m \times n$ matrix. A matrix $G$ of order $n \times m$ is said to be a generalized inverse (or a g-inverse) of $A$ if $AGA = A$. A matrix $G$ of order $n \times m$ is said to be the Moore–Penrose inverse of $A$ if it satisfies (i) $AGA = A$, (ii) $GAG = G$, (iii) $(AG)^\top = AG$ and (iv) $(GA)^\top = GA$. We denote the Moore–Penrose inverse of the matrix $A$ by $A^\dagger$.

In the next lemma, we prove that if a principal submatrix $A[S, S]$ of a positive semidefinite matrix $A$ is invertible, then the principal submatrix of $A^\dagger$ corresponding to the same index set $S$, namely $A^\dagger[S, S]$, is also invertible.

Lemma 2.1. Let $A$ be an $n \times n$ positive semidefinite matrix, whose rows and columns are indexed by $[n]$. For $S \subseteq [n]$, if $A[S, S]$ is nonsingular, then $A^\dagger[S, S]$ is also nonsingular.

Proof. Let $A$ be of rank $r\,(\leq n)$. For $i = 1, 2, \ldots, r$, let $\lambda_i$ be the nonzero eigenvalues of $A$ with corresponding eigenvectors $x_i$. Then the spectral decomposition of $A$ is given by
$$A = \lambda_1 x_1 x_1^\top + \lambda_2 x_2 x_2^\top + \cdots + \lambda_r x_r x_r^\top. \tag{2}$$
Let $S \subseteq [n]$ with $|S| = p\,(\leq r)$ and suppose $A[S, S]$ is nonsingular. Then
$$A[S, S] = \lambda_1 x_1[S]x_1[S]^\top + \cdots + \lambda_r x_r[S]x_r[S]^\top
= \begin{bmatrix} \sqrt{\lambda_1}\,x_1[S] & \sqrt{\lambda_2}\,x_2[S] & \cdots & \sqrt{\lambda_r}\,x_r[S] \end{bmatrix}
\begin{bmatrix} \sqrt{\lambda_1}\,x_1[S]^\top \\ \sqrt{\lambda_2}\,x_2[S]^\top \\ \vdots \\ \sqrt{\lambda_r}\,x_r[S]^\top \end{bmatrix}.$$
Since the rank of the matrix $A[S, S]$ is $p$, the rank of $\big[\sqrt{\lambda_1}\,x_1[S] \ \ \sqrt{\lambda_2}\,x_2[S] \ \cdots \ \sqrt{\lambda_r}\,x_r[S]\big]$ is also $p$. Again,
$$\begin{bmatrix} \sqrt{\lambda_1}\,x_1[S] & \cdots & \sqrt{\lambda_r}\,x_r[S] \end{bmatrix}
= \begin{bmatrix} x_1[S] & x_2[S] & \cdots & x_r[S] \end{bmatrix}
\operatorname{diag}\big(\sqrt{\lambda_1}, \sqrt{\lambda_2}, \ldots, \sqrt{\lambda_r}\big).$$
Thus the rank of $\big[x_1[S] \ x_2[S] \ \cdots \ x_r[S]\big]$ is $p$. Then the rank of
$$\begin{bmatrix} x_1[S] & x_2[S] & \cdots & x_r[S] \end{bmatrix}
\operatorname{diag}\Big(\tfrac{1}{\sqrt{\lambda_1}}, \tfrac{1}{\sqrt{\lambda_2}}, \ldots, \tfrac{1}{\sqrt{\lambda_r}}\Big)
= \begin{bmatrix} \tfrac{1}{\sqrt{\lambda_1}}x_1[S] & \tfrac{1}{\sqrt{\lambda_2}}x_2[S] & \cdots & \tfrac{1}{\sqrt{\lambda_r}}x_r[S] \end{bmatrix}$$
is also $p$. We note that the spectral decomposition of $A^\dagger$ is
$$A^\dagger = \frac{1}{\lambda_1}x_1x_1^\top + \frac{1}{\lambda_2}x_2x_2^\top + \cdots + \frac{1}{\lambda_r}x_rx_r^\top.$$
Thus
$$A^\dagger[S, S] = \frac{1}{\lambda_1}x_1[S]x_1[S]^\top + \frac{1}{\lambda_2}x_2[S]x_2[S]^\top + \cdots + \frac{1}{\lambda_r}x_r[S]x_r[S]^\top$$
is of rank $p$, and hence $A^\dagger[S, S]$ is also nonsingular. □
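As a quick numerical illustration of Lemma 2.1 (a sketch we add; the random instance and index set are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random n x n positive semidefinite matrix of rank r (almost surely).
n, r = 6, 4
B = rng.standard_normal((n, r))
A = B @ B.T

S = [0, 2, 5]                          # an index set with |S| = 3 <= r
A_SS = A[np.ix_(S, S)]
Adag_SS = np.linalg.pinv(A)[np.ix_(S, S)]

# If A[S,S] is nonsingular, Lemma 2.1 says A^dagger[S,S] is too.
print(np.linalg.matrix_rank(A_SS))     # expected: 3 (A[S,S] nonsingular)
print(np.linalg.matrix_rank(Adag_SS))  # expected: 3 as well
```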
Next we define the cofactor of a submatrix. Let $S = \{i_1, i_2, \ldots, i_k\}$ and $K = \{j_1, j_2, \ldots, j_k\}$ be subsets of $[n]$. For a matrix $A$ of order $n \times n$, the cofactor of the submatrix $A[S, K]$ in $A$ is
$$A_{SK} = (-1)^{i_1 + i_2 + \cdots + i_k + j_1 + j_2 + \cdots + j_k} \det A(S, K).$$
Consider a square matrix $A$ whose rows and columns are indexed by the elements of $X = \{1, 2, \ldots, ns\}$, and a partition $\pi = \{X_1, X_2, \ldots, X_n\}$ of $X$ with $|X_i| = s$ for all $i = 1, 2, \ldots, n$. We partition the matrix $A$ according to $\pi$ as
$$A = \begin{bmatrix} A_{1,1} & A_{1,2} & \cdots & A_{1,n} \\ A_{2,1} & A_{2,2} & \cdots & A_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n,1} & A_{n,2} & \cdots & A_{n,n} \end{bmatrix}, \tag{3}$$
where each $A_{i,j} = A[X_i, X_j]$ is a submatrix (block) of $A$ whose rows and columns are indexed by the elements of $X_i$ and $X_j$, respectively.

Lemma 2.2. Let $A$ be an $ns \times ns$ square matrix partitioned as in (3). Let $\sum_{j=1}^{n} A_{i,j} = 0$ for all $i \in \{1, 2, \ldots, n\}$, and $\sum_{i=1}^{n} A_{i,j} = 0$ for all $j \in \{1, 2, \ldots, n\}$. Then the cofactors of any two blocks $A_{i,j}$ and $A_{k,l}$ in $A$ are equal.

Proof. We have
$$\text{cofactor of } A_{1,1} = (-1)^{(1+2+\cdots+s)+(1+2+\cdots+s)} \det
\begin{bmatrix} A_{2,2} & A_{2,3} & \cdots & A_{2,n} \\ A_{3,2} & A_{3,3} & \cdots & A_{3,n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n,2} & A_{n,3} & \cdots & A_{n,n} \end{bmatrix}
= \det \begin{bmatrix} \sum_{j=2}^{n} A_{2,j} & A_{2,3} & \cdots & A_{2,n} \\ \sum_{j=2}^{n} A_{3,j} & A_{3,3} & \cdots & A_{3,n} \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{j=2}^{n} A_{n,j} & A_{n,3} & \cdots & A_{n,n} \end{bmatrix}
= \det \begin{bmatrix} -A_{2,1} & A_{2,3} & \cdots & A_{2,n} \\ -A_{3,1} & A_{3,3} & \cdots & A_{3,n} \\ \vdots & \vdots & \ddots & \vdots \\ -A_{n,1} & A_{n,3} & \cdots & A_{n,n} \end{bmatrix} \ \Big[\text{as } \textstyle\sum_{j=1}^{n} A_{i,j} = 0\Big]
= (-1)^s \det \begin{bmatrix} A_{2,1} & A_{2,3} & \cdots & A_{2,n} \\ A_{3,1} & A_{3,3} & \cdots & A_{3,n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n,1} & A_{n,3} & \cdots & A_{n,n} \end{bmatrix}$$
$$= (-1)^{(1+2+\cdots+s)+((s+1)+(s+2)+\cdots+(s+s))} \det \begin{bmatrix} A_{2,1} & A_{2,3} & \cdots & A_{2,n} \\ A_{3,1} & A_{3,3} & \cdots & A_{3,n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n,1} & A_{n,3} & \cdots & A_{n,n} \end{bmatrix} = \text{cofactor of } A_{1,2}.$$
Similarly, we can prove that the cofactor of $A_{i,j}$ is equal to the cofactor of $A_{i,k}$ for any $i, j, k$. By using the assumption $\sum_{i=1}^{n} A_{i,j} = 0$ and applying the above steps to the rows of $A$, we get that the cofactor of $A_{i,j}$ equals the cofactor of $A_{k,j}$ for any $i, j, k$. □

3. Resistance matrix with matrix weights

Let $G$ be a connected weighted graph with $n$ vertices and $m$ edges, in which all the edge weights are positive definite matrices of order $s \times s$. For a positive definite matrix $A$, let $\sqrt{A}$ denote the unique positive definite square root of $A$. The Laplacian matrix $L$ of the graph $G$ is the $ns \times ns$ block matrix whose row and column blocks are indexed by the vertices of $G$; the $(i,j)$th block of $L$ equals the sum of the inverses of the weight matrices of the edges incident with the vertex $i$ if $i = j$, equals $-W^{-1}$ if there is an edge between the vertices $i$ and $j$, where $W$ is the weight of that edge, and is the zero matrix otherwise.

We assign an orientation to each edge of $G$. The vertex-edge incidence matrix $Q$ of $G$ is then the $ns \times ms$ block matrix whose row and column blocks are indexed by the vertices and the edges of $G$, respectively. The $(i,j)$th block of $Q$ is the zero matrix if the $i$th vertex and the $j$th edge are not incident, and it is $\sqrt{W_j}^{\,-1}$ (respectively, $-\sqrt{W_j}^{\,-1}$) if the $i$th vertex and the $j$th edge are incident and the edge originates (respectively, terminates) at the $i$th vertex, where $W_j$ is the weight of the $j$th edge. In this case, it is easy to see that the Laplacian matrix $L$ can be written as $L = QQ^\top$.

Let $L^\dagger$ be the Moore–Penrose inverse of the Laplacian matrix $L$ of a graph $G$, and let $K_{i,j}$, $i, j = 1, 2, \ldots, n$, be the $s \times s$ blocks of $L^\dagger$. For the graph $G$, we define the resistance matrix $R$ as the $n \times n$ block matrix whose $(i,j)$th block $R_{i,j}$ is given by
$$R_{i,j} = K_{i,i} + K_{j,j} - 2K_{i,j}, \quad i, j = 1, 2, \ldots, n. \tag{4}$$
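To fix ideas, here is an illustrative sketch (ours; the graph, the $2 \times 2$ weight matrices and all names are assumptions) that builds the block Laplacian and incidence matrix of a small matrix-weighted graph, checks the factorization $L = QQ^\top$, and computes one block of $R$ from equation (4).

```python
import numpy as np

def spd_inv_sqrt(W):
    """Inverse of the positive definite square root of an SPD matrix W."""
    vals, vecs = np.linalg.eigh(W)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

s, n = 2, 3
# Path 0-1-2 with SPD 2x2 edge weights (arbitrary choices).
edges = [(0, 1, np.array([[2.0, 0.5], [0.5, 1.0]])),
         (1, 2, np.array([[3.0, 1.0], [1.0, 2.0]]))]

L = np.zeros((n * s, n * s))
Q = np.zeros((n * s, len(edges) * s))
for e, (i, k, W) in enumerate(edges):
    Wi = np.linalg.inv(W)
    L[i*s:(i+1)*s, k*s:(k+1)*s] -= Wi
    L[k*s:(k+1)*s, i*s:(i+1)*s] -= Wi
    L[i*s:(i+1)*s, i*s:(i+1)*s] += Wi
    L[k*s:(k+1)*s, k*s:(k+1)*s] += Wi
    # Edge e oriented from i to k; its blocks in Q are +/- sqrt(W)^{-1}.
    Q[i*s:(i+1)*s, e*s:(e+1)*s] = spd_inv_sqrt(W)
    Q[k*s:(k+1)*s, e*s:(e+1)*s] = -spd_inv_sqrt(W)

assert np.allclose(L, Q @ Q.T)          # L = Q Q^T

# Resistance block R_{0,1} from K = L^dagger via equation (4).
K = np.linalg.pinv(L)
blk = lambda M, i, j: M[i*s:(i+1)*s, j*s:(j+1)*s]
R01 = blk(K, 0, 0) + blk(K, 1, 1) - 2 * blk(K, 0, 1)
print(np.round(R01, 4))
```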
Since the Moore–Penrose inverse of a matrix always exists and is unique, the resistance matrix is well defined. Moreover, if $s = 1$, the above definition coincides with the definition of the resistance matrix of a positive scalar weighted graph. In the next theorem, we prove that the matrix $L + \frac{1}{n}J \otimes I_s$ is nonsingular, where $J$ denotes the $n \times n$ matrix of all ones and $\mathbf{1}$ the all-ones column vector of order $n$.

Theorem 3.1. Let $G$ be a connected graph on $n$ vertices such that the edge weights are $s \times s$ positive definite matrices. Then $L + \frac{1}{n}J \otimes I_s$ is nonsingular.

Proof. Since each weight matrix is positive definite, $L$ is positive semidefinite and the rank of $L$ is $ns - s$ [1]. Let the eigenvalues of $L$ be $\lambda_1, \lambda_2, \ldots, \lambda_{ns-s}, 0, 0, \ldots, 0$, where the
multiplicity of $0$ is $s$. By the construction of $L$, we have $L(\mathbf{1} \otimes I_s) = 0$. Thus each column vector of $\mathbf{1} \otimes I_s$ is an eigenvector of $L$ corresponding to the eigenvalue $0$, and the column vectors of $\mathbf{1} \otimes I_s$ are linearly independent. Let $x_i$ be an eigenvector of $L$ corresponding to the eigenvalue $\lambda_i \neq 0$ for $i = 1, 2, \ldots, ns - s$. As $L$ is symmetric,
$$x_i^\top(\mathbf{1} \otimes I_s) = 0 \ \Rightarrow\ (\mathbf{1} \otimes I_s)^\top x_i = 0 \ \Rightarrow\ (\mathbf{1}\mathbf{1}^\top \otimes I_s)x_i = 0 \ \Rightarrow\ (J \otimes I_s)x_i = 0.$$
Then
$$\Big(L + \frac{1}{n}J \otimes I_s\Big)(\mathbf{1} \otimes I_s) = L(\mathbf{1} \otimes I_s) + \frac{1}{n}(J \otimes I_s)(\mathbf{1} \otimes I_s) = \frac{1}{n}(J\mathbf{1} \otimes I_s) = \mathbf{1} \otimes I_s.$$
Thus each column vector of $\mathbf{1} \otimes I_s$ is an eigenvector of $L + \frac{1}{n}J \otimes I_s$ corresponding to the eigenvalue $1$. Now, for each $i = 1, 2, \ldots, ns - s$,
$$\Big(L + \frac{1}{n}J \otimes I_s\Big)x_i = Lx_i + \frac{1}{n}(J \otimes I_s)x_i = Lx_i = \lambda_i x_i.$$
Thus $x_i$ is also an eigenvector of $L + \frac{1}{n}J \otimes I_s$ corresponding to the eigenvalue $\lambda_i$ for $i = 1, 2, \ldots, ns - s$. Hence the eigenvalues of $L + \frac{1}{n}J \otimes I_s$ are $\lambda_1, \lambda_2, \ldots, \lambda_{ns-s}, 1, 1, \ldots, 1$, where the multiplicity of $1$ is $s$. Thus $L + \frac{1}{n}J \otimes I_s$ is invertible. □

In the previous theorem, we have seen that $L + \frac{1}{n}J \otimes I_s$ is nonsingular. Now set $X = (L + \frac{1}{n}J \otimes I_s)^{-1}$. Since $L(J \otimes I_s) = (J \otimes I_s)L = 0$, we have
$$\Big(L + \frac{1}{n}J \otimes I_s\Big)L = L\Big(L + \frac{1}{n}J \otimes I_s\Big) \ \Rightarrow\ X\Big(L + \frac{1}{n}J \otimes I_s\Big)LX = XL\Big(L + \frac{1}{n}J \otimes I_s\Big)X \ \Rightarrow\ LX = XL. \tag{5}$$
Using this observation, in the next theorem, we derive a formula for the Moore–Penrose inverse of the Laplacian matrix associated with the graph.
Theorem 3.2. Let $G$ be a connected graph on $n$ vertices such that the edge weights are $s \times s$ positive definite matrices, and let $L$ be the Laplacian matrix associated with $G$. Then $L^\dagger = X - \frac{1}{n}J \otimes I_s$.

Proof. First,
$$L\Big(X - \frac{1}{n}J \otimes I_s\Big)L = LXL = L\Big[I_{ns} - \frac{1}{n}X(J \otimes I_s)\Big] = L - \frac{1}{n}LX(J \otimes I_s) = L - \frac{1}{n}XL(J \otimes I_s) \ \ [\text{by } (5)] \ = L, \tag{6}$$
since $L(J \otimes I_s) = 0$. Again applying (5), $X$ commutes with $L$ and hence with $J \otimes I_s$, so
$$X(J \otimes I_s) = (J \otimes I_s)X \ \Rightarrow\ X(J \otimes I_s)\Big(L + \frac{1}{n}J \otimes I_s\Big) = (J \otimes I_s)X\Big(L + \frac{1}{n}J \otimes I_s\Big) \ \Rightarrow\ X\Big[\frac{1}{n}(J \otimes I_s)(J \otimes I_s)\Big] = J \otimes I_s \ \Rightarrow\ X(J \otimes I_s) = J \otimes I_s. \tag{7}$$
Similarly, we get
$$(J \otimes I_s)X = J \otimes I_s. \tag{8}$$
Now,
$$\Big(X - \frac{1}{n}J \otimes I_s\Big)L\Big(X - \frac{1}{n}J \otimes I_s\Big) = XLX = \Big[I_{ns} - \frac{1}{n}X(J \otimes I_s)\Big]X = X - \frac{1}{n}X(J \otimes I_s)X = X - \frac{1}{n}J \otimes I_s \ \ [\text{by (7) and (8)}]. \tag{9}$$
Again,
$$\Big[\Big(X - \frac{1}{n}J \otimes I_s\Big)L\Big]^\top = L^\top\Big(X - \frac{1}{n}J \otimes I_s\Big)^\top = L\Big(X - \frac{1}{n}J \otimes I_s\Big) = LX = XL = \Big(X - \frac{1}{n}J \otimes I_s\Big)L. \tag{10}$$
Similarly,
$$\Big[L\Big(X - \frac{1}{n}J \otimes I_s\Big)\Big]^\top = L\Big(X - \frac{1}{n}J \otimes I_s\Big). \tag{11}$$
By combining (6), (9), (10) and (11), we get the result. □

Now, using Theorem 3.2 and equation (4), for $i, j = 1, 2, \ldots, n$, we have
$$R_{i,j} = X_{i,i} + X_{j,j} - 2X_{i,j}. \tag{12}$$
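A numerical sanity check of Theorem 3.2 and equation (12) on a toy example (ours; the weights are arbitrary, and the setup is repeated so the snippet runs standalone):

```python
import numpy as np

# Toy block Laplacian: path 0-1-2, s = 2, SPD edge weights (assumed).
s, n = 2, 3
Ws = {(0, 1): np.array([[2.0, 0.5], [0.5, 1.0]]),
      (1, 2): np.array([[3.0, 1.0], [1.0, 2.0]])}
L = np.zeros((n * s, n * s))
for (i, k), W in Ws.items():
    Wi = np.linalg.inv(W)
    for a, b, c in ((i, k, -1), (k, i, -1), (i, i, 1), (k, k, 1)):
        L[a*s:(a+1)*s, b*s:(b+1)*s] += c * Wi

JIs = np.kron(np.ones((n, n)), np.eye(s))    # J (x) I_s
X = np.linalg.inv(L + JIs / n)               # invertible by Theorem 3.1

# Theorem 3.2: L^dagger = X - (1/n) J (x) I_s.
assert np.allclose(np.linalg.pinv(L), X - JIs / n)

# Equation (12): the blocks of R agree whether computed from K or from X.
blk = lambda M, i, j: M[i*s:(i+1)*s, j*s:(j+1)*s]
K = np.linalg.pinv(L)
for i in range(n):
    for j in range(n):
        Rij_K = blk(K, i, i) + blk(K, j, j) - 2 * blk(K, i, j)   # eq. (4)
        Rij_X = blk(X, i, i) + blk(X, j, j) - 2 * blk(X, i, j)   # eq. (12)
        assert np.allclose(Rij_K, Rij_X)
```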
Let $\bar{X}$ denote the block diagonal matrix whose diagonal blocks are $X_{1,1}, X_{2,2}, \ldots, X_{n,n}$. Then, by the previous equation,
$$R = \bar{X}(J \otimes I_s) + (J \otimes I_s)\bar{X} - 2X. \tag{13}$$
For $i = 1, 2, \ldots, n$, we define the $s \times s$ matrices $\tau_i$ and the $ns \times s$ matrix $\tau$ as follows:
$$\tau_i = 2I_s - \sum_{j \sim i} W_{i,j}^{-1}R_{j,i}, \qquad \tau = \begin{bmatrix} \tau_1 \\ \tau_2 \\ \vdots \\ \tau_n \end{bmatrix}. \tag{14}$$
Theorem 3.3. Let $G$ be a connected graph on $n$ vertices such that the edge weights are $s \times s$ positive definite matrices, and let $L$ be the Laplacian matrix associated with $G$. If the matrices $\bar{X}$ and $\tau$ are defined as above, then $L\bar{X}(\mathbf{1} \otimes I_s) + \frac{2}{n}(\mathbf{1} \otimes I_s) = \tau$.

Proof. We have
$$\Big(L + \frac{1}{n}J \otimes I_s\Big)X = I_{ns}.$$
Equating the $(i,i)$th blocks on both sides, we get
$$\Big(\sum_{j \sim i} W_{i,j}^{-1}\Big)X_{i,i} - \sum_{j \sim i} W_{i,j}^{-1}X_{j,i} + \frac{1}{n}\sum_{j=1}^{n} X_{j,i} = I_s. \tag{15}$$
Now,
$$\Big(L + \frac{1}{n}J \otimes I_s\Big)(\mathbf{1} \otimes I_s) = \mathbf{1} \otimes I_s \ \Rightarrow\ X\Big(L + \frac{1}{n}J \otimes I_s\Big)(\mathbf{1} \otimes I_s) = X(\mathbf{1} \otimes I_s) \ \Rightarrow\ \mathbf{1} \otimes I_s = X(\mathbf{1} \otimes I_s) \ \Rightarrow\ (\mathbf{1} \otimes I_s)^\top = (\mathbf{1} \otimes I_s)^\top X \ \Rightarrow\ \sum_{j=1}^{n} X_{j,i} = I_s \quad \text{for } i = 1, 2, \ldots, n.$$
From (15), we get
$$\Big(\sum_{j \sim i} W_{i,j}^{-1}\Big)X_{i,i} - \sum_{j \sim i} W_{i,j}^{-1}X_{j,i} = \Big(1 - \frac{1}{n}\Big)I_s. \tag{16}$$
Now, for $i = 1, 2, \ldots, n$,
$$\tau_i = 2I_s - \sum_{j \sim i} W_{i,j}^{-1}R_{j,i} = 2I_s - \sum_{j \sim i} W_{i,j}^{-1}(X_{i,i} + X_{j,j} - 2X_{j,i}) = \Big(1 + \frac{1}{n}\Big)I_s - \sum_{j \sim i} W_{i,j}^{-1}X_{j,j} + \sum_{j \sim i} W_{i,j}^{-1}X_{j,i} \quad [\text{by } (16)].$$
Let $\gamma_i$ be the $i$th block of $L\bar{X}(\mathbf{1} \otimes I_s) + \frac{2}{n}(\mathbf{1} \otimes I_s)$. Then, for $i = 1, 2, \ldots, n$,
$$\gamma_i = \sum_{k=1}^{n}(L\bar{X})_{i,k} + \frac{2}{n}I_s = \sum_{k=1}^{n} L_{i,k}X_{k,k} + \frac{2}{n}I_s = L_{i,i}X_{i,i} + \sum_{k \sim i} L_{i,k}X_{k,k} + \frac{2}{n}I_s = \Big(\sum_{j \sim i} W_{i,j}^{-1}\Big)X_{i,i} - \sum_{j \sim i} W_{i,j}^{-1}X_{j,j} + \frac{2}{n}I_s = \Big(1 + \frac{1}{n}\Big)I_s - \sum_{j \sim i} W_{i,j}^{-1}X_{j,j} + \sum_{j \sim i} W_{i,j}^{-1}X_{j,i} \quad [\text{by } (16)] = \tau_i. \ \square$$
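The identity of Theorem 3.3 can also be observed numerically; the following sketch (ours, on the same assumed toy tree, repeated so it runs standalone) compares $L\bar{X}(\mathbf{1} \otimes I_s) + \frac{2}{n}(\mathbf{1} \otimes I_s)$ with $\tau$ built directly from (14).

```python
import numpy as np

s, n = 2, 3
Ws = {(0, 1): np.array([[2.0, 0.5], [0.5, 1.0]]),
      (1, 2): np.array([[3.0, 1.0], [1.0, 2.0]])}
L = np.zeros((n * s, n * s))
for (i, k), W in Ws.items():
    Wi = np.linalg.inv(W)
    for a, b, c in ((i, k, -1), (k, i, -1), (i, i, 1), (k, k, 1)):
        L[a*s:(a+1)*s, b*s:(b+1)*s] += c * Wi
blk = lambda M, i, j: M[i*s:(i+1)*s, j*s:(j+1)*s]
K = np.linalg.pinv(L)
R = np.block([[blk(K, i, i) + blk(K, j, j) - 2 * blk(K, i, j)
               for j in range(n)] for i in range(n)])

# tau from (14): tau_i = 2 I_s - sum_{j ~ i} W_ij^{-1} R_{j,i}.
nbrs = lambda i: [j for j in range(n) if (min(i, j), max(i, j)) in Ws]
Winv = lambda i, j: np.linalg.inv(Ws[(min(i, j), max(i, j))])
tau = np.vstack([2 * np.eye(s)
                 - sum(Winv(i, j) @ blk(R, j, i) for j in nbrs(i))
                 for i in range(n)])

# Theorem 3.3: L Xbar (1 (x) I_s) + (2/n) (1 (x) I_s) = tau.
X = np.linalg.inv(L + np.kron(np.ones((n, n)), np.eye(s)) / n)
Xbar = np.zeros_like(X)
for i in range(n):
    Xbar[i*s:(i+1)*s, i*s:(i+1)*s] = blk(X, i, i)
ones_Is = np.kron(np.ones((n, 1)), np.eye(s))   # 1 (x) I_s
assert np.allclose(L @ Xbar @ ones_Is + (2.0 / n) * ones_Is, tau)
```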
Remark 3.1. Using the previous theorem, we get the following identities:
$$(\mathbf{1} \otimes I_s)^\top\tau = (\mathbf{1} \otimes I_s)^\top\Big[L\bar{X}(\mathbf{1} \otimes I_s) + \frac{2}{n}(\mathbf{1} \otimes I_s)\Big] = \frac{2}{n}(\mathbf{1} \otimes I_s)^\top(\mathbf{1} \otimes I_s) = 2I_s \quad [\text{since } (\mathbf{1} \otimes I_s)^\top L = 0],$$
and hence $\tau^\top(\mathbf{1} \otimes I_s) = 2I_s$.

Theorem 3.4. Let $G$ be a connected graph on $n$ vertices such that the edge weights are $s \times s$ positive definite matrices, and let $R$ be the resistance matrix of $G$. Then
$$\sum_{i=1}^{n}\sum_{j \sim i} W_{i,j}^{-1}R_{j,i} = 2(n-1)I_s.$$

Proof. From Theorem 3.2, we have $L^\dagger = X - \frac{1}{n}J \otimes I_s$. Now,
$$LL^\dagger = L\Big(X - \frac{1}{n}J \otimes I_s\Big) = LX = I_{ns} - \frac{1}{n}(J \otimes I_s)X = I_{ns} - \frac{1}{n}J \otimes I_s. \tag{17}$$
From equation (13), we have $R = \bar{X}(J \otimes I_s) + (J \otimes I_s)\bar{X} - 2X$. Thus
$$LR = L\bar{X}(J \otimes I_s) - 2LX = L\bar{X}(J \otimes I_s) - 2I_{ns} + \frac{2}{n}(J \otimes I_s). \tag{18}$$
Comparing the $(i,i)$th blocks of the two sides of (18) and using Theorem 3.3, we get $-\sum_{j \sim i} W_{i,j}^{-1}R_{j,i} = \tau_i - 2I_s$ (note that $R_{i,i} = 0$); summing over $i = 1, 2, \ldots, n$ and using Remark 3.1, we get
$$\sum_{i=1}^{n}\sum_{j \sim i} W_{i,j}^{-1}R_{j,i} = 2(n-1)I_s. \ \square$$
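A quick check of Theorem 3.4 on the same assumed toy example (our sketch, standalone):

```python
import numpy as np

s, n = 2, 3
Ws = {(0, 1): np.array([[2.0, 0.5], [0.5, 1.0]]),
      (1, 2): np.array([[3.0, 1.0], [1.0, 2.0]])}
L = np.zeros((n * s, n * s))
for (i, k), W in Ws.items():
    Wi = np.linalg.inv(W)
    for a, b, c in ((i, k, -1), (k, i, -1), (i, i, 1), (k, k, 1)):
        L[a*s:(a+1)*s, b*s:(b+1)*s] += c * Wi
blk = lambda M, i, j: M[i*s:(i+1)*s, j*s:(j+1)*s]
K = np.linalg.pinv(L)
R = np.block([[blk(K, i, i) + blk(K, j, j) - 2 * blk(K, i, j)
               for j in range(n)] for i in range(n)])

# Theorem 3.4: sum_i sum_{j ~ i} W_ij^{-1} R_{j,i} = 2 (n - 1) I_s.
total = sum(np.linalg.inv(Ws[(min(i, j), max(i, j))]) @ blk(R, j, i)
            for i in range(n) for j in range(n)
            if i != j and (min(i, j), max(i, j)) in Ws)
assert np.allclose(total, 2 * (n - 1) * np.eye(s))
```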
Using Theorem 3.2, we have the following identity:
$$LRL = L\big(\bar{X}(J \otimes I_s) + (J \otimes I_s)\bar{X} - 2X\big)L = -2LXL = -2L\Big(L^\dagger + \frac{1}{n}(J \otimes I_s)\Big)L = -2LL^\dagger L = -2L. \tag{19}$$
In the next theorem, we establish that the matrix $\tau^\top R\tau$ is positive definite and derive a formula for it.
Theorem 3.5. Let $G$ be a connected graph on $n$ vertices such that the edge weights are $s \times s$ positive definite matrices, and let $L$ and $R$ be the Laplacian matrix and the resistance matrix of $G$, respectively. If the matrices $\bar{X}$ and $\tau$ are defined as in equations (13) and (14), then the matrix $\tau^\top R\tau$ is positive definite and equals $2\bar{x}^\top L\bar{x} + \frac{8}{n}\big[\sum_{i=1}^{n} X_{i,i} - I_s\big]$, where $\bar{x} = \bar{X}(\mathbf{1} \otimes I_s)$.

Proof. We have
$$\tau^\top R\tau = \Big[(\mathbf{1} \otimes I_s)^\top\bar{X}L + \frac{2}{n}(\mathbf{1} \otimes I_s)^\top\Big]R\Big[L\bar{X}(\mathbf{1} \otimes I_s) + \frac{2}{n}(\mathbf{1} \otimes I_s)\Big]
= (\mathbf{1} \otimes I_s)^\top\bar{X}LRL\bar{X}(\mathbf{1} \otimes I_s) + \frac{2}{n}(\mathbf{1} \otimes I_s)^\top RL\bar{X}(\mathbf{1} \otimes I_s) + \frac{2}{n}(\mathbf{1} \otimes I_s)^\top\bar{X}LR(\mathbf{1} \otimes I_s) + \frac{4}{n^2}(\mathbf{1} \otimes I_s)^\top R(\mathbf{1} \otimes I_s). \tag{20}$$
Now, by (19),
$$(\mathbf{1} \otimes I_s)^\top\bar{X}LRL\bar{X}(\mathbf{1} \otimes I_s) = (\mathbf{1} \otimes I_s)^\top\bar{X}(-2L)\bar{X}(\mathbf{1} \otimes I_s) = -2\bar{x}^\top L\bar{x},$$
and, by (18),
$$\frac{2}{n}(\mathbf{1} \otimes I_s)^\top\bar{X}LR(\mathbf{1} \otimes I_s) = \frac{2}{n}(\mathbf{1} \otimes I_s)^\top\bar{X}\Big[L\bar{X}(J \otimes I_s) - 2I_{ns} + \frac{2}{n}(J \otimes I_s)\Big](\mathbf{1} \otimes I_s) = 2\bar{x}^\top L\bar{x}.$$
Similarly, we can derive
$$\frac{2}{n}(\mathbf{1} \otimes I_s)^\top RL\bar{X}(\mathbf{1} \otimes I_s) = 2\bar{x}^\top L\bar{x}.$$
Finally,
$$\frac{4}{n^2}(\mathbf{1} \otimes I_s)^\top R(\mathbf{1} \otimes I_s) = \frac{4}{n^2}(\mathbf{1} \otimes I_s)^\top\big[\bar{X}(J \otimes I_s) + (J \otimes I_s)\bar{X} - 2X\big](\mathbf{1} \otimes I_s) = \frac{4}{n^2}\Big[2n(\mathbf{1} \otimes I_s)^\top\bar{X}(\mathbf{1} \otimes I_s) - 2(\mathbf{1} \otimes I_s)^\top X(\mathbf{1} \otimes I_s)\Big] = \frac{8}{n}\Big[\sum_{i=1}^{n} X_{i,i} - I_s\Big].$$
Then from (20) we get
$$\tau^\top R\tau = 2\bar{x}^\top L\bar{x} + \frac{8}{n}\Big[\sum_{i=1}^{n} X_{i,i} - I_s\Big]. \tag{21}$$
For $i = 1, 2, \ldots, n$, the matrices $X_{i,i} - \frac{1}{n}I_s$ are the diagonal blocks of $L^\dagger$. Since $\sum_{i=1}^{n} X_{i,i} - I_s = \sum_{i=1}^{n}\big(X_{i,i} - \frac{1}{n}I_s\big)$, and since by Lemma 2.1 (applied to the positive semidefinite matrix $L$, whose diagonal blocks are nonsingular) the matrices $X_{i,i} - \frac{1}{n}I_s$ are positive definite for $i = 1, 2, \ldots, n$, the second summand in (21) is positive definite. Again, $\bar{x}^\top L\bar{x} = \bar{x}^\top QQ^\top\bar{x}$ is a positive semidefinite matrix. Hence, from (21), $\tau^\top R\tau$ is a positive definite matrix. □

4. Determinant, inverse and inertia of the resistance matrix

In this section, we derive formulae for the determinant, inverse and inertia of the resistance matrix. In the next theorem, we establish that the resistance matrix is nonsingular and derive a formula for its inverse.

Theorem 4.1. Let $G$ be a connected graph on $n$ vertices such that the edge weights are $s \times s$ positive definite matrices, and let $L$ and $R$ be the Laplacian matrix and the resistance matrix of $G$, respectively. Then $R$ is nonsingular and $R^{-1} = -\frac{1}{2}L + \tau(\tau^\top R\tau)^{-1}\tau^\top$.

Proof. From equation (18), we have
$$LR = L\bar{X}(J \otimes I_s) - 2I_{ns} + \frac{2}{n}J \otimes I_s \ \Rightarrow\ LR + 2I_{ns} = \Big[L\bar{X}(\mathbf{1} \otimes I_s) + \frac{2}{n}(\mathbf{1} \otimes I_s)\Big](\mathbf{1} \otimes I_s)^\top = \tau(\mathbf{1} \otimes I_s)^\top \quad [\text{by Theorem 3.3}] \tag{22}$$
$$\Rightarrow\ (LR + 2I_{ns})\tau = \tau(\mathbf{1} \otimes I_s)^\top\tau = 2\tau \quad [\text{by Remark 3.1}] \ \Rightarrow\ LR\tau = 0. \tag{23}$$
From Theorem 3.5, $\tau^\top R\tau$ is a positive definite matrix, so $R\tau$ is nonzero. Since $L$ has exactly $s$ zero eigenvalues and $L(\mathbf{1} \otimes I_s) = 0$, the column space of $R\tau$ is a subspace of the column space of $\mathbf{1} \otimes I_s$. Thus, there exists a matrix $C$ such that
$$R\tau = (\mathbf{1} \otimes I_s)C \ \Rightarrow\ \tau^\top R\tau = \tau^\top(\mathbf{1} \otimes I_s)C = 2C \quad [\text{by Remark 3.1}] \ \Rightarrow\ C = \frac{1}{2}\tau^\top R\tau.$$
Now,
$$R\tau = \frac{1}{2}(\mathbf{1} \otimes I_s)\,\tau^\top R\tau \ \Rightarrow\ \tau^\top R = \frac{1}{2}\,\tau^\top R\tau\,(\mathbf{1} \otimes I_s)^\top, \tag{24}$$
and
$$\Big(-\frac{1}{2}L + \tau(\tau^\top R\tau)^{-1}\tau^\top\Big)R = -\frac{1}{2}LR + \tau(\tau^\top R\tau)^{-1}\tau^\top R = -\frac{1}{2}LR + \frac{1}{2}\tau(\tau^\top R\tau)^{-1}\tau^\top R\tau(\mathbf{1} \otimes I_s)^\top \ [\text{by } (24)] = -\frac{1}{2}LR + \frac{1}{2}\tau(\mathbf{1} \otimes I_s)^\top = I_{ns} \quad [\text{by } (22)].$$
Thus the matrix $R$ is nonsingular, and $R^{-1} = -\frac{1}{2}L + \tau(\tau^\top R\tau)^{-1}\tau^\top$. □
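The inverse formula of Theorem 4.1 can be verified numerically as well (an illustrative sketch of ours on the same assumed toy tree):

```python
import numpy as np

s, n = 2, 3
Ws = {(0, 1): np.array([[2.0, 0.5], [0.5, 1.0]]),
      (1, 2): np.array([[3.0, 1.0], [1.0, 2.0]])}
L = np.zeros((n * s, n * s))
for (i, k), W in Ws.items():
    Wi = np.linalg.inv(W)
    for a, b, c in ((i, k, -1), (k, i, -1), (i, i, 1), (k, k, 1)):
        L[a*s:(a+1)*s, b*s:(b+1)*s] += c * Wi
blk = lambda M, i, j: M[i*s:(i+1)*s, j*s:(j+1)*s]
K = np.linalg.pinv(L)
R = np.block([[blk(K, i, i) + blk(K, j, j) - 2 * blk(K, i, j)
               for j in range(n)] for i in range(n)])
nbrs = lambda i: [j for j in range(n) if (min(i, j), max(i, j)) in Ws]
Winv = lambda i, j: np.linalg.inv(Ws[(min(i, j), max(i, j))])
tau = np.vstack([2 * np.eye(s)
                 - sum(Winv(i, j) @ blk(R, j, i) for j in nbrs(i))
                 for i in range(n)])

# Theorem 4.1: R^{-1} = -(1/2) L + tau (tau^T R tau)^{-1} tau^T.
TRT = tau.T @ R @ tau                 # positive definite by Theorem 3.5
Rinv = -0.5 * L + tau @ np.linalg.inv(TRT) @ tau.T
assert np.allclose(Rinv @ R, np.eye(n * s))
```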
Consider the partition of the Laplacian matrix $L$ as in (3):
$$L = \begin{bmatrix} L_{1,1} & L_{1,2} & \cdots & L_{1,n} \\ L_{2,1} & L_{2,2} & \cdots & L_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ L_{n,1} & L_{n,2} & \cdots & L_{n,n} \end{bmatrix}. \tag{25}$$
Then, by definition, we have $\sum_{j=1}^{n} L_{i,j} = 0$ for all $i \in \{1, 2, \ldots, n\}$ and $\sum_{i=1}^{n} L_{i,j} = 0$ for all $j \in \{1, 2, \ldots, n\}$. By applying Lemma 2.2, the cofactors of any two blocks $L_{i,j}$ and $L_{k,l}$ in $L$ are equal. Using this fact, we first derive a formula for the determinant of the resistance matrix.

Theorem 4.2. Let $G$ be a connected graph on $n$ vertices such that the edge weights are $s \times s$ positive definite matrices, and let $L$ and $R$ be the Laplacian matrix and the resistance matrix of $G$, respectively. Then
$$\det R = (-1)^{(n-1)s}\,2^{(n-3)s}\,\frac{\det(\tau^\top R\tau)}{\chi(G)},$$
where $\chi(G)$ is the cofactor of any block of $L$.

Proof. By Theorem 3.5, $\tau^\top R\tau$ is a nonsingular matrix. Using the Schur formula for the determinant, we have
$$\det\begin{bmatrix} -\frac{1}{2}L & \tau \\ \tau^\top & -\tau^\top R\tau \end{bmatrix} = \det(-\tau^\top R\tau)\det\Big(-\frac{1}{2}L + \tau(\tau^\top R\tau)^{-1}\tau^\top\Big) = \det(-\tau^\top R\tau)\det R^{-1} = (-1)^s\det(\tau^\top R\tau)\det R^{-1}. \tag{26}$$
Let $L(1,1)$ be the submatrix obtained from $L$ by deleting the first block row and the first block column, and note that $\chi(G)$, the cofactor of the block $L_{1,1}$, equals $\det L(1,1)$. Using Remark 3.1, adding all the block columns except the last one to the first block column, and adding all the block rows except the last one to the first block row of the matrix $\begin{bmatrix} -\frac{1}{2}L & \tau \\ \tau^\top & -\tau^\top R\tau \end{bmatrix}$, we get
$$\det\begin{bmatrix} -\frac{1}{2}L & \tau \\ \tau^\top & -\tau^\top R\tau \end{bmatrix}
= \det\begin{bmatrix} 0_{s \times s} & 0_{s \times (n-1)s} & 2I_s \\ 0_{(n-1)s \times s} & -\frac{1}{2}L(1,1) & * \\ 2I_s & * & -\tau^\top R\tau \end{bmatrix}
= (-1)^s\det\begin{bmatrix} 2I_s & 0_{s \times (n-1)s} & 0_{s \times s} \\ * & -\frac{1}{2}L(1,1) & 0_{(n-1)s \times s} \\ -\tau^\top R\tau & * & 2I_s \end{bmatrix}
= (-1)^s\det(2I_s)\det(2I_s)\det\Big(-\frac{1}{2}L(1,1)\Big)
= \frac{(-1)^{ns}}{2^{(n-3)s}}\,\chi(G).$$
Using this in (26), we get
$$\det R = (-1)^{(n-1)s}\,2^{(n-3)s}\,\frac{\det(\tau^\top R\tau)}{\chi(G)}. \ \square$$
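Similarly, the determinant formula of Theorem 4.2 can be checked numerically (our sketch; $\chi(G)$ is computed here as the cofactor of the block $L_{1,1}$, i.e. $\det L(1,1)$):

```python
import numpy as np

s, n = 2, 3
Ws = {(0, 1): np.array([[2.0, 0.5], [0.5, 1.0]]),
      (1, 2): np.array([[3.0, 1.0], [1.0, 2.0]])}
L = np.zeros((n * s, n * s))
for (i, k), W in Ws.items():
    Wi = np.linalg.inv(W)
    for a, b, c in ((i, k, -1), (k, i, -1), (i, i, 1), (k, k, 1)):
        L[a*s:(a+1)*s, b*s:(b+1)*s] += c * Wi
blk = lambda M, i, j: M[i*s:(i+1)*s, j*s:(j+1)*s]
K = np.linalg.pinv(L)
R = np.block([[blk(K, i, i) + blk(K, j, j) - 2 * blk(K, i, j)
               for j in range(n)] for i in range(n)])
nbrs = lambda i: [j for j in range(n) if (min(i, j), max(i, j)) in Ws]
Winv = lambda i, j: np.linalg.inv(Ws[(min(i, j), max(i, j))])
tau = np.vstack([2 * np.eye(s)
                 - sum(Winv(i, j) @ blk(R, j, i) for j in nbrs(i))
                 for i in range(n)])

chi = np.linalg.det(L[s:, s:])        # cofactor of L_{1,1} = det L(1,1)
TRT = tau.T @ R @ tau
rhs = ((-1) ** ((n - 1) * s)) * (2.0 ** ((n - 3) * s)) \
      * np.linalg.det(TRT) / chi
assert np.isclose(np.linalg.det(R), rhs)
```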
Note that, unlike in the unweighted case, the resistance matrix of a tree with matrix weights defined by equation (4) need not coincide with the distance matrix of the tree with matrix weights (if $s > 1$). In [1, Theorem 3.4], it is proved that for a tree with positive definite matrix weights, the $(i,j)$th block of the distance matrix $D$ equals $D_{i,j} = K_{i,i} + K_{j,j} - K_{i,j} - K_{j,i}$, which is not equal to $K_{i,i} + K_{j,j} - 2K_{i,j}$ in general. In spite of this, we prove some properties of the resistance matrix of a tree with matrix weights which are similar to those of the distance matrix of a tree with matrix weights.

Remark 4.1. Recall that $Q$ is the vertex-edge incidence matrix of the graph $G$ defined in Section 3 (with blocks $\pm\sqrt{W_j}^{\,-1}$). Also, from Theorem 2.1 of [1], the rank of $Q$ is $(n-1)s$. If $G$ is a tree, then, using (22), we get
$$LR = \tau(\mathbf{1} \otimes I_s)^\top - 2I_{ns} \ \Rightarrow\ LRQ = \tau(\mathbf{1} \otimes I_s)^\top Q - 2Q \ \Rightarrow\ QQ^\top RQ = -2Q \quad [\text{as } (\mathbf{1} \otimes I_s)^\top Q = 0] \ \Rightarrow\ Q^\top RQ = -2I_{(n-1)s},$$
where the last step follows by multiplying on the left by $Q^\top$ and using that $Q^\top Q$ is nonsingular.

An interlacing inequality for the eigenvalues of the distance and Laplacian matrices of a matrix weighted tree is given in [1].
Using the fact that $Q^\top RQ = -2I_{(n-1)s}$, we can prove the following theorem. The proof is similar to that of [1, Theorem 3.6]; for the sake of completeness, we include a proof here.

Theorem 4.3. Let $T$ be a weighted tree on $n$ vertices, where each weight is a positive definite matrix of order $s$. Let $R$ be the resistance matrix of $T$, and let $L$ denote the Laplacian matrix of $T$. Let $\mu_1 \geq \mu_2 \geq \cdots \geq \mu_{ns}$ be the eigenvalues of $R$ and let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_{ns-s} > \lambda_{ns-s+1} = \cdots = \lambda_{ns} = 0$ be the eigenvalues of $L$. Then
$$\mu_{s+i} \leq -\frac{2}{\lambda_i} \leq \mu_i \quad \text{for } i = 1, 2, \ldots, ns - s.$$

Proof. Let $Q$ be the $ns \times (n-1)s$ vertex-edge incidence matrix of $T$. Since the columns of $Q$ are linearly independent, by the Gram–Schmidt orthogonalization process there exists an $(n-1)s \times (n-1)s$ nonsingular matrix $M$ such that the columns of the matrix $QM$ are orthonormal. Also $(\mathbf{1}^\top \otimes I_s)Q = 0$, which implies that the matrix $U = \big[QM \ \ \frac{1}{\sqrt{n}}(\mathbf{1} \otimes I_s)\big]$ is orthogonal. Thus the spectra of the matrices $R$ and $U^\top RU$ coincide. Now,
$$U^\top RU = \begin{bmatrix} M^\top Q^\top RQM & \frac{1}{\sqrt{n}}M^\top Q^\top R(\mathbf{1} \otimes I_s) \\ \frac{1}{\sqrt{n}}(\mathbf{1}^\top \otimes I_s)RQM & \frac{1}{n}(\mathbf{1}^\top \otimes I_s)R(\mathbf{1} \otimes I_s) \end{bmatrix}
= \begin{bmatrix} -2M^\top M & \frac{1}{\sqrt{n}}M^\top Q^\top R(\mathbf{1} \otimes I_s) \\ \frac{1}{\sqrt{n}}(\mathbf{1}^\top \otimes I_s)RQM & \frac{1}{n}(\mathbf{1}^\top \otimes I_s)R(\mathbf{1} \otimes I_s) \end{bmatrix} \quad [\text{by Remark 4.1}]. \tag{27}$$
Let $K = Q^\top Q$. Then $K$ is nonsingular. Also $M^\top KM = M^\top Q^\top QM = I$ implies $K^{-1} = MM^\top$. As $M$ is nonsingular, $K^{-1}$ and $M^\top M$ have the same eigenvalues. Since the nonzero eigenvalues of $L = QQ^\top$ are the eigenvalues of $K$, the eigenvalues of $-2K^{-1}$, and hence of $-2M^\top M$, are $-\frac{2}{\lambda_1} \geq -\frac{2}{\lambda_2} \geq \cdots \geq -\frac{2}{\lambda_{(n-1)s}}$. Now, since $-2M^\top M$ is a principal submatrix of $U^\top RU$, by the interlacing theorem we have
$$\mu_{s+i} \leq -\frac{2}{\lambda_i} \leq \mu_i \quad \text{for } i = 1, 2, \ldots, ns - s. \ \square$$
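The interlacing inequality of Theorem 4.3 (and, implicitly, the inertia computed in Theorem 4.4 below) can be observed numerically on the same assumed toy tree (our sketch):

```python
import numpy as np

s, n = 2, 3
Ws = {(0, 1): np.array([[2.0, 0.5], [0.5, 1.0]]),
      (1, 2): np.array([[3.0, 1.0], [1.0, 2.0]])}
L = np.zeros((n * s, n * s))
for (i, k), W in Ws.items():
    Wi = np.linalg.inv(W)
    for a, b, c in ((i, k, -1), (k, i, -1), (i, i, 1), (k, k, 1)):
        L[a*s:(a+1)*s, b*s:(b+1)*s] += c * Wi
blk = lambda M, i, j: M[i*s:(i+1)*s, j*s:(j+1)*s]
K = np.linalg.pinv(L)
R = np.block([[blk(K, i, i) + blk(K, j, j) - 2 * blk(K, i, j)
               for j in range(n)] for i in range(n)])

mu = np.sort(np.linalg.eigvalsh(R))[::-1]    # eigenvalues of R, decreasing
lam = np.sort(np.linalg.eigvalsh(L))[::-1]   # eigenvalues of L, decreasing
tol = 1e-9
for i in range(n * s - s):                   # paper's i = 1, ..., ns - s
    assert mu[s + i] - tol <= -2.0 / lam[i] <= mu[i] + tol

# Inertia of R (Theorem 4.4): (s, ns - s, 0).
print(int((mu > 0).sum()), int((mu < 0).sum()))   # expected: 2 4
```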
For an $n \times n$ symmetric matrix $A$, let $p_+(A)$, $p_-(A)$ and $p_0(A)$ denote the number of positive, negative and zero eigenvalues of $A$, respectively. The 3-tuple $(p_+(A), p_-(A), p_0(A))$ is called the inertia of the matrix $A$. Next we derive a formula for the inertia of the resistance matrix of a tree with matrix weights.

Theorem 4.4. Let $T$ be a weighted tree on $n$ vertices, where each weight is a positive definite matrix of order $s$, and let $R$ be the resistance matrix of $T$. Then the inertia of $R$ is $(s, ns - s, 0)$.
Proof. Let $\mu_1 \geq \mu_2 \geq \cdots \geq \mu_{ns}$ be the eigenvalues of $R$. Using the previous theorem, we have $\mu_{s+i} \leq 0$ for all $i = 1, 2, \ldots, (n-1)s$. Again, $0_{s \times s}$ is a principal submatrix of $R$ (each diagonal block $R_{i,i}$ is zero), so by the interlacing theorem $\mu_i \geq 0$ for all $i = 1, 2, \ldots, s$. Since $R$ is nonsingular, these inequalities are strict, and hence the inertia of $R$ is $(s, ns - s, 0)$. □

Acknowledgements

The authors thank the anonymous referee for her/his helpful comments and suggestions. R.B. Bapat acknowledges the support of the JC Bose Fellowship, Department of Science and Technology, Government of India. M. Rajesh Kannan would like to thank the Department of Science and Technology, India, for financial support through the Early Career Research Award (ECR/2017/000643).

References

[1] Fouzul Atik, M. Rajesh Kannan, R.B. Bapat, On distance and Laplacian matrices of trees with matrix weights, arXiv:1710.10097.
[2] R.B. Bapat, Resistance matrix of a weighted graph, MATCH Commun. Math. Comput. Chem. 50 (2004) 73–82.
[3] R.B. Bapat, Determinant of the distance matrix of a tree with matrix weights, Linear Algebra Appl. 416 (1) (2006) 2–7.
[4] Ravindra B. Bapat, Graphs and Matrices, second edition, Universitext, Springer, Hindustan Book Agency, London, New Delhi, 2014.
[5] R.B. Bapat, S.J. Kirkland, M. Neumann, On distance matrices and Laplacians, Linear Algebra Appl. 401 (2005) 193–209.
[6] R.L. Graham, L. Lovász, Distance matrix polynomials of trees, Adv. Math. 29 (1) (1978) 60–88.
[7] R.L. Graham, H.O. Pollak, On the addressing problem for loop switching, Bell Syst. Tech. J. 50 (1971) 2495–2519.
[8] Roger A. Horn, Charles R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1994. Corrected reprint of the 1991 original.
[9] Douglas J. Klein, Milan Randić, Resistance distance, J. Math. Chem. 12 (1) (1993) 81–95.
[10] Jack H. Koolen, Greg Markowsky, Jongyook Park, On electric resistances for distance-regular graphs, European J. Combin. 34 (4) (2013) 770–786.
[11] Xiaogang Liu, Jiang Zhou, Changjiang Bu, Resistance distance and Kirchhoff index of R-vertex join and R-edge join of two graphs, Discrete Appl. Math. 187 (2015) 130–139.
[12] Russell Merris, The distance spectrum of a tree, J. Graph Theory 14 (3) (1990) 365–369.
[13] Lizhu Sun, Wenzhe Wang, Jiang Zhou, Changjiang Bu, Some results on resistance distances and resistance matrices, Linear Multilinear Algebra 63 (3) (2015) 523–533.
[14] Wenjun Xiao, Ivan Gutman, On resistance matrices, MATCH Commun. Math. Comput. Chem. 49 (2003) 67–81.
[15] Wenjun Xiao, Ivan Gutman, Resistance distance and Laplacian spectrum, Theor. Chem. Acc. 110 (4) (2003) 284–289.
[16] Jiang Zhou, Zhongyu Wang, Changjiang Bu, On the resistance matrix of a graph, Electron. J. Combin. 23 (1) (2016) #P1.41.