Finite Elements in Analysis and Design 45 (2009) 149 -- 155
A general theorem for adjacency matrices of graph products and application in graph partitioning for parallel computing

A. Kaveh∗, B. Alinejad

Centre of Excellence for Fundamental Studies in Structural Engineering, Iran University of Science and Technology, Narmak, Tehran 16, Iran
ARTICLE INFO

Article history:
Received 7 March 2008
Received in revised form 15 July 2008
Accepted 15 September 2008
Available online 13 November 2008

Keywords:
Graph products
Non-complete extended P-sum
Generalized direct product of graphs
Adjacency matrices
Eigenvalues
Bisection of regular models

ABSTRACT
Many regular models can be viewed as the graph products of two or more subgraphs known as their generators. In this paper, a general theorem is presented for the formation of adjacency matrices using a series of algebraic relationships. These operations are performed on the adjacency matrices of the generators. The Laplacian matrix of the graph product is then formed, and the second eigenvalue and the corresponding eigenvector are used for the bisection of the regular graphs associated with space structures or finite element models.
© 2008 Elsevier B.V. All rights reserved.
1. Introduction

Graph theory has been successfully applied to many problems in computational mechanics, Kaveh [1,2]. Graph products are employed in configuration processing, node/element ordering and partitioning of regular space structures and finite element models. These products are also employed in the straightforward calculation of eigenvalues, and in the calculation of buckling loads and eigenfrequencies of regular structures. A structure is called regular if it can be generated by the product of two or more subgraphs. The data structure of a graph can be represented by different means. The adjacency matrix is one such representation, often used in algebraic graph theory. Algebraic graph theory is a branch of graph theory where eigenvalues and eigenvectors of adjacency and Laplacian matrices are employed to deduce the principal properties of a graph. In fact, eigenvalues are closely related to most of the invariants of a graph, linking one extremal property to another, and they play a central role in our fundamental understanding of graphs. There are interesting books on algebraic graph theory, such as Biggs [3], Cvetkovic et al. [4], and Godsil and Royle [5]. One of the major contributions in algebraic graph theory is due to Fiedler [6], who introduced the properties of the second eigenvalue and eigenvector of the Laplacian of a graph. This eigenvector, known as the Fiedler vector, is used in graph nodal ordering and bipartition, Refs. [7–9].
∗ Corresponding author. Tel.: +98 21 44202710; fax: +98 21 77240398. E-mail address: [email protected] (A. Kaveh).
doi:10.1016/j.finel.2008.09.002
General methods are available in the literature for calculating the eigenvalues of matrices; however, for matrices corresponding to special models it is beneficial to make use of their extra properties. These methods have many applications in computational mechanics, such as ordering, graph partitioning, and subdomaining of finite element models, Kaveh and Rahami [10,11]. In the early 1970s, special graph products known as the non-complete extended P-sum (NEPS) were introduced by Cvetkovic [4]. A generalization of these products can be found in the work of Sokarovski [12].

In this paper, the formation of adjacency matrices is studied. Definitions more general than those of NEPS are introduced for graph products, and then a theorem is proved which can be used to find the properties of the adjacency matrices of the existing graph products. Once the adjacency matrix for a graph product is formed, the Laplacian matrix for this graph can be constructed. The second eigenvalue of this matrix is calculated, and the corresponding eigenvector is used for the bisection of finite element models.

2. Graph products

2.1. Basic definitions from graph theory

A graph S consists of a set N(S) of elements called nodes (vertices or points) and a set M(S) of elements called members (edges or arcs), together with a relation of incidence which associates with each member a pair of nodes, called its ends. The connectivity properties of a skeletal structure can simply be transformed into those of a graph S; the joints and the members of the structure correspond
to the nodes and the edges of S, respectively. Two or more members joining the same pair of nodes are known as multiple members, and a member joining a node to itself is called a loop. If N(S) and M(S) are countable sets, then the corresponding graph S is finite. A graph Si is a subgraph of S if N(Si) ⊆ N(S), M(Si) ⊆ M(S) and each member of Si has the same end nodes as in S. A path of S is a finite sequence Pi = {n0, m1, n1, ..., mp, np} whose terms are alternately distinct nodes ni and distinct members mi of S, such that for 1 ≤ i ≤ p the nodes ni−1 and ni are the two ends of mi. The length of a path Pi, denoted by L(Pi), is taken as the number of its members. A cycle is a path (n0, m1, n1, ..., mp, np) for which n0 = np and p ≥ 3; i.e., a cycle is a closed path.
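As a concrete (non-paper) illustration of these definitions, the small sketch below builds the adjacency matrix of the path P4 from its member list; the helper name `adjacency` is ours.

```python
import numpy as np

def adjacency(n_nodes, members):
    """Adjacency matrix of an undirected graph from its member (edge) list."""
    A = np.zeros((n_nodes, n_nodes), dtype=int)
    for u, v in members:
        A[u, v] = A[v, u] = 1   # undirected: each member joins its two end nodes
    return A

# Path P4: nodes 0-1-2-3, three members
A_P4 = adjacency(4, [(0, 1), (1, 2), (2, 3)])
degrees = A_P4.sum(axis=0)      # node degrees are the row (or column) sums
```

The row sums of the adjacency matrix recover the node degrees, which are needed later for the Laplacian L = D − A.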
2.2. Kronecker product

The Kronecker product of two matrices A and B is the matrix obtained by replacing the ij-th entry of A by aij B, for all i and j. As an example:

[1 1; 1 0] ⊗ [a b; c d] = [a b a b; c d c d; a b 0 0; c d 0 0]

The Kronecker product has the property that if B, C, D and E are four matrices such that BD and CE exist, then

(B ⊗ C)(D ⊗ E) = BD ⊗ CE

Thus, if u and v are vectors of the correct dimensions, then

(B ⊗ C)(u ⊗ v) = Bu ⊗ Cv

If u and v are eigenvectors of B and C corresponding to eigenvalues λ and μ, respectively, then

(B ⊗ C)(u ⊗ v) = Bu ⊗ Cv = λμ(u ⊗ v)

i.e., u ⊗ v is an eigenvector of B ⊗ C corresponding to the eigenvalue λμ.

3. A general theorem for graph products

For graph products and their adjacency matrices some theorems are available. The most well-known theorems of this kind concern the NEPS product of graphs. The definition of a NEPS product contains one or more rows separated by "or", and the number of propositions in each row is equal to the number of generators of the product; these propositions are separated by "and". In every row, the two admissible propositions are ui = vi and uivi ∈ M(Gi). For these products the following two theorems have been proved:

Theorem 1 (Cvetkovic [13]). The NEPS G with basis B of graphs G1, ..., Gn, whose adjacency matrices are A1, ..., An, has the adjacency matrix

A = Σ_{β∈B} A1^β1 ⊗ ... ⊗ An^βn

Theorem 2 (Sokarovski [12]). For i = 1, 2, ..., n, let Gi be a graph with ni vertices, and let λi1, ..., λini be the spectrum of Gi. Then the spectrum of the NEPS with basis B of the graphs G1, ..., Gn consists of all possible values Λi1,...,in, where

Λi1,...,in = Σ_{β∈B} λ1i1^β1 ... λnin^βn   (ik = 1, 2, ..., nk; k = 1, 2, ..., n)

Some well-known graph products belonging to the NEPS group are the Cartesian, strong Cartesian and direct products. However, another important graph product, the lexicographic product, is not a member of the NEPS group. Sokarovski [12] improved the NEPS theorem by defining a new symbol for the condition ui ≠ vi and uivi ∉ M(Gi), and provided the following theorem:

Theorem. Let A1, ..., An be the adjacency matrices of graphs G1, ..., Gn. Then the generalized direct product G = GDP(B; G1, ..., Gn) has the adjacency matrix

A = Σ_{β∈B} ⊗_{i=1}^{n} [ (1/2)(1 + βi) Ai^βi + (1/2)(1 − βi) Āi^(−βi) ]

where β = (β1, ..., βn), Āi is the adjacency matrix of the complement Ḡi of Gi, and ⊗ denotes the Kronecker multiplication of matrices.

Now, we introduce a theorem that is more general than the NEPS theorem and covers all the above-mentioned products. These graph products can be described by the following four symbols:

(i) ui = vi
(ii) uivi ∈ M(Gi)
(iii) ui ≠ vi and uivi ∉ M(Gi)
(iv) (vacant space)

Similar to the NEPS, here we consider a base set B with members β for any graph product, in which every β has "0", "1", "−1" and "⋆" as its possible entries. The entry "0" corresponds to ui = vi, the entry "1" corresponds to uivi ∈ M(Gi), "−1" corresponds to ui ≠ vi and uivi ∉ M(Gi), and finally "⋆" corresponds to a vacant space in the definition of the graph product. In other words, each row of the definition contains n propositions (equal to the number of generators of the product); if any location is vacant, the corresponding location is identified by "⋆". For example, if in the definition of a graph product with four generators the first row has propositions only for the first and third subgraphs, as

u1 = v1 and u3v3 ∈ M(G3)

then we can rewrite it as

u1 = v1 and ... and u3v3 ∈ M(G3) and ...

and β can be written as

βi = (0, ⋆, 1, ⋆)

i.e., for each vacant location we allocate a "⋆".

General Theorem. The adjacency matrix for the graph product of graphs G1, ..., Gn with the adjacency matrices A1, ..., An can be expressed as

A = Σ_{β∈B} A1^β1 ⊗ ... ⊗ An^βn + Σ_{i=1}^{n−1} (−1)^i { Σ_{β∈Δi} A1^β1 ⊗ ... ⊗ An^βn }

where β = (β1, ..., βn), each member β of B has n entries βi, Δi is the set of members common to any i+1 members of the base B (defined precisely in the proof below), and

Ai^βi = Ai if βi = 1;  Ii if βi = 0;  Āi if βi = −1;  Oi if βi = ⋆

Here, Oi is an ni×ni square matrix with all entries equal to "1", and Āi is the adjacency matrix of the complement Ḡi of Gi.
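A minimal computational sketch of the general theorem follows. The helper names `value_matrix`, `kron_term` and `product_adjacency` are our own, and the vacant symbol "⋆" is encoded as the string `'*'`; this is a sketch under those assumptions, not code from the paper.

```python
import numpy as np
from functools import reduce

def value_matrix(beta_i, A_i):
    """Value matrix for one entry of beta: A_i, I_i, complement adjacency, or all-ones O_i."""
    n = A_i.shape[0]
    if beta_i == 1:
        return A_i
    if beta_i == 0:
        return np.eye(n, dtype=int)
    if beta_i == -1:   # adjacency matrix of the complement graph
        return np.ones((n, n), dtype=int) - np.eye(n, dtype=int) - A_i
    return np.ones((n, n), dtype=int)   # '*' (vacant location): O_i

def kron_term(beta, As):
    """A1^beta1 kron ... kron An^betan for one member beta of B (or of a Delta set)."""
    return reduce(np.kron, (value_matrix(b, A) for b, A in zip(beta, As)))

def product_adjacency(B, Deltas, As):
    """A = sum over B of Kronecker terms + sum_i (-1)^i * (sum over Delta_i)."""
    A = sum(kron_term(beta, As) for beta in B)
    for i, Delta_i in enumerate(Deltas, start=1):
        A = A + (-1) ** i * sum(kron_term(beta, As) for beta in Delta_i)
    return A

# Cartesian product of P2 and P3: B = {(1,0),(0,1)}, all Delta_i empty
A1 = np.array([[0, 1], [1, 0]])
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
A_cart = product_adjacency([(1, 0), (0, 1)], [], [A1, A2])
```

For NEPS-type products the `Deltas` list is empty and the function reduces to Theorem 1.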
Proof. First a standard model is introduced for any normal graph product. A graph product is called normal if in its definition only the four symbols "0", "1", "−1" and "⋆" are encountered. In this definition we have one or more rows, and in each row we have n locations corresponding to the n subgraphs, separated from each other by "and". Thus, for instance, in a graph product of n subgraphs, if a row contains four propositions, then in β for these four factors we see "0", "1" or "−1", and for the other n−4 vacant locations we have "⋆".

The proposition ui = vi is logically true when two corresponding nodes in Gi are identical. Therefore, for a graph with ni nodes we have ni×ni logical positions, and Ii is the matrix that expresses the correctness or incorrectness of this proposition; Ii is called the value matrix for ui = vi in graph Gi. For uivi ∈ M(Gi), Ai is the value matrix: the entries of Ai show the entire logical conditions of uivi ∈ M(Gi). Also, for ui ≠ vi and uivi ∉ M(Gi), Āi is considered as the value matrix, and for "⋆", which is a null condition, Oi is the value matrix. Thus, in every row each proposition has a value matrix: for "0" in βi we have Ii, for "1" we have Ai, for "−1" we have Āi, and for "⋆" we have Oi.

Now, in each row of the definition, for each entry of the value matrix of the first proposition there are n2×n2 different conditions for the second proposition, ..., and nn×nn different conditions for the n-th proposition. This is precisely the definition of the Kronecker product of these value matrices. Also, the rows of the definition are connected by "or". This means that if βi belongs to the i-th row of the definition of the product, the entire adjacency matrix of the product can be written as the sum of the value matrices corresponding to these βi, minus something:

A = Σ_{β∈B} A1^β1 ⊗ ... ⊗ An^βn − {...}

This negative factor depends on whether any βi, βj from B contain common members. If such common members exist, the value matrices corresponding to them are repeated twice in the sum of the value matrices of the rows, and these extra factors must be subtracted. We denote the set of common members of any two members of the base B by Δ1, the set of common members of any three members of the base B by Δ2, ..., and finally the set of common members of all members of the base B by Δn−1:

Δ1: if βi, βj ∈ B, then Δ1 = ∩{βi, βj}, i ≠ j
Δ2: if βi, βj, βk ∈ B, then Δ2 = ∩{βi, βj, βk}, i ≠ j ≠ k
⋮
Δn−1: if β1, β2, ..., βn ∈ B, then Δn−1 = ∩{β1, β2, ..., βn}

The intersection is taken entry-wise: the intersections of "0" and "1", "0" and "−1", and "1" and "−1" are empty, while the intersection of "⋆" with "0" is "0", with "1" is "1", and with "−1" is "−1".

The value matrices corresponding to the set Δ1 must be subtracted from the initial value matrix written directly from the definition, since they are repeated twice. After this subtraction, some members belonging to Δ2 have been subtracted twice, and similarly their corresponding value matrices are subtracted twice; thus we must add them back to the initial matrix. Repeating this process for Δ3, ..., Δn−1, the adjacency matrix can be formed as

A = Σ_{β∈B} A1^β1 ⊗ ... ⊗ An^βn + Σ_{i=1}^{n−1} (−1)^i { Σ_{β∈Δi} A1^β1 ⊗ ... ⊗ An^βn }

and the proof is completed. □

Now let us consider the product of four graphs. Assume that the definition of the product makes the base B as follows:

B = {(1, 0, ⋆, 1), (⋆, 0, ⋆, 1), (1, 1, 0, 1), (1, ⋆, 0, ⋆)}

The definition of this product for the four graphs can be written as follows:

u1v1 ∈ M(G1) and u2 = v2 and u4v4 ∈ M(G4)
or u2 = v2 and u4v4 ∈ M(G4)
or u1v1 ∈ M(G1) and u2v2 ∈ M(G2) and u3 = v3 and u4v4 ∈ M(G4)
or u1v1 ∈ M(G1) and u3 = v3

Here, Δ1, the set of common members for each pair of members of the base B, can be written as

Δ1 = {(1, 0, ⋆, 1), (1, 0, 0, 1), (1, 0, 0, 1), (1, 1, 0, 1)}

It should be noted that in the intersection set the repeated entries should not be deleted. Also, Δ2 and Δ3 are equal to

Δ2 = {(1, 0, 0, 1)} and Δ3 = ∅

Therefore, from the above theorem, the adjacency matrix of this product can be written as

A = A1⊗I2⊗O3⊗A4 + O1⊗I2⊗O3⊗A4 + A1⊗A2⊗I3⊗A4 + A1⊗O2⊗I3⊗O4
  − {A1⊗I2⊗O3⊗A4 + A1⊗I2⊗I3⊗A4 + A1⊗I2⊗I3⊗A4 + A1⊗A2⊗I3⊗A4}
  + A1⊗I2⊗I3⊗A4

This results in

A = O1⊗I2⊗O3⊗A4 + A1⊗O2⊗I3⊗O4 − A1⊗I2⊗I3⊗A4

For the NEPS group of products there are no common members in the base B. Thus Δi = ∅, i = 1 : n−1, and the adjacency matrix of the graph product reduces to

A = Σ_{β∈B} A1^β1 ⊗ ... ⊗ An^βn
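The four-graph worked example above can be checked numerically. The sketch below is our own (using random small generators); it verifies that the inclusion–exclusion sum collapses to the three-term result, with the duplicated Δ1 member (1, 0, 0, 1) appearing as the factor 2.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)

def rand_adj(n):
    """Random symmetric 0/1 adjacency matrix with zero diagonal."""
    upper = np.triu(rng.integers(0, 2, size=(n, n)), 1)
    return upper + upper.T

n1, n2, n3, n4 = 2, 3, 2, 3                      # arbitrary small generator sizes
A1, A2, A3, A4 = (rand_adj(n) for n in (n1, n2, n3, n4))
I2, I3 = np.eye(n2, dtype=int), np.eye(n3, dtype=int)
O1, O2, O3, O4 = (np.ones((n, n), dtype=int) for n in (n1, n2, n3, n4))

K = lambda *Ms: reduce(np.kron, Ms)              # n-fold Kronecker product

# Sum over B, minus the Delta_1 terms (one member repeated twice), plus the Delta_2 term
lhs = (K(A1, I2, O3, A4) + K(O1, I2, O3, A4) + K(A1, A2, I3, A4) + K(A1, O2, I3, O4)
       - (K(A1, I2, O3, A4) + 2 * K(A1, I2, I3, A4) + K(A1, A2, I3, A4))
       + K(A1, I2, I3, A4))

# Simplified three-term result from the text
rhs = K(O1, I2, O3, A4) + K(A1, O2, I3, O4) - K(A1, I2, I3, A4)
```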
4. Examples

Example 1. The adjacency matrix of the lexicographic product of two graphs K and H is studied employing the general theorem of the previous section. The lexicographic product of two graphs K and H with n and m nodes, respectively, is a graph S = K ∘ H such that for any two nodes u = (u1, u2) and v = (v1, v2) in N(K)×N(H), the member uv is in M(S) if

u1v1 ∈ M(K), or u1 = v1 and u2v2 ∈ M(H)

Solution: First one should find the base B and any Δ involved in the general theorem:

B = {(1, ⋆), (0, 1)},  Δ1 = ∅
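This base set leads to the adjacency matrix A(S) = An ⊗ Om + In ⊗ Am, which can be verified against the node-pair definition directly. The loop-based check below is our own, using P3 for both K and H as in Fig. 1.

```python
import numpy as np

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # path P3, used for both K and H
n = m = 3
I = np.eye(n, dtype=int)
O = np.ones((m, m), dtype=int)

# General-theorem form for B = {(1, *), (0, 1)} with empty Delta_1
A_S = np.kron(A, O) + np.kron(I, A)

# Direct definition: uv in M(S) iff u1v1 in M(K), or u1 = v1 and u2v2 in M(H)
direct = np.zeros((n * m, n * m), dtype=int)
for u1 in range(n):
    for u2 in range(m):
        for v1 in range(n):
            for v2 in range(m):
                if A[u1, v1] == 1 or (u1 == v1 and A[u2, v2] == 1):
                    direct[u1 * m + u2, v1 * m + v2] = 1   # node (u1,u2) -> index u1*m+u2
```

The node ordering (u1, u2) → u1·m + u2 matches the block structure of the Kronecker product.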
Then (In ⊗ Om )(Um ⊗ Um ) = (In Um ) ⊗ (Om Um ) = (n Um ) ⊗ (m ⊗ Um ) = n m (Um ⊗ Um ) It can be observed that n m are the eigenvalues of the adjacency matrix. Here n are the eigenvalues of In , and all the entries of n have unit values. Fig. 1. Lexicographic product of P3 and P3 .
Therefore, considering the general theorem we can write the adjacency matrix for this product as follow: A(S) = An ⊗ Om + In ⊗ Am As an example, the Lexicographic product of P3 and P3 is illustrated in Fig. 1. In relation with the eigenvalues of this adjacency matrix, the following theorem can be employed. Theorem. ([14]): The necessary condition for: ⎛ ⎞ n n
eig ⎝ Ai ⊗ B i ⎠ = eig Ai ⊗ Bi i=1
i=1
Bi Bj = Bj Bi ,
i, j = 1 : n
A proof can be found in [14]. Here for the Lexicographic product we have A(S) = An ⊗ Om + In ⊗ Am with An In = In An , and one condition of the above theorem is satisfied. We can write: eig(A(S)) =
n
{eig(i (An )Om + Am )}.
i=1
Example 2. Another product for two graphs is considered with the following definition: u1 = v 1 or u1 = v1
and u2 v2 ∈ M(H)
Now we want to calculate the adjacency matrix of this product and the corresponding eigenvalues. Solution: Similar to the previous example, B = {(0, ), (0, 1)},
u1 v1 ∈ M(K) or
u2 v2 ∈ M(H)
In this product two subgraphs as K,H with degrees n,m are multiplied. We want to find the adjacency matrix of this product. Solution: First the sets B and are calculated as B = {(1, ), (, 1)} 1 = {(1, 1)} 2 = Using the general theorem we have A = A n ⊗ Om + O n ⊗ Am − A n ⊗ Am
is: Ai Aj = A j Ai ,
Example 3. In this example, the disjunctive product of two graphs, known as co-normal product, is considered. The definition of this product is as follows:
1 = {(0, 1)}
Once the adjacency matrix for a graph product is written, the Laplacian matrix for this graph can be constructed. The entries of the weighted Laplacian matrix L of a weighted graph is defined as L=D−A The entries of L are as follows: ⎧ −ewij = −ewji if nodes ni and nj are adjacent ⎪ ⎪ ⎪ ⎨ Di lij = ewii = ewij for i = j ⎪ ⎪ j=1 ⎪ ⎩ 0 otherwise where ewij is the weight of the edge eij , and Di is the degree of the node ni . For a non-weighted graph, the degree matrix D = [dij ]n×n is a diagonal matrix of node degrees. Here, the ith diagonal entry dii is equal to the degree of the node i. Therefore, the entries of L are as ⎧ if nodes ni and nj are adjacent ⎨ −1 lij = deg(ni ) for i = j ⎩ 0 otherwise Consider the following eigenproblem: LVi = i Vi where i is the eigenvalue, and Vi is the corresponding eigenvector. As for A, all the eigenvalues of L are real. It can be shown that the matrix L is positive semi-definite with V1t = {1, 1, . . . , 1}
And using the general theorem we have
0 = 1 2 . . . n
A = In ⊗ Om + In ⊗ Am − In ⊗ Am = In ⊗ Om
The second eigenvalue 2 and the corresponding eigenvector V2 have attractive properties. Fiedler [6] has investigated various properties of 2 . This eigenvalue is known as the algebraic connectivity of a graph, and the corresponding eigenvector V2 is known as the Fiedler vector. Mohar [7] has applied (2 ,V2 ) to different problems such as graph partitioning and ordering. Paulino et al. [15], Simon [16], Seale and Topping [17], Kaveh and Davaran [18] and Kaveh and
For the eigenvalues of this matrix, employing the properties of Kronecker product, if m is an eigenvalue of Om , and Um is the corresponding eigenvector, we have Om Um = m Um
and
A. Kaveh, B. Alinejad / Finite Elements in Analysis and Design 45 (2009) 149 -- 155
153
Fig. 2. Cartesian product of P10 and C20 .
Fig. 4. Bisection of the graph as the strong Cartesian product of P12 and C24 .
Solution: This definition for graph product corresponds to the strong Cartesian product. Using the general theorem we have B = {(1, 0), (0, 1), (1, 1)},
1 =
and then Fig. 3. Bisection of the model.
A(S) = An ⊗ Im + In ⊗ Am + An ⊗ Am
Rahimi Bondarabady [19] have used the properties of V2 , for partitioning graphs.
As an example, the strong Cartesian product of P12 and C24 is bisected as illustrated in Fig. 4. For partitioning, the second eigenvalue of the Laplacian matrix is calculated as 2 = 0.1927.
Example 4. In this example, the product of two graphs is considered as follows:
Example 6. In this example, the product of two graphs is considered as follows:
u1 v1 ∈ M(K) and
u1 v1 ∈ M(K) and
u 2 = v2
or
or u1 = v1
u 2 = v2
and
u2 v2 ∈ M(H)
Solution: This definition for graph product corresponds to the Cartesian product. Using the theorem we have: B = {(1, 0), (0, 1)},
1 =
u1 v1 ∈ M(K) and
u2 v2 ∈ M(H)
Solution: This product is named as converse skew product. Using the general theorem we have B = {(1, 0), (1, 1)},
1 =
and
and then
A(S) = An ⊗ Im + In ⊗ Am
A(S) = In ⊗ Am + An ⊗ Am
As an example, the Cartesian product of P10 and C20 is illustrated in Fig. 2. For this model, the Fiedler vector [6] is used and the model is decomposed into two submodels. For this purpose, the second eigenvalue of Laplacian matrix for this graph is calculated as 2 = 0.0979. The bisected model is illustrated in Fig. 3.
For bisecting the converse skew product of C14 and C30 , the second eigenvalue of Laplacian matrix is employed as 2 = 0.0874. The decomposed model is shown in Fig. 5. As another example, the bisection of the converse skew product of P15 and C20 , using 2 = 0.1311, is shown in Fig. 6.
Example 5. In this example, the product of two graphs is considered as u1 v1 ∈ M(K) and
u 2 = v2
or u1 = v1
Example 7. In this example, the product of two graphs is considered as follows: u1 = v 1
and
u2 v2 ∈ M(H)
or u1 v1 ∈ M(K) and
and
u2 v2 ∈ M(H)
or u1 v1 ∈ M(K) and
u2 v2 ∈ M(H)
u2 v2 ∈ M(H)
Solution: This product is named as the skew product. Using the theorem we have B = {(0, 1), (1, 1)},
1 =
and then

A(S) = In ⊗ Am + An ⊗ Am

The bisection of the skew product of P8 and P20, using λ2 = 0.0676, is shown in Fig. 7. As another example, the bisection of the skew product of P14 and C20, using λ2 = 0.1003, is shown in Fig. 8.
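The bisection procedure used in Examples 4–7 can be sketched end to end. The code below is our own (helper names `path` and `cycle` are assumptions, not from the paper); it builds the Cartesian product of P10 and C20 from Example 4, forms L = D − A, and splits the nodes by the sign pattern of the Fiedler vector. The second Laplacian eigenvalue comes out near the reported λ2 = 0.0979.

```python
import numpy as np

def path(n):
    """Adjacency matrix of the path graph Pn."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1
    return A

def cycle(n):
    """Adjacency matrix of the cycle graph Cn."""
    A = path(n)
    A[0, n - 1] = A[n - 1, 0] = 1
    return A

An, Am = path(10), cycle(20)                      # generators P10 and C20
A = np.kron(An, np.eye(20, dtype=int)) + np.kron(np.eye(10, dtype=int), Am)

L = np.diag(A.sum(axis=0)) - A                    # Laplacian L = D - A
vals, vecs = np.linalg.eigh(L)                    # eigenvalues in ascending order
lam2 = vals[1]                                    # algebraic connectivity
fiedler = vecs[:, 1]                              # Fiedler vector
part = fiedler >= 0                               # sign pattern bisects the model
```

Note that λ2 is degenerate here (P10 and C20 happen to share the same algebraic connectivity), so the sign split need not be perfectly balanced; any vector in the λ2-eigenspace yields a valid bisection.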
Fig. 5. Bisection of converse skew product of C14 and C30 .
Fig. 8. Bisection of the skew product of P14 and C20 .
Fig. 6. Bisection of the converse skew product of P15 and C20 .
Fig. 9. Bisection of the skew product of P14 and C40 .
Fig. 7. Bisection of the skew product of P8 and P20 .
Also, the bisection of the skew product of P14 and C40, using λ2 = 0.0721, is shown in Fig. 9.

5. Conclusions
Using the general theorem proved in this paper, one can find the adjacency matrix of any graph product. Once the adjacency matrix of a product is formed, many properties of the graph can easily be explored. The importance of this theorem becomes more apparent when the subgraphs contain a large number of members: the theorem helps to find the adjacency matrix of the graph product rapidly and easily. Furthermore, having the adjacency matrix, one can easily evaluate its eigenvalues and eigenvectors. One can also construct the Laplacian matrix of the graph from the adjacency matrix; the second eigenvector of the Laplacian matrix can be employed in partitioning and ordering of finite element models. The general theorem of this paper can thus serve as a shortcut for the eigensolution of many problems in structural mechanics.

Acknowledgement

The first author is grateful to the Iran National Science Foundation for support.

References

[1] A. Kaveh, Structural Mechanics: Graph and Matrix Methods, third ed., Research Studies Press, Somerset, UK, 2004.
[2] A. Kaveh, Optimal Structural Analysis, second ed., Research Studies Press, Somerset, UK, 2006.
[3] N.L. Biggs, Algebraic Graph Theory, second ed., Cambridge University Press, Cambridge, 1993.
[4] D.M. Cvetkovic, M. Doob, H. Sachs, Spectra of Graphs: Theory and Applications, Academic Press, New York, 1980.
[5] C. Godsil, G. Royle, Algebraic Graph Theory, Springer-Verlag, New York, 2001.
[6] M. Fiedler, Algebraic connectivity of graphs, Czech. Math. J. 23 (1973) 298–305.
[7] B. Mohar, The Laplacian spectrum of graphs, in: Y. Alavi, et al. (Eds.), Graph Theory, Combinatorics and Applications, vol. 2, Wiley, New York, 1991, pp. 871–898.
[8] A. Pothen, H. Simon, K.P. Liou, Partitioning sparse matrices with eigenvectors of graphs, SIAM J. Matrix Anal. Appl. 11 (1990) 430–452.
[9] B.H.V. Topping, J. Sziveri, Parallel subdomain generation method, in: Proceedings of Civil Comp, Edinburgh, UK, 1995.
[10] A. Kaveh, H. Rahami, A new spectral method for nodal ordering of regular space structures, Finite Elem. Anal. Des. 40 (2004) 1931–1945.
[11] A. Kaveh, H. Rahami, A unified method for eigen-decomposition of graph products, Commun. Numer. Methods Eng. 21 (2005) 377–388.
[12] R. Sokarovski, A generalized direct product of graphs, Publ. Inst. Math. (Beograd) 22 (36) (1977) 267–269.
[13] D.M. Cvetkovic, Graphs and their spectra, Univ. Beograd Publ. Elektrotehn. Fak. Ser. Mat. Fiz. 354–356 (1971) 1–50.
[14] A. Kaveh, H. Rahami, Block diagonalization of adjacency and Laplacian matrices for graph product; applications in structural mechanics, Int. J. Numer. Methods Eng. 68 (2006) 33–63.
[15] G.H. Paulino, I.F.M. Menezes, M. Gattass, S. Mukherjee, Node and element resequencing using the Laplacian of a finite element graph, part I: general concepts and algorithms, Int. J. Numer. Methods Eng. 37 (1994) 1511–1530.
[16] H.D. Simon, Partitioning of unstructured problems for parallel processing, Comput. Syst. Eng. 2 (1991) 135–148.
[17] C. Seale, B.H.V. Topping, Parallel implementation of recursive spectral bisection, in: Proceedings of Civil Comp 95, Edinburgh, UK, 1995, pp. 459–473.
[18] A. Kaveh, A. Davaran, Spectral bisection of adaptive finite element meshes for parallel processing, Comput. Struct. 70 (1999) 315–327.
[19] A. Kaveh, H.A. Rahimi Bondarabady, A multi-level finite element nodal ordering using algebraic graph theory, Finite Elem. Anal. Des. 38 (2002) 245–261.