Accepted Manuscript

On the Lie Algebra Associated to S-unitary Matrices

Jonathan Caalim, Clarisson Rizzie Canlubo, Yu-ichi Tanaka

PII: S0024-3795(18)30241-6
DOI: https://doi.org/10.1016/j.laa.2018.05.010
Reference: LAA 14581

To appear in: Linear Algebra and its Applications
Received date: 25 October 2017
Accepted date: 5 May 2018

Please cite this article in press as: J. Caalim et al., On the Lie Algebra Associated to S-unitary matrices, Linear Algebra Appl. (2018), https://doi.org/10.1016/j.laa.2018.05.010
On the Lie Algebra Associated to S-unitary Matrices

Jonathan Caalim(a,∗), Clarisson Rizzie Canlubo(a), Yu-ichi Tanaka(b)

(a) University of the Philippines-Diliman, Quezon City, Philippines 1101
(b) Joso Gakuin High School, Tsuchiura City, Ibaraki Prefecture, Japan 1010

∗ Corresponding author. Email addresses: [email protected] (Jonathan Caalim), [email protected] (Clarisson Rizzie Canlubo), [email protected] (Yu-ichi Tanaka)

Preprint submitted to Elsevier, May 8, 2018

Abstract

We consider a natural generalization of unitary groups arising from sesquilinear forms that are assumed to be neither Hermitian nor skew-Hermitian. Let S ∈ M_n(C). An S-unitary matrix is a matrix A ∈ GL_n(C) such that ASA∗ = S. The set U_S of all S-unitary matrices is a matrix Lie group. We derive a formula for the real dimension of the associated Lie algebra u_S when S is nonsingular and normal. When S is invertible and unitary, we show that u_S is the direct sum of Lie algebras associated to indefinite unitary groups. Finally, we apply the dimension formula to a class of permutation matrices.

Mathematics Subject Classification 2010: 22D99, 22E30, 43A80, 20H20
Keywords: S-unitary matrices, matrix Lie groups and Lie algebras, generalized unitary groups

1. Introduction

For a matrix S ∈ M_n(C), an S-unitary matrix is an n × n nonsingular matrix A satisfying ASA∗ = S. Such matrices share many linear algebraic properties with the usual unitary matrices; see, for example, [1, 6, 2, 3, 5]. These papers consider a generalized polar decomposition of complex matrices based on a fixed invertible matrix S. In particular, they study the problem of determining whether a matrix A can be decomposed into a product ZY of two matrices Z and Y where SZ^{-1} = Z^T S and SY = Y^T S. With regard to this problem, the following cases have been examined: S symmetric, S skew-symmetric, S a real involution, S a real skew involution, and S such that S^{-T}S is normal or non-derogatory.

Observe that (A, S) ↦ ASA∗ defines an action of the group GL_n(C) of n × n nonsingular matrices on the set M_n(C). Hence, the stabilizer

    U_S := {A ∈ GL_n(C) | ASA∗ = S}

of S ∈ M_n(C) in GL_n(C) is a subgroup of GL_n(C). Moreover, U_S is a Lie subgroup of GL_n(C); this follows immediately from the fact that U_S is the preimage of 0 under the map A ↦ ASA∗ − S, which is smooth since it involves only conjugation and the group operations. Note that if S is the zero matrix, then U_S is GL_n(C), while if S is the identity matrix, then U_S is the classical unitary group.

In the next section, the Lie algebra u_S associated to U_S is computed. A formula for the real dimension of these Lie algebras is derived in Section 3. This formula serves as a criterion for distinguishing the algebras u_S from one another, and it yields a classification, albeit incomplete, of the Lie groups U_S: whenever two Lie algebras u_S and u_{S′} have different real dimensions, the corresponding Lie groups U_S and U_{S′} are distinct. Moreover, the structure constants of u_S when S is invertible and unitary are obtained in Section 4.

2. Matrix Lie groups and their Lie algebras

The Lie algebra g of a Lie group G is by definition the algebra of left-invariant vector fields on G with the usual bracket of vector fields. For a matrix Lie group G, this simplifies to the collection of all matrices A such that e^{tA} ∈ G for all t ∈ R; see, for example, [4]. The Lie algebra u_S of U_S has a form reminiscent of that of U_S, as stated in the following proposition.

Proposition 2.1. Let S ∈ M_n(C). Then the Lie algebra u_S of U_S is

    u_S = {A ∈ M_n(C) | SA∗ = −AS},

where the bracket is the usual commutator of matrices.

To see this, let A ∈ u_S. Then
    S(e^{tA})∗ = S Σ_{n=0}^∞ (tA∗)^n / n! = Σ_{n=0}^∞ ((−1)^n (tA)^n / n!) S = e^{−tA} S = (e^{tA})^{-1} S,

since SA∗ = −AS gives S(A∗)^n = (−1)^n A^n S for all n.
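Parenthetically, this computation lends itself to a numerical spot check. The sketch below (the matrices S and X are hypothetical choices, not from the paper) builds A = SX with S Hermitian and X anti-Hermitian, so that SA∗ = −AS, and verifies that e^{tA} is S-unitary for several real t.

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via a Taylor series with scaling and squaring."""
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M), 1e-16)))) + 1)
    E = np.eye(M.shape[0], dtype=complex)
    T = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        T = T @ (M / 2 ** s) / k
        E = E + T
    for _ in range(s):
        E = E @ E  # undo the scaling: exp(M) = exp(M / 2^s)^(2^s)
    return E

# Hypothetical example: S Hermitian, X anti-Hermitian, A = S X.
# Then S A* = S X* S = -S X S = -A S, so A lies in u_S.
S = np.diag([1.0, -1.0, 1.0]).astype(complex)
X = np.array([[1j, 2 + 1j, 0], [0, -3j, 1], [1, 0, 2j]])
X = (X - X.conj().T) / 2  # force X to be anti-Hermitian
A = S @ X

assert np.allclose(S @ A.conj().T, -A @ S)
for t in (0.4, -1.3, 2.5):
    U = expm(t * A)
    assert np.allclose(U @ S @ U.conj().T, S)  # exp(tA) is S-unitary
```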
Thus, e^{tA} ∈ U_S for any t ∈ R. Conversely, let A ∈ M_n(C) be such that e^{tA} ∈ U_S for all t ∈ R, i.e.,

    S e^{tA∗} = e^{−tA} S.

Differentiating with respect to t and evaluating at t = 0 gives SA∗ = −AS. Thus, A ∈ u_S.

The next theorem is a well-known result in the theory of Lie groups. It answers the natural question of how two Lie groups having isomorphic Lie algebras are related. See [7].

Theorem 2.2. Let G_1 and G_2 be connected Lie groups with isomorphic Lie algebras. If G_1 is simply connected, then there is a Lie group surjection G_1 −→ G_2.

As a matter of fact, more is true: the surjection asserted in Theorem 2.2 is a covering map. In the language of covering spaces, G_1 is the universal cover of G_2. An immediate corollary of the above theorem follows by universality.

Corollary 2.3. Under the assumptions of Theorem 2.2, if, in addition, G_2 is simply connected, then G_1 and G_2 are isomorphic as Lie groups.

By an isomorphism of Lie groups we mean a group isomorphism that is smooth with smooth inverse. The corollary above says that simply connected Lie groups are completely classified by their Lie algebras. In particular, two Lie groups with non-isomorphic Lie algebras are never isomorphic, regardless of whether they are simply connected. In any case, the groups U_S are not simply connected. Consider the following short exact sequence of Lie groups

    1 −→ SU_S −→ U_S −→ S^1 −→ 1,

where S^1 = {z ∈ C : |z| = 1}, the second and third maps are the inclusion map ι and the determinant map det, respectively, and SU_S is the subgroup of U_S consisting of elements of determinant 1. This sequence, being a sequence of Lie groups,
implies that the middle two arrows constitute a fibration of spaces. The long exact sequence in homotopy then gives the following sequence of groups

    · · · −→ π_1(SU_S) −→(ι∗) π_1(U_S) −→(det∗) π_1(S^1) −→ π_0(SU_S) −→ · · · ,

where π_1(SU_S), π_1(U_S) and π_1(S^1) are the fundamental groups of SU_S, U_S and S^1, respectively, and ι∗ and det∗ are the maps induced by ι and det. Now π_1(S^1) = Z, and the image of det∗ contains the index-n subgroup nZ of Z: the composite of det with the map λ ↦ λI_n is λ ↦ λ^n, which induces multiplication by n on π_1(S^1). Hence, π_1(U_S) is non-trivial, and the proposition below follows.

Proposition 2.4. For S ∈ M_n(C), the group U_S is not simply connected.

It is interesting to study the topology of the Lie groups U_S. In particular, the fundamental group of U_S is unknown at the moment. In case SU_S is connected, one may attack this problem by determining the fundamental group of SU_S. Note that the map det admits a section: consider, for example, the map γ : S^1 −→ U_S, λ ↦ λI_n. Thus, the sequence of Lie groups given in the proof of Proposition 2.4 splits. This splitting induces a splitting of the corresponding sequence of homotopy groups. Hence, π_1(U_S) is the semidirect product π_1(SU_S) ⋊ Z.

The following is a well-known result in matrix analysis stated in Lie theoretic terms.

Proposition 2.5. Let P ∈ GL_n(C) be such that P∗ = P^{-1}. Let S, T ∈ M_n(C) with S = P^{-1}TP. Then the map from U_S to U_T given by A ↦ PAP^{-1} is an isomorphism of Lie groups. Moreover, the corresponding Lie algebras u_S and u_T are isomorphic.

3. Real dimension of the Lie algebras u_S

In this section, we study the structure of the Lie algebra u_S associated to the Lie group U_S, where S ∈ M_n(C). In particular, we derive a formula for the real dimension of u_S for a certain class of matrices. Throughout,
the standard notation U_n denotes the group U_{I_n}, where I_n is the n × n identity matrix. Similarly, u_n denotes u_{I_n}. The dimension of u_n is n², and the following set B_{I_n} of matrices is a basis for u_n:

    {√−1 E_ii | i = 1, . . . , n} ∪ {E_ij − E_ji, √−1 (E_ij + E_ji) | 1 ≤ i < j ≤ n}.

Here, E_ij denotes the matrix whose entries are all zero except for a 1 in position (i, j).

Proposition 3.1. Let S ∈ GL_n(C). Suppose S∗ = aS + bS^{-1} for some a, b ∈ C with b ≠ 0. Then u_S = u_{S^{-1}}.

Proof. Let A ∈ u_S. Then SA∗ = −AS. Taking the conjugate transpose, we obtain AS∗ = −S∗A∗. Hence, aAS + bAS^{-1} = −aSA∗ − bS^{-1}A∗ = aAS − bS^{-1}A∗. So, bAS^{-1} = −bS^{-1}A∗. Since b ≠ 0, we get S^{-1}A∗ = −AS^{-1}, i.e., A ∈ u_{S^{-1}}. Similarly, we get u_{S^{-1}} ⊆ u_S.

The property u_S = u_{S^{-1}} is used later in obtaining a formula for the real dimension of the Lie algebra u_S. The equality u_S = u_{S^{-1}} does not always hold, as the following propositions illustrate.

Proposition 3.2. Let S ∈ GL_n(C) and ε ∈ C with |ε| = 1. Let εSu_n = {εSX | X ∈ u_n}. Then S∗ = ε²S if and only if u_S = εSu_n.

Proof. Suppose S∗ = ε²S. Let A ∈ u_S. Then SA∗ = −AS. Write A = εS(ε^{-1}S^{-1}A). We have ε^{-1}S^{-1}A ∈ u_n since

    (ε^{-1}S^{-1}A)∗ = ε A∗(S∗)^{-1} = ε A∗(ε²S)^{-1} = ε^{-1} A∗ S^{-1} = −ε^{-1}S^{-1}A,

where the last step uses A∗S^{-1} = −S^{-1}A, a consequence of SA∗ = −AS. Suppose now that A ∈ u_n. Then A∗ = −A. It follows that

    S(εSA)∗ = ε^{-1} S A∗ S∗ = −ε^{-1} S A S∗ = −ε^{-1} S A (ε²S) = −εSAS = −(εSA)S.

So, εSA ∈ u_S. On the other hand, suppose that u_S = εSu_n. Note that √−1 I_n ∈ u_S for all S ∈ GL_n(C). Writing √−1 I_n = εS(√−1 ε^{-1}S^{-1}), we have √−1 ε^{-1}S^{-1} ∈ u_n. This implies that S∗ = ε²S.

In particular, for S ∈ GL_n(C), S is Hermitian if and only if u_S = Su_n, and S is skew-Hermitian if and only if u_S = √−1 Su_n.

Proposition 3.3. Let S ∈ GL_n(C) be such that S∗ = ε²S for some ε ∈ C with |ε| = 1. Then u_S = u_{S^{-1}} if and only if S² commutes with every element of u_n.
Proof. By Proposition 3.2, u_S = εSu_n and u_{S^{-1}} = ε^{-1}S^{-1}u_n. Suppose u_S = u_{S^{-1}}. Then ε²S²u_n = u_n. Let A ∈ u_n. Then ε²S²A ∈ u_n, so

    −ε²S²A = (ε²S²A)∗ = ε^{-2} A∗(S∗)² = −ε^{-2} A (ε²S)² = −ε²AS²,

and hence S²A = AS². Conversely, suppose S² commutes with every element of u_n. Let A ∈ u_S. There exists B ∈ u_n such that A = εSB. Hence, A = ε^{-1}S^{-1}(ε²S²B). We show that ε²S²B ∈ u_n. Indeed,

    (ε²S²B)∗ = ε^{-2} B∗(S∗)² = −ε²BS² = −ε²S²B.

Then A ∈ u_{S^{-1}}, implying u_S ⊆ u_{S^{-1}}. Similarly, u_{S^{-1}} ⊆ u_S.

Proposition 3.4. Let S ∈ M_n(C) and write S² = [s_ij]. Then S² commutes with every element of u_n if and only if S² = s_11 I_n.

Proof. We first show the forward implication. For i = 1, . . . , n, we have √−1 E_ii ∈ u_n, so that S²E_ii = E_ii S². It follows that s_ij = 0 whenever i ≠ j. Moreover, for all 1 ≤ i, j ≤ n with i ≠ j, we have E_ij − E_ji ∈ u_n. Then S²(E_ij − E_ji) = (E_ij − E_ji)S², which implies s_ii E_ij − s_jj E_ji = s_jj E_ij − s_ii E_ji. Comparing the coefficients of E_ij, we see that s_11 = · · · = s_nn. For the other direction, if S² = s_11 I_n, then S² commutes with every element of the basis B_{I_n} of u_n, hence with every element of u_n.

From Propositions 3.3 and 3.4, the following result follows.

Corollary 3.5. Let S ∈ GL_n(C) be such that S∗ = ε²S for some ε ∈ C with |ε| = 1, and write S² = [s_ij]. Then u_S = u_{S^{-1}} if and only if S² = s_11 I_n.

For S ∈ M_n(C), consider the two auxiliary subspaces H_S and K_S of u_S defined as follows:

    H_S := {A ∈ u_S | A∗ = A},    K_S := {A ∈ u_S | A∗ = −A}.

Define the sum H_S ⊕ K_S of the real vector spaces H_S and K_S as follows:

    H_S ⊕ K_S := {αA + βB | α, β ∈ R, A ∈ H_S, B ∈ K_S}.

Lemma 3.6. For S ∈ GL_n(C), we have
1. H_S = H_{S^{-1}},
2. K_S = K_{S^{-1}},
3. H_S ⊕ K_S ⊆ u_S.
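Lemma 3.6(3) can be illustrated concretely. In the small numpy sketch below (with a hypothetical Hermitian S, not an example from the paper), A1 is Hermitian and anticommutes with S, A2 is skew-Hermitian and commutes with S, and their real linear combinations all satisfy the defining condition of u_S.

```python
import numpy as np

S = np.diag([1.0, -1.0]).astype(complex)  # hypothetical Hermitian S

def in_uS(A, S):
    """Membership test for u_S = {A : S A* = -A S}."""
    return np.allclose(S @ A.conj().T, -A @ S)

A1 = np.array([[0, 1], [1, 0]], dtype=complex)  # Hermitian, anticommutes with S: A1 in H_S
A2 = np.diag([1j, -2j]).astype(complex)         # skew-Hermitian, commutes with S: A2 in K_S

assert np.allclose(A1.conj().T, A1) and in_uS(A1, S)
assert np.allclose(A2.conj().T, -A2) and in_uS(A2, S)

# real combinations stay inside u_S, i.e. H_S + K_S is contained in u_S
for alpha, beta in [(1.0, 0.0), (0.5, -2.0), (3.0, 7.0)]:
    assert in_uS(alpha * A1 + beta * A2, S)
```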
For S ∈ M_n(C), let

    C(S) := {A ∈ M_n(C) | AS = SA}.

Define

    H_S^C := C(S) ∩ {A ∈ M_n(C) | A∗ = A}   and   K_S^C := C(S) ∩ {A ∈ M_n(C) | A∗ = −A}.

Lemma 3.7. Let S ∈ M_n(C). Then K_S^C = √−1 H_S^C and √−1 K_S^C = H_S^C.

Lemma 3.8. Let S ∈ M_n(C). Then K_S = K_S^C.
Proof. Let A ∈ K_S. Then A∗ = −A and SA∗ = −AS. It follows that AS = SA, so A ∈ C(S) and hence A ∈ K_S^C. Conversely, let A ∈ K_S^C. Then AS = SA and A∗ = −A, so SA∗ = −SA = −AS. Therefore, A ∈ K_S.

Let S ∈ M_n(C). Denote by C[S, S^{-1}] the set of polynomials in S and S^{-1} with coefficients in C. Consider the similarly defined sum H_S^C ⊕ K_S^C. The following holds.

Lemma 3.9. Let S ∈ M_n(C) be such that S∗ ∈ C[S, S^{-1}]. Then C(S) = H_S^C ⊕ K_S^C.

Proof. Let S∗ ∈ C[S, S^{-1}] and let A ∈ C(S). Then A commutes with S∗. Taking the conjugate transpose of AS∗ = S∗A, we have A∗S = SA∗. Write A = A_1 + A_2 where A_1 = (A + A∗)/2 and A_2 = (A − A∗)/2. Observe that A_1 is Hermitian, A_2 is skew-Hermitian, and both commute with S, so A_1 ∈ H_S^C and A_2 ∈ K_S^C. On the other hand, if α, β ∈ R, A ∈ H_S^C and B ∈ K_S^C, then S(αA + βB) = (αA + βB)S. So, C(S) = H_S^C ⊕ K_S^C.

Corollary 3.10. Let S ∈ M_n(C). Then dim_R K_S = dim_R K_S^C = dim_R H_S^C. If, in addition, S∗ ∈ C[S, S^{-1}], then 2 dim_R K_S = dim_R C(S).

Define

    C̃(S) := {A ∈ M_n(C) | AS = −SA}

and

    H̃_S^C := {A ∈ C̃(S) | A∗ = A},   K̃_S^C := {A ∈ C̃(S) | A∗ = −A}.
The following are immediate.

Lemma 3.11. Let S ∈ M_n(C). Then
1. H_S = H̃_S^C,
2. √−1 H̃_S^C = K̃_S^C,
3. √−1 K̃_S^C = H̃_S^C, and
4. if S∗ ∈ C[S, S^{-1}], then C̃(S) = H̃_S^C ⊕ K̃_S^C.

Analogous to the previous corollary, we have the following result.

Corollary 3.12. Let S ∈ M_n(C). Then dim_R H_S = dim_R H̃_S^C = dim_R K̃_S^C. If, in addition, S∗ ∈ C[S, S^{-1}], then 2 dim_R H_S = dim_R C̃(S).

Lemma 3.13. Let S ∈ GL_n(C). Then u_S ∩ u_{S^{-1}} = H_S ⊕ K_S.

Proof. Let A ∈ u_S ∩ u_{S^{-1}}. Let A_1 = (A + A∗)/2 and A_2 = (A − A∗)/2. Since A ∈ u_{S^{-1}}, we have A∗ ∈ u_S (multiplying S^{-1}A∗ = −AS^{-1} by S on both sides gives SA = −A∗S). Thus, A_1 ∈ H_S and A_2 ∈ K_S. Hence, u_S ∩ u_{S^{-1}} ⊆ H_S ⊕ K_S. On the other hand, Lemma 3.6 guarantees that H_S ⊕ K_S = H_{S^{-1}} ⊕ K_{S^{-1}} ⊆ u_{S^{-1}} and H_S ⊕ K_S ⊆ u_S. Hence, we must have H_S ⊕ K_S ⊆ u_S ∩ u_{S^{-1}}.

Lemma 3.13 and Proposition 3.1 immediately imply the following corollary.

Corollary 3.14. Let S ∈ GL_n(C) be such that S∗ = aS + bS^{-1} for some a, b ∈ C with b ≠ 0. Then u_S = H_S ⊕ K_S.

Now, let S ∈ GL_n(C) be such that S∗ = aS + bS^{-1} for some a, b ∈ C with b ≠ 0. By Corollary 3.14, together with Corollaries 3.10 and 3.12, we have

    dim_R u_S = (1/2) dim_R C(S) + (1/2) dim_R C̃(S).

Since S∗ ∈ C[S, S^{-1}] commutes with S, the matrix S is normal and hence diagonalizable. That is, there exists an invertible P ∈ M_n(C) such that D := PSP^{-1} = diag(λ_1, . . . , λ_n), where Λ = Λ(S) = {λ_1, . . . , λ_n} is the spectrum of S. Then

    C(S) = P^{-1}C(D)P   and   C̃(S) = P^{-1}C̃(D)P.

Define the function mult_S : C −→ Z as follows. If z ∈ Λ, define mult_S(z) as the algebraic multiplicity of z as an eigenvalue of S; if z ∉ Λ, put mult_S(z) = 0. Then
1. dim_C C(S) = Σ_{λ∈Λ} (mult_S(λ))²;
2. dim_C C̃(S) = Σ_{λ∈Λ} mult_S(λ) mult_S(−λ).

Note that dim_C C(S) = dim_C C(D) and dim_C C̃(S) = dim_C C̃(D). To show (1), let A = [a_ij] ∈ M_n(C) be such that AD = DA. This gives the conditions

C1. a_ii is arbitrary;
C2. λ_i a_ij = a_ij λ_j, which holds if and only if λ_i = λ_j or a_ij = 0.

Thus,

    dim_C C(D) = Σ_{i,j=1}^n δ(λ_i, λ_j) = Σ_{λ∈Λ} (mult_S(λ))²,

where δ is the Kronecker delta function. To show (2), let A = [a_ij] ∈ M_n(C) be such that AD = −DA. Then

C̃1. a_ii = 0 (since λ_i ≠ 0);
C̃2. a_ij (λ_i + λ_j) = 0, which holds if and only if λ_i = −λ_j or a_ij = 0.

So,

    dim_C C̃(D) = Σ_{i,j=1}^n δ(λ_i, −λ_j) = Σ_{λ∈Λ} mult_S(λ) mult_S(−λ).

Hence, we have the following theorem.

Theorem 3.15. Let S ∈ GL_n(C) be such that S∗ = aS + bS^{-1} for some a, b ∈ C with b ≠ 0. Let Λ be the set of eigenvalues of S. Then

    dim_R u_S = Σ_{λ∈Λ} mult_S(λ)² + Σ_{λ∈Λ} mult_S(λ) mult_S(−λ).
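Theorem 3.15 can be spot-checked numerically. The sketch below (with a hypothetical unitary S, so that S∗ = S^{-1} and the hypothesis holds with a = 0, b = 1) computes dim_R u_S as the nullity of the real-linear map A ↦ SA∗ + AS and compares it with the multiplicity formula.

```python
import numpy as np
from collections import Counter

S = np.diag([1, 1, -1, 1j]).astype(complex)  # hypothetical unitary S

def real_dim_uS(S):
    """Real dimension of u_S = {A : S A* + A S = 0}, via the nullity of
    the vectorized real-linear map A -> S A* + A S."""
    n = S.shape[0]
    cols = []
    for k in range(n * n):
        for part in (1.0, 1j):  # real and imaginary basis directions
            A = np.zeros((n, n), dtype=complex)
            A[k // n, k % n] = part
            M = S @ A.conj().T + A @ S
            cols.append(np.concatenate([M.real.ravel(), M.imag.ravel()]))
    T = np.array(cols).T  # a 2n^2 x 2n^2 real matrix
    return 2 * n * n - np.linalg.matrix_rank(T, tol=1e-9)

# multiplicity formula of Theorem 3.15 for the eigenvalues 1, 1, -1, i
mult = Counter([1, 1, -1, 1j])
formula = sum(m * m for m in mult.values()) \
        + sum(m * mult.get(-lam, 0) for lam, m in mult.items())
assert real_dim_uS(S) == formula == 10
```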
The dimension formula given above holds, more generally, for any diagonalizable S ∈ GL_n(C) with u_S = u_{S^{-1}}. Observe that, grouping each eigenvalue λ with −λ, the terms on the right-hand side of the above expression can be written as (mult_S(λ) + mult_S(−λ))².

Corollary 3.16. Let S ∈ GL_n(C) be such that S∗ = aS + bS^{-1} for some a, b ∈ C with b ≠ 0. Then dim_R u_S ≤ n².

Proof. If Λ is the set of eigenvalues of S, then

    n = Σ_{λ∈Λ} mult_S(λ).

It follows that

    n² = (Σ_{λ∈Λ} mult_S(λ))² = Σ_{λ∈Λ} (mult_S(λ))² + Σ_{λ≠λ′} mult_S(λ) mult_S(λ′) ≥ dim_R u_S,

where the last inequality holds by Theorem 3.15, since every nonzero term mult_S(λ) mult_S(−λ) appears in the sum over λ ≠ λ′.
It is easy to see that if S either has only one eigenvalue or has exactly two eigenvalues, one the negative of the other, then Theorem 3.15 implies that the maximum dimension n² is attained. The next result tells us that this condition is also necessary.

Corollary 3.17. Let S ∈ GL_n(C) be such that S∗ = aS + bS^{-1} for some a, b ∈ C with b ≠ 0. Then dim_R u_S = n² if and only if either S has exactly one eigenvalue or S has exactly two eigenvalues, one the negative of the other.

Proof. Suppose dim_R u_S = n². Let Λ = {λ_1, . . . , λ_k} be the spectrum of S. From n = Σ_{i=1}^k mult_S(λ_i), we have

    n² = Σ_{i=1}^k mult_S(λ_i)² + Σ_{i≠j} mult_S(λ_i) mult_S(λ_j).

Using the formula in Theorem 3.15, we obtain

    Σ_{i≠j} mult_S(λ_i) mult_S(λ_j) = Σ_i mult_S(λ_i) mult_S(−λ_i).

In case there is no λ_i for which −λ_i ∈ Λ, the right-hand side vanishes. Since mult_S(λ_i) ≥ 1 for all i, the only way the left-hand side vanishes is for the sum to be vacuous, i.e., for there to be no two distinct indices i and j contributing to the sum. Thus, Λ has only one element. Now, suppose there are some λ_i whose negatives belong to Λ. After removing the terms that vanish, i.e., those for which −λ_i ∉ Λ, we see that the right-hand side is a sub-sum of the left-hand side. Fix λ_i ∈ Λ. From the above equation, we get

    mult_S(λ_i) mult_S(−λ_i) = Σ_{j≠i} mult_S(λ_i) mult_S(λ_j).
Since the left-hand side is a term of the right-hand side and all the terms of the right-hand side are nonnegative, the only index j ≠ i that contributes is the one with λ_j = −λ_i. Since j ranges over all eigenvalues of S distinct from λ_i, we see that S has only two eigenvalues, namely λ_i and −λ_i.

Proposition 3.18. Let S ∈ GL_n(C) be normal. Then there exists a unitary matrix S̃ ∈ GL_n(C) such that u_S ≅ u_{S̃}.

Proof. Since S is normal, there exists a unitary matrix P such that P^{-1}SP = D := diag(λ_1, . . . , λ_n), where λ_1, . . . , λ_n are the eigenvalues of S. Let Q = diag(|λ_1|^{-1/2}, . . . , |λ_n|^{-1/2}). Then S̃ = QDQ is a unitary matrix. Let A ∈ u_S. Then SA∗ = −AS. It follows that

    D(P^{-1}AP)∗ = −(P^{-1}AP)D

and

    QDQ · Q^{-1}(P^{-1}AP)∗Q = −(QP^{-1}APQ^{-1}) · QDQ.

Since Q is a real symmetric matrix, we have

    S̃(QP^{-1}APQ^{-1})∗ = −(QP^{-1}APQ^{-1})S̃.

This implies that

    QP^{-1} u_S PQ^{-1} ⊆ u_{S̃}.

Similarly, we can show that u_{S̃} ⊆ QP^{-1} u_S PQ^{-1}. Thus, u_S ≅ u_{S̃}.
Note that S̃ = I_n if and only if the eigenvalues λ_1, . . . , λ_n of S are real and positive. In view of the unitary reduction, the dimension formula can be written using classes of complex numbers up to a real scalar. Explicitly, consider the quotient group C∗/R∗, whose elements are written as [λ]. Let

    Λ̃(S) := {[λ] ∈ C∗/R∗ | λ is an eigenvalue of S}.

For [λ] ∈ Λ̃(S), define mult_S([λ]) := Σ_{λ′∈[λ]} mult_S(λ′). The succeeding theorem follows from Theorem 3.15 and Proposition 3.18.

Theorem 3.19. Let S ∈ GL_n(C) be normal. Then

    dim_R u_S = Σ_{[λ]∈Λ̃(S)} (mult_S([λ]))².
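The class-based formula of Theorem 3.19 can also be checked on a normal matrix that is not unitary. In the hypothetical example below (not from the paper), S = diag(2, −3, 5i): the class [2] contains both 2 and −3 (their ratio is real), so it has multiplicity 2, and [5i] has multiplicity 1, giving 2² + 1² = 5.

```python
import numpy as np

S = np.diag([2.0, -3.0, 5j]).astype(complex)  # normal, invertible, not unitary

def real_dim_uS(S):
    """Real dimension of u_S = {A : S A* + A S = 0}, via the nullity of
    the vectorized real-linear map A -> S A* + A S."""
    n = S.shape[0]
    cols = []
    for k in range(n * n):
        for part in (1.0, 1j):
            A = np.zeros((n, n), dtype=complex)
            A[k // n, k % n] = part
            M = S @ A.conj().T + A @ S
            cols.append(np.concatenate([M.real.ravel(), M.imag.ravel()]))
    T = np.array(cols).T
    return 2 * n * n - np.linalg.matrix_rank(T, tol=1e-9)

# classes in C*/R*: [2] = {2, -3} with multiplicity 2, [5j] with multiplicity 1
assert real_dim_uS(S) == 2**2 + 1**2 == 5
```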
4. Structure of the Lie algebra u_S

Let S ∈ GL_n(C) be unitary. Then S is unitarily similar to D = diag(λ_1, . . . , λ_n), where λ_1, . . . , λ_n are the eigenvalues of S. By Proposition 3.18, u_S ≅ u_D. By the previous arguments,

    u_D = H̃_D^C ⊕ K_D^C,

where

    H̃_D^C = {A ∈ M_n(C) | A∗ = A and DA = −AD}

and

    K_D^C = {A ∈ M_n(C) | A∗ = −A and DA = AD}.

Then

    {E_ij + E_ji, √−1 (E_ij − E_ji) | 1 ≤ i < j ≤ n and λ_i = −λ_j}

is a basis of H̃_D^C over R. Likewise,

    {√−1 E_ii | 1 ≤ i ≤ n} ∪ {√−1 (E_ij + E_ji), E_ij − E_ji | 1 ≤ i < j ≤ n and λ_i = λ_j}

is a basis of K_D^C over R. The union of these two bases gives a basis of u_D over R.

For [λ] ∈ Λ̃(D), define

    u_D^{[λ]} = {A = [a_ij] ∈ u_D | λ_i ∉ [λ] or λ_j ∉ [λ] implies a_ij = 0}.
For [λ], [λ′] ∈ Λ̃(D) with [λ] ≠ [λ′], if A ∈ u_D^{[λ]} and B ∈ u_D^{[λ′]}, then the Lie bracket [A, B] of A and B is 0 because AB = BA = 0. Hence, u_D is the direct sum of the Lie subalgebras u_D^{[λ]}, i.e.,

    u_D ≅ ⊕_{[λ]∈Λ̃(D)} u_D^{[λ]}.

For our purpose, it is enough to consider u_D^{[λ]} for [λ] ∈ Λ̃(D). For 1 ≤ i, j ≤ mult_S([λ]), if i, j ≤ mult_S(λ) or i, j > mult_S(λ), put Sgn(i, j) = 1; otherwise, put Sgn(i, j) = −1. Consider the Lie algebra u^{[λ]} over R with the basis

    {J_i = J_i^{[λ]}, K_ij = K_ij^{[λ]}, L_ij = L_ij^{[λ]} | 1 ≤ i, j ≤ mult_S([λ])}

satisfying the following properties for pairwise distinct i, j and k:

• K_ii = L_ii = 0, K_ij = −K_ji, L_ij = L_ji;
• [J_i, J_j] = 0, [J_i, K_ij] = L_ij, [J_i, L_ij] = K_ij;
• [K_ij, K_ik] = K_jk if Sgn(i, j) = Sgn(i, k) = −1, and [K_ij, K_ik] = −K_jk otherwise;
• [L_ij, L_ik] = K_jk if Sgn(i, j) = Sgn(i, k) = 1, and [L_ij, L_ik] = −K_jk otherwise;
• [K_ij, L_ij] = 2(J_i − J_j);
• [K_ij, L_ik] = L_jk if Sgn(i, j) = −1 and Sgn(i, k) = 1, and [K_ij, L_ik] = −L_jk otherwise;
• apart from the brackets obtained from the above by reversing the order of the basis elements, the Lie bracket of any other pair of basis elements is 0.

Without loss of generality, we assume that λ_i = λ for i ≤ mult_S(λ) and λ_i = −λ for i > mult_S(λ). Then u_D^{[λ]} is isomorphic to u^{[λ]}, where the isomorphism is given by the following correspondence:

    √−1 E_ii ↦ J_i^{[λ]},
    Sgn(i, j)(E_ij − E_ji) ↦ K_ij^{[λ]},
    −Sgn(i, j)(E_ij + E_ji) ↦ L_ij^{[λ]}.

Moreover, u^{[λ]} is isomorphic to the Lie algebra u(p, q) of the indefinite unitary group U(p, q), where p = mult_S(λ) and q = mult_S(−λ). Therefore, the following holds.

Theorem 4.1. Let S ∈ GL_n(C) be unitary. Then

    u_S ≅ ⊕_{[λ]∈Λ̃(S)} u^{[λ]} ≅ ⊕_{[λ]∈Λ̃(S)} u(mult_S(λ), mult_S(−λ)).
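The block decomposition behind Theorem 4.1 can be glimpsed numerically. In the hypothetical example below (not from the paper), D = diag(1, −1, √−1) has classes [1] = {1, −1} and [√−1], so u_D should split as u(1, 1) ⊕ u(1); an element supported on the [1]-block and one supported on the [√−1]-block multiply to zero, so their bracket vanishes.

```python
import numpy as np

D = np.diag([1, -1, 1j]).astype(complex)  # unitary, classes [1] = {1,-1} and [1j]

def in_uS(A, S):
    """Membership test for u_S = {A : S A* = -A S}."""
    return np.allclose(S @ A.conj().T, -A @ S)

# supported on the [1]-block: Hermitian and anticommuting with D there
A = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
# supported on the [1j]-block: an imaginary scalar in that slot
B = np.diag([0, 0, 1j]).astype(complex)

assert in_uS(A, D) and in_uS(B, D)
# the two class-blocks annihilate each other, so [A, B] = AB - BA = 0
assert np.allclose(A @ B, np.zeros((3, 3))) and np.allclose(B @ A, np.zeros((3, 3)))
```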
5. Lie algebra of permutation matrices

We look at the structure of the Lie algebras arising from certain permutation matrices. For n ∈ N, let S_n denote the set of all permutation matrices of size n. If S is a permutation matrix satisfying S² = I_n (for instance, the matrix of a transposition), then S is symmetric, and Proposition 3.2 (with ε = 1) implies that u_S has the same dimension as u_n.

Theorem 3.15 provides a parity criterion for the dimension of u_S.

Corollary 5.1. Let S ∈ GL_n(C) be such that S∗ = aS + bS^{-1} for some a, b ∈ C with b ≠ 0. Then dim_R u_S ≡ n (mod 2).

This follows by reducing the formula in Theorem 3.15 modulo 2 and noting that an integer and its square have the same parity. This result implies that a permutation matrix S decomposes into an even number of odd cycles if and only if dim_R u_S is even.

Denote by σ_n the simple cyclic permutation matrix given in block form by

    σ_n = [ 0       1 ]
          [ I_{n-1} 0 ]

(the first row is (0, . . . , 0, 1), and the entries in positions (i + 1, i) are 1 for i = 1, . . . , n − 1). For brevity, write σ = σ_n. For x ∈ C, put v(x) = (x^n, x^{n-1}, . . . , x)^T. Let ω be a primitive n-th root of unity. Then σv(ω^k) = ω^k v(ω^k). In other words, ω^k is an eigenvalue of σ and v(ω^k) is a corresponding eigenvector for k = 1, . . . , n. That is, the set Λ of eigenvalues of σ is {1, ω, . . . , ω^{n-1}}.
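The eigenvector claim is easy to verify directly; the following sketch builds σ_6 and checks σ v(ω^k) = ω^k v(ω^k) for every k.

```python
import numpy as np

n = 6
sigma = np.zeros((n, n))
sigma[0, n - 1] = 1            # first row (0, ..., 0, 1)
sigma[1:, :-1] = np.eye(n - 1)  # I_{n-1} in the lower-left block

omega = np.exp(2j * np.pi / n)  # a primitive n-th root of unity
for k in range(1, n + 1):
    # v(x) = (x^n, x^{n-1}, ..., x)^T evaluated at x = omega^k
    v = np.array([(omega ** k) ** (n - m) for m in range(n)])
    assert np.allclose(sigma @ v, omega ** k * v)
```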
Note that if n is odd, then −λ ∉ Λ for all λ ∈ Λ, while if n is even, then −λ ∈ Λ for all λ ∈ Λ. In other words, dim_R K_{σ_n} = n for all n ∈ N, and

    dim_R H_{σ_n} = dim_C C̃(σ_n) = 0 if n is odd, and n if n is even.
Theorem 5.2. For n ∈ N, we have

    dim_R u_{σ_n} = n if n is odd, and dim_R u_{σ_n} = 2n if n is even.

If S is a direct sum of simple cyclic permutation matrices, Theorem 5.2 implies the following result.

Theorem 5.3. Let S = ⊕_{i=1}^r σ_{n_i} where n_i ∈ N for i = 1, . . . , r. Then

    dim_R u_S = Σ_{i=1}^r dim_R u_{σ_{n_i}} + Σ_{i≠j} gcd(n_i, n_j) gcd(n_i n_j, 2).
Proof. For i = 1, . . . , r,

    Λ̃(σ_{n_i}) = {[exp(2π√−1 l/n_i)] | 0 ≤ l ≤ n_i − 1}.

It follows that

    Card(Λ̃(σ_{n_i})) = n_i if n_i is odd, and n_i/2 if n_i is even.

Moreover,

    Card(Λ̃(σ_{n_i}) ∩ Λ̃(σ_{n_j})) = gcd(n_i, n_j)/2 if n_i and n_j are both even, and gcd(n_i, n_j) otherwise.

Suppose n_i is even. Then

    mult_{σ_{n_i}}([λ]) = 2 if [λ] ∈ Λ̃(σ_{n_i}), and 0 otherwise.

On the other hand, if n_i is odd, we have

    mult_{σ_{n_i}}([λ]) = 1 if [λ] ∈ Λ̃(σ_{n_i}), and 0 otherwise.

In either case, mult_{σ_{n_i}}([λ]) = gcd(n_i, 2) for [λ] ∈ Λ̃(σ_{n_i}). Since Λ̃(S) = ∪_{i=1}^r Λ̃(σ_{n_i}) and mult_S([λ]) = Σ_{i=1}^r mult_{σ_{n_i}}([λ]), the general formula of Theorem 3.19 gives

    dim_R u_S = Σ_{[λ]∈Λ̃(S)} (mult_S([λ]))²
              = Σ_{[λ]∈Λ̃(S)} (Σ_{i=1}^r mult_{σ_{n_i}}([λ]))²
              = Σ_{i=1}^r Σ_{[λ]∈Λ̃(S)} (mult_{σ_{n_i}}([λ]))² + Σ_{i≠j} Σ_{[λ]∈Λ̃(S)} mult_{σ_{n_i}}([λ]) mult_{σ_{n_j}}([λ])
              = Σ_{i=1}^r dim_R u_{σ_{n_i}} + Σ_{i≠j} gcd(n_i, 2) gcd(n_j, 2) Card(Λ̃(σ_{n_i}) ∩ Λ̃(σ_{n_j}))
              = Σ_{i=1}^r dim_R u_{σ_{n_i}} + Σ_{i≠j} gcd(n_i, n_j) gcd(n_i n_j, 2).
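Theorem 5.3 can be spot-checked numerically; the sketch below (block sizes 2, 3, 4 chosen arbitrarily) compares the gcd formula with the dimension computed directly as a nullity.

```python
import numpy as np
from math import gcd

def sigma(n):
    """The simple cyclic permutation matrix sigma_n."""
    s = np.zeros((n, n), dtype=complex)
    s[0, n - 1] = 1
    s[1:, :-1] = np.eye(n - 1)
    return s

def real_dim_uS(S):
    """Real dimension of u_S = {A : S A* + A S = 0}, via nullity."""
    n = S.shape[0]
    cols = []
    for k in range(n * n):
        for part in (1.0, 1j):
            A = np.zeros((n, n), dtype=complex)
            A[k // n, k % n] = part
            M = S @ A.conj().T + A @ S
            cols.append(np.concatenate([M.real.ravel(), M.imag.ravel()]))
    T = np.array(cols).T
    return 2 * n * n - np.linalg.matrix_rank(T, tol=1e-9)

ns = [2, 3, 4]
S = np.zeros((sum(ns), sum(ns)), dtype=complex)
pos = 0
for n in ns:  # assemble the direct sum of cyclic blocks
    S[pos:pos + n, pos:pos + n] = sigma(n)
    pos += n

dim_sigma = lambda n: n if n % 2 else 2 * n  # Theorem 5.2
formula = sum(dim_sigma(n) for n in ns) + \
          sum(gcd(ns[i], ns[j]) * gcd(ns[i] * ns[j], 2)
              for i in range(len(ns)) for j in range(len(ns)) if i != j)
assert real_dim_uS(S) == formula == 31
```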
Theorem 5.2 implies that, for n ≥ 3, the dimension of the Lie algebra u_{σ_n}, where σ_n is a simple cyclic permutation matrix, differs from the dimension n² of u_S when S is a transposition. In particular, this tells us that the Lie algebras u_{σ_n} are not isomorphic to u_n for n ≥ 3. Hence, the Lie groups U_{σ_n} are not isomorphic to U_n.

Acknowledgement

The authors are grateful to the anonymous referees for giving comments and suggestions which improved the paper.

References

[1] M.N.M. Abara, D.I. Merino, and A.T. Paras. φS-Orthogonal matrices. Linear Algebra and its Applications, 432(11):2834–2846, 2010.
[2] D.Q. Granario, D.I. Merino, and A.T. Paras. The φS polar decomposition. Linear Algebra and its Applications, 438(1):609–620, 2013.

[3] D.Q. Granario, D.I. Merino, and A.T. Paras. The φS polar decomposition when the cosquare of S is normal. Linear Algebra and its Applications, 467:75–85, 2015.

[4] B. Hall. Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, volume 222 of Graduate Texts in Mathematics. Springer International Publishing, 2nd edition, 2015.

[5] R.J. De la Cruz and D.Q. Granario. The φS polar decomposition when the cosquare of S is nonderogatory. Electronic Journal of Linear Algebra, 31(1):754–764, 2016.

[6] R.J. De la Cruz, D.I. Merino, and A.T. Paras. The φS polar decomposition of matrices. Linear Algebra and its Applications, 434(1):4–13, 2011.

[7] F. Warner. Foundations of Differentiable Manifolds and Lie Groups, volume 94 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1983.