The Segal-Bargmann transform on classical matrix Lie groups

Alice Z. Chan 1
Department of Mathematics, University of California, San Diego, La Jolla, CA 92093, United States of America

Journal of Functional Analysis, www.elsevier.com/locate/jfa

Article info
Article history: Received 11 December 2018; Accepted 5 December 2019; Communicated by L. Gross.
Keywords: Segal-Bargmann transform; Heat kernel analysis on Lie groups

Abstract. We study the complex-time Segal-Bargmann transform B_{s,τ}^{K_N} on a compact-type Lie group K_N, where K_N is one of the following classical matrix Lie groups: the special orthogonal group SO(N, R), the special unitary group SU(N), or the compact symplectic group Sp(N). Our work complements and extends the results of Driver, Hall, and Kemp on the Segal-Bargmann transform for the unitary group U(N). We provide an effective method of computing the action of the Segal-Bargmann transform on trace polynomials, which comprise a subspace of smooth functions on K_N extending the polynomial functional calculus. Using these results, we show that as N → ∞, the finite-dimensional transform B_{s,τ}^{K_N} has a meaningful limit G_{s,τ}, which can be identified as an operator on the space of complex Laurent polynomials. © 2019 Published by Elsevier Inc.

Contents

1. Introduction . . . 2
 1.1. The classical Segal-Bargmann transform . . . 2
 1.2. Main definitions and theorems . . . 3
2. Background . . . 8
 2.1. Heat kernels on Lie groups . . . 8
 2.2. Notation and definitions . . . 9
3. The Segal-Bargmann transform on SO(N, R) . . . 11
 3.1. Magic formulas and derivative formulas . . . 11
 3.2. Intertwining formulas for Δ_{SO(N,R)} . . . 17
 3.3. Intertwining formulas for A_{s,τ}^{SO(N,C)} . . . 23
4. Limit theorems for the Segal-Bargmann transform on SO(N, R) . . . 30
 4.1. Concentration of measures . . . 30
 4.2. The free Segal-Bargmann transform for SO(N, R) . . . 35
5. The Segal-Bargmann transform on Sp(N) . . . 40
 5.1. Two realizations of the compact symplectic group . . . 40
 5.2. Magic formulas and derivative formulas . . . 43
 5.3. Intertwining formulas for Δ_{Sp(N)} . . . 47
 5.4. Intertwining formulas for A_{s,τ}^{Sp(N,C)} . . . 50
 5.5. Limit theorems for the Segal-Bargmann transform on Sp(N) . . . 53
 5.6. Extending the magic formulas and intertwining formulas to S̃p(N) . . . 54
6. The Segal-Bargmann transform on SU(N) . . . 56
Acknowledgments . . . 58
References . . . 59

1 E-mail address: [email protected]. Supported by NSF GRFP under Grant No. DGE-1650112.
https://doi.org/10.1016/j.jfa.2019.108430
0022-1236/© 2019 Published by Elsevier Inc.

1. Introduction

1.1. The classical Segal-Bargmann transform

In this paper, we consider a complex-time generalization of the classical Segal-Bargmann transform on the matrix Lie groups SO(N, R), SU(N), and Sp(N), and analyze its limit as N → ∞. We begin with a brief discussion of the classical Segal-Bargmann transform and its generalizations to Lie groups of compact type.

For t > 0, let ρ_t denote the heat kernel with variance t on R^d:

    ρ_t(x) = (2πt)^{−d/2} exp(−|x|²/2t).

This has an entire analytic continuation to C^d, given by

    (ρ_t)_C(z) = (2πt)^{−d/2} exp(−(z · z)/2t).

If f ∈ L¹_loc(R^d) is of sufficiently slow growth, we can define

    (B_t f)(z) := ∫_{R^d} (ρ_t)_C(z − y) f(y) dy.    (1.1)

The map f → B_t f is called the classical Segal-Bargmann transform, due to the work of the eponymous authors in [2,1,17,15,16]. If we let μ_t denote the heat kernel on C^d, now with variance t/2:

    μ_t(z) = (πt)^{−d} exp(−|z|²/t),    (1.2)


then the main result is that the map B_t : L²(R^d, ρ_t) → HL²(C^d, μ_t) is a unitary isomorphism, where HL²(C^d, μ_t) is the space of square-integrable holomorphic functions on C^d.

1.2. Main definitions and theorems

In [10], Hall showed that the Segal-Bargmann transform can be extended to compact Lie groups, and in [6], the transform was further extended by Driver to Lie groups of compact type. Later, the behavior of the transform on the unitary group U(N) as N → ∞ was analyzed independently by Cébron in [5] and Driver, Hall, and Kemp in [8]. The authors of the latter paper subsequently introduced a complex-time generalization of the Segal-Bargmann transform for all compact type Lie groups in [9]. We use this version of the transform (see Definition 1.1) in our analysis of the special orthogonal group SO(N, R), the special unitary group SU(N), and the compact symplectic group Sp(N). We now outline the main results of this paper, deferring a fuller discussion of the requisite background and definitions to Section 2.

Let K denote a Lie group of compact type with Lie algebra k. Then K possesses a left-invariant metric whose associated Laplacian Δ_K is bi-invariant (cf. (2.1)). For each s > 0, there is an associated heat kernel ρ_s^K ∈ C^∞(K, (0, ∞)) satisfying

    (e^{(s/2)Δ_K} f)(x) = ∫_K f(xk) ρ_s^K(k) dk = ∫_K f(k) ρ_s^K(xk^{−1}) dk    for all f ∈ L²(K).    (1.3)

Let K_C denote the complexification of K and C_+ = {τ = t + iu : t > 0, u ∈ R} denote the right half-plane. As shown in [9, Theorem 1.2], the heat kernel (ρ_s^K)_{s>0} possesses a unique entire analytic continuation ρ_C^K : C_+ × K_C → C such that ρ_C^K(s, x) = ρ_s^K(x) for all s > 0 and x ∈ K ⊆ K_C. Thus we can proceed as in the classical case and define the Segal-Bargmann transform for compact type Lie groups using a group convolution formula similar to (1.1) (see [6] for details).

Definition 1.1 (Complex-time Segal-Bargmann transform). Let s > 0 and τ = t + iθ ∈ D(s, s), and let ρ_s^K and ρ_C^K be as above. For f ∈ L²(K, ρ_s^K), define

    (B_{s,τ}^K f)(z) := ∫_K ρ_C^K(τ, zk^{−1}) f(k) dk    for z ∈ K_C.    (1.4)
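As a quick sanity check of the classical formula (1.1), and of the fact that B_t agrees with the analytic continuation of e^{(t/2)Δ} on polynomials, one can evaluate the integral numerically in d = 1. The sketch below is not from the paper; the test point, time, and quadrature grid are arbitrary choices. It checks that (B_t f)(z) = z² + t for f(y) = y².

```python
# Numerical check of the classical Segal-Bargmann transform (1.1) in d = 1.
# For f(y) = y^2 one has (B_t f)(z) = z^2 + t, since B_t f is the analytic
# continuation of e^{(t/2)Δ} f.  Minimal sketch; grid and test point are arbitrary.
import numpy as np

t = 0.7
z = 0.3 + 0.5j                                    # point in C at which we evaluate B_t f
y, dy = np.linspace(-30.0, 30.0, 200001, retstep=True)

f = y**2                                           # f(y) = y^2
kernel = (2*np.pi*t)**-0.5 * np.exp(-(z - y)**2 / (2*t))   # (rho_t)_C(z - y)
Btf = np.sum(kernel * f) * dy                      # Riemann sum for (1.1)

print(Btf, z**2 + t)                               # the two values agree to quadrature accuracy
```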


The complex-time Segal-Bargmann transform B_{s,τ}^K is an isometric isomorphism between certain L² and holomorphic L² spaces on K and K_C:

Theorem 1.2 (Driver, Hall, and Kemp, [9, Theorem 1.6]). Let K be a connected, compact-type Lie group, and let s > 0 and τ ∈ D(s, s). Let μ_{s,τ}^{K_C} denote the (time-rescaled) heat kernel on K_C defined by (2.4). Then B_{s,τ}^K f is holomorphic on K_C for each f ∈ L²(K, ρ_s^K). Moreover,

    B_{s,τ}^K : L²(K, ρ_s^K) → HL²(K_C, μ_{s,τ}^{K_C})    (1.5)

is a unitary isomorphism.

If we set K = R^d and s = τ > 0, we recover the main result for the classical Segal-Bargmann transform from Section 1.1.

Every compact Lie group K has a faithful representation. Hence, viewing K as a matrix group, we can extend B_{s,τ}^K to matrix-valued functions. Let M_N(R) denote the algebra of N × N real matrices with unit I_N, and let M_N(C) denote the algebra of N × N complex matrices. Let GL_N(C) denote the group of all invertible matrices in M_N(C). Define a scaled Hilbert-Schmidt norm on M_N(C) by

    ‖A‖²_{M_N(C)} := (1/N) Tr(AA*),    A ∈ M_N(C),    (1.6)

where Tr denotes the usual trace.

Definition 1.3 (Boosted Segal-Bargmann transform). Let K ⊆ GL_N(C) be a compact matrix Lie group with complexification K_C. Let F be an M_N(C)-valued function on K or K_C, and let ‖F‖_{M_N(C)} denote the scalar-valued function A → ‖F(A)‖_{M_N(C)}. Fix s > 0 and τ ∈ D(s, s), and let

    L²(K, ρ_s^K; M_N(C)) = {F : K → M_N(C) ; ‖F‖_{M_N(C)} ∈ L²(K, ρ_s^K)},    and
    L²(K_C, μ_{s,τ}^{K_C}; M_N(C)) = {F : K_C → M_N(C) ; ‖F‖_{M_N(C)} ∈ L²(K_C, μ_{s,τ}^{K_C})}.

The norms defined by

    ‖F‖²_{L²(K, ρ_s^K; M_N(C))} := ∫_K ‖F(A)‖²_{M_N(C)} ρ_s^K(A) dA,    (1.7)
    ‖F‖²_{L²(K_C, μ_{s,τ}^{K_C}; M_N(C))} := ∫_{K_C} ‖F(A)‖²_{M_N(C)} μ_{s,τ}^{K_C}(A) dA    (1.8)

make these spaces into Hilbert spaces. Let HL²(K_C, μ_{s,τ}^{K_C}; M_N(C)) ⊆ L²(K_C, μ_{s,τ}^{K_C}; M_N(C)) denote the subspace of matrix-valued holomorphic functions.


The boosted Segal-Bargmann transform

    B^K_{s,τ} : L²(K, ρ_s^K; M_N(C)) → HL²(K_C, μ_{s,τ}^{K_C}; M_N(C))

is the unitary isomorphism determined by applying B_{s,τ}^K componentwise; i.e.,

    B^K_{s,τ}(f · A) = B_{s,τ}^K f · A    for f ∈ L²(K, ρ_s^K) and A ∈ M_N(C).

In this paper, we consider the case when K = K_N is one of SO(N, R), SU(N), or Sp(N). We provide an effective method of computing B^{K_N}_{s,τ} on the space of polynomials, and show that in this setting, the transform has a meaningful limit as N → ∞. However, the boosted Segal-Bargmann transform typically does not preserve the space of polynomials on K. For example, consider the polynomial P_N(A) = A² on SO(N, R). Let Tr denote the usual trace and set tr = (1/N) Tr. From Example 4.11, we see that

    (B^{SO(N,R)}_{s,t} P_N)(A) = (1/N)(1 − e^{−t}) I_N + (1/2) e^{−t}(1 + e^{2t/N}) A² − (N/2) e^{−t}(−1 + e^{2t/N}) A tr A.    (1.9)

This example highlights that in general, B^K_{s,τ} maps the space of polynomial functions on K into the larger space of trace polynomials, first introduced in [8, Definition 1.7], which we now briefly describe (see Section 3.2 for the full details).

Let C[u, u^{−1}; v] denote the algebra of polynomials in the commuting variables u, u^{−1}, and v = {v_{±1}, v_{±2}, . . .}. For each P ∈ C[u, u^{−1}; v], there is an associated trace polynomial P_N^{(1)} on SO(N, R). For now, it is enough to think of P_N^{(1)}(A) as a certain "polynomial" function of A and tr(A). If P ∈ C[u, u^{−1}; v] is a function of the variable u alone, then P_N^{(1)} is simply the corresponding polynomial on SO(N, R). For example, the trace polynomial on SO(N, R) associated with P(u; v) = u⁵ is P_N^{(1)}(A) = A⁵. On the other hand, (1.9) is the trace polynomial on SO(N, R) associated with

    f(u; v) = (1/N)(1 − e^{−t}) + (1/2) e^{−t}(1 + e^{2t/N}) u² − (N/2) e^{−t}(−1 + e^{2t/N}) u v₁ ∈ C[u, u^{−1}; v].

In addition, for P ∈ C[u, u^{−1}; v], there are analogous trace polynomials P_N^{(2)} and P_N^{(4)} on SU(N) and Sp(N), resp. (see Definition 3.5 for precise definitions).

To simplify the statement of our results, we introduce the following notation: for N ∈ N, let

    K_N^{(1)} = SO(N, R),    K_N^{(2)} = SU(N),    K_N^{(2′)} = U(N),    K_N^{(4)} = Sp(N),    (1.10)

and set K_{N,C}^{(β)} = (K_N^{(β)})_C and Δ_N^{(β)} = Δ_{K_N^{(β)}} for β = 1, 2, 2′, 4. In addition, we write ρ_s^{N,(β)} = ρ_s^{K_N^{(β)}} and μ_{s,τ}^{N,(β)} = μ_{s,τ}^{K_{N,C}^{(β)}} for the heat kernels on K_N^{(β)} and K_{N,C}^{(β)}, respectively.

Our first theorem is an intertwining result showing that Δ_N^{(β)} P_N^{(β)} can be computed by applying a certain pseudodifferential operator on C[u, u^{−1}; v] to P and then considering the corresponding trace polynomial.
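Before stating it, the trace polynomial functional calculus just described is easy to experiment with numerically. The following sketch is not from the paper; N, t, the random seed, and the helper random_SO are illustrative choices. It evaluates the trace polynomial f(u; v) of (1.9) at a random special orthogonal matrix by substituting u = A and v₁ = tr(A).

```python
# Minimal illustration of the trace polynomial functional calculus on SO(N, R):
# for P(u; v) we evaluate P_N(A) by substituting u = A and v_k = tr(A^k).
import numpy as np

def random_SO(N, rng):
    # QR-based sampling of an orthogonal matrix, corrected to determinant +1.
    Q, R = np.linalg.qr(rng.standard_normal((N, N)))
    Q = Q @ np.diag(np.sign(np.diag(R)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

N, t = 6, 0.4
rng = np.random.default_rng(0)
A = random_SO(N, rng)
tr = lambda M: np.trace(M) / N          # normalized trace, tr = Tr/N
I = np.eye(N)

# The trace polynomial of (1.9):
# f(u; v) = (1/N)(1-e^{-t}) + (1/2)e^{-t}(1+e^{2t/N}) u^2 - (N/2)e^{-t}(-1+e^{2t/N}) u v_1
F = ((1/N)*(1 - np.exp(-t)) * I
     + 0.5*np.exp(-t)*(1 + np.exp(2*t/N)) * (A @ A)
     - (N/2)*np.exp(-t)*(-1 + np.exp(2*t/N)) * A * tr(A))
print(np.round(F, 3))                   # an M_N(C)-valued trace polynomial evaluated at A
```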


Theorem 1.4 (Intertwining formulas for Δ_{SO(N,R)}, Δ_{SU(N)}, and Δ_{Sp(N)}). For β ∈ {1, 2, 4} and N ∈ N, there are pseudodifferential operators D_N^{(β)} acting on C[u, u^{−1}; v] (see Definition 3.7) such that, for each P ∈ C[u, u^{−1}; v],

    Δ_N^{(β)} P_N^{(β)} = [D_N^{(β)} P]_N^{(β)}.    (1.11)

Moreover, for all τ ∈ C and P ∈ C[u, u^{−1}; v],

    e^{(τ/2)Δ_N^{(β)}} P_N^{(β)} = [e^{(τ/2)D_N^{(β)}} P]_N^{(β)}.    (1.12)

For each such β, D_N^{(β)} takes the form

    D_N^{(β)} = L₀ + (1/N) L₁^{(β)} + (1/N²) L₂^{(β)},    (1.13)

where L₀, L₁^{(β)} are first order pseudodifferential operators and L₂^{(β)} is a second order differential operator, all independent of N.

The exponential e^{(τ/2)D_N^{(β)}} is well defined on C[u, u^{−1}; v] (see Corollary 3.11). The proof of Theorem 1.4 for SO(N, R) and Sp(N) can be found on p. 20 and p. 47, resp. We do not provide a full proof of the SU(N) version of the theorem, as it is very similar to the SO(N, R) and Sp(N) cases; the proof is discussed on p. 58.

Our next result shows that we can use e^{(τ/2)D_N^{(β)}} to explicitly compute the Segal-Bargmann transform on trace polynomials.

Theorem 1.5. Let P ∈ C[u, u^{−1}; v], N ∈ N, s > 0 and τ ∈ D(s, s). For β ∈ {1, 2, 4}, the Segal-Bargmann transform of the trace polynomial P_N^{(β)} can be computed as

    B_{s,τ}^{K_N^{(β)}} P_N^{(β)} = [e^{(τ/2)D_N^{(β)}} P]_N^{(β)}.    (1.14)

The proof of Theorem 1.5 for SO(N, R) and Sp(N) can be found on p. 23 and p. 49, resp. The proof of this theorem for the SU(N) case is similar, and is discussed on p. 58.

Our main result concerns the limit of B_{s,τ}^{K_N^{(β)}} on polynomials as N → ∞. While the range of the boosted Segal-Bargmann transform is the space of trace polynomials, as N → ∞ the image concentrates back onto the smaller subspace of Laurent polynomials.

Theorem 1.6. Let s > 0, τ ∈ D(s, s), and f ∈ C[u, u^{−1}]. Then there exist unique g_{s,τ}, h_{s,τ} ∈ C[u, u^{−1}] such that

    ‖B_{s,τ}^{K_N^{(β)}} f_N − [g_{s,τ}]_N‖²_{L²(K_{N,C}^{(β)}, μ_{s,τ}^{N,(β)}; M_N(C))} = O(1/N²),    (1.15)
    ‖(B_{s,τ}^{K_N^{(β)}})^{−1} f_N − [h_{s,τ}]_N‖²_{L²(K_N^{(β)}, ρ_s^{N,(β)}; M_N(C))} = O(1/N²),    (1.16)

for β ∈ {1, 2, 4}.
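As a small illustration of Theorem 1.5, the heat operator e^{(t/2)Δ_{SO(N,R)}} acting on the trace polynomial u² can be computed by exponentiating a 3 × 3 matrix on the span of u², uv₁, and 1, and the result compared with the closed form (1.9). The generator entries below are assembled from the derivative and magic formulas of Section 3.1; this is our own sanity-check sketch rather than a display from the paper, and N, t are arbitrary test values.

```python
# Compute e^{(t/2)Δ_{SO(N,R)}} u^2 by exponentiating the generator on span{u^2, u v_1, 1}
# and compare with the closed form (1.9).
import numpy as np
from scipy.linalg import expm

N, t = 7, 0.9

# (1/2)Δ on the ordered basis (u^2, u v_1, 1), assembled from Section 3.1:
#   (1/2)Δ u^2   = -((N-1)/N) u^2 -           u v_1 + (1/N)
#   (1/2)Δ u v_1 = -(1/N^2)   u^2 - ((N-1)/N) u v_1 + (1/N^2)
#   (1/2)Δ 1     = 0
M = np.array([[-(N - 1) / N, -1 / N**2,     0.0],
              [-1.0,         -(N - 1) / N,  0.0],
              [1.0 / N,       1 / N**2,     0.0]])

coeffs = expm(t * M) @ np.array([1.0, 0.0, 0.0])   # coefficients of u^2, u v_1, 1 at time t

expected = np.array([0.5 * np.exp(-t) * (1 + np.exp(2 * t / N)),
                     -(N / 2) * np.exp(-t) * (-1 + np.exp(2 * t / N)),
                     (1 / N) * (1 - np.exp(-t))])
print(np.allclose(coeffs, expected))               # True: matches (1.9)
```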

Definition 1.7. The map G_{s,τ} : C[u, u^{−1}] → C[u, u^{−1}] given by f → g_{s,τ} is called the free Segal-Bargmann transform, and the map H_{s,τ} : C[u, u^{−1}] → C[u, u^{−1}] given by f → h_{s,τ} is called the free inverse Segal-Bargmann transform.

Explicit formulas for G_{s,τ} and H_{s,τ} are given in (4.22) and (4.23), respectively. The proof of Theorem 1.6 for SO(N, R) can be found on p. 37. The proof of the theorem for the SU(N) and Sp(N) cases is entirely analogous to the SO(N, R) case, so we do not provide the full details; the proofs for SU(N) and Sp(N) are discussed on pp. 58 and 53.

Remark 1.8. The fact that the Segal-Bargmann transform on matrices in M_N(C) has a meaningful limit 𝒢_t as N → ∞ is due to Biane [4, Proposition 13]. For the special case when s = t > 0, the free Segal-Bargmann transform G_{t,t} of this paper is equivalent to 𝒢_t, which Biane called the free Hall transform. A two-parameter generalization of this transform, called the free unitary Segal-Bargmann transform, G_{s,t}, where s > t/2 > 0, was introduced in [8]. This version of the free Segal-Bargmann transform was also investigated by Ho in [11]; in particular, he gave an integral kernel for the large-N limit of the Segal-Bargmann transform over U(N). In this paper, we study a slight generalization of G_{s,t} by allowing the time parameter τ to be complex, incorporating results from [9] on the complex-time Segal-Bargmann transform. When τ = t > 0, the free Segal-Bargmann transform G_{s,τ} of Definition 1.7 is precisely the same as the operator G_{s,t} of [8].

Remark 1.9. Many of the results in this paper on the Segal-Bargmann transform for SO(N, R), SU(N), and Sp(N) correspond to work done by Driver, Hall, and Kemp on the Segal-Bargmann transform on U(N). The U(N) analogue of our intertwining formulas, Theorem 1.4, is contained in [8, Theorem 1.18], where it is shown that Δ_{U(N)} has the intertwining formula

    D_N^{(2′)} = L₀ + (1/N) L₁^{(2′)} + (1/N²) L₂^{(2′)}

for certain pseudodifferential operators L₀, L₁^{(2′)}, and L₂^{(2′)} on the trace polynomial space C[u, u^{−1}; v] (cf. Remark 3.13), where the operator L₁^{(2′)} is identically zero (as it is for L₁^{(2)} in the SU(N) case). The same operator L₀ appears in the intertwining formulas for SO(N, R), U(N), SU(N), and Sp(N). As N → ∞, it is the operator L₀ that drives the large-N behavior of the transform; this is the key to our main result on the free Segal-Bargmann transform, Theorem 1.6, which states that the large-N limit of the Segal-Bargmann transform on SO(N, R), SU(N), and Sp(N) is the same operator G_{s,τ}. Moreover, we emphasize that the rate of convergence in all cases is O(1/N²), as it is in the analogous result for the U(N) case, [8, Theorem 1.11]. This is somewhat surprising, since there is a nontrivial term of order 1/N in the intertwining formulas for SO(N, R) and Sp(N) that does not appear in the U(N) and SU(N) cases. That we still obtain O(1/N²) convergence for


SO(N, R) and Sp(N) is due to the fact that in the scalar version of their intertwining formulas (cf. Theorems 3.20 and 5.5), the term of order 1/N is a first order differential operator. As a result, the 1/N terms are "washed away" in the covariance estimates (cf. the proof of Theorem 4.10 in the SO(N, R) case for a more precise explanation; the Sp(N) case is similar).

2. Background

In this section, we expand on the background required to prove our main results. In particular, we provide a brief overview of the heat kernel results used in constructing the Segal-Bargmann transform, and conclude by setting some notation that will be used for the remainder of the paper.

2.1. Heat kernels on Lie groups

Definition 2.1. Let K be a Lie group with Lie algebra k. An inner product ⟨·, ·⟩_k on k is Ad-invariant if, for all X₁, X₂ ∈ k and all k ∈ K,

    ⟨Ad_k X₁, Ad_k X₂⟩_k = ⟨X₁, X₂⟩_k.

A group whose Lie algebra possesses an Ad-invariant inner product is called compact type. The Lie groups SO(N, R), SU(N), and Sp(N) studied in this paper are compact type (and are, in fact, compact). Compact type Lie groups have the following property.

Proposition 2.2 ([14, Lemma 7.5]). If K is a compact type Lie group with a fixed Ad-invariant inner product, then K is isometrically isomorphic to a direct product group, i.e. K ≅ K₀ × R^d for some compact Lie group K₀ and some nonnegative integer d.

Let K be a compact type Lie group with Lie algebra k, and fix an Ad-invariant inner product on k. Choose a right Haar measure λ on K; we will write dx for λ(dx) and L²(K) for L²(K, λ). If β_k is an orthonormal basis for k, we define the Laplace operator Δ_K on K by

    Δ_K = Σ_{X ∈ β_k} X̃²,    (2.1)

where for any X ∈ k, X̃ is the left-invariant vector field given by

    (X̃f)(k) = (d/dt)|_{t=0} f(ke^{tX})    (2.2)


for any smooth real or complex-valued function f on K. The operator is independent of the orthonormal basis chosen. For a detailed overview of left-invariant Laplacian operators, see [14]; see also [6,7,9] for further background on heat kernels on compact type Lie groups.

The operator Δ_K is left-invariant for any Lie group K. When K is compact type, Δ_K is also bi-invariant. In addition, it is well known that Δ_K is elliptic and essentially self-adjoint on L²(K) with respect to any right invariant Haar measure (see [9, Section 3.2]). As a result, there exists an associated heat kernel ρ_t^K ∈ C^∞(K, (0, ∞)) satisfying (1.3).

Definition 2.3. Let s > 0 and τ = t + iθ ∈ D(s, s). We define a second order left-invariant differential operator A_{s,τ}^{K_C} on K_C by

    A_{s,τ}^{K_C} = Σ_{j=1}^{d} [ (s − t/2) X̃_j² + (t/2) Ỹ_j² − θ X̃_j Ỹ_j ],    (2.3)

where {X_j}_{j=1}^{d} is any orthonormal basis of k, and Y_j = JX_j, where J is the operation of multiplication by i on k_C = Lie(K_C).

The operator A_{s,τ}^{K_C}|_{C_c^∞(K_C)} is essentially self-adjoint on L²(K_C) with respect to any right Haar measure (see [9, Section 3.2]). There exists a corresponding heat kernel density μ_{s,τ}^{K_C} ∈ C^∞(K_C, (0, ∞)) such that

    (e^{(1/2)A_{s,τ}^{K_C}} f)(z) = ∫_{K_C} μ_{s,τ}^{K_C}(w) f(zw) dw    for all f ∈ L²(K_C).    (2.4)

2.2. Notation and definitions

The classical Segal-Bargmann transform is an isometric isomorphism from L²(R^d) onto HL²(C^d). In order to extend the Segal-Bargmann transform to Lie groups, we note that every connected Lie group has a complexification defined by a certain representation-theoretic universal property, mimicking the relationship between R^d and C^d (see [9, Section 2]). For the present purposes, it is enough to know the concrete complexifications of SO(N, R), SU(N), and Sp(N), which we describe below.

Let SO(N, R) denote the special orthogonal group, defined by SO(N, R) = {A ∈ M_N(R) : AA^⊤ = I_N, det(A) = 1}. Its complexification, SO(N, R)_C, is given by SO(N, C) = {A ∈ M_N(C) : AA^⊤ = I_N, det(A) = 1}. The Lie group SO(N, R) has Lie algebra so(N, R) = {X ∈ M_N(R) : X^⊤ = −X}, while SO(N, C) has Lie algebra so(N, C) = {X ∈ M_N(C) : X^⊤ = −X}.

The unitary group U(N) is defined by U(N) = {A ∈ M_N(C) : AA* = I_N}. The Lie algebra of U(N) is u(N) = {X ∈ M_N(C) : X* = −X}. The special unitary group SU(N) is the subgroup of U(N) consisting of matrices with determinant 1. The complexification


of SU(N) is the special linear group SL(N, C) = {A ∈ GL_N(C) : det(A) = 1}. The Lie algebra of SU(N) is su(N) = {X ∈ M_N(C) : X* = −X, Tr(X) = 0}, and the Lie algebra of SL(N, C) is sl(N, C) = {X ∈ M_N(C) : Tr(X) = 0}.

There are two standard realizations of the compact symplectic group Sp(N): first as a group of N × N matrices over the quaternions, and second as a group of 2N × 2N matrices over C. It is more convenient for us to work with the latter realization, which we introduce here; we address the former realization in Section 5.6. Our choice of realization means that we will often have to add a normalization factor of 1/2 when working with Sp(N) ⊆ M_{2N}(C). Set

    Ω₀ = [ 0  1 ; −1  0 ] ∈ M₂(C).

For N ∈ N, we define Ω = Ω_N := diag(Ω₀, Ω₀, . . . , Ω₀) ∈ M_{2N}(C) (with N copies of Ω₀). We define the complex symplectic group by Sp(N, C) = {A ∈ M_{2N}(C) : A^⊤ΩA = Ω}. The Lie algebra of Sp(N, C) is sp(N, C) = {X ∈ M_{2N}(C) : ΩX^⊤Ω = X}. The compact symplectic group Sp(N) consists of the elements of Sp(N, C) which are also unitary, i.e.

    Sp(N) = Sp(N, C) ∩ U(2N) = {A ∈ M_{2N}(C) : A^⊤ΩA = Ω, A*A = I_{2N}}.    (2.5)

The Lie algebra of Sp(N) is sp(N) = {X ∈ M_{2N}(C) : ΩX^⊤Ω = X, X* = −X}. The complexification of Sp(N) is Sp(N, C).

The groups SO(N, R), SU(N), and Sp(N) can be thought of as (special) unitary groups over R, C, and H, resp. For notational convenience, throughout the paper we associate the parameters β = 1, 2, 2′, 4 with SO(N, R), SU(N), U(N), and Sp(N), resp., where β refers to the dimension of R, C, and H as associative algebras over R.

In order for the large-N limit of the Segal-Bargmann transform to converge in a meaningful way, we require the following normalizations of the trace and Hilbert-Schmidt inner products.

Notation 2.4. For A ∈ M_N(C), define

    tr(A) = (1/N) Tr(A),    (2.6)
    t̃r(A) = (1/2N) Tr(A),    (2.7)

where Tr denotes the usual trace. We use tr when applying the Segal-Bargmann transform on SO(N, R) and SU(N), and t̃r when applying the Segal-Bargmann transform on Sp(N).
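The realization (2.5) is easy to check numerically: starting from a random matrix, one can project onto sp(N) and exponentiate. The sketch below is not from the paper; the projection used to produce an element of sp(N) is an illustrative choice. It verifies that the resulting matrix satisfies both defining relations of Sp(N).

```python
# Numerical illustration of the realization (2.5) of Sp(N) inside M_{2N}(C).
import numpy as np
from scipy.linalg import expm

N = 3
Omega0 = np.array([[0.0, 1.0], [-1.0, 0.0]])
Omega = np.kron(np.eye(N), Omega0)            # Omega_N = diag(Omega_0, ..., Omega_0)

rng = np.random.default_rng(2)
Y = rng.standard_normal((2 * N, 2 * N)) + 1j * rng.standard_normal((2 * N, 2 * N))
X = (Y - Y.conj().T) / 2                      # project onto u(2N):  X* = -X
X = (X + Omega @ X.T @ Omega) / 2             # project onto the condition Omega X^T Omega = X

A = expm(X)                                   # A = e^X lies in Sp(N) = Sp(N, C) ∩ U(2N)
print(np.allclose(A.T @ Omega @ A, Omega))          # A^T Omega A = Omega
print(np.allclose(A.conj().T @ A, np.eye(2 * N)))   # A* A = I_{2N}
```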


We also define inner products on so(N, R), su(N), and sp(N) by

    ⟨X, Y⟩_{so(N,R)} = (N/2) Tr(XY*) = (N²/2) tr(XY*),    X, Y ∈ so(N, R),    (2.8)
    ⟨X, Y⟩_{su(N)} = N Tr(XY*) = N² tr(XY*),    X, Y ∈ su(N),    (2.9)
    ⟨X, Y⟩_{sp(N)} = N Tr(XY*) = 2N² t̃r(XY*),    X, Y ∈ sp(N).    (2.10)

3. The Segal-Bargmann transform on SO(N, R)

In this section and the next, we prove Theorem 1.4, Theorem 1.5, and Theorem 1.6 for the Segal-Bargmann transform on SO(N, R). We begin with a set of "magic formulas" which are the key ingredient in proving the SO(N, R) version of the intertwining formula for Δ_{SO(N,R)} of Theorem 1.4. These formulas, which give simple expressions for certain quadratic matrix sums, are the analogues of the "magic formulas" of [8, Proposition 3.1] for U(N). They allow us to compute the Segal-Bargmann transform of trace polynomials on SO(N, R) (Theorem 1.5 for SO(N, R)). We then prove a second intertwining formula for the operator A_{s,τ}^{SO(N,C)} which is used in proving the limit theorems of Section 4.

3.1. Magic formulas and derivative formulas

Lemma 3.1 (Elementary matrix identities). Let E_{i,j} ∈ M_N(C) denote the N × N matrix with a 1 in the (i, j)th entry and zeros elsewhere. For 1 ≤ i, j, k, ℓ ≤ N and A = (A_{i,j}) ∈ M_N(C), we have

    E_{i,j} E_{k,ℓ} = δ_{j,k} E_{i,ℓ},    (3.1)
    E_{i,j} A E_{i,j} = A_{j,i} E_{i,j},    (3.2)
    E_{i,j} A E_{j,i} = A_{j,j} E_{i,i},    (3.3)
    Tr(A E_{i,j}) = A_{j,i}.    (3.4)

Proof. For 1 ≤ a, b ≤ N,

    [E_{i,j} E_{k,ℓ}]_{a,b} = Σ_{h=1}^{N} [E_{i,j}]_{a,h} [E_{k,ℓ}]_{h,b}.

Note that [E_{i,j}]_{a,h}[E_{k,ℓ}]_{h,b} equals 1 precisely when a = i, b = ℓ, and h = j = k, and equals 0 otherwise, so [E_{i,j} E_{k,ℓ}]_{a,b} = δ_{j,k} δ_{a,i} δ_{b,ℓ}, which proves (3.1).

We now make the following observations: for any N × N matrix A, AE_{i,j} is the matrix which is all zeros except for the jth column, which is equal to the ith column of A. Hence the only (possibly) nonzero diagonal entry is the (j, j)th entry, which is A_{j,i}. This proves (3.4). Similarly, E_{i,j}A is the matrix which is all zeros except for the ith row, which is the jth row of A. Putting these observations together yields (3.2) and (3.3). □


Proposition 3.2 (Magic formulas for SO(N, R)). For any A, B ∈ M_N(C),

    Σ_{X ∈ β_{so(N,R)}} X² = −((N − 1)/N) I_N = −I_N + (1/N) I_N,    (3.5)
    Σ_{X ∈ β_{so(N,R)}} XAX = (1/N) A^⊤ − tr(A) I_N,    (3.6)
    Σ_{X ∈ β_{so(N,R)}} tr(XA) X = (1/N²)(A^⊤ − A),    (3.7)
    Σ_{X ∈ β_{so(N,R)}} tr(XA) tr(XB) = (1/N²)(tr(A^⊤B) − tr(AB)).    (3.8)
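Before turning to the proof, the magic formulas can be verified numerically by summing over the explicit orthonormal basis {(E_{i,j} − E_{j,i})/√N : 1 ≤ i < j ≤ N} of so(N, R) used below. The following sketch is not from the paper; N and the test matrices are arbitrary.

```python
# Numerical check of the magic formulas (3.5)-(3.8) for so(N, R).
import numpy as np

N = 5
rng = np.random.default_rng(1)
A = rng.standard_normal((N, N))
B = rng.standard_normal((N, N))
tr = lambda M: np.trace(M) / N               # normalized trace
I = np.eye(N)

def E(i, j):
    M = np.zeros((N, N)); M[i, j] = 1.0; return M

# orthonormal basis of so(N, R) for the inner product (2.8)
basis = [(E(i, j) - E(j, i)) / np.sqrt(N) for i in range(N) for j in range(i + 1, N)]

lhs35 = sum(X @ X for X in basis)
lhs36 = sum(X @ A @ X for X in basis)
lhs37 = sum(tr(X @ A) * X for X in basis)
lhs38 = sum(tr(X @ A) * tr(X @ B) for X in basis)

print(np.allclose(lhs35, -(N - 1) / N * I))                     # (3.5)
print(np.allclose(lhs36, A.T / N - tr(A) * I))                  # (3.6)
print(np.allclose(lhs37, (A.T - A) / N**2))                     # (3.7)
print(np.isclose(lhs38, (tr(A.T @ B) - tr(A @ B)) / N**2))      # (3.8)
```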

Proof. As shown in Theorem 3.3 of [8], the quantity Σ_{X∈β_{so(N,R)}} XAX is independent of the choice of orthonormal basis. Hence to see (3.6), we may set β_{so(N,R)} to be the orthonormal basis

    β_{so(N,R)} = { (1/√N)(E_{i,j} − E_{j,i}) }_{1 ≤ i < j ≤ N}.

By Lemma 3.1,

    Σ_{i<j} (E_{i,j} − E_{j,i}) A (E_{i,j} − E_{j,i})
        = Σ_{i<j} [E_{i,j}AE_{i,j} − E_{i,j}AE_{j,i} − E_{j,i}AE_{i,j} + E_{j,i}AE_{j,i}]
        = Σ_{i≠j} (E_{i,j}AE_{i,j} − E_{i,j}AE_{j,i})
        = Σ_{i≠j} A_{j,i}E_{i,j} − Σ_{i≠j} A_{j,j}E_{i,i}
        = ( Σ_{i≠j} A_{j,i}E_{i,j} + Σ_{i=1}^{N} A_{i,i}E_{i,i} ) − Σ_{i=1}^{N} A_{i,i}E_{i,i} − Σ_{i≠j} A_{j,j}E_{i,i}
        = A^⊤ − Tr(A) I_N,

and dividing by N proves (3.6). Equation (3.5) then follows from (3.6) by setting A = I_N. Similarly, we compute

    Σ_{i<j} Tr(A(E_{i,j} − E_{j,i}))(E_{i,j} − E_{j,i})
        = Σ_{i<j} [Tr(AE_{i,j})E_{i,j} − Tr(AE_{i,j})E_{j,i} − Tr(AE_{j,i})E_{i,j} + Tr(AE_{j,i})E_{j,i}]
        = Σ_{i≠j} [Tr(AE_{i,j})E_{i,j} − Tr(AE_{j,i})E_{i,j}]
        = Σ_{i≠j} A_{j,i}E_{i,j} − Σ_{i≠j} A_{i,j}E_{i,j}
        = ( Σ_{i≠j} A_{j,i}E_{i,j} + Σ_{i=1}^{N} A_{i,i}E_{i,i} ) − ( Σ_{i≠j} A_{i,j}E_{i,j} + Σ_{i=1}^{N} A_{i,i}E_{i,i} )
        = A^⊤ − A,

which is equivalent to (3.7). Equation (3.8) now follows from (3.7) by multiplying by B and taking tr. □

Ck(β) = X ⊗ X. N

X∈β

(β) kN

Let T =

N

Ea,b ⊗ Eb,a

and

a,b=1

P =

N

Ea,b ⊗ Ea,b .

a,b=1

For K ∈ {R, C, H}, set I(K) = {1, i, j, k} ∩ K and define two elements ReK and CoK of K ⊗R K by



ReK = γ ⊗ γ −1 and CoK = γ ⊗ γ. γ∈I(K)

γ∈I(K)

Then for β ∈ {1, 2 4} (where 2 corresponds to the value 2), the Casimir element Ck(β) N is given by Ck(β) = N

 1  −T ⊗ ReK +P ⊗ CoK , βN

and for β = 2, Csu(N ) = Cu(N ) − N12 iIN ⊗ iIN . This decomposition can be used to  compute any expression of the form X∈β (β) B(X, X), where B is an R-bilinear map. kN

In particular, the magic formulas for SO(N, R), SU(N ), and Sp(N ) (cf. Propositions 3.2, 6.2, and 5.10) can be obtained in this way.

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.14 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

14

Proposition 3.4 (Derivative formulas for SO(N, R)). Let X ∈ β_{so(N,R)}. The following identities hold on SO(N, R) and SO(N, C):

    X̃A^m = Σ_{j=1}^{m} A^j X A^{m−j},    m ≥ 0,    (3.9)
    X̃A^m = −Σ_{j=m+1}^{0} A^j X A^{m−j},    m < 0,    (3.10)
    X̃ tr(A^m) = m · tr(XA^m),    m ∈ Z,    (3.11)
    Δ_{SO(N,R)} A^m = 2·1_{m≥2} [ (1/N) Σ_{j=1}^{m−1} (m − j)A^{m−2j} − Σ_{j=1}^{m−1} (m − j) tr(A^j)A^{m−j} ] − (m(N − 1)/N) A^m,    m ≥ 0,    (3.12)
    Δ_{SO(N,R)} A^m = 2·1_{m≤−2} [ (1/N) Σ_{j=m+1}^{−1} (−m + j)A^{m−2j} − Σ_{j=m+1}^{−1} (−m + j) tr(A^j)A^{m−j} ] + (m(N − 1)/N) A^m,    m < 0,    (3.13)
    Σ_{X∈β_{so(N,R)}} X̃ tr(A^m) · X̃A^p = (mp/N²)(A^{p−m} − A^{p+m}),    m, p ∈ Z.    (3.14)

We emphasize that these derivative formulas hold true only in the case on SO(N, R) and SO(N, C), in which case A−1 = A . Proof. The proof of (3.9), (3.10), and (3.11) for the U(N ) case are contained in [8]; the proof of these equations for the SO(N, R) case is identical. Next, by (3.9), we have, for m ≥ 0,

2 Am = 21m≥2 X

Aj XAk XA +

j,k=0 j+k+=m

m

Aj X 2 Am−j .

(3.15)

j=1

Summing over all X ∈ βSO(N,R) and using the magic formulas (3.5) and (3.6), we have ΔSO(N,R) Am = 21m≥2

= 21m≥2





X∈βso(N,R)

k,j=0 k+j+=m

k,j=0 k+j+=m

Ak XAj XA +



m

Aj X 2 Am−j

X∈βso(N,R) j=1

 m(N − 1) m 1 k j   A (A ) A − tr(Aj )Ak+ − A N N

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.15 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

15



⎤ m−1 m−1



1 = 21m≥2 ⎣ (m − j)Am−2j − (m − j) tr(Aj )Am−j ⎦ N j=1 j=1 −

m(N − 1) m A , N

which proves (3.12). Similarly, if m < 0, we use (3.10) to get

2 Am = 21m≤−2 X

0

Aj XAk XA +

Aj X 2 Am−j .

j=m+1

k,<0,j≤0 j+k+=m

Summing over all X ∈ βSO(N,R) and using the magic formulas (3.5) and (3.6), we have

ΔSO(N,R) Am = 21m≥2



= 21m≥2

k,<0,j≤0 j+k+=m



1 = 21m≥2 ⎣ N +

0

Aj X 2 Am−j

X∈βso(N,R) j=m+1

X∈βso(N,R) k,<0,j≤0 j+k+=m





Ak XAj XA +

 m(N − 1) m 1 k j   j k+ + A (A ) A − tr(A )A A N N

−1

(−m − j)Am−2j −

j=m+1

−1

⎤ (−m − j) tr(Aj )Am−j ⎦

j=m+1

m(N − 1) m A , N

which proves (3.13). For (3.14), we first assume m, p ≥ 0. Using equation (3.7) and the fact that A = A−1 for A ∈ SO(N, R),

tr(Am ) · XA p= X

X∈βso(N,R)

⎡ ⎛ ⎞ !⎤ p m



⎣tr ⎝ Aj XAm−j ⎠ Ak XAp−k ⎦



j=1

X∈βso(N,R)

=

p

k=1

=

p

k=1

=

Ak







X∈βso(N,R)



mAk ⎝

m



k=1

tr(XAm )⎠ XAp−k

j=1



X∈βso(N,R)

mp p−m (A − Ap+m ). N2

⎞ tr(XAm )X ⎠ Ap−k

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.16 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

16

For m < 0 and p ≥ 0, we have

p= tr(Am ) · XA X

X∈βso(N,R)

⎡ ⎛



⎣tr ⎝−

p



Ak

k=1

=−

p





(−m)Ak ⎝

k=1

!⎤ Ak XAp−k ⎦

k=1

tr(XAm )⎠ XAp−k

j=m+1

X∈βso(N,R)





0



p

Aj XAm−j ⎠

j=m+1

X∈βso(N,R)

=−



0



tr(XAm )X ⎠ Ap−k

X∈βso(N,R)

mp = 2 (Ap−m − Ap+m ). N For m ≥ 0 and p < 0, we have

p= tr(Am ) · XA X

X∈βso(N,R)

⎡ ⎛ ⎞⎛ ⎞⎤ m 0



⎣tr ⎝ Aj XAm−j ⎠ ⎝− Ak XAp−k ⎠⎦



j=1

X∈βso(N,R) 0

=−

k=p+1 0

=−



Ak

X∈βso(N,R)



j=1



mAk ⎝

k=p+1

k=p+1

⎞ ⎛ m

⎝ tr(XAm )⎠ XAp−k ⎞

tr(XAm )X ⎠ Ap−k

X∈βso(N,R)

mp = 2 (Ap−m − Ap+m ). N Finally, for m, p < 0, we have

p tr(Am ) · XA X

X∈βso(N,R)





=



⎣− tr ⎝

0

k=p+1

=

0

k=p+1

=

Ak

Aj XAm−j ⎠ ⎝−

j=m+1

X∈βso(N,R)

=

⎞⎛

0







X∈βso(N,R)



(−m)Ak ⎝



0



tr(XAm )X ⎠ Ap−k

X∈βso(N,R)

(−m)(−p) p−m (A − Ap+m ). N2

This proves (3.14). 2

k=p+1

tr(XAm )⎠ XAp−k

j=m+1



0

⎞⎤ Ak XAp−k ⎠⎦

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.17 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

17

3.2. Intertwining formulas for ΔSO(N,R) In this section, we describe the trace polynomial functional calculus that will be necessary for the proof of Theorem 1.4, which contains the intertwining formulas for ΔSO(N,R) , ΔSU(N ) , and ΔSp(N ) . The space of trace polynomials was first introduced in [8, Definition 1.7], where it was used to prove analogous intertwining results for ΔU(N ). Using this framework, we then prove Theorem 1.4 for the SO(N, R) case. The computations in this section are related to, but not quite the same as, those used to establish the intertwining formulas for ΔU(N ) in [8, Sections 3-4]. Definition 3.5. Let C[u, u−1 ] denote the algebra of Laurent polynomials in a single variable u: " #

−1 k C[u, u ] = ak u : ak ∈ C, ak = 0 for all but finitely many k , (3.16) k∈Z

with the usual polynomial multiplication. Let C[u] and C[u−1 ] denote the subalgebras consisting of polynomials in u and u−1 , respectively. Let C[v] denote the algebra of polynomials in countably many commuting variables v = {v±1 , v±2 , . . .}, and let C[u, u−1 ; v] denote the algebra of polynomials in the commuting variables {u, u−1 , v±1 , v±2 , . . .}. We now define the trace polynomial functional calculus: for P ∈ C[u, u−1 ; v], we have (1)

A ∈ SO(N, R) or SO(N, C),

(2)

A ∈ SU(N ) or SL(N, C),

(4)

A ∈ Sp(N ) or Sp(N, C).

PN (A) = [P (1) ]N (A) := P (u; v)|u=A,vk =tr(Ak ),k=0 , PN (A) = [P (2) ]N (A) := P (u; v)|u=A,vk =tr(Ak ),k=0 , PN (A) = [P (4) ]N (A) := P (u; v)|u=A,vk =tr(A k ),k=0 ,

Functions of the form PN , where P ∈ C[u, u−1 ; v], are called trace polynomials. We (β) will often suppress the superscripts for PN and write PN when it is clear from context which version of the trace polynomial functional calculus is being used. (β)

(1)

As an illustrative example, the trace polynomial PN = PN on SO(N, R) associated 6 with P (u; v) = v1 v−3 u2 − 3v52 is PN (A) = tr(A) tr(A−3 )6 A2 − 3 tr(A5 )2 IN . We now introduce the pseudodifferential operators on C[u, u−1 ; v] that appear in our intertwining formulas. Definition 3.6. Let R± denote the positive and negative projection operators R+ : C[u, u−1 ; v] → C[u; v]

and

R− : C[u, u−1 ; v] → C[u−1 ; v]

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.18 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

18

defined by !



R+

uk qk (v)

=

k=−∞



uk qk (v),

R−

k=0

!



uk qk (v)

=

k=−∞

−1

uk qk (v).

k=0−∞

In addition, for any k ∈ Z, let Muk denote the multiplication operator defined by Muk P (u; v) = uk P (u; v). Definition 3.7. Define the following operators on C[u, u−1 ; v]:

N = N0 + N1 =

|k|vk

|k|≥1

Y2 = Y2+ − Y2−

(3.17)

−1

∂ ∂ Mu−k R+ − vk uR− Mu−k R− , ∂u ∂u k=1 k=−∞ ⎛ ⎞ ⎛ ⎞ ∞ k−1 −2 −1





∂ ∂ ⎝ ⎝ = jvj vk−j ⎠ − jvj vk−j ⎠ , ∂v ∂v k k j=1

Y1 = Y1+ − Y1− =



∂ ∂ + u (R+ − R− ), ∂vk ∂u

k=2

Z1 = Z1+ − Z1− =



k=2

vk uR+



k=−∞



(3.18)

(3.19)

j=k+1

⎛ ⎞ −2 −1



∂ ∂ ⎝ (k − j)vk−2j ⎠ ⎝ − (k − j)vk−2j ⎠ , ∂vk ∂vk j=1 k−1

k=−∞

j=k+1

(3.20) Z2 = Z2+ − Z2− =



u−k+1 R+

k=1

K1 = K1+ − K1− =



ku−k+1

∂2 ∂2 − , kuk+1 ∂vk ∂u ∂vk ∂u

|k|≥1

ku



∂ + ∂u∂vk

(3.22)

|k|≥1

jkvk−j

|j|,|k|≥1

J =2

(3.21)

k=−∞

|k|≥1

K2 = K2+ − K2− =

−1

∂ ∂ Mu−k R+ − u−k+1 R− Mu−k R− , ∂u ∂u



∂2 − ∂vj ∂vk

jkvj vk

|j|,|k|≥1



jkvk+j

|j|,|k|≥1

∂2 , ∂vj ∂vk

(3.23)

∂2 ∂ ∂2 ∂ + k2 vk + u 2 + u . (3.24) ∂vj ∂vk ∂vk ∂u ∂u |k|≥1

We set L0 = −N − 2Y1 − 2Y2 , (1)

L1 = N + 2Z1 + 2Z2 , (2)

L1 = 0,

(3.25) (1)

L2 = 2K1 + K2 ,

L2 = −2K1− − K2− + J ,

1 (1) (4) L1 = − L1 , 2

(2)

(4)

L2 =

1 (1) L . 4 2

(3.26) (3.27) (3.28)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.19 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

19

For β ∈ {1, 2, 4}, we define (β)

DN = L0 +

1 (β) 1 (β) L + 2 L2 . N 1 N

(3.29)

Notation 3.8. For m ∈ Z and A ∈ MN (C), let Wm (A) = Am , Vm (A) = tr(Am ), and V(A) = {Vm (A)}|m|≥1 . The functions Wm , Vm , and V implicitly depend on N , but we suppress the index, which should not cause confusion. Theorem 3.9 (Partial product rule). Let α, β ∈ C. For P ∈ C[u, u−1 ; v] and Q ∈ C[v], (1) the operator αL0 + βL1 satisfies 

(1)

αL0 + βL1

 (P Q) =

      (1) (1) αL0 + βL1 P Q + P αL0 + βL1 Q .

(3.30)

Proof. From Definition 3.7, recall that L0 = −(N0 + N1 ) − 2Y1 − 2Y2 ,

(1)

L1 = N0 + N1 + 2Z1 + 2Z2 .

Observe that N0 , Y2 , Z1 are first order differential operators on C[v], and so satisfy the product rule on C[u, u−1 ; v]. On the other hand, N1 , Y1 , and Z2 annihilate C[v] and satisfy, for P ∈ C[u, u−1 ; v] and Q ∈ C[v], N1 (P Q) = (N1 P )Q,

Y1 (P Q) = (Y1 P )Q,

Z2 (P Q) = (Z2 P )Q.

Putting this together shows (3.30). 2 Definition 3.10. The trace degree of a monomial in C[u, u−1 ; v] is defined to be   k−1 km k−m deg uk0 v1k1 v−1 · · · vm v−m = |k0 | +



|j|kj .

(3.31)

1≤|j|≤m

We define the trace degree of any element in C[u, u−1 ; v] to be the highest trace degree of any of its monomial terms. For m ≥ 0, we define the subspace Cm [u, u−1 ; v] ⊆ C[u, u−1 ; v] to be Cm [u, u−1 ; v] = {P ∈ C[u, u−1 ; v] : deg P ≤ m},

(3.32)

consisting of polynomials of trace degree ≤ m. Note that Cm [u, u−1 ; v] is finite$ dimensional and C[u, u−1 ; v] = m≥0 Cm [u, u−1 ; v]. (1)

(1)

(2)

Corollary 3.11. Let m, N ∈ N and α, β ∈ C. The operators αL0 + βL1 , DN , DN , and (4) DN preserve the finite dimensional subspace Cm [u, u−1 ; v] ⊆ C[u, u−1 ; v]. It follows that (1) (1) (2) (4) eαL0 +βL1 , eαDN , eαDN , and eαDN are well-defined operators on each Cm [u, u−1 ; v] (via power series, cf. Remark 3.14) and hence are well-defined on all of C[u, u−1 ; v].

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.20 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

20

(1)

(1)

(2)

(4)

Proof. Recall that αL0 +βL1 , DN , DN , and DN are linear combinations of the operators J , N , Y1 , Y2 , Z1 , Z2 , K1 , and K2 described in Definition 3.7. Each of these operators, in turn, are linear combinations of compositions of multiplication, differentiation, and projection operators. The multiplication and differentiation operators raise and lower trace degree, respectively, while projection operators R+ and R− leaves Cm [u, u−1 ; v] invariant. Keeping track of multiplication and differentiation operators in J , N , Y1 , Y2 , Z1 , (1) (1) Z2 , K1 , and K2 shows that these operators preserve trace degree. Hence αL0 +βL1 , DN , (2) (4) DN , and DN preserve the finite dimensional subspace Cm [u, u−1 ; v] ⊆ C[u, u−1 ; v], and (1) (1) (2) (4) the result for eαL0 +βL1 , eαDN , eαDN , and eαDN follows. 2 Corollary 3.12. For any P ∈ C[u, u−1 ; v], Q ∈ C[v], and α, β, τ ∈ C, (1)

τ

τ

(1)

(1)

τ

e 2 (αL0 +βL1 ) (P Q) = e 2 (αL0 +βL1 ) P · e 2 (αL0 +βL1 ) Q.

(3.33)

Proof. Suppose L is a linear operator on C[u, u−1 ; v] which leaves C[v] and Cm [u, u−1 ; v] invariant and satisfies the partial product rule (3.30). Applying (3.30) repeatedly, we get k ∞ ∞ k  ∞





1 k 1 k 1 L (P Q) = (L P )(Lk− Q) (L P )(Lk− Q) = k! k! !(k − )!  k=0 k=0 =0 k=0 =0 ⎛ ⎞ ! ∞ ∞ ∞

1

1

1 j k j k = (L P )(L Q) = ⎝ L P⎠ L Q j!k! j! k! j=0 j,k=0

k=0

= eL P · eL Q.   (1) By Definition 3.7, it is clear that τ2 αL0 + βL1 leaves C[v] and Cm [u, u−1 ; v] invari  (1) ant, so setting L = τ2 αL0 + βL1 concludes the proof. 2 We can now prove our main intertwining result for ΔSO(N,R) . Proof of Theorem 1.4 for SO(N, R). It suffices to show that ΔSO(N,R) PN = [DN P ]N holds for P (u; v) = um q(v), where m ∈ Z \ {0} and q ∈ C[v]. Then PN = Wm · q(V), and by the product rule,

2 (Wm · q(V)) ΔSO(N,R) PN = X X∈βso(N,R)

=



  (XW m ) · q(V) + Wm · Xq(V) X

X∈βso(N,R)

=



2 Wm ) · q(V) + 2XW m · Xq(V) 2 q(V) (X + Wm · X

X∈βso(N,R)

= (ΔSO(N,R) Wm ) · q(V)

m · Xq(V) +2 + Wm · (ΔSO(N,R) q(V)) XW X∈βso(N,R)

(3.34)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.21 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

21

For the first term in (3.34), we consider the cases m ≥ 0 and m < 0 separately. For m ≥ 0, we apply (3.12) to get (ΔSO(N,R) Wm ) · q(V) m(N − 1) =− Wm · q(V) N   m−1 m−1

1

+ 21m≥2 (m − k)Wm−2k − (m − k)Vk Wm−k q(V) N k=1 k=1  ∞ !



∂ + 2 N −1 −k+1 + ∂ + u R P M −k R P + u R =− N ∂u N ∂u u N k=1 N  ∞ ! 

∂ −2 vk uR+ Mu−k R+ P ∂u k=1 N

 ∂ + N −1 2 + + u R P =− + [Z2 P ]N − 2[Y1 P ]N . (3.35) N ∂u N N A similar computation, using (3.13), shows that for m < 0,

 ∂ − N −1 2 u R P (ΔSO(N,R) Wm ) · q(V) = − [Z2− P ]N + 2[Y1− P ]N . N ∂u N N We observe that Y1+ , Z2+ annihilates C[u−1 ] while Y1− , Z2− annihilates C[u], so for all m ∈ Z, (ΔSO(N,R) Wm ) · q(V) = −

N −1 2 [N1 P ]N + [Z2 P ]N − 2[Y1 P ]N . N N

(3.36)

Next, by the chain rule and (3.14), the middle term in (3.34) is

m · Xq(V) XW X∈βso(N,R)

 ∂ k q (V) · XV ∂vk X∈βso(N,R) |k|≥1 ⎛ ⎞ 



∂ ⎝ ⎠ = XWm · XVk q (V) ∂vk |k|≥1 X∈βso(N,R) 

mk ∂ = (W − W ) q (V) m−k m+k N2 ∂vk |k|≥1



   1

∂ m ∂ m ∂ = 2 u u k W−k+1 − Wk+1 q (V) N ∂u ∂u ∂vk N N =



m· XW

|k|≥1

⎡⎛

=

(3.37)

⎞ ⎤

1 ⎣⎝

∂2 ⎠ ⎦ −k+1 k+1 k(u − u ) P N2 ∂u∂vk |k|≥1

N

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.22 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

22

=

1 [K1 P ]N . N2

(3.38)

% ∂ m& where we have used the fact that mWm+j = Wj+1 ∂u u N in line (3.37). Finally, for the last term in (3.34), we have, for each X ∈ βso(N,R) , ⎛



 ∂ 2 q(V) = X ⎝ k⎠ X q (V) · XV ∂vk |k|≥1

=

 ∂ 2 Vk + q (V) · X ∂vk

|k|≥1

 |j|,|k|≥1

∂2 j · XV k . (3.39) q (V) · XV ∂vj ∂vk

In summing the first term in (3.39) over X ∈ βso(N,R) , we break up the sum for positive and negative k. Using (3.12) and (3.13), we have

∞ 

∂ 2 Vk q (V) · X ∂vk

X∈βso(N,R) k=1



=−

N −1

kVk N



k=1

−2

∞ k−1



 ∞ k−1 ∂ ∂ 2

q (V) + (k − )Vk−2 q (V) ∂vk N ∂vk k=2 =1

 (k − )V Vk−

k=2 =1

∂ q (V) ∂vk

(3.40)

and

−1 

∂ 2 Vk q (V) · X ∂vk

X∈βso(N,R) k=−∞

 −1 −2 N −1

2

∂ = kVk q (V) − N ∂vk N k=−∞

 (k − )Vk−2

k=−∞ =k+1



−1

−2

−2

−1

(−k + )V V−k−

k=−∞ =k+1

∂ q (V) ∂vk

∂ q (V). ∂vk

(3.41)

Summing the second term in (3.39) over X ∈ βso(N,R) and using (3.14) yields





X∈βso(N,R) |j|,|k|≥1

=

1 N2

∂2 j · XV k q (V) · XV ∂vj ∂vk

|j|,|k|≥1

 jk(Vk−j − Vk+j )

∂2 q (V) ∂vj ∂vk

(3.42)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.23 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

23

Adding (3.40), (3.41), and (3.42) together and multiplying by Wm , we get

 Wm · (ΔSO(N,R) q(V)) =

 N −1 2 1 N0 + Z1 − 2Y2 + 2 K2 P − N N N N

(3.43)

Combining (3.36), (3.38), and (3.43) proves (1.11) for SO(N, R) (β = 1). Equation (1.12) now follows from Corollary (3.11). 2 Remark 3.13. A similar intertwining formula is proven in [8] for the U(N ) case, where for any P ∈ C[u, u−1 ; v],

 ΔU(N ) PN =

 1 − − L0 − 2 (2K1 + K2 ) P , N N

(3.44)

and, subsequently, for τ ∈ C, −



e 2 ΔU(N ) PN = [e 2 (L0 − N 2 (2K1 +K2 )) P ]N . τ

τ

1

(3.45)

Remark 3.14. In this paper, we are primarily concerned with computing the SegalBargmann transform on trace polynomials. In this setting, if D is a left-invariant differential operator on K, where K is a compact type matrix Lie group, then we can compute the action of eD on a trace polynomial f via the power series D

(e f )(x) =



1 n D f n! n=0

! (x).

(3.46)

τ

In particular, if D = τ2 ΔK , we can define e 2 ΔK f as a function on K using the power τ series definition. In [9], it is shown that e 2 ΔK f has a unique analytic continuation to KC . K Moreover, this analytic continuation is equal to Bs,τ f. Proof of Theorem 1.5 for SO(N, R). By the intertwining formula (1.12), we have (1)

e 2 ΔSO(N,R) PN = [e 2 DN P ]N . τ

τ

(1)

Corollary 3.11 shows that [e 2 DN P ]N is a trace polynomial on SO(N, R), so its analytic continuation to SO(N, C) is given by the same trace polynomial function. Hence (1) τ [e 2 DN P ]N , viewed as a holomorphic function on SO(N, C), is the analytic continuation τ SO(N,R) of e 2 ΔSO(N,R) PN , and so is equal to Bs,τ PN . 2 τ

SO(N,C)

3.3. Intertwining formulas for As,τ

We now prove an intertwining formula for SO(N, C) that is analogous to the intertwining formula for SO(N, R) in Theorem 1.4. To do so, we use the word polynomial

JID:YJFAN 24

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.24 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

space introduced in [8, Notation 3.21], which was used to prove a similar intertwining formula for GLN (C) = U(N )C . Notation 3.15. For m ∈ N, define Em to be the set of words ε : {1, . . . , m} → {±1, ±∗}. We denote the length of a word ε ∈ Em by |ε| = m and set E = ∪m≥0 Em . We define the word polynomial space W to be W = C [{vε }ε∈E ] ,

the space of polynomials in the commuting variables {vε }ε∈Em . For j, k ∈ Z not both zero, we define the words |j| times

|k| times

     ε(j, k) = (±1, . . . , ±1, ±∗, . . . , ±∗) ∈ E|j|+|k| ,

(3.47)

where the first |j| slots contain +1 if j > 0 and −1 if j < 0, and the last |k| slots contain +∗ if k > 0 and −∗ if k < 0. We set vε(0,0) = 1. Notation 3.16. For ε = (ε1 , . . . , εm ) ∈ Em and A ∈ SO(N, C), we define Aε = Aε1 · · · Aεm , where A+∗ := A∗ and A−∗ := (A∗ )−1 . If P ∈ W , we define a function PN : SO(N, C) → C by PN (A) = P (V(A)), where V(A) = {Vε (A) : ε ∈ E } and Vε (A) = tr(Aε ) = tr(Aε1 · · · Aεm ). Notation 3.17. We define the inclusion maps ι, ι∗ : C[v] → W , with ι linear and ι∗ conjugate linear, by ι∗ (vk ) = vε(0,k) .

ι(vk ) = vε(k,0) ,

(3.48)

The maps ι and ι∗ intertwine with the evaluation map in the following way: for Q ∈ C[v], [ι∗ (Q)]N (A) = QN (A)∗ .

[ι(Q)]N (A) = QN (A),

Definition 3.18. The trace degree of a monomial ⎛ deg ⎝

m ( j=1

⎞ vεkjj ⎠ =

'm

m

j=1

j=1

k

vεjj ∈ W is defined to be

|kj ||εj |,

(3.49)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.25 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

25

and the trace degree of an arbitrary element of W is the highest trace degree of any of its monomial terms. For each m ∈ N, we let Wm denote the finite dimensional subspace of W Wm = {P ∈ W : deg(P ) ≤ m} ⊆ C[{vε }|ε|≤m ],

so that W =

$∞ m=1

Wm .

Theorem 3.19. Fix s)∈ R and τ = * t + iθ ∈ C. There are collections {Qs,τ : ε ∈ E }, ε s,τ s,τ {Rε : ε ∈ E }, and Sε,δ : ε, δ ∈ E in W such that: and Rεs,τ are certain finite sums of monomials of trace degree 1. for each ε ∈ E , Qs,τ ε |ε| such that ASO(N,C) Vε = [Qs,τ s,τ ε ]N +

1 s,τ [R ]N , N ε

(3.50)

s,τ 2. for ε, δ ∈ E , Sε,δ is a certain finite sum of monomials of trace degree |ε| + |δ| such that dSO(N,R)

=1

 s−

t 2



  Vε )(X  Vδ ) + t (Y  Vε )(Y  Vδ ) − θ(X  Vε )(Y  Vδ ) = 1 [S s,τ ]N . (X 2 N 2 ε,δ (3.51) d

SO(N,R) For the proof below, we use the following conventions: let βso(N,R) = {X }=1 denote an orthonormal basis for so(N, R), with β+ = βso(N,R) and β− = iβso(N,R) . If X ∈ MN (C) and A ∈ SO(N, C),

(AX)−1 := −XA−1 ,

(AX)1 := AX, (AX)∗ := X ∗ A∗ ,

(AX)−∗ := −A∗ X ∗ .

(3.52)

In addition, we will be liberal in our use of ± to denote a sign that may vary for different terms and on different sides of an equation, since we will not require a precise formula for our word polynomials beyond what is described in the theorem statement. Proof. Fix a word ε = (ε1 , . . . , εm ) ∈ E . Then for each X ∈ β± and A ∈ SO(N, C), ε )(A) = (XV

m

j=1

so

tr(Aε1 · · · (AX)εj · · · Aεm ),

(3.53)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.26 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

26

2 Vε )(A) = (X

m

j=1

+2

tr(Aε1 · · · (AX 2 )εj · · · Aεm )

(3.54)

tr(Aε1 · · · (AX)εj · · · (AX)εk · · · Aεm )

(3.55)

1≤j
Applying magic formula (3.5) to sum each term in (3.54) over β± , we have

tr(Aε1 · · · (AX 2 )εj · · · Aεm ) = ±

X∈β±

N −1 N −1 tr(Aε1 · · · Aεj · · · Aεm ) = ± Vε (A). N N (3.56)

Summing over 1 ≤ j ≤ m now gives m



tr(Aε1 · · · (AX 2 )εj · · · Aεm ) =

X∈β± j=1

N −1 n± (ε)Vε (A), N

(3.57)

where n± (ε) ∈ Z and |n± (ε)| ≤ |ε|. For (3.55), we can express each term in the sum as 0

1

2

tr(Aε1 · · · (AX)εj · · · (AX)εk · · · Aεm ) = ± tr(Aεj,k XAεj,k XAεj,k ),

(3.58)

so ε0j,k , ε1j,k , and ε2j,k are substrings of ε such that ε0j,k ε1j,k ε2j,k = ε. Summing (3.58) over X ∈ β± by magic formula (3.6), we have

tr(Aε1 · · ·(AX)εj · · · (AX)εk · · · Aεm )

X∈β±

=± =±



0 1 2 0 2 1 1 tr(Aεj,k (Aεj,k )−1 Aεj,k ) − tr(Aεj,k Aεj,k ) tr(Aεj,k ) N

1 V 3 (A) ± Vε0j,k ε2j,k (A)Vε1j,k (A), N εj,k

(3.59) (3.60)

0

1

2

where ε3j,k is the word of length |ε| such that Vε3j,k (A) = tr(Aεj,k (Aεj,k )−1 Aεj,k ). Thus summing (3.55) over X ∈ β± gives



tr(Aε1 · · · (AX)εj · · · (AX)εk · · · Aεm )

X∈β± 1≤j
=

1 N





±Vε3j,k (A) +

1≤j
±Vε0j,k ε2j,k (A)Vε1j,k (A).

1≤j
We define word polynomials corresponding to (3.61) by Q± ε,0 =

1≤j
±vε3j,k

(3.61)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.27 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••



Q± ε,1 =

±vε0j,k ε2j,k vε1j,k .

27

(3.62)

1≤j
Now if X ∈ β+ and Y = iX ∈ β− , then (Y Vε )(A) =

m

tr(Aε1 · · · (AY )εj · · · Aεm ) = i

j=1

m

± tr(Aε1 · · · (AX)εj · · · Aεm ), (3.63)

j=1

and Y Vε )(A) =i (X

m

j=1

+ 2i

± tr(Aε1 · · · (AX 2 )εj · · · Aεm )

± tr(Aε1 · · · (AX)εj · · · (AX)εk · · · Aεm ).

(3.64) (3.65)

1≤j
A similar argument to the above shows that summing (3.64) over X ∈ β+ gives m



± tr(Aε1 · · · (AX 2 )εj · · · Aεm ) =

X∈β+ j=1

N −1 η(ε)Vε (A) N

(3.66)

where η(ε) ∈ Z and |η(ε)| ≤ |ε|. In addition, summing (3.65) over X ∈ β+ , we have



± tr(Aε1 · · · (AX)εj · · · (AX)εk · · · Aεm )

X∈β+ 1≤j
=

1 N



Vε3j,k (A) +

1≤j


(3.67) ±Vε0j,k ε2j,k (A)Vε1j,k (A).

1≤j
We define corresponding word polynomials Qε,2 = i



±vε3j,k

(3.68)

±vε0j,k ε2j,k vε1j,k .

(3.69)

1≤j
Qε,3 = i



1≤j
Putting this together, we define  Qs,τ ε

=

t s− 2

 Rεs,τ =

s−

t 2





 t  n+ (ε)vε + 2Q+ n− (ε)vε + 2Q− ε,1 + ε,1 − θ (iη(ε) + 2Qε,3 ) , (3.70) 2



 t  −n+ (ε)vε + 2Q+ −n− (ε)vε + 2Q− ε,0 + ε,0 − θ (−iη(ε) + 2Qε,2 ) . 2 (3.71)

By construction, Qs,τ and Rεs,τ are of the required form and satisfy (3.50). ε

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.28 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

28

For part (2), fix δ ∈ E with n = |δ|. For each X ∈ β± , δ )(A)(XV ε )(A) = (XV

m

n

tr(Aε1 · · · (AX)εj · · · Aεm ) tr(Aδ1 · · · (AX)δk · · · Aδn ).

j=1 k=1

(3.72) Using the invariance of the trace under cyclic permutations, we can write the terms in the above sum as ± tr(XAε(j) ) tr(XAδ(k) ), where ε(j) and δ(k) are particular cyclic permutations of ε and δ. Summing over X ∈ β± and applying magic formula (3.8), along with the fact that X ∈ so(N, C) is skewsymmetric,

X∈β±

m

n  

ε(j) −1 δ(k) ε(j) δ(k) δ )(A)(XV ε )(A) = 1 (XV ± tr((A ) A ) − tr(A A N 2 j=1 k=1

=

1 N2

m

n

±(Vε( j) δ(k) (A) − Vε(j)δ(k) (A)),

(3.73)

j=1 k=1

where ε(j) is the word satisfying Vε(j) (A) = Vε(j) (A)−1 . (For example, if ε(j) = (−∗, −1, ∗, 1, 1), then ε(j) = (−1, −1, −∗, 1, ∗).) We define the associated word polynomials ± Sε,δ =

m

n

±(vε( j) δ(k) − vε(j)δ(k) ).

(3.74)

j=1 k=1

Next, if X ∈ β+ and Y = iX ∈ β− , then ε )(A)(Y Vδ )(A) = i (XV

m

n

± tr(XAε(j) ) tr(XAδ(k) ).

(3.75)

j=1 k=1

Using magic formula (3.8) to sum over X ∈ β+ gives dSO(N,R)

=1

m n i

(X Vε )(A)(Y Vδ )(A) = 2 ±(Vε( j) δ(k) (A) − Vε(j)δ(k) (A)). (3.76) N j=1 k=1

Hence we define the associated word polynomial  Sε,δ =i

m

n

j=1 k=1

±(vε( j) δ(k) − vε(j)δ(k) )

(3.77)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.29 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

29

(where the signs in (3.77) do not necessarily correspond to those in (3.74)). Finally, we define  t t − s,τ +  Sε,δ Sε,δ = s− + Sε,δ − θSε,δ , (3.78) 2 2 s,τ where we have constructed Sε,δ so that it is a sum of monomials of trace degree |ε| + |δ| and satisfies (3.51). 2 SO(N,C)

). Fix*s ∈ R and τ = t + iθ ∈ C, and Theorem 3.20 (Intertwining formula for)As,τ s,τ s,τ s,τ let {Qε : ε ∈ E }, {Rε : ε ∈ E }, and Sε,δ : ε, δ ∈ E be as in Theorem 3.19. Define first and second order differential operators 1 s,τ ∂ L s,τ Qε 0 = 2 ∂vε

(3.79)

1 s,τ ∂ L s,τ Rε 1 = 2 ∂vε

(3.80)

1 s,τ ∂ 2 L s,τ Sε,δ 2 = 2 ∂vε ∂vδ

(3.81)

ε∈E

ε∈E

ε,δ∈E

Then for all N ∈ N and P ∈ W , 1 SO(N,C) A PN = 2 s,τ



1 s,τ 1 L1 + 2 L s,τ L s,τ 0 + N N 2

 P .

(3.82)

N

Proof. For each X ∈ MN (C), we apply the chain rule to get  ∂P (V) · (XVε ) ∂vε ε∈E

 ∂P

 ∂2P 2 Vε ) + ε )(XV δ ). (V) · (X (V) · (XV = ∂vε ∂vε ∂vδ

2 PN = X



X



ε∈E

ε,δ∈E

Hence ASO(N,C) PN = s,τ

 ∂P ε∈E

+

∂vε

(V) · ASO(N,C) Vε s,τ

dSO(N,R) 

 ∂2P t  Vε )(X  Vδ ) (V) s− (X ∂vε ∂vδ 2

ε,δ∈E

=1

The result now follows from Theorem 3.19. 2

 t + (Y Vε )(Y Vδ ) − θ(X Vε )(Y Vδ ) . 2

JID:YJFAN 30

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.30 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

The following result will be useful for the proof of Theorem 1.6. It appears in [8] for the GL(N, C) case, with the proof virtually unchanged for SO(N, C). Lemma 3.21 ([8, Lemma 3.28]). There exists a sesquilinear form (with the second argument conjugate linear) B : C[u, u−1 ; v] × C[u, u−1 ; v] → W such that, for all P, Q ∈ C[u, u−1 ; v], we have deg(B(P, Q)) = deg(P ) + deg(Q) and [B(P, Q)]N (A) = tr[PN (A)QN (A)∗ ]

for all A ∈ SO(N, C).

4. Limit theorems for the Segal-Bargmann transform on SO(N, R) We now use the results from the previous section to identify the limit of the SegalBargmann transform on SO(N, R) as N → ∞. In Section 4.1, we prove a few preliminary SO(N,R) SO(N,C) results regarding the way in which the heat kernel measures ρs and μs,τ concentrate their mass as N → ∞. Then, in Section 4.2, we prove the SO(N, R) version of SO(N,R) Theorem 1.6, the main limit theorem for Bs,τ . 4.1. Concentration of measures The following lemma is an essential component of our main limit theorems. It is a known result (see [12, Corollary 6.2.32]); we include a direct proof here for convenience. Lemma 4.1. Let X, Y ∈ M (N, C) and suppose  ·  is a submultiplicative matrix norm. Then eX+Y − eX  ≤ Y e X e Y . Proof. For n ≥ 0, note that (X + Y )n − X n  ≤ (X + Y )n − Xn , where the inequality follows by expanding (X +Y )n , which includes an X n , then applying the submultiplicativity of the matrix norm, and then recombining terms. Hence eX+Y

+∞ + ∞ + (X + Y )n

n+ X + + − − eX  = + + + + n! n! n=0 n=0 ∞

1 ≤ (X + Y )n − X n  n! n=0

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.31 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••



31



1 [(X + Y )n − Xn ] n! n=0

= e X + Y − e X . It remains to show that e X + Y −e X ≤ Y e X e Y , which is equivalent to showing that ey − 1 ≤ yey for y ≥ 0. Rearranging, this inequality is equivalent to 1 − y ≤ e−y , which holds, in fact, for all y ∈ R: by Bernoulli’s inequality,  y n ≥ 1 − y. 1− n→∞ n

e−y = lim

2

Corollary 4.2. Let V be a finite-dimensional normed vector space over C and suppose that L_0, L_1, and L_2 are operators on V. Then there exist constants C_1, C_2, C_3 < ∞ depending on L_0, L_1, L_2, ‖·‖_V such that for N sufficiently large,
\[
\bigl\|e^{L_0+\frac1N L_1+\frac1{N^2}L_2} - e^{L_0}\bigr\|_{\mathrm{End}(V)} \le \frac{C_1}{N}, \tag{4.1}
\]
\[
\bigl\|e^{L_0+\frac1N L_1+\frac1{N^2}L_2} - e^{L_0+\frac1N L_1}\bigr\|_{\mathrm{End}(V)} \le \frac{C_2}{N^2}, \tag{4.2}
\]
\[
\bigl\|e^{L_0+\frac1N L_1} - e^{L_0}\bigr\|_{\mathrm{End}(V)} \le \frac{C_3}{N}, \tag{4.3}
\]
where ‖·‖_{End(V)} is the operator norm on End(V) induced by ‖·‖_V. Hence if φ ∈ V* is a linear functional, then for N sufficiently large,
\[
\bigl|\varphi\bigl(e^{L_0+\frac1N L_1+\frac1{N^2}L_2}v\bigr) - \varphi\bigl(e^{L_0}v\bigr)\bigr| \le \frac{C_1}{N}\,\|\varphi\|_{V^*}\,\|v\|_V, \tag{4.4}
\]
\[
\bigl|\varphi\bigl(e^{L_0+\frac1N L_1+\frac1{N^2}L_2}v\bigr) - \varphi\bigl(e^{L_0+\frac1N L_1}v\bigr)\bigr| \le \frac{C_2}{N^2}\,\|\varphi\|_{V^*}\,\|v\|_V, \tag{4.5}
\]
\[
\bigl|\varphi\bigl(e^{L_0+\frac1N L_1}v\bigr) - \varphi\bigl(e^{L_0}v\bigr)\bigr| \le \frac{C_3}{N}\,\|\varphi\|_{V^*}\,\|v\|_V, \tag{4.6}
\]
where ‖·‖_{V*} is the dual norm on V*.

Proof. We set X = L_0 and Y = \frac1N L_1 + \frac1{N^2}L_2 as in Lemma 4.1, so
\[
\bigl\|e^{L_0+\frac1N L_1+\frac1{N^2}L_2}-e^{L_0}\bigr\|_{\mathrm{End}(V)}
\le \Bigl\|\tfrac1N L_1+\tfrac1{N^2}L_2\Bigr\|_{\mathrm{End}(V)}\, e^{\|L_0\|_{\mathrm{End}(V)}}\, e^{\|\frac1N L_1+\frac1{N^2}L_2\|_{\mathrm{End}(V)}}
\le \frac1N\Bigl(\|L_1\|_{\mathrm{End}(V)}+\tfrac1N\|L_2\|_{\mathrm{End}(V)}\Bigr)\, e^{\|L_0\|_{\mathrm{End}(V)}}\, e^{\frac1N\|L_1\|_{\mathrm{End}(V)}+\frac1{N^2}\|L_2\|_{\mathrm{End}(V)}}.
\]
Choose N_0 ∈ N such that e^{\frac1{N_0}\|L_1\|_{\mathrm{End}(V)}+\frac1{N_0^2}\|L_2\|_{\mathrm{End}(V)}} ≤ 2. Then (4.1) holds for all N ≥ N_0 by setting C_1 = 2\bigl(\|L_1\|_{\mathrm{End}(V)}+\|L_2\|_{\mathrm{End}(V)}\bigr)e^{\|L_0\|_{\mathrm{End}(V)}}. Equation (4.4) then follows. The proofs of (4.2), (4.3), and subsequently (4.5) and (4.6), are similar. □
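The rates in Corollary 4.2 are easy to observe concretely. In the sketch below (illustrative only; the random matrices standing in for L_0, L_1, L_2 are arbitrary and are not the operators of Theorem 3.20), the rescaled errors N·(4.1) and N²·(4.2) stabilize as N grows. NumPy and SciPy are assumed.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d = 5
L0, L1, L2 = (rng.standard_normal((d, d)) for _ in range(3))
op = lambda M: np.linalg.norm(M, 2)          # operator norm on End(V), V = C^d

for N in (10, 100, 1000, 10000):
    full = expm(L0 + L1 / N + L2 / N**2)
    err1 = op(full - expm(L0))               # left side of (4.1): O(1/N)
    err2 = op(full - expm(L0 + L1 / N))      # left side of (4.2): O(1/N^2)
    print(f"N={N:6d}   N*err1={N * err1:.4f}   N^2*err2={N**2 * err2:.4f}")
```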


We now introduce notation and establish a few preliminary results which we will use for proving Theorem 1.6 for SO(N, R) in the next section.

Definition 4.3. Define a family of holomorphic functions {ν_k}_{k∈Z} on C by setting ν_0(τ) ≡ 1 and, for k ≠ 0,
\[
\nu_k(\tau) = e^{-\frac{|k|}{2}\tau}\sum_{j=0}^{|k|-1}\frac{(-\tau)^j}{j!}\,|k|^{\,j-1}\binom{|k|}{j+1}. \tag{4.7}
\]
For τ ∈ C, define the trace evaluation map π_τ : C[u, u^{-1}; v] → C[u, u^{-1}] by
\[
(\pi_\tau P)(u) = P(u; v)\big|_{v_k=\nu_k(\tau),\,k\neq 0}. \tag{4.8}
\]
For a concrete example, if P(u; v) = v_3 v_{-7}^2 u^6 + 9 v_1 v_{-4}^5, then
\[
(\pi_\tau P)(u) = \nu_3(\tau)\,\nu_{-7}(\tau)^2 u^6 + 9\,\nu_1(\tau)\,\nu_{-4}(\tau)^5.
\]

The following result is a key tool in proving our main limit theorems.

Theorem 4.4 (Biane, [3]). For each s > 0 and k ∈ Z,
\[
\lim_{N\to\infty}\int_{U(N)} \operatorname{tr}(U^k)\,\rho_s^{U(N)}(U)\,dU = \nu_k(s).
\]

For P ∈ C[u, u^{-1}; v], let P(u; 1) = P(u; v)|_{v_k=1, k≠0}. Using the intertwining formula (3.45) for the U(N) case and Theorem 4.4, we have
\[
\nu_k(s) = \lim_{N\to\infty}\int_{U(N)} \operatorname{tr}(U^k)\,\rho_s^{U(N)}(U)\,dU
= \lim_{N\to\infty}\bigl(e^{\frac s2\Delta_{U(N)}}\operatorname{tr}[(\cdot)^k]\bigr)(I_N) \tag{4.9}
\]
\[
= \lim_{N\to\infty}\bigl(e^{\frac s2(L_0-\frac1{N^2}(2K_1^-+K_2^-))}v_k\bigr)(1). \tag{4.10}
\]
Since L_0, K_1^-, and K_2^- are linear operators which preserve the finite-dimensional vector space C_k[u, u^{-1}; v], we consider (e^{\frac s2(L_0-\frac1{N^2}(2K_1^-+K_2^-))}v_k)(1) to be the evaluation of the linear functional φ(Q) := Q(1) on this finite-dimensional space. Hence Corollary 4.2 applies, so taking the limit as N → ∞, we have
\[
\nu_k(s) = \bigl(e^{\frac s2 L_0}v_k\bigr)(1). \tag{4.11}
\]
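The moments (4.7) are elementary to evaluate. The following Python sketch, provided purely as an illustration (it assumes only NumPy and the standard library, and the function name nu is ours), implements ν_k(τ) and checks it against the closed forms ν_1(τ) = e^{−τ/2}, which reappears in Example 4.11, and ν_2(τ) = e^{−τ}(1 − τ).

```python
import numpy as np
from math import comb, factorial

def nu(k, tau):
    """nu_k(tau) as in (4.7); nu_0 = 1, and the formula depends only on |k|."""
    k = abs(k)
    if k == 0:
        return 1.0
    s = sum((-tau) ** j / factorial(j) * k ** (j - 1) * comb(k, j + 1)
            for j in range(k))
    return np.exp(-k * tau / 2) * s

for tau in (0.3, 1.0, 2.5):
    assert np.isclose(nu(1, tau), np.exp(-tau / 2))
    assert np.isclose(nu(2, tau), np.exp(-tau) * (1 - tau))
    assert np.isclose(nu(-3, tau), nu(3, tau))
print("nu_k sanity checks passed")
```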

This yields the following version of Theorem 4.4 for the SO(N, R) case, which is due to Lévy [13].


Lemma 4.5 (Lévy, [13]). For s > 0 and k ∈ Z,
\[
\lim_{N\to\infty}\int_{SO(N,\mathbb{R})} \operatorname{tr}(A^k)\,\rho_s^{SO(N,\mathbb{R})}(A)\,dA = \nu_k(s). \tag{4.12}
\]

Proof. We have
\[
\lim_{N\to\infty}\int_{SO(N,\mathbb{R})} \operatorname{tr}(A^k)\,\rho_s^{SO(N,\mathbb{R})}(A)\,dA
= \lim_{N\to\infty}\bigl(e^{\frac s2\Delta_{SO(N,\mathbb{R})}}\operatorname{tr}[(\cdot)^k]\bigr)(I_N)
= \lim_{N\to\infty}\bigl(e^{\frac s2(L_0+\frac1N L_1^{(1)}+\frac1{N^2}L_2^{(1)})}v_k\bigr)(1)
= \bigl(e^{\frac s2 L_0}v_k\bigr)(1) = \nu_k(s),
\]
where we have again applied Corollary 4.2 as in (4.11). □

Lemma 4.6. For Q ∈ C[v] and τ ∈ C, we have
\[
\bigl(e^{\frac\tau2 L_0}Q\bigr)(1) = \pi_\tau Q. \tag{4.13}
\]

Proof. By Theorem 3.9, e^{\frac\tau2 L_0} is a homomorphism of C[v]. Hence it suffices to show that
\[
\bigl(e^{\frac\tau2 L_0}v_k\bigr)(1) = \pi_\tau(v_k) = \nu_k(\tau) \tag{4.14}
\]
for each k ∈ Z. To see that it holds for all τ ∈ C, we expand the left-hand side as a power series in τ. Fix a norm ‖·‖_{W_k} on the finite-dimensional vector space W_k := C_k[u, u^{-1}; v]. Let ‖·‖_{End(W_k)} denote the operator norm on End(W_k) induced by ‖·‖_{W_k}, and let ‖·‖_{W_k^*} denote the corresponding dual norm on W_k^*. Let φ be the linear functional acting on W_k defined by φ(P) = P(1). Then for any τ ∈ C,
\[
\Bigl|\bigl(e^{\frac\tau2 L_0}v_k\bigr)(1)\Bigr|
= \Bigl|\sum_{n=0}^\infty\frac1{n!}\Bigl(\frac\tau2\Bigr)^n\varphi\bigl((L_0)^n v_k\bigr)\Bigr|
\le \sum_{n=0}^\infty\frac1{n!}\Bigl(\frac{|\tau|}2\Bigr)^n\|\varphi\|_{W_k^*}\,\|L_0\|_{\mathrm{End}(W_k)}^n\,\|v_k\|_{W_k},
\]
and the right-hand side converges by the ratio test. Thus the function τ ↦ (e^{\frac\tau2 L_0}v_k)(1) is analytic on the complex plane. By (4.11), this function agrees with the entire function τ ↦ ν_k(τ) for τ ∈ R, and so it also agrees for all τ ∈ C. □

Lemma 4.7. Let s > 0 and τ = t + iθ ∈ D(s, s). For any Q ∈ C[v],
\[
\bigl(e^{L_0^{s,\tau}}\iota(Q)\bigr)(1) = \pi_{s-\tau}Q, \tag{4.15}
\]
\[
\bigl(e^{L_0^{s,\tau}}\iota^*(Q)\bigr)(1) = \pi_{s-\tau}Q. \tag{4.16}
\]


Proof. First, observe that if f : SO(N, C) → M_N(C) is holomorphic, then \widetilde{JX}f = i\widetilde{X}f for all X ∈ so(N, R). Thus
\[
A_{s,\tau}^{SO(N,\mathbb{C})}f\big|_{SO(N,\mathbb{R})}
= \sum_{X\in\beta_{so(N,\mathbb{R})}}\Bigl[\Bigl(s-\frac t2\Bigr)\widetilde{X}^2 - \frac t2\widetilde{X}^2 - i\theta\widetilde{X}^2\Bigr]f
= (s-\tau)\,\Delta_{SO(N,\mathbb{R})}f.
\]
In particular, since Q_N is a trace polynomial,
\[
e^{\frac12 A_{s,\tau}^{SO(N,\mathbb{C})}}Q_N = e^{\frac12(s-\tau)\Delta_{SO(N,\mathbb{R})}}Q_N. \tag{4.17}
\]
Applying intertwining formulas (3.82) and (1.12) to the left side and using (3.49) gives
\[
\Bigl[e^{L_0^{s,\tau}+\frac1N L_1^{s,\tau}+\frac1{N^2}L_2^{s,\tau}}\iota(Q)\Bigr]_N = \Bigl[e^{\frac12(s-\tau)D_N^{(1)}}Q\Bigr]_N. \tag{4.18}
\]
Hence for Q ∈ C[v],
\[
\Bigl(e^{L_0^{s,\tau}+\frac1N L_1^{s,\tau}+\frac1{N^2}L_2^{s,\tau}}\iota(Q)\Bigr)(1)
= \Bigl(e^{\frac12(s-\tau)(L_0+\frac1N L_1^{(1)}+\frac1{N^2}L_2^{(1)})}Q\Bigr)(1). \tag{4.19}
\]
If deg Q = m, then we can view the left and right sides as the evaluation of a linear functional φ(P) = P(1) on the finite-dimensional spaces W_m and C_m[v], respectively. Thus we may apply Corollary 4.2 to take the limit as N → ∞, which yields
\[
\bigl(e^{L_0^{s,\tau}}\iota(Q)\bigr)(1) = \bigl(e^{\frac12(s-\tau)L_0}Q\bigr)(1). \tag{4.20}
\]
By Lemma 4.6, the expression on the right is π_{s−τ}Q. Equation (4.16) is proved similarly, using the fact that if f : SO(N, C) → M_N(C) is antiholomorphic, then \widetilde{JX}f = -i\widetilde{X}f for all X ∈ so(N, R). □

Theorem 4.8. Let s > 0, τ ∈ D(s, s), and k ∈ Z. Then
\[
\lim_{N\to\infty}\int_{SO(N,\mathbb{C})}\operatorname{tr}(A^k)\,\mu_{s,\tau}^{SO(N,\mathbb{C})}(A)\,dA = \nu_k(s-\tau).
\]

Proof. We use (2.4) and intertwining formula (3.82) to get
\[
\int_{SO(N,\mathbb{C})}\operatorname{tr}(A^k)\,\mu_{s,\tau}^{SO(N,\mathbb{C})}(A)\,dA
= \Bigl(e^{\frac12 A_{s,\tau}^{SO(N,\mathbb{C})}}\operatorname{tr}[(\cdot)^k]\Bigr)(I_N)
= \Bigl(e^{L_0^{s,\tau}+\frac1N L_1^{s,\tau}+\frac1{N^2}L_2^{s,\tau}}\iota(v_k)\Bigr)(1). \tag{4.21}
\]


Viewing the last expression as a linear functional acting on ι(v_k) ∈ W_k, we apply Corollary 4.2 to take the limit as N → ∞. Combining this with Lemma 4.7, we have
\[
\lim_{N\to\infty}\int_{SO(N,\mathbb{C})}\operatorname{tr}(A^k)\,\mu_{s,\tau}^{SO(N,\mathbb{C})}(A)\,dA
= \bigl(e^{L_0^{s,\tau}}\iota(v_k)\bigr)(1) = \pi_{s-\tau}v_k = \nu_k(s-\tau). \qquad\square
\]

4.2. The free Segal-Bargmann transform for SO(N, R)

In this section, we prove our main result on the large-N limit of the Segal-Bargmann transform for SO(N, R). The free Segal-Bargmann transform G_{s,τ} and the free inverse Segal-Bargmann transform H_{s,τ} appearing in Theorem 1.6 have explicit formulas, which we first describe.

Definition 4.9. Let L_0 be the pseudodifferential operator on C[u, u^{-1}; v] from Definition 3.7. We define the free Segal-Bargmann transform to be the map G_{s,τ} : C[u, u^{-1}] → C[u, u^{-1}] given by
\[
G_{s,\tau} = \pi_{s-\tau}\circ e^{\frac\tau2 L_0}, \tag{4.22}
\]
and we define the free inverse Segal-Bargmann transform to be the map H_{s,τ} : C[u, u^{-1}] → C[u, u^{-1}] given by
\[
H_{s,\tau} = \pi_s\circ e^{-\frac\tau2 L_0}. \tag{4.23}
\]

Before proving Theorem 1.6, we require the following result.

Theorem 4.10. Let s > 0 and τ = t + iθ ∈ D(s, s). For any P ∈ C[u, u^{-1}; v],
\[
\bigl\|P_N - [\pi_s P]_N\bigr\|^2_{L^2(\rho_s^{SO(N,\mathbb{R})})} = O\Bigl(\frac1{N^2}\Bigr), \quad\text{and} \tag{4.24}
\]
\[
\bigl\|P_N - [\pi_{s-\tau}P]_N\bigr\|^2_{L^2(\mu_{s,\tau}^{SO(N,\mathbb{C})})} = O\Bigl(\frac1{N^2}\Bigr). \tag{4.25}
\]
This result shows that with respect to the heat kernel measure, the space of trace polynomials concentrates onto the space of polynomials as N → ∞.

Proof. We first show (4.25). It suffices to prove the result for polynomials of the form P(u; v) = u^k Q(v) for k ∈ Z and Q ∈ C[v]; (4.25) then holds on all of C[u, u^{-1}; v] by the triangle inequality. For such a polynomial, P(u; v) − π_{s-τ}P(u; v) = u^k[Q(v) − π_{s-τ}Q] = u^k R_{s-τ}, where we let R_{s-τ} = Q − π_{s-τ}Q. Hence for A ∈ SO(N, C),


\[
\bigl\|P_N(A) - [\pi_{s-\tau}P]_N(A)\bigr\|^2_{M_N(\mathbb{C})}
= \operatorname{tr}\bigl(A^k[R_{s-\tau}]_N(A)[R_{s-\tau}]_N(A)^* A^{*k}\bigr)
= \operatorname{tr}(A^kA^{*k})\,[R_{s-\tau}]_N(A)[R_{s-\tau}]_N(A)^*
= \bigl[v_{\varepsilon(k,k)}\,\iota(R_{s-\tau})\,\iota^*(R_{s-\tau})\bigr]_N(A). \tag{4.26}
\]
Along with intertwining formula (3.82), this allows us to compute
\[
\bigl\|P_N - [\pi_{s-\tau}P]_N\bigr\|^2_{L^2(\mu_{s,\tau}^{SO(N,\mathbb{C})})}
= \Bigl(e^{\frac12 A_{s,\tau}^{SO(N,\mathbb{C})}}\bigl\|P_N - [\pi_{s-\tau}P]_N\bigr\|^2_{M_N(\mathbb{C})}\Bigr)(I_N) \tag{4.27}
\]
\[
= \Bigl(e^{L_0^{s,\tau}+\frac1N L_1^{s,\tau}+\frac1{N^2}L_2^{s,\tau}}\bigl(v_{\varepsilon(k,k)}\,\iota(R_{s-\tau})\,\iota^*(R_{s-\tau})\bigr)\Bigr)(1). \tag{4.28}
\]
By the triangle inequality, the last line is bounded by
\[
\Bigl|\bigl(e^{D_N^{s,\tau}}(v_{\varepsilon(k,k)}\iota(R_{s-\tau})\iota^*(R_{s-\tau}))\bigr)(1) - \bigl(e^{L_0^{s,\tau}+\frac1N L_1^{s,\tau}}(v_{\varepsilon(k,k)}\iota(R_{s-\tau})\iota^*(R_{s-\tau}))\bigr)(1)\Bigr| \tag{4.29}
\]
\[
+\ \Bigl|\bigl(e^{L_0^{s,\tau}+\frac1N L_1^{s,\tau}}(v_{\varepsilon(k,k)}\iota(R_{s-\tau})\iota^*(R_{s-\tau}))\bigr)(1)\Bigr|. \tag{4.30}
\]
If m = deg(v_{ε(k,k)}ι(R_{s-τ})ι^*(R_{s-τ})), we can interpret (4.29) as the evaluation of the linear functional φ(R) = R(1) on W_m, so by Corollary 4.2,
\[
\Bigl|\bigl(e^{D_N^{s,\tau}}(v_{\varepsilon(k,k)}\iota(R_{s-\tau})\iota^*(R_{s-\tau}))\bigr)(1) - \bigl(e^{L_0^{s,\tau}+\frac1N L_1^{s,\tau}}(v_{\varepsilon(k,k)}\iota(R_{s-\tau})\iota^*(R_{s-\tau}))\bigr)(1)\Bigr| = O\Bigl(\frac1{N^2}\Bigr). \tag{4.31}
\]
To bound (4.30), recall from Theorem 3.20 that L_0^{s,τ} + \frac1N L_1^{s,τ} is a first-order differential operator on W_m, so e^{L_0^{s,\tau}+\frac1N L_1^{s,\tau}} is an algebra homomorphism, i.e.
\[
e^{L_0^{s,\tau}+\frac1N L_1^{s,\tau}}\bigl(v_{\varepsilon(k,k)}\,\iota(R_{s-\tau})\,\iota^*(R_{s-\tau})\bigr)
= e^{L_0^{s,\tau}+\frac1N L_1^{s,\tau}}v_{\varepsilon(k,k)}\cdot e^{L_0^{s,\tau}+\frac1N L_1^{s,\tau}}\iota(R_{s-\tau})\cdot e^{L_0^{s,\tau}+\frac1N L_1^{s,\tau}}\iota^*(R_{s-\tau}). \tag{4.32}
\]
For any T ∈ W,
\[
\bigl|\bigl(e^{L_0^{s,\tau}+\frac1N L_1^{s,\tau}}T\bigr)(1)\bigr|
\le \bigl|\bigl(e^{L_0^{s,\tau}+\frac1N L_1^{s,\tau}}T\bigr)(1) - \bigl(e^{L_0^{s,\tau}}T\bigr)(1)\bigr| + \bigl|\bigl(e^{L_0^{s,\tau}}T\bigr)(1)\bigr|. \tag{4.33}
\]
By Corollary 4.2, the first term on the right side of (4.33) is O(1/N), while the second term is a constant not depending on N. Moreover, by Lemma 4.7,
\[
\bigl(e^{L_0^{s,\tau}}\iota(R_{s-\tau})\bigr)(1) = \bigl(e^{L_0^{s,\tau}}\iota^*(R_{s-\tau})\bigr)(1) = 0. \tag{4.34}
\]
Applying (4.33) to each term in the product in (4.32) and using (4.34), we have that (4.30) is O(1/N²). Combining this with (4.31) shows (4.25).


Finally, observe that \frac s2\Delta_{SO(N,\mathbb{R})} = \frac12 A^{SO(N,\mathbb{C})}_{s,0}. Setting τ = 0 in (4.27) and restricting to SO(N, R) shows (4.24). □

We can now prove Theorem 1.6 for SO(N, R).

Proof of Theorem 1.6 for SO(N, R). Let f ∈ C[u, u^{-1}]. Using Theorem 1.5 to rewrite B^{SO(N,R)}_{s,τ}f_N and applying the triangle inequality, we have
\[
\bigl\|B^{SO(N,\mathbb{R})}_{s,\tau}f_N - [G_{s,\tau}f]_N\bigr\|_{L^2(\mu^{SO(N,\mathbb{C})}_{s,\tau})}
= \bigl\|[e^{\frac\tau2 D_N^{(1)}}f]_N - [\pi_{s-\tau}\circ e^{\frac\tau2 L_0}f]_N\bigr\|_{L^2(\mu^{SO(N,\mathbb{C})}_{s,\tau})}
\]
\[
\le \bigl\|[e^{\frac\tau2 D_N^{(1)}}f]_N - [e^{\frac\tau2 L_0}f]_N\bigr\|_{L^2(\mu^{SO(N,\mathbb{C})}_{s,\tau})} \tag{4.35}
\]
\[
+\ \bigl\|[e^{\frac\tau2 L_0}f]_N - [\pi_{s-\tau}\circ e^{\frac\tau2 L_0}f]_N\bigr\|_{L^2(\mu^{SO(N,\mathbb{C})}_{s,\tau})}. \tag{4.36}
\]
By Theorem 4.10, (4.36) is O(1/N), so it remains to show that
\[
\bigl\|[e^{\frac\tau2 D_N^{(1)}}f]_N - [e^{\frac\tau2 L_0}f]_N\bigr\|^2_{L^2(\mu^{SO(N,\mathbb{C})}_{s,\tau})} = O\Bigl(\frac1{N^2}\Bigr). \tag{4.37}
\]
To this end, let m = deg f and R^{(N)} = e^{\frac\tau2 D_N^{(1)}}f - e^{\frac\tau2 L_0}f ∈ C_m[u, u^{-1}; v]. Recalling (1.8) and the sesquilinear form B of Lemma 3.21, we have
\[
\bigl\|[e^{\frac\tau2 D_N^{(1)}}f]_N - [e^{\frac\tau2 L_0}f]_N\bigr\|^2_{L^2(\mu^{SO(N,\mathbb{C})}_{s,\tau})}
= \bigl\|[R^{(N)}]_N\bigr\|^2_{L^2(\mu^{SO(N,\mathbb{C})}_{s,\tau})}
= \Bigl(e^{\frac12 A^{SO(N,\mathbb{C})}_{s,\tau}}\bigl\|[R^{(N)}]_N\bigr\|^2_{M_N(\mathbb{C})}\Bigr)(I_N)
= \bigl(e^{D_N^{s,\tau}}B(R^{(N)},R^{(N)})\bigr)(1). \tag{4.38}
\]
Fix norms ‖·‖_{W_{2m}} and ‖·‖_{C_m[u,u^{-1};v]} on W_{2m} and C_m[u, u^{-1}; v], respectively. By Corollary 4.2 applied to the linear functional φ(P) = P(1) on W_{2m}, there exists a constant C = C(m, s, τ) (not dependent on N) such that
\[
\Bigl|\bigl(e^{D_N^{s,\tau}}B(R^{(N)},R^{(N)})\bigr)(1) - \bigl(e^{L_0^{s,\tau}}B(R^{(N)},R^{(N)})\bigr)(1)\Bigr| \le \frac CN\,\bigl\|B(R^{(N)},R^{(N)})\bigr\|_{W_{2m}}. \tag{4.39}
\]
Next, using the linear functional ψ(P) = (e^{L_0^{s,\tau}}P)(1) on W_{2m}, we have
\[
\Bigl|\bigl(e^{L_0^{s,\tau}}B(R^{(N)},R^{(N)})\bigr)(1)\Bigr| \le \|\psi\|_{W_{2m}^*}\,\bigl\|B(R^{(N)},R^{(N)})\bigr\|_{W_{2m}}. \tag{4.40}
\]
Putting (4.38), (4.39), and (4.40) together yields
\[
\bigl\|[e^{\frac\tau2 D_N^{(1)}}f]_N - [e^{\frac\tau2 L_0}f]_N\bigr\|^2_{L^2(\mu^{SO(N,\mathbb{C})}_{s,\tau})}
\le \Bigl(\frac CN + \|\psi\|_{W_{2m}^*}\Bigr)\bigl\|B(R^{(N)},R^{(N)})\bigr\|_{W_{2m}}. \tag{4.41}
\]


Since B : C_m[u, u^{-1}; v] × C_m[u, u^{-1}; v] → W_{2m} is a sesquilinear form with finite-dimensional domain and range, there exists a constant C′ = C′(m) (not dependent on N) such that
\[
\bigl\|B(P,Q)\bigr\|_{W_{2m}} \le C'\,\|P\|_{C_m[u,u^{-1};v]}\,\|Q\|_{C_m[u,u^{-1};v]} \quad\text{for all } P, Q \in C_m[u,u^{-1};v]. \tag{4.42}
\]
A final application of Corollary 4.2 shows that there exists a constant C″ depending on L_0, L_1^{(1)}, L_2^{(1)}, τ, ‖·‖_{C_m[u,u^{-1};v]}, and f, but again, not on N, such that
\[
\bigl\|R^{(N)}\bigr\|_{C_m[u,u^{-1};v]} = \bigl\|e^{\frac\tau2 D_N^{(1)}}f - e^{\frac\tau2 L_0}f\bigr\|_{C_m[u,u^{-1};v]} \le \frac{C''}{N}. \tag{4.43}
\]
Hence
\[
\bigl\|B(R^{(N)},R^{(N)})\bigr\|_{W_{2m}} \le C'\,\bigl\|R^{(N)}\bigr\|^2_{C_m[u,u^{-1};v]} \le \frac{C'(C'')^2}{N^2}. \tag{4.44}
\]
Combining (4.41) and (4.44) proves (4.37).

To prove (1.16), we observe that (B^{SO(N,\mathbb{R})}_{s,\tau})^{-1}f_N|_{SO(N,\mathbb{R})} = e^{-\frac\tau2\Delta_{SO(N,\mathbb{R})}}f_N. By a similar triangle inequality argument using (4.24), it suffices to show
\[
\bigl\|[e^{-\frac\tau2 D_N^{(1)}}f]_N - [e^{-\frac\tau2 L_0}f]_N\bigr\|^2_{L^2(\rho_s^{SO(N,\mathbb{R})})} = O\Bigl(\frac1{N^2}\Bigr). \tag{4.45}
\]
Replacing τ with −τ in the definition of R^{(N)} and (s, τ) with (s, 0) from (4.38) onwards, the same argument as above proves (4.45).

For uniqueness, we first define seminorms on C[u, u^{-1}; v] by
\[
\|P\|^{(1)}_{s,N} = \|P_N\|_{L^2(\rho_s^{SO(N,\mathbb{R})})}, \qquad \|P\|^{(1)}_{s,\tau,N} = \|P_N\|_{L^2(\mu_{s,\tau}^{SO(N,\mathbb{C})})}, \tag{4.46}
\]
\[
\|P\|^{(2)}_{s,N} = \|P_N\|_{L^2(\rho_s^{U(N)})}, \qquad \|P\|^{(2)}_{s,\tau,N} = \|P_N\|_{L^2(\mu_{s,\tau}^{GL_N(\mathbb{C})})}. \tag{4.47}
\]
In addition, for β = 1, 2, define the seminorms
\[
\|P\|^{(\beta)}_s = \lim_{N\to\infty}\|P\|^{(\beta)}_{s,N}, \tag{4.48}
\]
\[
\|P\|^{(\beta)}_{s,\tau} = \lim_{N\to\infty}\|P\|^{(\beta)}_{s,\tau,N}. \tag{4.49}
\]
In [8, Lemma 4.7], it was shown that for β = 2, the seminorms (4.48) and (4.49) are norms when restricted to C[u, u^{-1}]. Since
\[
\|P\|^{(1)}_s = \|P\|^{(2)}_s, \tag{4.50}
\]
it follows that the seminorm (4.48) is also a norm on C[u, u^{-1}] for β = 1. An argument completely analogous to the one used in [8, Lemma 4.7] to prove that ‖P‖^{(2)}_{s,τ} is a norm


on C[u, u^{-1}] shows that ‖P‖^{(1)}_{s,τ} is a norm on C[u, u^{-1}]. Hence, if g_{s,τ}, \tilde g_{s,τ} ∈ C[u, u^{-1}] both satisfy
\[
\bigl\|B^{SO(N,\mathbb{R})}_{s,\tau}f_N - [g_{s,\tau}]_N\bigr\|^2_{L^2(\mu_{s,\tau}^{SO(N,\mathbb{C})})}
= O\Bigl(\frac1{N^2}\Bigr)
= \bigl\|B^{SO(N,\mathbb{R})}_{s,\tau}f_N - [\tilde g_{s,\tau}]_N\bigr\|^2_{L^2(\mu_{s,\tau}^{SO(N,\mathbb{C})})},
\]
the triangle inequality implies that
\[
\bigl\|[g_{s,\tau}]_N - [\tilde g_{s,\tau}]_N\bigr\|^2_{L^2(\mu_{s,\tau}^{SO(N,\mathbb{C})})} = O\Bigl(\frac1{N^2}\Bigr). \tag{4.51}
\]
Taking N → ∞ shows that ‖g_{s,τ} − \tilde g_{s,τ}‖^{(1)}_{s,τ} = 0. Since ‖·‖^{(1)}_{s,τ} is a norm on C[u, u^{-1}], it follows that g_{s,τ} = \tilde g_{s,τ}. A similar argument shows that H_{s,τ}f is the unique polynomial satisfying (1.16). □

Example 4.11. Set P(u; v) = u² ∈ C[u, u^{-1}]. Then we can show that
\[
B^{SO(N,\mathbb{R})}_{s,\tau}P_N(A) = e^{\frac\tau2\Delta_{SO(N,\mathbb{R})}}P_N(A)
= \frac1N(1-e^{-\tau})I_N + \frac12 e^{-\tau}\bigl(1+e^{\frac{2\tau}N}\bigr)A^2 - \frac N2 e^{-\tau}\bigl(-1+e^{\frac{2\tau}N}\bigr)A\operatorname{tr}A,
\]
via the same method used in [8, Example 3.5]. By the intertwining formula e^{\frac\tau2\Delta_{SO(N,\mathbb{R})}}P_N = [e^{\frac\tau2 D_N^{(1)}}P]_N, this corresponds to the trace polynomial
\[
\bigl(e^{\frac\tau2 D_N^{(1)}}P\bigr)(u;v)
= \frac1N(1-e^{-\tau}) + \frac12 e^{-\tau}\bigl(1+e^{\frac{2\tau}N}\bigr)u^2 - \frac N2 e^{-\tau}\bigl(-1+e^{\frac{2\tau}N}\bigr)u v_1
= e^{-\tau}(u^2 - \tau u v_1) + O\Bigl(\frac1N\Bigr).
\]
By Theorem 4.10, we compute G_{s,τ}P by evaluating the traces in e^{\frac\tau2 D_N^{(1)}}P at the moments ν_k(s−τ) and taking the large-N limit of the resulting polynomial. Since ν_1(s−τ) = e^{-(s-τ)/2}, this yields
\[
G_{s,\tau}P(u) = e^{-\tau}\bigl(u^2 - \tau e^{-(s-\tau)/2}u\bigr).
\]
In [8, Example 3.5], it was shown that for the unitary case (with τ = t > 0),
\[
\bigl(e^{\frac t2 D_N^{(2)}}P\bigr)(u;v) = e^{-t}\cosh(t/N)\,u^2 - N e^{-t}\sinh(t/N)\,u v_1
= e^{-t}(u^2 - t u v_1) + O\Bigl(\frac1{N^2}\Bigr),
\]
which also results in the same expression for G_{s,t}P.


5. The Segal-Bargmann transform on Sp(N)

In this section, we prove the Sp(N) version of Theorems 1.4, 1.5, and 1.6, which comprise our main results. We begin with a slight digression and show that Sp(N) can also be realized as a subset of N × N quaternion matrices. This allows us to easily identify a particular orthonormal basis of sp(N) ⊆ M_{2N}(C), which we use to compute a set of magic formulas used in the proofs of our intertwining formulas for e^{\frac\tau2\Delta_{Sp(N)}} and e^{\frac12 A^{Sp(N,\mathbb{C})}_{s,\tau}}. These magic formulas share key similarities with the magic formulas for SO(N, R). Consequently, many of the results for the Sp(N) case can be proven using techniques similar to those used in the SO(N, R) case, and so we do not always provide the full details. We conclude by showing that when Sp(N) is realized as a subset of M_N(H), a version of the magic formulas and intertwining formulas for Δ_{Sp(N)} also holds.

5.1. Two realizations of the compact symplectic group

The quaternion algebra H is the four-dimensional associative algebra over R spanned by 1, i, j, and k satisfying the relations i² = j² = k² = −1 and ij = k,

ji = −k,

jk = i,

kj = −i,

ki = j,

ik = −j.

We denote the quaternion conjugate of q ∈ H by q ∗ . That is, if q = a1 + bi + cj + dk, then q ∗ = a1 − bi − cj − dk. If A ∈ MN (H), we define the adjoint of A to be the matrix  ) as A∗ defined by (A∗ )i,j = (Aj,i )∗ . For each N ≥ 1, we define Sp(N  ) = {A ∈ MN (H) : A∗ A = IN }. Sp(N

(5.1)

) = {X ∈ MN (H) : X ∗ = −X}, sp(N

(5.2)

Its Lie algebra is

which we endow with the real inner product X, Y sp(N ) = 2N Re Tr(X ∗ Y )

). for all X, Y ∈ sp(N

(5.3)

Let  CN =



1 √ (Ea,b + Eb,a ) 4N

 ∪ 1≤a
1 √ Ea,a 2N

 , 1≤a≤N

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.41 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

41

where Ea,b ⊆ MN (H) is the N ×N matrix with a 1 in the (a, b) entry and zeros elsewhere. ) with respect to this inner product is An orthonormal basis for sp(N  βsp(N ) =



1 (Ea,b − Eb,a ) 4N

 ∪ iCN ∪ jCN ∪ kCN .

(5.4)

1≤a
 ) is the unitary group over the quaternions. We now From (5.1), we see that Sp(N  ) and Sp(N ). While explicitly construct a (real) Lie group isomorphism between Sp(N  ) is, in some sense, the more natural realization of the compact symplectic group, Sp(N for our purposes it is easier to work with the definition given in (2.5). In particular, the complexification of the compact symplectic group is more readily understood when Sp(N ) is realized as a subset of M2N (C) rather than of MN (H). The material presented below is standard (cf. [18]); we include it here for convenience. First, observe that we can realize H as a subalgebra of M2 (C). We do so by letting ψ : H → M2 (C) denote the injective homomorphism  H  q = a1 + bi + cj + dk

→

a + di −b − ci b − ci a − di

 ∈ M2 (C).

(5.5)

The map ψ preserves conjugation: ψ(q ∗ ) = ψ(q)∗ for all q ∈ H, where we take the quaternion conjugate on the left and the complex conjugate transpose on the right. Moreover, note that we can write a matrix of the form (5.5) as  α β

−β α

 (5.6)

for some α, β ∈ C, and any matrix of this form corresponds to a q ∈ H. Now define a map Φ : MN (H) → M2N (C) by Φ([qi,j ]1≤i,j≤N ) = [ψ(qi,j )]1≤i,j≤N . By the properties of block multiplication, Φ is an algebra homomorphism. Since ψ is one-to-one, it follows that Φ is also one-to-one. Moreover, since ψ preserves conjugation, so does Φ: if A ∈ MN (H), then Φ(A∗ ) = Φ(A)∗ , where again we take the quaternion conjugate transpose on the left and the complex conjugate transpose on the right.  ) ⊆ MN (H), so that A∗ A = IN , then Consequently, if A ∈ Sp(N Φ(A)∗ Φ(A) = Φ(A∗ )Φ(A) = Φ(A∗ A) = I2N . Hence  )) = {Z ∈ M2N (C) : Z ∗ Z = I2N and ∃ A ∈ MN (H) such that Φ(A) = Z}. Φ(Sp(N (5.7)
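The embedding (5.5) and the block map Φ are easy to realize concretely. The following Python sketch is offered purely as an illustration (it assumes NumPy; the encoding of a quaternion as a length-4 coefficient tuple and the helper names psi, qmul, Phi are our own conventions). It verifies that ψ is multiplicative and conjugation-preserving, and that Φ sends the identity of M_N(H) to I_{2N}.

```python
import numpy as np

def psi(q):
    """psi(a + bi + cj + dk) as the 2x2 complex matrix in (5.5); q = (a, b, c, d)."""
    a, b, c, d = q
    return np.array([[a + d * 1j, -b - c * 1j],
                     [b - c * 1j,  a - d * 1j]])

def qmul(p, q):
    """Hamilton product in the (a, b, c, d) coefficient encoding."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
            a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
            a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
            a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2)

def qconj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

def Phi(A):
    """Phi: M_N(H) -> M_2N(C), applied blockwise; A is an N x N nested list of quaternions."""
    N = len(A)
    Z = np.zeros((2 * N, 2 * N), dtype=complex)
    for i in range(N):
        for j in range(N):
            Z[2 * i:2 * i + 2, 2 * j:2 * j + 2] = psi(A[i][j])
    return Z

rng = np.random.default_rng(2)
p, q = (tuple(rng.standard_normal(4)) for _ in range(2))
assert np.allclose(psi(qmul(p, q)), psi(p) @ psi(q))      # psi is multiplicative
assert np.allclose(psi(qconj(q)), psi(q).conj().T)        # psi(q*) = psi(q)^*

I_H = [[(1.0, 0, 0, 0) if i == j else (0.0, 0, 0, 0) for j in range(3)] for i in range(3)]
assert np.allclose(Phi(I_H), np.eye(6))                   # Phi(I_N) = I_2N
print("psi/Phi checks passed")
```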

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.42 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

42

 )  )) = Sp(N ), which would imply that Sp(N ) ∼ We would like to show that Φ(Sp(N = Sp(N −1 as groups. Recall that since A ∈ Sp(N ) is unitary and Ω = −Ω, we can rewrite the definition of Sp(N ) as follows: Sp(N ) = {Z ∈ M2N (C) : Z ∗ Z = I2N , −ΩZΩ = Z}.

(5.8)

Comparing (5.7) and (5.8), we see that it suffices to show that Z = Φ(A) for some A ∈ MN (H)



−ΩZΩ = Z.

(5.9)

We need the following lemma.  Lemma 5.1. Let B ∈ M2 (C). Then −Ω0 BΩ0 = B if and only if B =

α β

−β α

 for some

α, β ∈ C. Proof. Direct computation. 2 Consequently, for Z ∈ M2N (C), [−ΩZΩ]i,j =



[Ω]i,k [Z]k, [Ω],j = −Ω0 [Z]i,j Ω0 ,

(5.10)

1≤k,≤N

where [Z]i,j denotes the (i, j)th 2 × 2 block of Z. By Lemma 5.1, we see that for each 1 ≤ i, j ≤ N , there exists qi,j ∈ H such that ψ(qi,j ) = [Z]i,j precisely when −ΩZΩ = Z.  ) → Sp(N ) is a Lie group isomorphism. Theorem 5.2. The map Φ|Sp(N : Sp(N  ) Proof. The preceding discussion shows that Φ|Sp(N is a group isomorphism. It remains  )

to be shown that Φ|Sp(N and Φ|−1 are smooth, but this is clear from the definition   ) Sp(N ) of Φ. 2 ), resp., are related by Φ. For The inner products (2.10) and (5.3) for sp(N ) and sp(N X ∈ MN (H), note that Re Tr X =

1 1 Re Tr Φ(X) = Tr Φ(X). 2 2

(5.11)

), Hence for X, Y ∈ sp(N X, Y sp(N ) = 2N Re Tr(XY ∗ ) = N Tr(Φ(XY ∗ )) = N Tr(Φ(X)Φ(Y )∗ ) = Φ(X), Φ(Y ) sp(N ) .

(5.12)

) ⊆ MN (H) with respect to (5.2), In particular, if βsp(N ) is an orthonormal basis for sp(N then Φ(βsp(N ) ) is an orthonormal basis for sp(N ) ⊆ M2N (C) with respect to (2.10).

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.43 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

43

5.2. Magic formulas and derivative formulas Proposition 5.3 (Magic formulas for Sp(N )). Let βsp(N ) be any orthonormal basis for sp(N ) with respect to the inner product (2.10). For any A, B ∈ M2N (C),

X2 = −

X∈βsp(N )



XAX = −

X∈βsp(N )



tr(XA)X =

X∈βsp(N )



tr(XA) tr(XB) =

X∈βsp(N )

2N + 1 1 I2N = −I2N − I2N 2N 2N

(5.13)

1 ΩA Ω−1 − tr(A)I 2N 2N

(5.14)

1 (ΩA Ω−1 − A) 4N 2

(5.15)

1 (tr(ΩA ΩB) − tr(AB)) 4N 2

(5.16)
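Before turning to the proof, here is a quick numerical sanity check of the first magic formula (5.13), offered only as an illustration. It builds the image under Φ of the orthonormal basis described in (5.4), using the 2×2 matrices ψ(1), ψ(i), ψ(j), ψ(k) from (5.5), and sums the squares inside M_{2N}(C); NumPy is assumed, and the helper names are ours.

```python
import numpy as np

N = 3
psi = {            # images of 1, i, j, k under the embedding (5.5)
    "1": np.array([[1, 0], [0, 1]], dtype=complex),
    "i": np.array([[0, -1], [1, 0]], dtype=complex),
    "j": np.array([[0, -1j], [-1j, 0]], dtype=complex),
    "k": np.array([[1j, 0], [0, -1j]], dtype=complex),
}

def block(a, b, gamma):
    """Phi of gamma * E_{a,b}: a single psi(gamma) block in an otherwise zero 2N x 2N matrix."""
    Z = np.zeros((2 * N, 2 * N), dtype=complex)
    Z[2 * a:2 * a + 2, 2 * b:2 * b + 2] = psi[gamma]
    return Z

# Phi of the orthonormal basis (5.4) of sp(N) over the quaternions
basis = []
for a in range(N):
    for b in range(a + 1, N):
        basis.append((block(a, b, "1") - block(b, a, "1")) / np.sqrt(4 * N))
        for g in ("i", "j", "k"):
            basis.append((block(a, b, g) + block(b, a, g)) / np.sqrt(4 * N))
    for g in ("i", "j", "k"):
        basis.append(block(a, a, g) / np.sqrt(2 * N))

assert len(basis) == N * (2 * N + 1)                      # dim sp(N)
S = sum(X @ X for X in basis)
target = -(2 * N + 1) / (2 * N) * np.eye(2 * N)           # right side of (5.13)
print(np.allclose(S, target))                             # True
```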

Proof. As with the magic formulas for SO(N, R), the quantities on the left hand side of (5.13), (5.14), (5.15), and (5.16) are independent of choice of orthonormal basis. Thus we may compute with respect to the orthonormal basis βsp(N ) := Φ(βsp(N ) ), where βsp(N ) is the orthonormal basis described in (5.4). We will use the following notation: for A ∈ M2N (C), let [A]i,j denote the (i, j)th i,j 2 × 2 submatrix, and let Ai,j k, denote the (k, )th entry of [A] . If Ea,b ⊆ MN (H) is an elementary matrix and γ ∈ {1, i, j, k} ⊆ H, γ Fa,b := Φ(γEa,b ) ∈ M2N (C).

(5.17)

γ i,j Observe that [Fa,b ] contains all zeros unless (i, j) = (a, b), in which case it is equal to one of

 ψ(1) =

  1 0 0 , ψ(i) = 0 1 1

     −1 0 −i i 0 , ψ(j) = , ψ(k) = , 0 −i 0 0 −i

depending on whether γ equals 1, i, j, or k, respectively. Equation (5.13) follows from (5.14). For (5.14), we have

X∈βsp(N )

XAX =

1 4N





γ γ Fa,b AFa,b

1≤a,b≤N γ∈{1,i,j,k}

1 − 4N For 1 ≤ a, b, c, d, i, j ≤ N , we have

1≤a,b≤N



1 1 ⎝Fa,b AFb,a −

γ∈{i,j,k}

⎞ γ γ ⎠ Fa,b AFb,a

(5.18)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.44 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

44



γ γ i,j [Fa,b AFc,d ] =

γ i,k γ ,j γ i,b γ c,j [Fa,b ] [A]k, [Fc,d ] = [Fa,b ] [A]b,c [Fc,d ]

(5.19)

1≤k,≤N

=

" 02

i = a or j = d

(5.20)

ψ(γ)[A]b,c ψ(γ) i = a and j = d

A straightforward computation shows that  b,c

ψ(1)[A] ψ(1) =  ψ(j)[A]b,c ψ(j) =

Ab,c 1,1 Ab,c 2,1

 Ab,c −Ab,c 2,2 2,1 , ψ(i)[A] ψ(i) = Ab,c −Ab,c 1,2 1,1    b,c b,c −Ab,c A −A 2,1 1,1 1,2 , ψ(k)[A]b,c ψ(k) −Ab,c Ab,c −Ab,c 1,1 2,1 2,2

Ab,c 1,2 Ab,c 2,2

−Ab,c 2,2 −Ab,c 1,2





b,c

(5.21)

(5.22)

Hence for the first term on the right hand side of (5.18), ⎡ ⎣ 1 4N

⎤i,j





γ γ ⎦ Fa,b AFa,b

1≤a,b≤N γ∈{1,i,j,k}

⎡ 1 ⎣ = 4N 1 = 2N



⎤i,j



γ γ ⎦ Fi,j AFi,j

γ∈{1,i,j,k}

−Aj,i 2,2 Aj,i 2,1

Aj,i 1,2 −Aj,i 1,1

 .

(5.23)

For the second term on the right hand side of (5.18), the (i, j)th block is 02 for i = j. For i = j, ⎡ ⎣ 1 4N



⎛ 1 1 ⎝Fa,b AFb,a −

1≤a,b≤N

γ γ ⎠⎦ Fa,b AFb,a

γ∈{i,j,k}

⎡ =

⎞⎤i,j



1 ⎣ 4N

N





1 1 ⎝Fi,b AFb,i −

b=1

⎛ N 1 ⎝ b,b = [A] − 4N b=1

⎞⎤i,j γ γ ⎠⎦ Fi,b AFb,i

γ∈{i,j,k}



1 Tr(A)I2 . 2N

Putting this together, we have



ψ(γ)[A]b,b ψ(γ)⎠

(5.25)

γ∈{i,j,k}

  N b,b 1 Ab,b + A 0 1,1 2,2 = b,b 2N 0 Ab,b 1,1 + A2,2 b=1 =

(5.24)

(5.26) (5.27)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.45 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

⎡ ⎣



⎤i,j XAX ⎦

X∈βsp(N )

⎧ ⎪ −Aj,i ⎪ 2,2 ⎪ ⎪ ⎪ ⎨ Aj,i 1 2,1 =  2N ⎪ ⎪ −Aj,i ⎪ 2,2 ⎪ ⎪ ⎩ Aj,i 2,1

Aj,i 1,2 −Aj,i 1,1 Aj,i 1,2

45

 i = j 

(5.28) − Tr(A)I2

−Aj,i 1,1

i = j.

On the other hand, we can compute

1 − ΩA Ω−1 − tr(A)I 2N 2N

i,j

1 =− 2N

" Ω0 ([A]j,i ) Ω−1 0 j,i 

Ω0 ([A] )

Ω−1 0

i = j, − Tr(A)I2

i = j.

(5.29)

Multiplying out (5.29) and comparing with (5.28) proves (5.14). Next, for (5.15), we again use the basis βsp(N ) to compute

X∈βsp(N )

1 tr(XA)X = 4N





γ A)F γ tr(F a,b a,b

(5.30)

1≤a,b≤N γ∈{1,i,j,k}

1 − 4N





1 1 ⎝tr(F a,b A)Fb,a −

1≤a,b≤N





γ A)F γ ⎠. (5.31) tr(F a,b b,a

γ∈{i,j,k}

For 1 ≤ a, b ≤ N , a simple calculation shows that 1 1 1 i a,b a,b (Ab,a + Ab,a (Ab,a − Ab,a A) = tr(F A) = tr(F 2,2 ), 2,1 ) 2N 1,1 2N 1,2 i b,a k j A) = − i (Ab,a a,b tr(F (Ab,a − Ab,a tr(F A) = 1,2 + A2,1 ), 2,2 ), a,b 2N 2N 1,1 The (i, j)th block of the right hand side of (5.30) is thus ⎡ ⎤i,j ⎡



1 1 ⎣ 1 γ A)F γ ⎦ = 1 ⎣tr(F i,j A)Fi,j + tr(F a,b a,b 4N 4N 1≤a,b≤N γ∈{1,i,j,k}

1 = 4N 2



Aj,i 2,2 −Aj,i 2,1



(5.32) (5.33)

⎤i,j γ γ ⎦ i,j A)Fi,j tr(F

γ∈{i,j,k}

−Aj,i 1,2 Aj,i 1,1



.

Similarly, the (i, j)th block of (5.31) is ⎡ ⎛ ⎞⎤i,j



1 γ γ 1 1 ⎣ ⎝tr(F ⎠⎦ a,b A)Fb,a − tr(F a,b A)Fb,a 4N 1≤a,b≤N γ∈{i,j,k} ⎡ ⎤i,j

1 ⎣ 1 γ γ ⎦ 1 j,i tr(Fj,i A)Fi,j = − tr(F A)Fi,j 4N γ∈{i,j,k}   i,j i,j 1 A1,1 A1,2 . = 2 4N Ai,j Ai,j 2,1 2,2

(5.34)

(5.35)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.46 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

46

Combining our computations, ⎡ ⎣

⎤i,j





1 = 4N 2

⎦ tr(XA)X

X∈βsp(N )

i,j Aj,i 2,2 − A1,1 i,j −Aj,i 2,1 − Ai,j

i,j −Aj,i 1,2 − A1,2 i,j Aj,i 1,1 − A2,2

 (5.36)

Again, we can expand

i,j

1 (ΩA Ω−1 − A) 4N 2

=

1 i,j [−Ω0 ([A]j,i ) Ω−1 0 − A] 4N 2

(5.37)

to see that this is equivalent to (5.36). This proves (5.15); multiplying on the right by proves (5.16). 2 B ∈ M2N (C) and taking tr While Proposition 5.3 holds for all A, B ∈ M2N (C), we now restrict to the case in which A and B are in Sp(N ) or Sp(N, C). Note that for A ∈ Sp(N, C), ΩA Ω−1 = A−1 . The magic formulas for Sp(N ) allow us to compute the following derivative formulas, as in the SO(N, R) case. Proposition 5.4 (Derivative formulas for Sp(N )). The following identities hold on Sp(N ) and Sp(N, C): ⎡

ΔSp(N ) Am

⎤ m−1 m−1



1 j )Am−j ⎦ = 1m≥2 ⎣− (m − j)Am−2j − 2 (m − j)tr(A N j=1 j=1

⎡ ΔSp(N ) Am = 1m≤−2 ⎣−

 1 mAm , + −1 − 2N

1 N

−1

(−m + j)Am−2j − 2

j=m+1



− −1 −

X∈βsp(N )

m≥0

1 2N

(5.38) ⎤

−1

j )Am−j ⎦ (−m + j)tr(A

j=m+1

mAm ,

m<0

(5.39)

p = mp (Ap−m − Ap+m ), tr(A m ) · XA X 4N 2

m, p ∈ Z.

(5.40)

Proof. For m ≥ 0, we use (3.15) and magic formulas (5.83) and (5.84) to sum over all X ∈ βSp(N ) : ΔSp(N ) Am = 21m≥2





X∈βso(N,R)

k,j=0 k+j+=m

Ak XAj XA +



m

Aj X 2 Am−j

X∈βso(N,R) j=1

(5.41)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.47 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••



= 21m≥2



k,j=0 k+j+=m



47

 1 k j ∗  j k+ m(2N + 1) m A (A ) A − tr(A )A A − 2N 2N ⎤

= 1m≥2 ⎣−

(5.42)

m−1 m−1

1

j )Am−j ⎦ (m − j)Am−2j − 2 (m − j)tr(A N j=1 j=1

 1 mAm . + −1 − 2N

(5.43)

For m < 0, we use (3.10) and magic formulas (5.83) and (5.84) to sum over all X ∈ βSp(N ) :

ΔSp(N ) Am = 21m≥2



Ak XAj XA +

= 21m≥2

k,<0,j≤0 j+k+=m



1 = 1m≤−2 ⎣− N  − −1 −

Aj X 2 Am−j

 1 k j ∗  j k+ m(2N + 1) m − + A (A ) A − tr(A )A A 2N 2N

−1

(−m + j)Am−2j − 2

j=m+1

1 2N

0

X∈βso(N,R) j=m+1

X∈βso(N,R) k,<0,j≤0 j+k+=m





−1

⎤ j )Am−j ⎦ (−m + j)tr(A

j=m+1



mAm .

Finally, comparing the magic formulas (3.7) and (5.15), we see that the proof of (5.40) is identical to that of (3.14). 2 5.3. Intertwining formulas for ΔSp(N ) The derivative formulas allow us to prove the intertwining formula for ΔSp(N ) . Proof of Theorem 1.4 for Sp(N ). We proceed as in the proof of Theorem 1.4 for the (4) SO(N, R) case. Again, it suffices to show that ΔSp(N ) PN = [DN P ]N holds for P (u; v) = um q(v), where m ∈ Z \ {0} and q ∈ C[v]. Then PN = Wm · q(V), and by the product rule, ΔSp(N ) PN = (ΔSp(N ) Wm ) · q(V) + 2



m · Xq(V) XW + Wm · (ΔSp(N ) q(V))

X∈βsp(N )

(5.44) For the first term in (5.44), we consider the cases m ≥ 0 and m < 0 separately. For m ≥ 0, we apply (5.38) to get

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.48 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

48

m(2N + 1) Wm · q(V) 2N

 m−1 m−1

1

(m − k)Wm−2k − 2 (m − k)Vk Wm−k q(V) + 1m≥2 − N k=1 k=1

 2N + 1 ∂ =− u R+ P 2N ∂u N  !  ∞

1 −k+1 + ∂ + −2 M −k R P + u R 2N ∂u u k=1 N  ∞ ! 

∂ −2 vk uR+ Mu−k R+ P ∂u k=1 N

 ∂ + 2N + 1 1 + u R P =− − [Z2 P ]N − 2[Y1+ P ]N . (5.45) 2N ∂u N N

(ΔSp(N ) Wm ) · q(V) = −

Similarly, for m < 0, we apply (5.39) to get (ΔSp(N ) Wm ) · q(V) =

 ∂ 2N + 1 1 u R− P + [Z2− P ]N + 2[Y1− P ]N . 2N ∂u N N

Since Y1+ , Z2+ annihilates C[u−1 ] while Y1− , Z2− annihilates C[u], we have that for all m ∈ Z, (ΔSp(N ) Wm ) · q(V) = −

2N + 1 1 [N1 P ]N − [Z2 P ]N − 2[Y1 P ]N . 2N N

(5.46)

For the middle term in (5.44), we note the similarity between the derivative formulas (3.14) and (5.40) for the SO(N, R) and Sp(N ) cases, resp. Hence, as in (3.38),

m · Xq(V) = XW

X∈βsp(N )

1 [K1 P ]N . 4N 2

(5.47)

For the last term in (5.44), we have, for each X ∈ βsp(N ) , 2 q(V) = X

 ∂ 2 Vk + q (V) · X ∂vk

|k|≥1

 |j|,|k|≥1

∂2 j · XV k ., (5.48) q (V) · XV ∂vj ∂vk

as with (3.39). In summing the first term in (5.48) over X ∈ βsp(N ) , we break up the sum for positive and negative k. Using (5.38) and (5.39), we have

 ∞  ∞

2N + 1

∂ ∂ 2 q (V) · X Vk = − kVk q (V) ∂vk 2N ∂vk

X∈βsp(N ) k=1

k=1

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.49 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••



49

 ∞ k−1 1

∂ (k − )Vk−2 q (V) N ∂vk k=2 =1

−2

∞ k−1



 (k − )V Vk−

k=2 =1

∂ q (V) (5.49) ∂vk

and

 −1  −1

2N + 1

∂ ∂ 2 q (V) · X Vk = kVk q (V) ∂vk 2N ∂vk

X∈βsp(N ) k=−∞

k=−∞



−2 1

N

−1

 (k − )Vk−2

k=−∞ =k+1

−2

−2

−1

∂ q (V) ∂vk 

(−k + )V V−k−

k=−∞ =k+1

∂ q (V) ∂vk (5.50)

Summing the second term in (5.48) over X ∈ βsp(N ) and using magic formula (5.40) yields





X∈βsp(N ) |j|,|k|≥1

∂2 j · XV k q (V) · XV ∂vj ∂vk

=

1 4N 2

|j|,|k|≥1

 jk(Vk−j − Vk+j )

∂2 q (V). ∂vj ∂vk

(5.51)

Adding (5.49), (5.50), and (5.51) together and multiplying by Wm , we get

 Wm · (ΔSp(N ) q(V)) =



 2N + 1 1 1 N0 − Z1 − 2Y2 + K . 2 P 2N N 4N 2 N

(5.52)

Combining (5.46), (5.47), and (5.52) proves (1.11) for Sp(N ). Equation (1.12) now follows. 2 Proof of Theorem 1.5 for Sp(N ). The proof for Sp(N ) is entirely analogous to the proof t τ for SO(N, R). Using the intertwining formula for e 2 ΔSp(N ) (1.12), we have e 2 ΔSp(N ) PN = (4) (4) τ τ [e 2 DN P ]N . Corollary 3.11 shows that [e 2 DN P ]N is a well-defined trace polynomial on Sp(N ), so its analytic continuation to Sp(N, C) is given by the same trace polynomial (4) τ function. Hence [e 2 DN P ]N , viewed as a holomorphic function on Sp(N, C), is the anaτ Sp(N ) lytic continuation of e 2 ΔSp(N ) PN , and so is equal to Bs,τ PN . 2

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.50 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

50

Sp(N,C)

5.4. Intertwining formulas for As,τ

Theorem 5.5. Fix s ∈ R and τ = t + iθ ∈ C. There are collections {Tεs,τ : ε ∈ E }, {Uεs,τ : ε ∈ E } such that for each ε ∈ E , Tεs,τ and Uεs,τ are certain finite sums of monomials of trace degree |ε| such that ASp(N,C) Vε = [Tεs,τ ]N + s,τ

1 s,τ [U ]N , N ε

(5.53)

s,τ Moreover, let {Sε,δ : ε, δ ∈ E } ⊆ W be the collection of word polynomials from Theorem 3.19. Then for ε, δ ∈ E ,



 t  Vε )(X  Vδ ) + t (Y  Vε )(Y  Vδ ) − θ(X  Vε )(Y  Vδ ) = 1 [S s,τ ]N , s− (X 2 2 N 2 ε,δ =1 (5.54)

dsp(N )

d

sp(N ) where βsp(N ) = {X }=1 and Y = iX .

For the proof, we will employ the conventions, mutandis mutatis, as those used in the proof of Theorem 3.19 (see the remarks following the theorem statement). Proof. Fix a word ε = (ε1 , . . . , εm ) ∈ E . Then for each X ∈ β± and A ∈ Sp(N ), we apply the product rule twice to get 2 Vε )(A) = (X

m

j=1

+2

tr(Aε1 · · · (AX 2 )εj · · · Aεm )

(5.55)

tr(Aε1 · · · (AX)εj · · · (AX)εk · · · Aεm ).

(5.56)

1≤j
Applying magic formula (5.13) to each term in (5.56), we have

tr(Aε1 · · · (AX 2 )εj · · · Aεm ) = ±

X∈β±

2N + 1 2N + 1 tr(Aε1 · · · Aεj · · · Aεm ) = ± Vε (A). 2N 2N (5.57)

Summing over 1 ≤ j ≤ m now gives m



X∈β± j=1

tr(Aε1 · · · (AX 2 )εj · · · Aεm ) =

2N + 1 n± (ε)Vε (A), 2N

(5.58)

where n± (ε) ∈ Z and |n± (ε)| ≤ |ε| (where the integers n± (ε) are not necessarily the same as in the SO(N, R) case). For (5.56), we can express each term in the sum as 0

1

2

tr(Aε1 · · · (AX)εj · · · (AX)εk · · · Aεm ) = ± tr(Aεj,k XAεj,k XAεj,k ),

(5.59)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.51 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

51

so ε0j,k , ε1j,k , and ε2j,k are substrings of ε such that ε0j,k ε1j,k ε2j,k = ε. Summing (5.59) over X ∈ β± by magic formula (5.14), we have

0 1 2 1 tr(Aεj,k (Aεj,k )−1 Aεj,k ) tr(Aε1 · · · (AX)εj · · · (AX)εk · · · Aεm ) = ± − 2N X∈β±  ε0j,k ε2j,k ε1j,k − tr(A A ) tr(A ) =±

1 V 3 (A) ± Vε0j,k ε2j,k (A)Vε1j,k (A), 2N εj,k (5.60) 0

1

2

where ε3j,k is the word of length |ε| such that Vε3j,k (A) = tr(Aεj,k (Aεj,k )−1 Aεj,k ). Thus summing (5.56) over X ∈ β± gives



tr(Aε1 · · · (AX)εj · · · (AX)εk · · · Aεm ) X∈β± 1≤j
1 2N

=



±Vε3j,k (A) +

1≤j


±Vε1j,k (A)Vε0j,k ε2j,k (A). (5.61)

1≤j
We define word polynomials corresponding to (5.61) by

± Tε,0 = ±vε3j,k

(5.62)

1≤j


± = Tε,1

±vε0j,k ε2j,k vε1j,k .

(5.63)

1≤j
Now if X ∈ β+ and Y = iX ∈ β− , then Y Vε )(A) = (X

i

± tr(Aε1 · · · (AX 2 )εj · · · Aεm ) 2 j=1

± tr(Aε1 · · · (AX)εj · · · (AX)εk · · · Aεm ). +i m

(5.64) (5.65)

1≤j
Summing (5.64) over X ∈ β+ and applying (5.83) gives m



± tr(Aε1 · · · (AX 2 )εj · · · Aεm ) =

X∈β+ j=1

2N + 1 η(ε)Vε (A) 2N

(5.66)

where η(ε) ∈ Z and |η(ε)| ≤ |ε|. In addition, summing (5.65) over X ∈ β+ , we have



± tr(Aε1 · · · (AX)εj · · · (AX)εk · · · Aεm ) X∈β+ 1≤j
=−

1 2N

1≤j
Vε3j,k (A) −

1≤j
±Vε0j,k ε2j,k (A)Vε1j,k (A). (5.67)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.52 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

52

We define corresponding word polynomials

Tε,2 = i

±vε3j,k

(5.68)

±vε0j,k ε2j,k vε1j,k .

(5.69)

1≤j


Tε,3 = i

1≤j
Putting this together, we define Tεs,τ Uεs,τ

  t  t  + − n+ (ε)vε + 2Tε,1 n− (ε)vε + 2Tε,1 = s− + − θ (iη(ε) + 2Tε,3 ) , (5.70) 2 2     t η(ε) n+ (ε) n− (ε) t − + − vε + Tε,0 + − vε + Tε,0 − θ −i + Tε,2 . = s− 2 2 2 2 2 (5.71)

By construction, Tεs,τ and Uεs,τ are of the required form and satisfy (5.53). The proof of (5.54) is essentially identical to that of Theorem 3.19(2); this is due to the similarity of magic formulas (3.7) and (5.85) for SO(N, R) and Sp(N ). 2 Sp(N,C)

Theorem 5.6 (Intertwining formula for ). Fix ) As,τ * s ∈ R and τ = t + iθ ∈ C. Let s,τ s,τ s,τ {Tε : ε ∈ E }, {Uε : ε ∈ E }, and Sε,δ : ε, δ ∈ E be as in Theorem 5.5, and L s,τ 2 be as in Theorem 3.20. Define first and second order differential operators C0s,τ =

1 s,τ ∂ Tε 2 ∂vε

(5.72)

1 s,τ ∂ Uε 2 ∂vε

(5.73)

ε∈E

C1s,τ =

ε∈E

Then for all N ∈ N and P ∈ W , 1 Sp(N,C) A PN = 2 s,τ

 C0s,τ +

1 s,τ 1 C + 2 L s,τ N 1 N 2

 P . N

Proof. For each X ∈ sp(N ), we apply the chain rule to get  ∂P ε) (V) · (XV ∂vε ε∈E

 ∂2P

 ∂P 2 Vε ) + ε )(XV δ ). (V) · (X (V) · (XV = ∂vε ∂vε ∂vδ

2 PN = X



ε∈E

Hence

X



ε,δ∈E

(5.74)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.53 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

ASp(N,C) PN = s,τ

 ∂P ε∈E

+

∂vε

53

(V) · ASp(N,C) Vε s,τ

dSO(N,R) 

 ∂2P t  Vε )(X  Vδ ) (V) s− (X ∂vε ∂vδ 2

ε,δ∈E

=1

The result now follows from Theorem 5.5.

 t  Vε )(Y  Vδ ) . + (Y  Vε )(Y  Vδ ) − θ(X 2

2

5.5. Limit theorems for the Segal-Bargmann transform on Sp(N ) In this section, we outline the proof of Theorem 1.6 for Sp(N ). Since the techniques used are similar to those used in Section 4 to prove the SO(N, R) case, we do not provide the full details. The following two lemmas are the analogues of Lemmas 4.5, 4.6, and 4.7 for Sp(N ). The proofs are very similar, and so are not included: the key concentration result is again Lemma 4.2, and the only change required is to replace intertwining formulas (1.11) and (3.82) for SO(N, R) with intertwining formulas (1.11) and (5.74) for Sp(N ) wherever applicable. Lemma 5.7 (Lévy, [13]). For s > 0 and k ∈ Z,  lim

N →∞ Sp(N )

) tr(Ak ) ρSp(N (A) dA = νk (s) . s

(5.75)

Lemma 5.8. Let s > 0 and τ = t + iθ ∈ D(s, s). For any Q ∈ C[v],   s,τ eC0 ι(Q) (1) = πs−τ Q.

(5.76)

As in the proof of Theorem 1.6 for SO(N, R), we first require the following analogue of Theorem 4.10. Theorem 5.9. Let s > 0 and τ = t + iθ ∈ D(s, s). For any P ∈ C[u, u−1 ; v], 1 , and PN − =O N2  1 PN − [πs−τ P ]N 2L2 (μSp(N,C) ) = O . s,τ N2 

[πs P ]N 2L2 (ρSp(N ) ) s

(5.77) (5.78)

Proof. The proof is entirely analogous to the proof of Theorem 4.10. Again, we reSO(N,C) place intertwining formula (3.82) for As,τ with intertwining formula (5.74) for

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.54 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

54

Sp(N,C) As,τ wherever necessary. To show the Sp(N ) version of (4.34) (where L s,τ is re0 s,τ placed with C0 ), we use Lemma 5.8 in place of Lemma 4.7. The remainder of the proof is the same. 2

Proof of Theorem 1.6 for Sp(N). The proof of the limit results (1.15) and (1.16) is very similar to the SO(N, R) case (see p. 37), where we now use the polynomial S^{(N)} = e^{\frac\tau2 D_N^{(4)}}f − e^{\frac\tau2 L_0}f ∈ C_m[u, u^{-1}; v] in place of R^{(N)} and intertwining formula (5.74) in place of (3.82) in (4.38). To show uniqueness, we define seminorms on C[u, u^{-1}; v] by

(4)

P s,N = PN L2 (ρSp(N ) , ) s

P s,τ,N = PN L2 (μSp(N,C) , ) s,τ

(4)

(4)

P (4) s = lim P s,N ,

P (4) s,τ = lim P s,τ,N .

N →∞

N →∞

(5.79) (5.80)

Noting that (2) P (4) s = P s ,

(5.81)

the remainder of the proof proceeds as in the proof of uniqueness in Theorem 1.5 for SO(N, R). 2  ) 5.6. Extending the magic formulas and intertwining formulas to Sp(N We conclude this section by showing a version of the magic formulas and intertwining  ) ⊆ MN (H). We have the following key relation: results for Sp(N ) hold for Sp(N tr(Φ(A)) = Re tr(A),

A ∈ MN (H),

(5.82)

which can be seen by the definition of ψ (see (5.5)).  )). Let β  Proposition 5.10 (Magic formulas for Sp(N sp(N ) be any orthonormal basis for ) with respect to the inner product (5.3). For any A, B ∈ MN (H), sp(N

X2 = −

X∈βs, p(N )



XAX = −

X∈βs, p(N )



Re tr(XA)X =

X∈βs, p(N )

X∈βs, p(N )

Re tr(XA) Re tr(XB) =

2N + 1 1 IN = −IN − IN 2N 2N

(5.83)

1 ∗ A − Re tr(A)IN 2N

(5.84)

1 (A∗ − A) 4N 2

(5.85)

1 (Re tr(A∗ B) − Re tr(AB)) 4N 2

(5.86)

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.55 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

55

Proof. We begin with the observation that for A ∈ MN (H), ΩΦ(A) Ω−1 = Φ(A∗ ).

(5.87)

To see why this is the case, note that Lemma 5.1 directly implies that for q ∈ H, ∗ Ω0 ψ(q) Ω−1 0 = ψ(q ).

Hence if A = [qi,j ], [ΩΦ(A) Ω−1 ]i,j =



[Ω]i,k [Φ(A) ]k, [Ω],j = [Ω]i,i [Φ(A) ]i,j [Ω]j,j

1≤k,≤N ∗ ∗ i,j = Ω0 ψ(qj,i ) Ω−1 0 = ψ(qj,i ) = [Φ(A )] ,

which shows (5.87). ), Φ(βsp(N ) ) is an orthonormal Recall that if βsp(N ) is an orthonormal basis for sp(N basis for sp(N ). Let A ∈ MN (H). Using (5.14) and (5.82), we have



Φ(XAX) =

X∈βs, p(N )

Φ(X)Φ(A)Φ(X)

X∈βs, p(N )

1 ΩΦ(A) Ω−1 − tr(Φ(A))I 2N 2N 1 Φ(A∗ ) − Re tr(A)Φ(IN ) =− 2N  1 ∗ A − Re tr(A)IN . =Φ − 2N =−

Since Φ is injective, (5.84) follows. The proof of (5.85) is similar, and (5.83) and (5.86) follow from (5.84) and (5.85), resp. 2  ) version of Theorem 1.4 also holds for an appropriately defined trace polyAn Sp(N  ) → MN (H) be the nomial functional calculus: for P ∈ C[u, u−1 ; v], we let P N : Sp(N function defined by P N (A) := P (u; v)|u=A,vk =Re tr(Ak ),k=0 .

(5.88)

 ) ⊆ MN (H). Then Proposition 5.11. Let P ∈ C[u, u−1 ; v] and A ∈ Sp(N PN (Φ(A)) = Φ(P N (A)).

(5.89)

Proof. It suffices to prove the result for P (u; v) = um vkn , with m, n, k ∈ Z and k = 0. Using (5.82), we have

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.56 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

56

k n PN (Φ(A)) = Φ(A)m tr(Φ(A) ) = Φ(Am )(Re tr(Ak ))n

= Φ(Am (Re tr(Ak ))n ) = Φ(P N (A)).

2

Using this proposition, we see that the intertwining formula for Sp(N ) ⊆ MN (H) is a direct consequence of the intertwining formula (1.11) for Sp(N ) ⊆ M2N (C). Theorem 5.12 (Intertwining formulas). For any P ∈ C[u, u−1 ; v], . N = D(4) P ΔSp(N P .  N )

(5.90)

N

Moreover, for all τ ∈ C, , N = [e 2 DN P ]N . )P e 2 ΔSp(N τ

τ

(4)

(5.91)

 ) can be derived Remark 5.13. We have seen that the magic formulas (5.10) for Sp(N from the magic formulas (5.3) for Sp(N ). However, the converse is not true. Suppose we only know that the magic formula (5.84) holds. Applying Φ to both sides yields

Φ(X)Φ(A)Φ(X) = −

X∈βs, p(N )

1 Φ(A)∗ − tr(Φ(A))I 2N . 2N

(5.92)

We cannot replace Φ(A) with an arbitrary B ∈ M2N (C) in the formula above. For example, consider  B=

 1 1

0 1

∈ Sp(1, C).

Then we can compute that

Y ∈βsp(1)

1 Y BY = 2



−3 0 1 −3



1  = 2



−3 −1 0 −3



1 = − B ∗ − tr(B)I 2. 2

 ). This is another reason why it is more convenient to work with Sp(N ) rather than Sp(N 6. The Segal-Bargmann transform on SU(N ) In this final section, we analyze the Segal-Bargmann transform on the special unitary group SU(N ). This case uses many of the techniques from the U(N ) case, studied in [8]. Consequently, we outline the main ideas required for the proofs of these results and do not provide the full details. We first prove the intertwining formulas for SU(N ). As in the SO(N, R) and Sp(N ) cases, the basis of these results is a set of magic formulas. We recall the following set of magic formulas for U(N ), proven in [8].

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.57 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

57

Proposition 6.1 ([8, Proposition 3.1]). Let βu(N ) be any orthonormal basis for u(N ) with respect to the inner product defined by X, Y u(N ) = N 2 tr(XY ∗ ). For any A, B ∈ MN (C),

X 2 = −IN

(6.1)

X∈βu(N )



XAX = − tr(A)IN

(6.2)

X∈βu(N )



tr(XA)X = −

X∈βu(N )



tr(XA) tr(XB) = −

X∈βu(N )

1 A N2

(6.3)

1 tr(AB) N2

(6.4)

The magic formulas for SU(N ) follow easily from the magic formulas for U(N ). Proposition 6.2 (Magic formulas for SU(N )). Let βsu(N ) be any orthonormal basis for su(N ) with respect to the inner product (2.9). For any A, B ∈ MN (C), 



−1 +

X2 =

X∈βsu(N )



1 N2



XAX = − tr(A)IN +

X∈βsu(N )



tr(XA)X = −

X∈βsu(N )



tr(XA) tr(XB) = −

X∈βsu(N )

IN

(6.5)

1 A N2

(6.6)

1 1 A + 2 tr(A)IN N2 N

(6.7)

1 1 tr(AB) + 2 tr(A) tr(B) 2 N N

(6.8)

Proof. The expressions on the left side of (6.5), (6.6), (6.7), and (6.8) are independent of orthonormal basis chosen. Fix an orthonormal basis βsu(N ) for su(N ). Observe that the matrix Ni IN is an element of u(N ) of unit norm such that 3

i IN , X N

4 =0

for all X ∈ su(N ) ⊆ u(N ).

u(N )

Hence βu(N ) = βsu(N ) ∪ { Ni IN } is an orthonormal basis for u(N ). Thus

X∈βsu(N )

X2 =

X∈βu(N )

 X2 −

i IN N

2

 =

−1 +

1 N2

IN ,

which proves (6.5). The remaining magic formulas are proven similarly. 2

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.58 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

58

Proposition 6.3 (Derivative formulas for SU(N )). The following identities hold on SU(N ) and SL(N, C): ΔSU(N ) Am = −21m≥2

m−1

jAj tr(Am−j ) − mAm +

m2 m A , N2

m≥0

(6.9)

jAj tr(Am−j ) + mAm +

m2 m A , N2

m<0

(6.10)

j=1

ΔSU(N ) Am = 21m≤−2

X∈βsu(N )

−1

j=m+1

tr(Am ) · XA p = mp (−Am+p + Ap tr(Am )), X N2

m, p ∈ Z.

(6.11)

Proof. The proof of these results is essentially identical to the proofs of the corresponding derivative formulas for U(N ) in [8, Theorem 3.3]; one need only to replace the magic formulas for U(N ) by the magic formulas for SU(N ) in the proof, wherever applicable. 2 The derivative formulas of Proposition 6.3 comprise the key ingredient for the interSU(N ) (Theorems 1.4 and 1.5). The proofs of these twining formulas for ΔSU(N ) and Bs,τ results are entirely analogous to the corresponding results for U(N ) (cf. [8, Theorems 1.8, 1.9]). The only change required in the proof is to replace the magic formulas and derivative formulas for U(N ) by the corresponding formulas for SU(N ) from Propositions 6.2 and 6.3. By keeping track of these changes, we see that for P ∈ C[u, u−1 ; v], the intertwining formula for ΔSU(N ) is (2)

ΔSU(N ) PN = [DN P ]N =

  1 L0 − 2 (2K1− + K2− − J ) P . N N

The only difference between this and (3.44), the intertwining formula for ΔU(N ) , is that for SU(N ), the 1/N 2 term on the right hand side contains the additional operator J . This is a consequence of the close relationship between the magic formulas for U(N ) and SU(N ). Finally, our main result regarding the free Segal-Bargmann transform for SU(N ), Theorem 1.6, is proven in the same way as the corresponding result for U(N ) (see [8, Theorem 1.11]); again, the only change necessary is the substitution of the magic and derivative formulas for SU(N ) in place of those for U(N ) wherever required. Acknowledgments The author thanks their PhD advisor Todd Kemp for suggesting the topic of the present paper, as well as his invaluable advice and extensive comments in completing this work. In addition, the author thanks the referee, whose suggestions greatly improved the presentation of the results.

JID:YJFAN

AID:108430 /FLA

[m1L; v1.261; Prn:16/12/2019; 10:27] P.59 (1-59)

A.Z. Chan / Journal of Functional Analysis ••• (••••) ••••••

59

References

[1] V. Bargmann, On a Hilbert space of analytic functions and an associated integral transform, Commun. Pure Appl. Math. 14 (1961) 187–214.
[2] V. Bargmann, Remarks on a Hilbert space of analytic functions, Proc. Natl. Acad. Sci. USA 48 (1962) 199–204.
[3] P. Biane, Free Brownian motion, free stochastic calculus and random matrices, in: Free Probability Theory (Waterloo, ON, 1995), Fields Inst. Commun., vol. 12, Amer. Math. Soc., Providence, RI, 1997, pp. 1–19.
[4] P. Biane, Segal-Bargmann transform, functional calculus on matrix spaces and the theory of semicircular and circular systems, J. Funct. Anal. 144 (1) (1997) 232–286.
[5] G. Cébron, Free convolution operators and free Hall transform, J. Funct. Anal. 265 (11) (2013) 2645–2708.
[6] B.K. Driver, On the Kakutani-Itô-Segal-Gross and Segal-Bargmann-Hall isomorphisms, J. Funct. Anal. 133 (1) (1995) 69–128.
[7] B.K. Driver, L. Gross, L. Saloff-Coste, Holomorphic functions and subelliptic heat kernels over Lie groups, J. Eur. Math. Soc. 11 (5) (2009) 941–978.
[8] B.K. Driver, B.C. Hall, T. Kemp, The large-N limit of the Segal-Bargmann transform on UN, J. Funct. Anal. 265 (11) (2013) 2585–2644.
[9] B.K. Driver, B.C. Hall, T. Kemp, The complex-time Segal-Bargmann transform, J. Funct. Anal. 278 (2020) 1.
[10] B.C. Hall, The Segal-Bargmann "coherent state" transform for compact Lie groups, J. Funct. Anal. 122 (1) (1994) 103–151.
[11] C.-W. Ho, The two-parameter free unitary Segal-Bargmann transform and its Biane-Gross-Malliavin identification, J. Funct. Anal. 271 (12) (2016) 3765–3817.
[12] R.A. Horn, C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1994. Corrected reprint of the 1991 original.
[13] T. Lévy, The master field on the plane, Astérisque 388 (2017).
[14] J. Milnor, Curvatures of left invariant metrics on Lie groups, Adv. Math. 21 (3) (1976) 293–329.
[15] I.E. Segal, Mathematical characterization of the physical vacuum for a linear Bose-Einstein field (Foundations of the dynamics of infinite systems. III), Ill. J. Math. 6 (1962) 500–523.
[16] I.E. Segal, Mathematical problems of relativistic physics, with an appendix by George W. Mackey, Lectures in Applied Mathematics (Proceedings of the Summer Seminar, Boulder, Colorado, 1960), American Mathematical Society, Providence, RI, 1963.
[17] I.E. Segal, The complex-wave representation of the free boson field, in: Topics in Functional Analysis (Essays Dedicated to M.G. Kreĭn on the Occasion of His 70th Birthday), Adv. in Math. Suppl. Stud., vol. 3, Academic Press, New York-London, 1978, pp. 321–343.
[18] J. Stillwell, Naive Lie Theory, Undergraduate Texts in Mathematics, Springer, New York, 2008.