J. Math. Anal. Appl. 479 (2019) 733–751
A dynamic matrix exponential via a matrix cylinder transformation

Tom Cuchta a,∗, David Grow b, Nick Wintz c

a Department of Computer Science and Math, Fairmont State University, Fairmont, WV 26554, USA
b Department of Mathematics & Statistics, Missouri University of Science & Technology, Rolla, MO 65409, USA
c Division of Mathematics and Computer Science, Lindenwood University, St. Charles, MO 63301, USA
Article history: Received 9 January 2019; available online 17 June 2019. Submitted by M.J. Schlosser.

Keywords: Time scales calculus; Matrix exponential; Cylinder transformation

Abstract. In this work, we develop a new matrix exponential on time scales via a cylinder transformation with a component-wise, locally μΔ-integrable square matrix subscript. Our resulting matrix function can be written as the matrix exponential of a Lebesgue integral added to a logarithmic sum over the gaps of a general time scale. Under strict commutativity conditions, we show our dynamic matrix exponential is equivalent to the one in the standard literature. Finally, we demonstrate that our matrix exponential satisfies a nonlinear dynamic integral equation.
1. Introduction

The exponential on a time scale is a widely studied function whose definition depends on the cylinder transformation

$$\xi_h(z) = \begin{cases} \dfrac{1}{h}\operatorname{Log}(1+hz), & h > 0, \\ z, & h = 0. \end{cases} \tag{1}$$

Here, ξ_h maps the so-called Hilger complex plane to a horizontal strip in C; the boundaries are sometimes identified to create a cylinder. The (scalar) exponential function e_p is defined as

$$e_p(t,t_0) = \exp\left(\int_{t_0}^{t} \xi_{\mu(\tau)}\big(p(\tau)\big)\,\Delta\tau\right), \tag{2}$$
* Corresponding author.
E-mail addresses: [email protected] (T. Cuchta), [email protected] (D. Grow), [email protected] (N. Wintz).
https://doi.org/10.1016/j.jmaa.2019.06.048
where the integral is the one associated with the measure determined by the chosen time scale. It can be shown that this function is the unique solution of the dynamic initial value problem y^Δ = py, y(t_0) = 1, where p is a regressive rd-continuous function [2, Theorem 2.35].

The standard matrix generalization of e_p, denoted by e_A, follows a different path. It is usually defined first as the unique solution of the matrix dynamic initial value problem

$$y^{\Delta} = A(t)y, \qquad y(t_0) = I, \tag{3}$$

where A is regressive and rd-continuous [2, Definition 5.18]. No general definition of it in terms of a matrix cylinder function appears in the literature. In this paper, we develop a time scales matrix exponential E_A analogous to (2), which uses a matrix cylinder transformation similar to (1). We assume that the entries of A : T → C^{n×n} are locally integrable with respect to the measure μΔ generated by the time scale [10], and we demonstrate that E_A reduces to e_A whenever A obeys strong self-commutativity properties. Furthermore, we derive both a nonlinear dynamic equation, analogous to (3), which E_A satisfies almost everywhere, and an integral equation that E_A satisfies everywhere.

Properties of e_A and E_A:

    e_A(t, t_0)                          E_A(t, t_0)
    e_A(t, t) = I                        E_A(t, t) = I
    e_0(t, t) = I                        E_0(t, t) = I
    e_A(t, t_0) = e_A(t_0, t)^{-1}       E_A(t, t_0)^{-1} = E_A(t_0, t)
    e_A^σ = (I + μA) e_A                 E_A^σ = exp(Log(I + μA) + Log(E_A))
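For orientation, (2) is directly computable on a purely discrete time scale, where the Δ-integral reduces to a graininess-weighted sum. A minimal numerical sketch, assuming Python with NumPy (the time scale and the function p below are illustrative choices, not from the paper):

```python
import numpy as np

# T = {0, 1, ..., 10}: every point is right-scattered with graininess mu = 1.
p = lambda t: 0.5 / (1.0 + t)        # an illustrative regressive scalar function

def xi(h, z):
    """Scalar cylinder transformation (1), with the principal logarithm."""
    return np.log(1 + h * z) / h if h > 0 else z

def e_p(t, t0):
    """Scalar exponential (2): here the Delta-integral is the sum of
    mu(tau) * xi_{mu(tau)}(p(tau)) over tau = t0, ..., t-1 with mu = 1."""
    return np.exp(sum(1.0 * xi(1.0, p(tau)) for tau in range(t0, t)))

# On a discrete time scale the exponential telescopes to a product:
print(e_p(10, 0), np.prod([1 + p(tau) for tau in range(10)]))  # equal values
```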
Cabada and Vivero [5] produced an identity for the scalar time scale exponential as the product of a "continuous factor" (the exponential of a Lebesgue integral over [t_0, t]) and a "discrete factor" (a product over the gaps in the time scale between t_0 and t). However, the resulting formula contained an unnecessary factor identically equal to one. Hilscher and Zemánek characterized when the limit of the scalar time scales exponential exists in terms of the limit of the right-dense part of T [15]. Monteiro and Slavík [19] generalized the scalar exponential function to include p ∈ L¹(T, μΔ) and codomain R. Cieśliński [6] offered a variant of the scalar exponential defined via a Cayley transformation, which has nice qualitative properties for defining bounded trigonometric functions on time scales. Karpuz [17] treated the scalar exponential as an analytic function of its subscript for fixed t_0, t ∈ T: z ↦ e_z(t, t_0).

In comparison, the corresponding matrix exponential on time scales has not been as thoroughly investigated. Hilscher and Šepitka characterized when the solution to (3) is invertible [24]. Zafer [26], [27] provided a power series representation of the matrix exponential where the coefficient matrix A was constant. DaCunha [7] then gave a Peano-Baker representation for the matrix exponential (or state transition matrix) where A is allowed to vary with time but is bounded in norm. Davis et al. [21] have offered a power series representation of the matrix exponential which solves a continuous, linear time-invariant system discretized using a bounded stochastic graininess. Hilscher and Zemánek found lower and upper bounds on e_A for any submultiplicative norm [14]. Recently, Grow and Wintz [9] introduced a matrix exponential as the absolutely continuous solution to an integral equation corresponding to (3), where A has L^∞_loc(T, μΔ) components.

The organization of this paper is as follows. In Section 2, we provide the background theory on time scales calculus as well as the matrix properties used in this paper. In Section 3, we describe the matrix cylinder transformation and develop some of its properties. In Section 4, we offer the main results on our matrix exponential, as well as its variants depending on the commutativity conditions granted. In Section 5, we offer our analysis of these results and potential extensions moving forward.
2. Preliminaries

A time scale T is a closed subset of R under the usual topology. The forward jump function σ : T → T is defined by σ(t) = inf{s ∈ T : s > t}, taking inf ∅ := sup T. The graininess function μ : T → [0, ∞) is defined by the formula μ(t) = σ(t) − t. A point t ∈ T is called right-dense if σ(t) = t and right-scattered if σ(t) > t. Throughout, we use the notation (a_ℓ, b_ℓ) to denote the countably many (but not necessarily ordered) disjoint "gaps" of T, i.e.

$$\mathbb{R} \setminus \mathbb{T} = \bigcup_{\ell=0}^{\infty} (a_\ell, b_\ell).$$

We write

$$\mathbb{T}^{\kappa} = \begin{cases} \mathbb{T} \setminus (\rho(\sup\mathbb{T}), \sup\mathbb{T}], & \sup\mathbb{T} < \infty \\ \mathbb{T}, & \sup\mathbb{T} = \infty. \end{cases}$$
The set F₁ = {[a, b) ∩ T ⊂ R : a, b ∈ R} is a ring of subsets of T and we define the measure m₁([a, b)) = b − a on this ring. The unique Carathéodory extension of m₁ to the σ-algebra generated by F₁ is called the μΔ measure for the time scale T; more details may be found in [3, Section 5.7]. We say that f is rd-continuous [2, Definition 1.58] provided it is continuous at right-dense points and its left-sided limits exist at all left-dense points in T. We say that f is regressive [2, Definition 2.25] provided that for all t ∈ T, 1 + μ(t)f(t) ≠ 0. We say that a function f : T → R is Δ-differentiable at t with Δ-derivative f^Δ(t) provided that for all ε > 0 there is a neighborhood U of t such that for all s ∈ U,

$$\big|[f(\sigma(t)) - f(s)] - f^{\Delta}(t)[\sigma(t) - s]\big| \le \varepsilon\,|\sigma(t) - s|.$$

The concept of a Δ-derivative extends naturally to a function f : T → C by writing f = u + iv for u, v : T → R. A consequence of the definition is the following formula for the Δ-derivative of a function at t [2, Theorem 1.16]:

$$f^{\Delta}(t) = \begin{cases} f'(t), & \sigma(t) = t \\ \dfrac{f(\sigma(t)) - f(t)}{\mu(t)}, & \sigma(t) > t. \end{cases}$$
The following theorem is often called the "simple useful formula."

Theorem 2.1. [2, Theorem 5.2] If A : T → C^{n×n} is Δ-differentiable at t ∈ T^κ, then A(σ(t)) = A(t) + μ(t)A^Δ(t).

An elementary theory of Δ-integration [2, Section 1.4] is defined so as to satisfy the fundamental theorems of calculus: the "first FTC" resembling the formula

$$\left(\int_{t_0}^{t} f(\tau)\,\Delta\tau\right)^{\Delta} = f(t), \tag{4}$$

and the "second FTC" resembling the formula

$$\int_{t_0}^{t} f^{\Delta}(\tau)\,\Delta\tau = f(t) - f(t_0). \tag{5}$$
An analog of (4) is proven in [10, Theorem 4.3] and an analog of (5) is proven in [10, Theorem 4.1]. We define the function space L¹_loc(T, μΔ) to be those functions f : T → C that are locally Lebesgue Δ-integrable on T, i.e., Δ-integrable on all compact subsets of T. If f ∈ L¹(T, μΔ), then [4, Theorem 4.1] shows that f : T → R is absolutely continuous if and only if f is Δ-differentiable μΔ-almost everywhere on [a, b) ∩ T, f^Δ ∈ L¹(T, μΔ), and (5) holds for all t ∈ T. An analog of (4) for a locally integrable function f is proven in [29, Lemma 2.6]. The paper [22] shows that the function G : R → R defined by G(t) = sup{s ∈ T : s ≤ t} relates the time scale integral to the classical Lebesgue integral:

$$\int_{s}^{t} f(\tau)\,\Delta\tau = \int_{s}^{t} f(G(\tau))\,d\tau. \tag{6}$$
Throughout, we use dτ to denote integration with respect to Lebesgue measure and Δτ to denote integration with respect to the time scales measure μΔ.

Following [2, pp. 51–57] (which is based on [12, Section 7]), for h ≥ 0 we define the Hilger complex plane

$$\mathbb{C}_h = \begin{cases} \mathbb{C} \setminus \left\{-\tfrac{1}{h}\right\}, & h > 0 \\ \mathbb{C}, & h = 0, \end{cases}$$

and define the strip $\mathbb{Z}_h = \left\{z \in \mathbb{C} : -\tfrac{\pi}{h} < \operatorname{Im}(z) \le \tfrac{\pi}{h}\right\}$. We define the (scalar) "cylinder transformation" ξ_h : C_h → Z_h by (1), where Log denotes the principal (scalar) logarithm with branch cut (−∞, 0].

A function A : T → C^{n×n} is called regressive [2, Definition 5.5] provided that for all t ∈ T^κ, I + μ(t)A(t) is invertible. The unique solution to the initial value problem (3) is the time scales matrix exponential [2, Definition 5.18], denoted by Y(t) = e_A(t, t_0). This definition is written as a solution to a dynamic equation as opposed to a direct matrix analog of (2). By [2, Theorem 5.21], it is known that e_A obeys the following formulas:

$$e_0(t,t_0) = I \quad \text{and} \quad e_A(t,t) = I, \tag{7}$$

$$e_A(\sigma(t),t_0) = (I + \mu(t)A(t))\,e_A(t,t_0), \tag{8}$$

$$e_A(t,t_0)^{-1} = e_A(t_0,t), \tag{9}$$

and

$$e_A(t,t_0)\,e_A(t_0,t_1) = e_A(t,t_1). \tag{10}$$
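On a purely discrete time scale such as T = Z, repeated application of (8) realizes e_A(t, t_0) as the ordered product (I + A(t−1))⋯(I + A(t_0)), and properties (7) and (10) can be checked numerically. A minimal sketch, assuming Python with NumPy (the coefficient matrices are arbitrary illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = [0.1 * rng.standard_normal((2, 2)) for _ in range(4)]  # A(0), ..., A(3)

def e_A(t, t0):
    """e_A(t, t0) on T = Z via (8): left-multiply by I + A(s), s = t0, ..., t-1."""
    Y = np.eye(2)
    for s in range(t0, t):
        Y = (np.eye(2) + A[s]) @ Y
    return Y

assert np.allclose(e_A(2, 2), np.eye(2))                # (7): e_A(t, t) = I
assert np.allclose(e_A(3, 1) @ e_A(1, 0), e_A(3, 0))    # (10): semigroup property
print(np.linalg.inv(e_A(3, 0)))                         # equals e_A(0, 3) by (9)
```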
A matrix A ∈ C^{n×n} is called Hermitian if it is equal to its conjugate transpose, i.e. A* = A. A matrix is called normal if AA* = A*A. It is clear that all Hermitian matrices are normal. A matrix U ∈ C^{n×n} is called unitary if U* = U^{−1}. The following theorem is a consequence of the Schur decomposition.

Theorem 2.2. [8, Corollary 7.1.4] A matrix A ∈ C^{n×n} is normal if and only if there exists a unitary U ∈ C^{n×n} such that U*AU = diag(λ_1, …, λ_n).

Therefore all Hermitian matrices are unitarily diagonalizable. It is known that the Jordan canonical form of a diagonalizable matrix D ∈ C^{n×n} can be written as D = J^{−1} diag(λ_1, …, λ_n) J,
where the λ_i denote the eigenvalues of D. In [11, Definition 1.2], the extension of a function f : C → C to a Hermitian matrix A ∈ C^{n×n} with Jordan canonical form A = J^{−1} diag(λ_1, …, λ_n) J is given by

$$f(A) = J^{-1}\,\operatorname{diag}\big(f(\lambda_1), \ldots, f(\lambda_n)\big)\,J, \tag{11}$$

provided that f(λ_i) exists for all 1 ≤ i ≤ n. Throughout, we will assume the scalar logarithm has branch cut (−∞, 0], and hence the matrix analogue via (11) is easily understood. Following [11, p. 39], we define the function sign : C \ {z ∈ C : Re(z) = 0} → {1, −1} by

$$\operatorname{sign}(z) = \begin{cases} 1, & \operatorname{Re}(z) > 0 \\ -1, & \operatorname{Re}(z) < 0. \end{cases}$$
By (11), the extension of the scalar exponential (and sign function) to Hermitian (invertible) matrices is clear. We say that A is positive semi-definite and write A ≥ 0 provided that for all nonzero vectors z ∈ C^{n×1},

$$z^* A z \ge 0; \tag{12}$$

we say that A is positive definite and write A > 0 if (12) is strict.

Theorem 2.3. [16, Theorem 4.1.8] A matrix A ∈ C^{n×n} is positive semi-definite if and only if A is Hermitian and all of its eigenvalues are nonnegative.

The matrix exponential is defined [11, (10.1)] by

$$\exp(A) = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots,$$

and it is noted there that this agrees with the definition via (11) for Hermitian matrices. The familiar scalar formula e^{a+b} = e^a e^b generalizes to the matrix case in the following way.

Theorem 2.4. [11, Theorem 10.2] For A, B ∈ C^{n×n}, exp((A + B)t) = exp(At) exp(Bt) for all t ∈ C if and only if AB = BA.

Since A always commutes with −A, we observe that exp(A) exp(−A) = exp(A + (−A)) = exp(0) = I, hence exp(A)^{−1} = exp(−A). We say that a matrix Y is a logarithm of the matrix X provided that X = exp(Y). The following result is the logarithm version of Theorem 2.4.

Theorem 2.5. [11, Theorem 11.3] Suppose B, C ∈ C^{n×n} both have no eigenvalues on R⁻ and BC = CB. If for every eigenvalue λ_j of B and the corresponding eigenvalue η_j of C,
$$|\arg(\lambda_j) + \arg(\eta_j)| < \pi,$$

then log(BC) = log(B) + log(C).

Theorem 2.6. [1, Proposition 11.4.5] If A is positive definite, then there exists a unique Hermitian logarithm B such that exp(B) = A.

If A is positive definite, by Theorem 2.6, the unique logarithm is Hermitian and its eigenvalues lie in the strip Z_h. If A ∈ C^{n×n} is Hermitian with some negative eigenvalues, then we know that it has a Jordan canonical form A = J^{−1} diag(λ_1, …, λ_n) J, and

$$\operatorname{Log}(A) = J^{-1}\left[\operatorname{diag}\big(\operatorname{Log}(|\lambda_1|), \ldots, \operatorname{Log}(|\lambda_n|)\big) + \operatorname{diag}\left(\frac{1 - \operatorname{sign}(\lambda_1)}{2}, \ldots, \frac{1 - \operatorname{sign}(\lambda_n)}{2}\right) i\pi\right] J,$$

hence

$$\operatorname{Log}(A) = \operatorname{Log}(|A|) + i\pi\,\frac{I - \operatorname{sign}(A)}{2}, \tag{13}$$

where |A| is the absolute value of the matrix A given by the formula [11, p. 196]

$$|A| = \sqrt{A^* A},$$

which we note is consistent with (11). From [11, (11.15)],

$$\operatorname{Log}(I + A) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1} A^k}{k}, \qquad r(A) < 1, \tag{14}$$
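Identity (13), together with the eigendecomposition-based definition (11), can be verified numerically for a Hermitian matrix with a negative eigenvalue. A sketch, assuming Python with NumPy/SciPy (scipy.linalg.logm computes the principal matrix logarithm; the test matrix is an illustrative choice):

```python
import numpy as np
from scipy.linalg import logm, sqrtm

A = np.array([[1.0, 2.0], [2.0, -2.0]])     # Hermitian; eigenvalues 2 and -3

# sign(A) and |A| via the eigendecomposition, as in (11):
lam, U = np.linalg.eigh(A)
sign_A = U @ np.diag(np.sign(lam)) @ U.conj().T
abs_A = sqrtm(A.conj().T @ A).real          # |A| = sqrt(A* A), here real SPD

lhs = logm(A.astype(complex))               # principal Log, branch cut (-inf, 0]
rhs = logm(abs_A.astype(complex)) + 1j * np.pi * (np.eye(2) - sign_A) / 2
print(np.allclose(lhs, rhs))                # True: identity (13)
```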
where r(A) denotes the spectral radius of A, i.e., r(A) = max{|λ| : det(A − λI) = 0}. The 1-norm of a matrix A = {a_ij} ∈ C^{n×n} is defined by

$$\|A\|_1 = \sum_{i=1}^{n} \sum_{j=1}^{n} |a_{ij}|.$$

It is known [16, p. 365] that for A ∈ C^{n×n}, r(A) ≤ ‖A‖₁. The trace of a matrix A, denoted Tr(A), is the sum of its diagonal entries; it is known [16, p. 50] that the trace of A equals the sum of the eigenvalues of A. The following inequality, called the Golden-Thompson inequality, holds for positive semi-definite matrices [20, (8)]:

$$\operatorname{Tr}\exp(A + B) \le \operatorname{Tr}\big(\exp(A)\exp(B)\big). \tag{15}$$
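The Golden-Thompson inequality (15) is easy to probe numerically for random positive semi-definite matrices. A sketch, assuming Python with NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))
A, B = X @ X.T, Y @ Y.T              # random positive semi-definite matrices

lhs = np.trace(expm(A + B))
rhs = np.trace(expm(A) @ expm(B))
print(lhs <= rhs)                    # True: Golden-Thompson inequality (15)
```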
The following two technical results will be used to develop properties of our cylinder matrix exponential in Section 4.

Lemma 2.7 ([18, Lemma 2.1]). If $\mathcal{T} = \bigcup_{k=1}^{\infty} T_k$ is a covering of T by at most countably many measurable subsets and

$$g_1(t) = \chi_{T_1}(t), \qquad g_{i+1}(t) = \chi_{T_{i+1}}(t)\big(1 - g_1(t) - \ldots - g_i(t)\big),$$

where χ denotes the indicator function, then $\sum_{i=1}^{\infty} g_i(t) = 1$ is a measurable partition of unity of T.

Fig. 1. The Hilger complex plane C_h, the strip Z_h, and the cylinder transformation ξ_h.
Lemma 2.8 ([18, Lemma 2.2]). Let u₁, u₂, …, u_m be measurable functions from T to R^n such that {u₁(t), …, u_m(t)} is an orthonormal subset of R^n for each t ∈ T. Then we can find measurable functions u_{m+1}, …, u_n from T to R^n such that {u₁(t), …, u_n(t)} is an orthonormal basis of R^n for every t ∈ T.

3. Matrix cylinder transformation

Suppose A is regressive and has entries a_ij ∈ L¹_loc(T, μΔ). We now generalize (1) by defining the matrix cylinder transformation.

Definition 3.1. We define C_h^{n×n} to be the family of complex regressive matrices and Z_h^{n×n} to be the set of matrices whose eigenvalues lie in the strip Z_h. The matrix cylinder function Ξ_h : C_h^{n×n} → Z_h^{n×n} is defined by

$$\Xi_h(A) = \begin{cases} \dfrac{1}{h}\operatorname{Log}(I + hA), & h > 0 \\ A, & h = 0. \end{cases}$$
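Definition 3.1 translates directly into code. A sketch, assuming Python with NumPy/SciPy; the limit behavior Ξ_h(A) → A as h → 0⁺ is also visible numerically:

```python
import numpy as np
from scipy.linalg import logm

def Xi(h, A):
    """Matrix cylinder transformation of Definition 3.1.

    Assumes I + h*A is invertible (A regressive at graininess h) and, for the
    principal logarithm, that I + h*A has no eigenvalues on (-inf, 0].
    """
    A = np.asarray(A, dtype=complex)
    if h > 0:
        return logm(np.eye(A.shape[0]) + h * A) / h
    return A

A = np.array([[0.0, 1.0], [-2.0, 3.0]])
for h in (1.0, 0.1, 0.001):
    print(h, np.linalg.norm(Xi(h, A) - A))   # Xi_h(A) -> A as h -> 0+
```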
Remark 3.2. We note that due to (11), this definition amounts to applying the scalar cylinder transformation to the eigenvalues of A. Hence the eigenvalues of A are mapped into the horizontal strip as described in Fig. 1.

The next lemma follows from the definition of Ξ.

Lemma 3.3. If A : T → C^{n×n} is a regressive Hermitian matrix function, then
(i) I + μ(t)A(t) is Hermitian for all t ∈ T,
(ii) if A(t) is positive semi-definite for all t ∈ T, then Ξ_{μ(t)}(A(t)) is Hermitian for all t ∈ T, and
(iii) if A(t) is positive definite for all t ∈ T, then Ξ_{μ(t)}(A(t)) is positive definite for all t ∈ T.

Proof. Proofs for (i)-(iii) are trivial when t is right-dense. Assume t is right-scattered. Then for (i),

$$(I + \mu(t)A(t))^* = I + \mu(t)A(t)^* = I + \mu(t)A(t),$$
completing the proof of (i). To prove (ii), first we show that I + μ(t)A(t) is positive definite: for any nonzero z ∈ C^{n×1}, compute

$$z^*(I + \mu(t)A(t))z = z^* z + \mu(t)\,z^* A(t) z > 0.$$

Now applying Theorem 2.6 completes the proof. To prove (iii), let z ∈ C^{n×1} be nonzero. Since A(t) > 0 for all t ∈ T, Theorem 2.2 guarantees that there is a unitary matrix U(t) such that A(t) = U*(t) diag(λ₁(t), …, λ_n(t)) U(t), where λ₁(t), …, λ_n(t) are the positive eigenvalues of A(t). Now

$$I + \mu(t)A(t) = U^*(t)U(t) + \mu(t)\,U^*(t)\operatorname{diag}(\lambda_1(t), \ldots, \lambda_n(t))U(t) = U^*(t)\operatorname{diag}\big(1 + \mu(t)\lambda_1(t), \ldots, 1 + \mu(t)\lambda_n(t)\big)U(t).$$

Let D(t) = diag(ln(1 + μ(t)λ₁(t)), …, ln(1 + μ(t)λ_n(t))) > 0. From the calculation

$$\exp\big(U^*(t)D(t)U(t)\big) = \sum_{k=0}^{\infty} \frac{(U^*(t)D(t)U(t))^k}{k!} = U^*(t)\left(\sum_{k=0}^{\infty} \frac{D(t)^k}{k!}\right)U(t) = I + \mu(t)A(t),$$

and the uniqueness of the principal logarithm, we have Log(I + μ(t)A(t)) = U*(t)D(t)U(t). Since D > 0 and U is invertible, it is clear that Log(I + μ(t)A(t)) is positive definite. □

Lemma 3.4. If A : T → C^{n×n} is a regressive Hermitian matrix function with each a_ij ∈ L¹_loc(T, μΔ), then the function t ↦ Ξ_{μ(t)}(A(t)) is locally integrable.

Proof. Since A is locally integrable,
$$\sum_{t_0 \le a_\ell < t} \mu(a_\ell)\,\|A(a_\ell)\|_1 \le \int_{t_0}^{t} \|A(\tau)\|_1\,\Delta\tau < \infty,$$

and hence $\mu(a_\ell)\|A(a_\ell)\|_1 \le \tfrac{1}{2}$ for all but finitely many indices ℓ. If μ(τ) = 0, then ‖Ξ_{μ(τ)}(A(τ))‖₁ = ‖A(τ)‖₁. Now suppose μ(τ) > 0. Then τ = a_ℓ for some ℓ. By definition, it follows that

$$\|\Xi_{\mu(\tau)}(A(\tau))\|_1 = \frac{1}{\mu(a_\ell)}\,\|\operatorname{Log}(I + \mu(a_\ell)A(a_\ell))\|_1.$$

Recall that

$$|\ln(1+y)| \le \begin{cases} y, & 0 \le y \le \tfrac{1}{2} \\ |y|\ln(4), & -\tfrac{1}{2} \le y < 0. \end{cases}$$

Since $r\big(\mu(a_\ell)A(a_\ell)\big) \le \|\mu(a_\ell)A(a_\ell)\|_1 \le \tfrac{1}{2}$, we use (14) to write

$$\|\operatorname{Log}(I + \mu(a_\ell)A(a_\ell))\|_1 = \left\|\sum_{k=1}^{\infty} \frac{(-1)^{k+1}\big(\mu(a_\ell)A(a_\ell)\big)^k}{k}\right\|_1 \le \sum_{k=1}^{\infty} \frac{\big(\mu(a_\ell)\|A(a_\ell)\|_1\big)^k}{k} = -\ln\big(1 - \mu(a_\ell)\|A(a_\ell)\|_1\big) \le \ln(4)\,\mu(a_\ell)\|A(a_\ell)\|_1.$$

Therefore

$$\|\Xi_{\mu(a_\ell)}(A(a_\ell))\|_1 = \frac{1}{\mu(a_\ell)}\,\|\operatorname{Log}(I + \mu(a_\ell)A(a_\ell))\|_1 \le \ln(4)\,\|A(a_\ell)\|_1.$$

Hence,

$$\left\|\int_{t_0}^{t} \Xi_{\mu(\tau)}(A(\tau))\,\Delta\tau\right\|_1 \le \int_{t_0}^{t} \big\|\Xi_{\mu(\tau)}(A(\tau))\big\|_1\,\Delta\tau \le \ln(4)\int_{t_0}^{t} \|A(\tau)\|_1\,\Delta\tau < \infty,$$
completing the proof. □

4. Cylinder matrix exponential on time scales

Analogous to (2), we now define the cylinder matrix exponential E_A.

Definition 4.1. If A : T → C^{n×n} is regressive, then we define the cylinder matrix exponential E_A : T × T → C^{n×n} by

$$E_A(t,t_0) = \exp\left(\int_{t_0}^{t} \Xi_{\mu(\tau)}(A(\tau))\,\Delta\tau\right),$$

where the integral is associated with the measure μΔ generated by the time scale.

A previous attempt to define a time scales matrix exponential in terms of a matrix cylinder transformation can be found in [28, (2.3)]. But, as written, it appears to restrict itself to the case when A has regressive rd-continuous components and also to T = R or time scales of isolated points. Furthermore, [28, Theorem 2.11] claims that properties (7)-(10) hold, but it turns out that only (7) and (9) do. The following two propositions are analogs of (7) and (9), respectively, and follow immediately from Definition 4.1.

Proposition 4.2. If A : T → C^{n×n} is a regressive matrix function, then E_A(t_0, t_0) = I, and if A is the zero matrix, then E_0(t, t_0) = I.

Proposition 4.3. If A : T → C^{n×n} is a regressive matrix function with each a_ij ∈ L¹_loc(T, μΔ), then E_A(t, t_0)^{-1} = E_A(t_0, t).
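On a time scale of isolated points, the Δ-integral in Definition 4.1 reduces to a graininess-weighted sum, and each summand μ Ξ_μ(A) equals Log(I + μA) by Definition 3.1. A sketch, assuming Python with NumPy/SciPy (the time scale and coefficient matrices below are illustrative):

```python
import numpy as np
from scipy.linalg import expm, logm

def E_A(ts, A_vals):
    """E_A(t_N, t_0) over isolated points t_0 < ... < t_N with A(t_k) = A_vals[k].

    Each summand mu * Xi_mu(A) equals Log(I + mu * A) by Definition 3.1, so the
    Delta-integral in Definition 4.1 becomes a finite sum of logarithms.
    """
    n = A_vals[0].shape[0]
    S = np.zeros((n, n), dtype=complex)
    for k in range(len(ts) - 1):
        mu = ts[k + 1] - ts[k]
        S += logm(np.eye(n) + mu * A_vals[k])
    return expm(S)

# Example usage on T = {0, 0.5, 1.5, 2} with a constant rotation generator:
ts = [0.0, 0.5, 1.5, 2.0]
A_vals = [np.array([[0.0, 1.0], [-1.0, 0.0]])] * 3
print(E_A(ts, A_vals))
```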
We now demonstrate that e_A and E_A are, in general, different functions by showing that E_A does not satisfy (8) and (10).

Example 4.4. Consider the time scale T = {0, 1, 2, …} and t_0 = 0. Define A : T → R^{2×2} by

$$A(t) = \begin{bmatrix} t+2 & (t+2)^2 \\ (t+2)^3 & (t+2)^4 \end{bmatrix}.$$

Note that e_A(0, 0) = I, and by repeated application of (8),

$$e_A(1,0) = I + A(0) = \begin{bmatrix} 3 & 4 \\ 8 & 17 \end{bmatrix}, \quad \text{and} \quad e_A(2,0) = (I + A(1))(I + A(0)) = \begin{bmatrix} 4 & 9 \\ 27 & 82 \end{bmatrix}\begin{bmatrix} 3 & 4 \\ 8 & 17 \end{bmatrix} = \begin{bmatrix} 84 & 169 \\ 737 & 1502 \end{bmatrix}.$$

On the other hand, E_A(0, 0) = exp(0) = I and

$$E_A(1,0) = \exp\big(\operatorname{Log}(I + A(0))\big) = I + A(0) = \begin{bmatrix} 3 & 4 \\ 8 & 17 \end{bmatrix},$$

but

$$E_A(2,0) = \exp\big(\operatorname{Log}(I+A(0)) + \operatorname{Log}(I+A(1))\big) = \exp\left(\operatorname{Log}\begin{bmatrix} 3 & 4 \\ 8 & 17 \end{bmatrix} + \operatorname{Log}\begin{bmatrix} 4 & 9 \\ 27 & 82 \end{bmatrix}\right) = \exp\left(\begin{bmatrix} \dfrac{\ln(19)}{9} + \dfrac{\ln(85)}{28} & \dfrac{2\ln(19)}{9} + \dfrac{3\ln(85)}{28} \\[6pt] \dfrac{4\ln(19)}{9} + \dfrac{9\ln(85)}{28} & \dfrac{8\ln(19)}{9} + \dfrac{27\ln(85)}{28} \end{bmatrix}\right) \approx \begin{bmatrix} 96.9684 & 240.686 \\ 582.728 & 1463.05 \end{bmatrix}.$$

As a result, we see that (8) does not have a direct analog for E_A. Finally, compute

$$E_A(2,1) = \exp\big(\operatorname{Log}(I + A(1))\big) = \begin{bmatrix} 4 & 9 \\ 27 & 82 \end{bmatrix}.$$

So we have

$$E_A(2,1)\,E_A(1,0) = \begin{bmatrix} 4 & 9 \\ 27 & 82 \end{bmatrix}\begin{bmatrix} 3 & 4 \\ 8 & 17 \end{bmatrix} = e_A(2,0) \ne E_A(2,0),$$

showing that a direct analog of (10) also fails. In light of Theorem 2.4 and Theorem 2.5, we see that e_A(2, 0) = E_A(2, 0) if and only if A(0) and A(1) commute.

By uniqueness of solutions, we know that E_A does not generally satisfy the dynamic equation (3). We later derive an integral equation that E_A satisfies. We also show the corresponding dynamic equation it satisfies μΔ-almost everywhere, and we explain when that dynamic equation is identical to (3). Note that if A ∈ C^{1×1}, then E_A is identical to the scalar time scales exponential (2).
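Example 4.4 can be verified numerically. A sketch, assuming Python with NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import expm, logm

A = lambda t: np.array([[t + 2.0, (t + 2.0) ** 2],
                        [(t + 2.0) ** 3, (t + 2.0) ** 4]])
I = np.eye(2)

eA_20 = (I + A(1)) @ (I + A(0))                # repeated application of (8)
EA_20 = expm(logm(I + A(0)) + logm(I + A(1)))  # Definition 4.1 on T = {0,1,2,...}

print(eA_20)                      # [[  84.  169.], [ 737. 1502.]]
print(EA_20.real)                 # approx [[ 96.97  240.69], [582.73 1463.05]]
print(np.allclose(eA_20, EA_20))  # False: the two exponentials differ
```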
Lemma 4.5. If A : T → C^{n×n} is a regressive Hermitian matrix function with each a_ij ∈ L¹_loc(T, μΔ), then for all t ∈ T with t ≥ t_0,

$$E_A(t,t_0) = \exp\left(\;\int_{[t_0,t]\cap\mathbb{T}} A(\tau)\,d\tau \;+ \sum_{t_0 \le a_\ell < t}\left[\operatorname{Log}\big(|I + \mu(a_\ell)A(a_\ell)|\big) + i\pi\,\frac{I - \operatorname{sign}(I + \mu(a_\ell)A(a_\ell))}{2}\right]\right).$$
Proof. Using (6), Definition 3.1, and (13) yields

$$\begin{aligned} \int_{t_0}^{t} \Xi_{\mu(\tau)}(A(\tau))\,\Delta\tau &= \int_{t_0}^{t} \Xi_{\mu(G(\tau))}\big(A(G(\tau))\big)\,d\tau \\ &= \int_{[t_0,t]\cap\mathbb{T}} \Xi_{\mu(G(\tau))}\big(A(G(\tau))\big)\,d\tau + \int_{[t_0,t]\setminus\mathbb{T}} \Xi_{\mu(G(\tau))}\big(A(G(\tau))\big)\,d\tau \\ &= \int_{[t_0,t]\cap\mathbb{T}} A(\tau)\,d\tau + \sum_{t_0 \le a_\ell < t} \int_{a_\ell}^{b_\ell} \frac{1}{\mu(a_\ell)}\operatorname{Log}\big(I + \mu(a_\ell)A(a_\ell)\big)\,d\tau \\ &= \int_{[t_0,t]\cap\mathbb{T}} A(\tau)\,d\tau + \sum_{t_0 \le a_\ell < t} \operatorname{Log}\big(I + \mu(a_\ell)A(a_\ell)\big). \end{aligned}$$
Taking exp of both sides, considering Definition 4.1, and applying (13) completes the proof. □

The following result follows from combining Lemma 3.3 (iii) and (15).

Corollary 4.6. Let A : T → C^{n×n} be a regressive Hermitian matrix function with each a_ij ∈ L¹_loc(T, μΔ). If A(t) is positive definite for every t ∈ T, then

$$\operatorname{Tr}\big(E_A(t,t_0)\big) \le \operatorname{Tr}\left(\exp\left(\;\int_{[t_0,t]\cap\mathbb{T}} A(\tau)\,d\tau\right)\exp\left(\sum_{t_0 \le a_\ell < t} \operatorname{Log}\big(I + \mu(a_\ell)A(a_\ell)\big)\right)\right).$$
The following corollary is the scalar case of Lemma 4.5; the coefficient (−1)^m keeps track of the (finite) number m of gaps in T between t_0 and t where the quantity 1 + μ(a_ℓ)p(a_ℓ) is negative. We note that a special case of this appears in [5, p. 204], but it contains factors inside the product of the form

$$\exp\left(\int_{t_i}^{\sigma(t_i)} p(s)\,ds\right) = \exp\left(\;\int_{[t_i,\sigma(t_i))\cap\mathbb{T}} p(s)\,ds\right),$$

which is always identically equal to 1.

Corollary 4.7. If p ∈ L¹_loc(T, μΔ), then the following formula holds for all t, t_0 ∈ T with t ≥ t_0:

$$e_p(t,t_0) = (-1)^m \exp\left(\;\int_{[t_0,t]\cap\mathbb{T}} p(\tau)\,d\tau\right) \prod_{t_0 \le a_\ell < t} \big|1 + \mu(a_\ell)p(a_\ell)\big|, \tag{16}$$

where m is the (necessarily finite) cardinality of the set of points where 1 + μ(a_ℓ)p(a_ℓ) < 0.
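Formula (16) can be tested on a time scale containing both a continuous part and gaps, including a gap where 1 + μp < 0 so that the factor (−1)^m is exercised. A sketch, assuming Python with NumPy/SciPy; the time scale T = [0, 1] ∪ {2, 3} and the choices of p are illustrative:

```python
import numpy as np
from scipy.integrate import quad

# T = [0, 1] ∪ {2, 3}: a continuous piece, then two right-scattered points.
# p = cos on [0, 1); at the right-scattered points take p(1) = -3, p(2) = 0.5,
# so 1 + mu(1) p(1) = -2 < 0 and the sign factor (-1)^m appears with m = 1.
p = np.cos
p1, p2 = -3.0, 0.5
Ip = quad(p, 0, 1)[0]                 # Lebesgue integral of p over [0, 1]

# e_p(3, 0) from first principles: y' = p y on [0, 1] gives y(1) = exp(Ip),
# then y(sigma(t)) = (1 + mu(t) p(t)) y(t) across each right-scattered point.
e_p = (1 + p2) * (1 + p1) * np.exp(Ip)

# Factorization (16): the continuous factor integrates over [0, 3] ∩ T, where
# the isolated points {2, 3} have Lebesgue measure zero.
rhs = (-1) ** 1 * np.exp(Ip) * abs(1 + p1) * abs(1 + p2)
print(np.isclose(e_p, rhs))           # True
```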
A direct matrix analog of (16) requires strong commutativity assumptions on the matrix function A.

Corollary 4.8. Let A : T → C^{n×n} be a regressive Hermitian matrix function with each a_ij ∈ L¹_loc(T, μΔ). If A satisfies

(i) for all t_1, t_2 ∈ [t_0, t) ∩ T, A(t_1) commutes with A(t_2), and
(ii) for all t_1 ∈ [t_0, t), $\int_{[t_0,t]\cap\mathbb{T}} A(\tau)\,d\tau$ commutes with A(t_1),

then

$$E_A(t,t_0) = \exp\left(\;\int_{[t_0,t]\cap\mathbb{T}} A(\tau)\,d\tau\right) \prod_{t_0 \le a_\ell < t} \big(I + \mu(a_\ell)A(a_\ell)\big).$$

Proof. By [11, Theorem 1.13(a)], for any α ∈ (0, ∞), Log(I + αA(t)) and I + αA(t) commute. Hence Log(I + αA(t)) always commutes with A(t). The result follows by considering Lemma 4.5, Theorem 2.4, Theorem 2.5, and the properties (i) and (ii). □

Corollary 4.8 shows that if A is a constant matrix, then

$$E_A(t,t_0) = \exp\Big(A\,\lambda\big([t_0,t)\cap\mathbb{T}\big)\Big) \prod_{t_0 \le a_\ell < t} \big(I + \mu(a_\ell)A\big),$$

where λ denotes Lebesgue measure. If A(t) is diagonal for all t ∈ T, then

$$E_A(t,t_0) = \exp\left(\;\int_{[t_0,t]\cap\mathbb{T}} A(\tau)\,d\tau\right) \prod_{t_0 \le a_\ell < t} \big(I + \mu(a_\ell)A(a_\ell)\big).$$
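The constant-coefficient factorization above is easy to check: all summands are functions of the single matrix A, so they commute and Theorem 2.4 applies. A sketch, assuming Python with NumPy/SciPy, again on the illustrative time scale T = [0, 1] ∪ {2, 3}:

```python
import numpy as np
from scipy.linalg import expm, logm

A = np.array([[0.2, 0.1], [0.0, 0.3]])   # constant coefficient matrix
I = np.eye(2)

# T = [0, 1] ∪ {2, 3}: lambda([0, 3) ∩ T) = 1; gaps at t = 1, 2 with mu = 1.
factored = expm(A * 1.0) @ (I + A) @ (I + A)

# Definition 4.1 directly: the continuous part contributes int_0^1 A dt = A,
# and each gap contributes Log(I + A).
direct = expm(A + logm(I + A) + logm(I + A))

# Log(I + A) is a power series in A, hence commutes with A, so the two agree:
print(np.allclose(factored, direct))      # True
```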
Lemma 4.9. If A : T → C^{n×n} is a regressive positive definite matrix function with each a_ij ∈ L¹_loc(T, μΔ), then E_A(t, t_0) is Hermitian for every t ∈ T with t ≥ t_0.

Proof. We know from Lemma 3.3 (ii) that Ξ_{μ(τ)}(A(τ)) is Hermitian. It is well-known that the Δ-integral of a Hermitian matrix is Hermitian and that the exponential of a Hermitian matrix is Hermitian, which completes the proof. □

The following theorem is similar to [18, Theorem 2.4].

Theorem 4.10. If A : T → C^{n×n} is a matrix function with each a_ij ∈ L¹_loc(T, μΔ) and A(t) is Hermitian for every t ∈ T, then the distinct eigenvalues λ_n(t) > λ_{n−1}(t) > … > λ_{n−m}(t) of A(t), where m ≤ n, are measurable and have corresponding measurable eigenvector functions u₁(t), …, u_n(t) which form an orthonormal basis of C^{n×1}.

Proof. The case n = 1 is trivial. Assume the induction hypothesis that the theorem holds for (n−1) × (n−1) matrices. We first argue the case where A(t) is positive semi-definite with λ_n(t) > 0. Let ℓ_j(t) denote the multiplicity of the eigenvalue λ_j(t) of A(t). It is clear that for k ≥ 1,
$$\big(\operatorname{Tr}(A^k(t))\big)^{1/k} = \Big(\ell_n(t)\lambda_n^k(t) + \ell_{n-1}(t)\lambda_{n-1}^k(t) + \ldots + \ell_{n-m}(t)\lambda_{n-m}^k(t)\Big)^{1/k}.$$
Now compute

$$\big(\operatorname{Tr}(A^k(t))\big)^{1/k} = \lambda_n(t)\left[\ell_n(t) + \ell_{n-1}(t)\left(\frac{\lambda_{n-1}(t)}{\lambda_n(t)}\right)^k + \ldots + \ell_{n-m}(t)\left(\frac{\lambda_{n-m}(t)}{\lambda_n(t)}\right)^k\right]^{1/k} \ge \lambda_n(t).$$

Therefore,

$$\liminf_{k\to\infty} \big(\operatorname{Tr}(A^k(t))\big)^{1/k} \ge \lambda_n(t).$$

On the other hand,

$$\begin{aligned} \left[\ell_n(t) + \ell_{n-1}(t)\left(\frac{\lambda_{n-1}(t)}{\lambda_n(t)}\right)^k + \ldots + \ell_{n-m}(t)\left(\frac{\lambda_{n-m}(t)}{\lambda_n(t)}\right)^k\right]^{1/k} &= \exp\left[\frac{1}{k}\ln\left(\ell_n(t) + \ell_{n-1}(t)\left(\frac{\lambda_{n-1}(t)}{\lambda_n(t)}\right)^k + \ldots + \ell_{n-m}(t)\left(\frac{\lambda_{n-m}(t)}{\lambda_n(t)}\right)^k\right)\right] \\ &\le \exp\left[\frac{1}{k}\ln\big(\ell_n(t) + \ell_{n-1}(t) + \ldots + \ell_{n-m}(t)\big)\right] \le \exp\left[\frac{\ln(n)}{k}\right] \to 1 \end{aligned}$$

as k → ∞. Consequently,

$$\limsup_{k\to\infty} \big(\operatorname{Tr}(A^k(t))\big)^{1/k} = \lambda_n(t)\,\limsup_{k\to\infty}\left[\ell_n(t) + \ell_{n-1}(t)\left(\frac{\lambda_{n-1}(t)}{\lambda_n(t)}\right)^k + \ldots + \ell_{n-m}(t)\left(\frac{\lambda_{n-m}(t)}{\lambda_n(t)}\right)^k\right]^{1/k} \le \lambda_n(t).$$

Hence,

$$\lim_{k\to\infty} \big(\operatorname{Tr}(A^k(t))\big)^{1/k} = \lambda_n(t).$$
Since the entries of A(t) are μΔ-measurable, Tr(A^k(t)) is also μΔ-measurable. As a result, the maximal eigenvalue of A(t) is expressible as the pointwise limit of μΔ-measurable functions, proving it also is μΔ-measurable. If t ∈ T, let I = P_n(t) + P_{n−1}(t) + … + P_{n−m}(t) be the orthogonal sum of eigenprojections for the distinct eigenvalues (see Fig. 2), where P_i(t) corresponds to the eigenvalue λ_i(t). Note that P_i(t) is the orthogonal projection of R^n onto the eigenspace corresponding to λ_i(t) and, in general, m depends on t.
Fig. 2. Eigenprojections in the case m = 2. Note that v = P_n(t)v + P_{n−1}(t)v, hence we have the operator identity I = P_n(t) + P_{n−1}(t).
Let {e₁, …, e_n} be an orthonormal basis for C^{n×1} and t ∈ T. For each k = 1, 2, …, n, write e_k = P_n(t)e_k + P_{n−1}(t)e_k + … + P_{n−m}(t)e_k. Let q ∈ {1, 2, 3, …} and multiply by A(t) on the left q times to obtain

$$A^q(t)e_k = \lambda_n^q(t)P_n(t)e_k + \lambda_{n-1}^q(t)P_{n-1}(t)e_k + \ldots + \lambda_{n-m}^q(t)P_{n-m}(t)e_k.$$

By our ordering of eigenvalues, we see that

$$\lim_{q\to\infty} \frac{A^q(t)e_k}{\lambda_n^q(t)} = P_n(t)e_k.$$
This proves that P_n(t)e_k is μΔ-measurable, since it is a pointwise limit of μΔ-measurable functions. Hence its cozero set T_k = {t ∈ T : P_n(t)e_k ≠ 0} is a μΔ-measurable set for k = 1, 2, …, n. It is clear that T = T₁ ∪ T₂ ∪ … ∪ T_n is a μΔ-measurable covering of T. Using Lemma 2.7, we know there is an associated measurable partition of unity $\sum_{k=1}^{n} g_k(t) = 1$ of T. We define a μΔ-measurable function

$$u_n(t) = \frac{\sum_{k=1}^{n} g_k(t)P_n(t)e_k}{\left\|\sum_{k=1}^{n} g_k(t)P_n(t)e_k\right\|},$$

which has norm 1 for all t ∈ T. Also, by construction, A(t)u_n(t) = λ_n(t)u_n(t). By Lemma 2.8, we enlarge the singleton {u_n(t)} to a μΔ-measurable orthonormal basis {u₁(t), …, u_n(t)} of C^{n×1} everywhere on T. Define the matrix U(t) = [u₁(t) u₂(t) … u_n(t)]. Now compute

$$U^*(t)A(t)U(t) = U^*(t)A(t)\,[u_1(t) \;\ldots\; u_{n-1}(t) \;\; u_n(t)] = [\,U^*(t)A(t)u_1(t) \;\ldots\; U^*(t)A(t)u_{n-1}(t) \;\; U^*(t)A(t)u_n(t)\,].$$

Observe that

$$U^*(t)A(t)u_n(t) = U^*(t)\lambda_n(t)u_n(t) = \lambda_n(t)\begin{bmatrix} u_1^*(t)\cdot u_n(t) \\ \vdots \\ u_{n-1}^*(t)\cdot u_n(t) \\ u_n^*(t)\cdot u_n(t) \end{bmatrix} = \lambda_n(t)\begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}.$$
On the other hand, for 1 ≤ j ≤ n − 1, observe that since {u₁(t), …, u_{n−1}(t), u_n(t)} is an orthonormal system in C^{n×1} and u_n(t) is an eigenvector of the positive semi-definite matrix A(t) corresponding to the eigenvalue λ_n(t), we have

$$\langle A(t)u_j(t), u_n(t)\rangle = \langle u_j(t), A(t)u_n(t)\rangle = \lambda_n(t)\langle u_j(t), u_n(t)\rangle = 0.$$

It follows that the nth component of U*(t)A(t)u_j(t) is

$$u_n^*(t)A(t)u_j(t) = \langle A(t)u_j(t), u_n(t)\rangle = 0.$$

This shows that

$$U^*(t)A(t)U(t) = \begin{bmatrix} A_0(t) & 0 \\ 0 & \lambda_n(t) \end{bmatrix} \tag{17}$$

for some matrix A_0(t) ∈ C^{(n−1)×(n−1)}. But the left-hand side of (17) is a positive semi-definite matrix, and it follows that A_0(t) is positive semi-definite. By the induction hypothesis, we can order its n − 1 eigenvalues as λ₁(t) ≤ λ₂(t) ≤ … ≤ λ_{n−1}(t) and they are μΔ-measurable functions.

In the general case, where A is Hermitian, let B(t) = A(t) + r(A(t))I, where r denotes the spectral radius of A(t). By construction, we have B(t) ≥ 0. By the above, we may write the eigenvalues Λ₁(t), …, Λ_n(t) of B(t) ordered such that Λ_n(t) ≥ … ≥ Λ₁(t) ≥ 0. Hence there are nonzero vectors z_i obeying B(t)z_i = Λ_i(t)z_i for i = 1, 2, …, n. By definition of B, we have (A(t) + r(A(t))I)z_i = Λ_i(t)z_i, hence A(t)z_i = (Λ_i(t) − r(A(t)))z_i, and we see that A(t) and B(t) share the same eigenvector functions. □

Next we consider a diagonalization of a positive definite matrix. For the resulting matrix exponential, the eigenvalue functions may be expressed in terms of the scalar time scales exponential.

Theorem 4.11. If A : T → C^{n×n} is a matrix function with each a_ij ∈ L¹_loc(T, μΔ) and A(t) is positive definite for every t ∈ T, then there is a μΔ-measurable unitary matrix function U(t) satisfying the identity E_{U*AU}(t, t_0) = diag(e_{λ₁}(t, t_0), …, e_{λ_n}(t, t_0)).

Proof. Since A(t) is positive definite for every t ∈ T, by Theorem 2.2 and Theorem 4.10, there is a measurable unitary matrix function U such that for all t ∈ T, U*(t)A(t)U(t) = diag(λ₁(t), …, λ_n(t)). Therefore by Lemma 4.5,
$$E_{U^*AU}(t,t_0) = \exp\left(\;\int_{[t_0,t]\cap\mathbb{T}} U^*(\tau)A(\tau)U(\tau)\,d\tau \;+ \sum_{t_0 \le a_\ell < t}\left[\operatorname{Log}\big(|I + \mu(a_\ell)U^*(a_\ell)A(a_\ell)U(a_\ell)|\big) + i\pi\,\frac{I - \operatorname{sign}\big(I + \mu(a_\ell)U^*(a_\ell)A(a_\ell)U(a_\ell)\big)}{2}\right]\right).$$
Since U*AU is diagonal, we apply Theorem 2.4. Since

$$\exp\left(\;\int_{[t_0,t]\cap\mathbb{T}} U^*(\tau)A(\tau)U(\tau)\,d\tau\right) = \operatorname{diag}\left(\exp\left(\;\int_{[t_0,t]\cap\mathbb{T}} \lambda_1(\tau)\,d\tau\right), \;\ldots,\; \exp\left(\;\int_{[t_0,t]\cap\mathbb{T}} \lambda_n(\tau)\,d\tau\right)\right)$$

and

$$\exp\left(\sum_{t_0 \le a_\ell < t}\left[\operatorname{Log}\big(|I + \mu(a_\ell)U^*(a_\ell)A(a_\ell)U(a_\ell)|\big) + i\pi\,\frac{I - \operatorname{sign}\big(I + \mu(a_\ell)U^*(a_\ell)A(a_\ell)U(a_\ell)\big)}{2}\right]\right) = \operatorname{diag}\left((-1)^{m_1}\prod_{t_0 \le a_\ell < t}\big|1 + \mu(a_\ell)\lambda_1(a_\ell)\big|, \;\ldots,\; (-1)^{m_n}\prod_{t_0 \le a_\ell < t}\big|1 + \mu(a_\ell)\lambda_n(a_\ell)\big|\right),$$

where m₁, …, m_n are non-negative integer functions of t that count how many times 1 + μλ_i is negative in [t_0, t) ∩ T, we arrive at

$$E_{U^*AU}(t,t_0) = \operatorname{diag}\left((-1)^{m_1}\exp\left(\;\int_{[t_0,t]\cap\mathbb{T}} \lambda_1(\tau)\,d\tau\right)\prod_{t_0 \le a_\ell < t}\big|1 + \mu(a_\ell)\lambda_1(a_\ell)\big|, \;\ldots,\; (-1)^{m_n}\exp\left(\;\int_{[t_0,t]\cap\mathbb{T}} \lambda_n(\tau)\,d\tau\right)\prod_{t_0 \le a_\ell < t}\big|1 + \mu(a_\ell)\lambda_n(a_\ell)\big|\right).$$
Applying (16) to each entry completes the proof. □

Lemma 4.12. If A : T → C^{n×n} is a regressive Hermitian matrix function with each a_ij ∈ L¹_loc(T, μΔ), then the function t ↦ A(t)E_A(t, t_0) is locally integrable.

Proof. By Lemma 3.4, we conclude that $t \mapsto \int_{t_0}^{t} \Xi_{\mu(\tau)}(A(\tau))\,\Delta\tau$ is locally absolutely continuous. Hence t ↦ E_A(t, t_0) is also locally absolutely continuous, and so t ↦ A(t)E_A(t, t_0) is locally integrable, completing the proof. □

The next corollary follows immediately from Lemma 4.9.

Corollary 4.13. If A : T → C^{n×n} is a regressive Hermitian matrix function with each a_ij ∈ L¹_loc(T, μΔ), then there is a unitary matrix function V satisfying the identity E_A(t, t_0) = V*(t) diag(λ₁(t), …, λ_n(t)) V(t), where the λ_i(t) are the eigenvalues of E_A(t, t_0).

Example 4.14. Let A : T → C^{2×2} be a matrix function with each a_ij ∈ L¹_loc(T, μΔ), and let A(τ) be positive definite for every τ ∈ T. We write

$$E_A(t,t_0) = \begin{bmatrix} E_{11} & E_{12} \\ E_{21} & E_{22} \end{bmatrix}(t,t_0).$$
The eigenvalues of E_A(t, t_0) are of the form

$$\lambda(t) = \frac{\operatorname{Tr}\big(E_A(t,t_0)\big) \pm \sqrt{\operatorname{Tr}\big(E_A(t,t_0)\big)^2 - 4\det\big(E_A(t,t_0)\big)}}{2}.$$

It is well-known that det(exp(X)) = e^{Tr(X)} [25, (3.23)]. By Lemma 3.3 (iii), we conclude that

$$\int_{t_0}^{t} \Xi_{\mu(\tau)}(A(\tau))\,\Delta\tau = \int_{[t_0,t]\cap\mathbb{T}} A(\tau)\,d\tau + \sum_{t_0 \le a_\ell < t} \operatorname{Log}\big(I + \mu(a_\ell)A(a_\ell)\big)$$

is positive definite. Hence there is a unitary matrix-valued function U satisfying the identity $\int_{t_0}^{t} \Xi_{\mu(\tau)}(A(\tau))\,\Delta\tau = U^*(t)\operatorname{diag}(\Lambda_1(t), \Lambda_2(t))U(t)$, where the Λ_i(t) are the eigenvalues of $\int_{t_0}^{t} \Xi_{\mu(\tau)}(A(\tau))\,\Delta\tau$. Therefore

$$E_A(t,t_0) = \exp\left(\int_{t_0}^{t} \Xi_{\mu(\tau)}(A(\tau))\,\Delta\tau\right) = U^*(t)\operatorname{diag}\big(e^{\Lambda_1(t)}, e^{\Lambda_2(t)}\big)U(t).$$

Thus Tr(E_A(t, t_0)) = e^{Λ₁(t)} + e^{Λ₂(t)}. As a result, the eigenvalues of E_A(t, t_0) are

$$\lambda(t) = \frac{e^{\Lambda_1(t)} + e^{\Lambda_2(t)} \pm \sqrt{e^{2\Lambda_1(t)} + e^{2\Lambda_2(t)} - 2e^{\Lambda_1(t)+\Lambda_2(t)}}}{2}.$$

Theorem 4.15. If A : T → C^{n×n} is a regressive Hermitian matrix function with each a_ij ∈ L¹_loc([t_0, ∞) ∩ T, μΔ), then the function Y(t) = E_A(t, t_0) satisfies the Δ-integral equation

$$Y(t) = I + \int_{t_0}^{t} F(\tau, Y(\tau))\,\Delta\tau, \tag{18}$$

where

$$F(t, Y(t)) = \begin{cases} A(t)Y(t), & \mu(t) = 0 \\[6pt] \dfrac{\exp\Big(\operatorname{Log}\big(I + \mu(t)A(t)\big) + \operatorname{Log}\big(Y(t)\big)\Big) - Y(t)}{\mu(t)}, & \mu(t) > 0. \end{cases}$$
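On a purely discrete time scale, iterating (18) step by step reproduces Definition 4.1, since each step amounts to (20) below. A sketch, assuming Python with NumPy/SciPy and reusing the data of Example 4.4:

```python
import numpy as np
from scipy.linalg import expm, logm

def F(mu, At, Y):
    """F(t, Y(t)) from Theorem 4.15 at a point with graininess mu = mu(t)."""
    I = np.eye(Y.shape[0])
    if mu == 0:
        return At @ Y
    return (expm(logm(I + mu * At) + logm(Y)) - Y) / mu

# Iterate (18) on T = {0, 1, 2, 3} with the matrix function of Example 4.4:
A = lambda t: np.array([[t + 2.0, (t + 2.0) ** 2],
                        [(t + 2.0) ** 3, (t + 2.0) ** 4]])
Y = np.eye(2)
for t in range(3):            # Y(t+1) = Y(t) + mu(t) F(t, Y(t)) with mu = 1
    Y = Y + 1.0 * F(1.0, A(t), Y)
print(Y)                      # matches E_A(3, 0) of Definition 4.1 for this data
```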
Proof. By Proposition 4.2, the initial condition holds, and Lemma 3.4 shows that Y(t) is Δ-differentiable μΔ-almost everywhere. If t is a right-dense point, then the Δ-derivative of the left-hand side of (18) is A(t)Y(t). If t is right-scattered, we first compute the logarithm of Definition 4.1 to obtain

$$\operatorname{Log}(Y(t)) = \int_{t_0}^{t} \Xi_{\mu(\tau)}(A(\tau))\,\Delta\tau. \tag{19}$$

Since the integral of a locally integrable function is locally absolutely continuous, by Lemma 3.4 the right-hand side of (19) is Δ-differentiable μΔ-almost everywhere. Thus we compute the Δ-derivative of Log(Y(t)), and Definition 3.1 yields

$$\frac{\operatorname{Log}(Y(\sigma(t))) - \operatorname{Log}(Y(t))}{\mu(t)} = \frac{1}{\mu(t)}\operatorname{Log}\big(I + \mu(t)A(t)\big).$$
By algebraic manipulation, we arrive at

$$Y(\sigma(t)) = \exp\Big(\operatorname{Log}\big(I + \mu(t)A(t)\big) + \operatorname{Log}\big(Y(t)\big)\Big). \tag{20}$$

Applying Theorem 2.1 to the left-hand side of (20) and rearranging, we observe the following formula for μΔ-almost all t ∈ T:

$$Y^{\Delta}(t) = F(t, Y(t)). \tag{21}$$
Hence, t ↦ F(t, Y(t)) is locally integrable. Using Lemma 4.12 and the fundamental theorem of calculus [29, Lemma 2.6], the right-hand side of (18) is Δ-differentiable μΔ-almost everywhere and its Δ-derivative is equal to F(t, Y(t)) μΔ-almost everywhere. This completes the proof. □

The following corollary shows that E_A and e_A are identical under strong commutativity assumptions on the matrix function A. In this case, Corollary 4.8 yields a factorization of e_A.

Corollary 4.16. If A : T → C^{n×n} and the commutativity conditions (i) and (ii) from Corollary 4.8 hold, then (18) is equivalent to the integral equation

$$Y(t) = I + \int_{t_0}^{t} A(\tau)Y(\tau)\,\Delta\tau.$$

Proof. The case when μ(t) = 0 is clear from the definition of F in Theorem 4.15. If μ(t) > 0, we start by considering (21), which holds μΔ-almost everywhere. Corollary 4.8 guarantees that Log(I + μ(t)A(t)) and Log(Y(t)) commute. Thus by Theorem 2.4, we have

$$Y^{\Delta}(t) = \frac{1}{\mu(t)}\Big[(I + \mu(t)A(t))Y(t) - Y(t)\Big] = A(t)Y(t).$$
Taking the Δ-integral from t_0 to t on both sides completes the proof. □

5. Conclusion

In this paper, we have developed the theory of the time scales matrix cylinder transformation and its use in defining a new time scales matrix exponential. We have established some of its properties, in particular Lemma 4.5 and consequently (16), and we have provided a direct connection to the time scales matrix exponential e_A via Corollary 4.16. We have generalized to the matrix case the locally integrable subscript approach to time scale exponentials from the work of Monteiro and Slavík [19]. That paper primarily uses a non-constructive approach by appealing to existence and uniqueness theorems in the theory of generalized ordinary differential equations, as in [23]. As a result, we have provided an alternative approach to the scalar time scales exponential.

Hilger's paper [13, p. 477] emphasizes that the scalar cylinder transformation "reveals the close connection between the stability regions for oΔe's and ode's." The same is true for our matrix analog, as will be explored in a future paper. He further notes the importance of the inverse cylinder transformation in defining the Hilger trigonometric functions, distinct from the usual approach to trigonometric functions on time scales in the standard literature [2]. This too will be investigated in further work.
Acknowledgments

The authors would like to personally thank the anonymous referees for their recommendations in improving the manuscript.

References

[1] D.S. Bernstein, Matrix Mathematics. Theory, Facts, and Formulas, 2nd expanded ed., Princeton University Press, Princeton, NJ, 2009.
[2] M. Bohner, A. Peterson, Dynamic Equations on Time Scales, Birkhäuser Boston Inc., Boston, MA, 2001.
[3] M. Bohner, A. Peterson (Eds.), Advances in Dynamic Equations on Time Scales, Birkhäuser, Boston, MA, 2003.
[4] A. Cabada, D.R. Vivero, Criterions for absolute continuity on time scales, J. Difference Equ. Appl. 11 (11) (2005) 1013–1028.
[5] A. Cabada, D.R. Vivero, Expression of the Lebesgue Δ-integral on time scales as a usual Lebesgue integral; application to the calculus of Δ-antiderivatives, Math. Comput. Modelling 43 (1–2) (2006) 194–207.
[6] J.L. Cieśliński, New definitions of exponential, hyperbolic and trigonometric functions on time scales, J. Math. Anal. Appl. 388 (1) (2012) 8–22.
[7] J.J. DaCunha, Transition matrix and generalized matrix exponential via the Peano-Baker series, J. Difference Equ. Appl. 11 (15) (2005) 1245–1264.
[8] G. Golub, C.F. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins Univ. Press, Baltimore, MD, 1996.
[9] D. Grow, N. Wintz, Existence and uniqueness of solutions to a bilinear state system on a time scale, in preparation.
[10] G.S. Guseinov, Integration on time scales, J. Math. Anal. Appl. 285 (1) (2003) 107–127.
[11] N.J. Higham, Functions of Matrices. Theory and Computation, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2008.
[12] S. Hilger, Analysis on measure chains – a unified approach to continuous and discrete calculus, Results Math. 18 (1–2) (1990) 18–56.
[13] S. Hilger, Special functions, Laplace and Fourier transform on measure chains, Dynam. Systems Appl. 8 (3–4) (1999) 471–488.
[14] R.Š. Hilscher, P. Zemánek, Limit point and limit circle classification for symplectic systems on time scales, Appl. Math. Comput. 233 (2014) 623–646.
[15] R.Š. Hilscher, P. Zemánek, Limit circle invariance for two differential systems on time scales, Math. Nachr. 288 (5–6) (2015) 696–709.
[16] R.A. Horn, C.R. Johnson, Matrix Analysis, 2nd ed., Cambridge University Press, Cambridge, 2013.
[17] B. Karpuz, Analyticity of the complex time scale exponential, Complex Anal. Oper. Theory 11 (1) (2017) 21–34.
[18] W.F. Ke, K.F. Lai, T.L. Lee, N.C. Wong, Preconditioning random Toeplitz systems, J. Nonlinear Convex Anal. 17 (4) (2016) 757–770.
[19] G.A. Monteiro, A. Slavík, Generalized elementary functions, J. Math. Anal. Appl. 411 (2) (2014) 838–852.
[20] D. Petz, A survey of certain trace inequalities, in: Functional Analysis and Operator Theory. Proceedings of the 39th Semester at the Stefan Banach International Mathematical Center in Warsaw, Poland, March 2–May 30, 1992, Polish Academy of Sciences, Warsaw, 1994, pp. 287–298.
[21] D.R. Poulsen, J.M. Davis, I.A. Gravagne, Optimal control on stochastic time scales, in: 20th IFAC World Congress, IFAC-PapersOnLine 50 (1) (2017) 14861–14866.
[22] B.P. Rynne, L² spaces and boundary value problems on time-scales, J. Math. Anal. Appl. 328 (2) (2007) 1217–1236.
[23] Š. Schwabik, Generalized Ordinary Differential Equations, World Scientific, Singapore, 1992.
[24] P. Šepitka, R.Š. Hilscher, Principal solutions at infinity for time scale symplectic systems without controllability condition, J. Math. Anal. Appl. 444 (2) (2016) 852–880.
[25] G. Teschl, Ordinary Differential Equations and Dynamical Systems, American Mathematical Society (AMS), Providence, RI, 2012.
[26] A. Zafer, The exponential of a constant matrix on time scales, ANZIAM J. 48 (1) (2006) 99–106.
[27] A. Zafer, Calculating the matrix exponential of a constant matrix on time scales, Appl. Math. Lett. 21 (6) (2008) 612–616.
[28] A.H. Zaidi, Uniqueness of solutions to matrix equations on time scales, Electron. J. Differential Equations 2013 (2013) 13.
[29] Z. Zhan, W. Wei, Necessary conditions for a class of optimal control problems on time scales, Abstr. Appl. Anal. 2009 (2009) 14.