Accepted Manuscript

Matrix power means and the information monotonicity

Mahdi Dehghani, Mohsen Kian, Yuki Seo

PII: S0024-3795(17)30054-X
DOI: http://dx.doi.org/10.1016/j.laa.2017.01.025
Reference: LAA 14026

To appear in: Linear Algebra and its Applications

Received date: 8 March 2016
Accepted date: 21 January 2017
MATRIX POWER MEANS AND THE INFORMATION MONOTONICITY

MAHDI DEHGHANI, MOHSEN KIAN AND YUKI SEO

Abstract. Lim and Pálfia established the notion of the matrix power means for k positive definite matrices (k ≥ 3) and showed that the matrix power means have the information monotonicity for a unital positive linear mapping. In this note, by virtue of the generalized Kantorovich constant, we show counterparts to the information monotonicity of the matrix power means.
1. Introduction

Throughout the paper, Mn = Mn(C) denotes the algebra of all n × n complex matrices and I denotes the identity matrix. A Hermitian matrix A is positive semidefinite (denoted by A ≥ 0) if all of its eigenvalues are nonnegative. If in addition A is invertible, then A is called positive definite (denoted by A > 0). For Hermitian matrices A, B ∈ Mn we write A ≥ B if A − B ≥ 0, and if m and M are real scalars, then m ≤ A ≤ M means mI ≤ A ≤ MI. A linear mapping Φ : Mn → Mp is called positive if it preserves positivity, i.e., A ≥ 0 in Mn implies Φ(A) ≥ 0 in Mp, and Φ is called unital if Φ(I) = I. The following inequalities are known as the Choi inequality and the Kadison inequality, respectively; see [4, 5, 8]: if Φ : Mn → Mp is a unital positive linear mapping, then

    Φ(A)^{-1} ≤ Φ(A^{-1})   (the Choi inequality),
    Φ(A)^2 ≤ Φ(A^2)         (the Kadison inequality).    (1.1)
A counterpart to the Choi inequality (1.1) has been presented by Marshall and Olkin [14] as follows: for positive definite A ∈ Mn with 0 < m ≤ A ≤ M,

    Φ(A^{-1}) ≤ ((m + M)^2 / (4mM)) Φ(A)^{-1}.    (1.2)
2010 Mathematics Subject Classification. Primary 47A63; Secondary 47A64.
Key words and phrases. Matrix power mean, information monotonicity, positive definite matrix, positive linear mapping.
A similar result for the Kadison inequality (see [15]) also holds:

    Φ(A^2) ≤ ((m + M)^2 / (4mM)) Φ(A)^2.    (1.3)

The constant (m + M)^2 / (4mM) is known as the Kantorovich constant. In addition, inequalities of types (1.2) and (1.3), which present reverses of some inequalities, are known as Kantorovich type inequalities. For a recent survey concerning Kantorovich type inequalities the reader is referred to [8, 16]. In [1] Ando showed that if Φ is a positive linear mapping, then for positive definite A and B in Mn,

    Φ(A ♯ B) ≤ Φ(A) ♯ Φ(B),    (1.4)
where the matrix geometric mean is defined by A ♯ B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}. A counterpart to Ando's inequality (1.4) is as follows: if A, B ∈ Mn and 0 < m ≤ A, B ≤ M, then

    Φ(A) ♯ Φ(B) ≤ ((m + M) / (2√(mM))) Φ(A ♯ B);

see [8, Remark 5.3]. In [11], Lim and Pálfia established the notion of the matrix power means for k positive definite matrices (k ≥ 3): let A = (A1, . . . , Ak) be a k-tuple of positive definite matrices and ω = (ω1, . . . , ωk) a weight vector with ωi ≥ 0 for i = 1, . . . , k and Σ_{i=1}^k ωi = 1. For t ∈ (0, 1], the matrix power mean Pt(ω; A) is defined to be the unique positive definite solution of the non-linear equation

    X = Σ_{i=1}^k ωi (X ♯t Ai),

where A ♯t B = A^{1/2}(A^{-1/2} B A^{-1/2})^t A^{1/2} is the t-weighted geometric mean of A and B. For t ∈ [−1, 0), it is defined by Pt(ω; A) = P−t(ω; A^{-1})^{-1}, where A^{-1} = (A1^{-1}, . . . , Ak^{-1}). We note that

    P1(ω; A) = Σ_{i=1}^k ωi Ai   and   P−1(ω; A) = (Σ_{i=1}^k ωi Ai^{-1})^{-1}

are the ω-weighted arithmetic mean and the ω-weighted harmonic mean of A1, . . . , Ak, respectively.
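The defining fixed-point equation is easy to experiment with. The sketch below is our own illustration, not from [11]; it assumes NumPy, and all function names are ours. It computes Pt(ω; A) by simply iterating the defining equation, which is a contraction for t ∈ (0, 1], and checks that P1 and P−1 reduce to the weighted arithmetic and harmonic means.

```python
import numpy as np

def spd_power(A, p):
    # fractional power of a symmetric positive definite matrix via eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * w**p) @ V.T

def gmean(A, B, t):
    # t-weighted geometric mean A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}
    Ah, Aih = spd_power(A, 0.5), spd_power(A, -0.5)
    return Ah @ spd_power(Aih @ B @ Aih, t) @ Ah

def power_mean(ws, As, t, iters=500):
    # iterate the defining equation X <- sum_i w_i (X #_t A_i), starting
    # from the arithmetic mean; for t in (0, 1] this is a contraction
    X = sum(w * A for w, A in zip(ws, As))
    for _ in range(iters):
        X = sum(w * gmean(X, A, t) for w, A in zip(ws, As))
    return X

rng = np.random.default_rng(0)
def rand_spd(n):
    # a random symmetric positive definite test matrix
    G = rng.standard_normal((n, n))
    return G @ G.T + n * np.eye(n)

ws = [0.5, 0.3, 0.2]
As = [rand_spd(3) for _ in range(3)]

P = power_mean(ws, As, 0.5)
residual = np.linalg.norm(P - sum(w * gmean(P, A, 0.5) for w, A in zip(ws, As)))

arith = sum(w * A for w, A in zip(ws, As))
harm = np.linalg.inv(sum(w * np.linalg.inv(A) for w, A in zip(ws, As)))
```

Here residual is at machine precision, power_mean(ws, As, 1.0) coincides with arith, and harm is recovered from the inverse formula P−1(ω; A) = P1(ω; A^{-1})^{-1}.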
Moreover, the Karcher mean GK(ω; A) is defined to be the unique positive definite solution of the Karcher equation

    Σ_{i=1}^k ωi log(X^{-1/2} Ai X^{-1/2}) = 0.

The Karcher mean coincides with the limit of the matrix power means as t → 0:

    GK(ω; A) = lim_{t→0} Pt(ω; A).    (1.5)
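The Karcher equation and the limit (1.5) can also be observed numerically. In the sketch below (our illustration, assuming NumPy; the names are ours), the left-hand side of the Karcher equation is evaluated at Pt(ω; A) for shrinking t > 0, and the residual decreases toward 0, consistent with (1.5).

```python
import numpy as np

def spd_fun(A, fun):
    # apply a scalar function to a symmetric positive definite matrix
    w, V = np.linalg.eigh(A)
    return (V * fun(w)) @ V.T

def gmean(A, B, t):
    # t-weighted geometric mean A #_t B
    Ah = spd_fun(A, lambda w: w**0.5)
    Aih = spd_fun(A, lambda w: w**-0.5)
    return Ah @ spd_fun(Aih @ B @ Aih, lambda w: w**t) @ Ah

def power_mean(ws, As, t, iters=4000):
    # fixed-point iteration for P_t, t in (0, 1]; the contraction factor
    # behaves roughly like 1 - t, so small t needs many iterations
    X = sum(w * A for w, A in zip(ws, As))
    for _ in range(iters):
        X = sum(w * gmean(X, A, t) for w, A in zip(ws, As))
    return X

def karcher_residual(ws, As, X):
    # norm of sum_i w_i log(X^{-1/2} A_i X^{-1/2}); zero exactly at the Karcher mean
    Xih = spd_fun(X, lambda w: w**-0.5)
    G = sum(w * spd_fun(Xih @ A @ Xih, np.log) for w, A in zip(ws, As))
    return np.linalg.norm(G)

rng = np.random.default_rng(1)
def rand_spd(n):
    G = rng.standard_normal((n, n))
    return G @ G.T + n * np.eye(n)

ws = [0.4, 0.4, 0.2]
As = [rand_spd(3) for _ in range(3)]
res = [karcher_residual(ws, As, power_mean(ws, As, t)) for t in (0.5, 0.1, 0.02)]
```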
We recall some basic properties of matrix power means. If A = (A1, . . . , Ak) and B = (B1, . . . , Bk) are k-tuples such that 0 < Ai ≤ Bi (i = 1, . . . , k), then Pt(ω; A) ≤ Pt(ω; B) for all t ∈ [−1, 1] \ {0}. The matrix power mean Pt(ω; A) interpolates between the weighted harmonic and the weighted arithmetic means:

    (Σ_{i=1}^k ωi Ai^{-1})^{-1} ≤ Pt(ω; A) ≤ Σ_{i=1}^k ωi Ai   for all t ∈ [−1, 1] \ {0}.    (1.6)

For a unital positive linear mapping Φ : Mn → Mp, the matrix power means satisfy the following information monotonicity: for each t ∈ (0, 1],

    Φ(Pt(ω; A)) ≤ Pt(ω; Φ(A)),    (1.7)
where Φ(A) = (Φ(A1), . . . , Φ(Ak)). However, it is not known whether (1.7) holds for t ∈ [−1, 0) or not; see also [10, 11]. For more information on the Karcher mean and the matrix power means, the reader is referred to [2, 10, 11]. In this paper, by virtue of a generalized Kantorovich constant, we show counterparts to the information monotonicity (1.7) of matrix power means for all t ∈ [−1, 1] \ {0}.

2. Information monotonicity

First of all, we begin with a complementary result to the inequality (1.7); see also [6, Theorem 6.2 and Remark 6.3]:

Theorem 2.1. Let A = (A1, . . . , Ak) be a k-tuple of positive definite matrices with 0 < m ≤ Ai ≤ M (i = 1, . . . , k) for some scalars m ≤ M, and let ω = (ω1, . . . , ωk) be a weight vector. If Φ : Mn → Mp is a unital positive linear mapping, then

    Pt(ω; Φ(A)) ≤ ((m + M)^2 / (4mM)) Φ(Pt(ω; A))    (2.1)

for all t ∈ [−1, 1] \ {0}.
Proof. If we put Ψ(A) = Σ_{i=1}^k ωi Ai, then Ψ is a unital positive linear map and, applying (1.2) to A^{-1} with M^{-1} ≤ Ai^{-1} ≤ m^{-1} and noting (M^{-1} + m^{-1})^2 / (4M^{-1}m^{-1}) = (M + m)^2 / (4Mm), we get

    Σ_{i=1}^k ωi Ai ≤ ((m + M)^2 / (4mM)) (Σ_{i=1}^k ωi Ai^{-1})^{-1}.

Thus for each t ∈ (0, 1],

    Pt(ω; Φ(A)) ≤ Σ_{i=1}^k ωi Φ(Ai)                                          (by (1.6))
               = Φ(Σ_{i=1}^k ωi Ai)
               ≤ Φ(((m + M)^2 / (4mM)) (Σ_{i=1}^k ωi Ai^{-1})^{-1})
               = ((m + M)^2 / (4mM)) Φ((Σ_{i=1}^k ωi Ai^{-1})^{-1})
               ≤ ((m + M)^2 / (4mM)) Φ(Pt(ω; A))                              (by (1.6)).

Therefore Pt(ω; Φ(A)) ≤ ((m + M)^2 / (4mM)) Φ(Pt(ω; A)) for all t ∈ (0, 1].

Let t ∈ [−1, 0). By (1.2), Φ(Ai^{-1}) ≤ ((m + M)^2 / (4mM)) Φ(Ai)^{-1} for i = 1, . . . , k, and hence

    Φ(P−t(ω; A^{-1})) ≤ P−t(ω; Φ(A^{-1}))                                     (by (1.7))
                     ≤ P−t(ω; ((m + M)^2 / (4mM)) Φ(A)^{-1})
                     = ((m + M)^2 / (4mM)) P−t(ω; Φ(A)^{-1}).

This leads to

    Pt(ω; Φ(A)) = P−t(ω; Φ(A)^{-1})^{-1}
               ≤ ((m + M)^2 / (4mM)) Φ(P−t(ω; A^{-1}))^{-1}
               ≤ ((m + M)^2 / (4mM)) Φ(P−t(ω; A^{-1})^{-1})                   (by the Choi inequality (1.1))
               = ((m + M)^2 / (4mM)) Φ(Pt(ω; A))

for all t ∈ [−1, 0). This completes the proof.
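The inequality (2.1) can be probed numerically. In the sketch below (ours; assuming NumPy, with our own function names), Φ is the compression to the leading 2 × 2 corner, a standard unital positive linear map, and (2.1) is checked in the positive semidefinite order for a positive and a negative value of t.

```python
import numpy as np

def spd_power(A, p):
    # fractional power of a symmetric positive definite matrix
    w, V = np.linalg.eigh(A)
    return (V * w**p) @ V.T

def gmean(A, B, t):
    # t-weighted geometric mean A #_t B
    Ah, Aih = spd_power(A, 0.5), spd_power(A, -0.5)
    return Ah @ spd_power(Aih @ B @ Aih, t) @ Ah

def power_mean(ws, As, t, iters=1000):
    # P_t for t in [-1, 1] \ {0}: fixed-point iteration for t > 0,
    # the inverse formula P_t = P_{-t}(A^{-1})^{-1} for t < 0
    if t < 0:
        return np.linalg.inv(power_mean(ws, [np.linalg.inv(A) for A in As], -t, iters))
    X = sum(w * A for w, A in zip(ws, As))
    for _ in range(iters):
        X = sum(w * gmean(X, A, t) for w, A in zip(ws, As))
    return X

rng = np.random.default_rng(2)
def rand_spd(n):
    G = rng.standard_normal((n, n))
    return G @ G.T + n * np.eye(n)

ws = [0.3, 0.3, 0.4]
As = [rand_spd(3) for _ in range(3)]
phi = lambda A: A[:2, :2]               # compression: unital and positive

eigs = np.concatenate([np.linalg.eigvalsh(A) for A in As])
m, M = eigs.min(), eigs.max()           # scalars with 0 < m <= A_i <= M
Kc = (m + M) ** 2 / (4 * m * M)         # Kantorovich constant in (2.1)

def gap(t):
    # smallest eigenvalue of Kc * Phi(P_t(w; A)) - P_t(w; Phi(A));
    # Theorem 2.1 predicts it is nonnegative
    lhs = power_mean(ws, [phi(A) for A in As], t)
    rhs = Kc * phi(power_mean(ws, As, t))
    return np.linalg.eigvalsh(rhs - lhs).min()
```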
Theorem 2.1 together with (1.5) gives the following corollary.
Corollary 2.2. Let A = (A1, . . . , Ak) be a k-tuple of positive definite matrices with 0 < m ≤ Ai ≤ M (i = 1, . . . , k) for some scalars m ≤ M, and let ω = (ω1, . . . , ωk) be a weight vector. If Φ : Mn → Mp is a unital positive linear mapping, then

    GK(ω; Φ(A)) ≤ ((m + M)^2 / (4mM)) Φ(GK(ω; A)).    (2.2)
The inequality (2.1) can be squared by a method similar to that in [12, 13]:

Theorem 2.3. Let A = (A1, . . . , Ak) be a k-tuple of positive definite matrices with 0 < m ≤ Ai ≤ M (i = 1, . . . , k) for some scalars m ≤ M, and let ω = (ω1, . . . , ωk) be a weight vector. If Φ : Mn → Mp is a unital positive linear mapping, then

    Pt(ω; Φ(A))^2 ≤ ((m + M)^2 / (4mM))^2 Φ(Pt(ω; A))^2    (2.3)

for every t ∈ [−1, 1] \ {0}.

Proof. Bhatia and Kittaneh [3] showed that if A and B are positive semidefinite matrices, then

    ‖AB‖ ≤ (1/4) ‖A + B‖^2

for every unitarily invariant norm ‖·‖. Therefore

    mM ‖Pt(ω; Φ(A)) Φ(Pt(ω; A))^{-1}‖ ≤ (1/4) ‖Pt(ω; Φ(A)) + mM Φ(Pt(ω; A))^{-1}‖^2

for every t ∈ [−1, 1] \ {0}. On the other hand, it follows from (1.6) and (1.1) that

    Pt(ω; Φ(A)) + mM Φ(Pt(ω; A))^{-1} ≤ Σ_{i=1}^k ωi Φ(Ai) + mM Φ(Pt(ω; A)^{-1})
                                      ≤ Σ_{i=1}^k ωi Φ(Ai) + mM Φ(Σ_{i=1}^k ωi Ai^{-1})
                                      = Σ_{i=1}^k ωi Φ(Ai + mM Ai^{-1})
                                      ≤ m + M.

The last inequality follows from (A − m)(M − A)A^{-1} ≥ 0. Hence, for the operator norm,

    ‖Pt(ω; Φ(A)) Φ(Pt(ω; A))^{-1}‖ ≤ (m + M)^2 / (4mM),
which further implies

    Pt(ω; Φ(A))^2 ≤ ((m + M)^2 / (4mM))^2 Φ(Pt(ω; A))^2

for all t ∈ [−1, 1] \ {0}.
Letting t → 0 in (2.3), we obtain a squared version of (2.2):

Corollary 2.4. Let A = (A1, . . . , Ak) be a k-tuple of positive definite matrices with 0 < m ≤ Ai ≤ M (i = 1, . . . , k), and let ω = (ω1, . . . , ωk) be a weight vector. If Φ : Mn → Mp is a unital positive linear mapping, then

    GK(ω; Φ(A))^2 ≤ ((m + M)^2 / (4mM))^2 Φ(GK(ω; A))^2.

Remark 2.5. If we put t = 1 in Theorem 2.1, then P1(ω; A) is the weighted arithmetic mean and P1(ω; Φ(A)) = Φ(P1(ω; A)). Since (m + M)^2 / (4mM) > 1 for 0 < m < M, the estimate of Theorem 2.1 is not sharp in the case t = 1. Thus, we improve Theorem 2.1 by virtue of a generalized Kantorovich constant in the next section.

3. Improvement

Let m, M be positive scalars with 0 < m < M and put h = M/m. The generalized Kantorovich constant K(h, t) [8, Definition 2.2] is defined by

    K(h, t) = ((h^t − h) / ((t − 1)(h − 1))) · (((t − 1)/t) · ((h^t − 1)/(h^t − h)))^t

for all real numbers t ∈ R. The next lemma will be used to achieve our purposes.
Lemma 3.1. [17, Theorem 3] Let Φ : Mn → Mp be a unital positive linear mapping and let A and B be positive definite matrices with 0 < m ≤ A, B ≤ M. Then

    Φ(A) ♯t Φ(B) ≤ K(h^2, t)^{-1} Φ(A ♯t B)    (3.1)

for all t ∈ (0, 1].
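For concrete computations, the constant K(h, t) is a one-line function (our sketch, plain Python; the formula is exactly [8, Definition 2.2], the checks below are ours). Note that t = 0 and t = 1 are removable singularities of the formula, where K = 1 is understood as a limit.

```python
def K(h, t):
    # generalized Kantorovich constant [8, Definition 2.2];
    # not evaluated at t = 0, 1, where K(h, t) = 1 as a limit
    return (h**t - h) / ((t - 1) * (h - 1)) * (
        ((t - 1) / t) * (h**t - 1) / (h**t - h)) ** t

h = 4.0
kantorovich = (h + 1) ** 2 / (4 * h)    # the constant of Theorems 2.1 and 2.3
```

One can check that K(h, 2) recovers the Kantorovich constant (h + 1)^2/(4h), that K(h, 1/2) = 2h^{1/4}/(h^{1/2} + 1), that K(h, t) ∈ (0, 1) for t ∈ (0, 1), and that K(h, 1/2)^{-2} = K(h^{1/2}, 2), the identity behind the value F(1/2) in Section 4.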
Theorem 3.2. Let A = (A1, . . . , Ak) be a k-tuple of positive definite matrices with 0 < m ≤ Ai ≤ M (i = 1, . . . , k) for some scalars m ≤ M, and let ω = (ω1, . . . , ωk) be a weight vector. If Φ : Mn → Mp is a unital positive linear mapping, then

    Pt(ω; Φ(A)) ≤ K(h^2, t)^{-1/t} Φ(Pt(ω; A))   for all t ∈ (0, 1]    (3.2)
and

    (4mM / (m + M)^2) K(h^2, −t)^{-1/t} Φ(Pt(ω; A)) ≤ Pt(ω; Φ(A))   for all t ∈ [−1, 0).    (3.3)
Proof. If we put Xt = Pt(ω; A) for t ∈ (0, 1], then Xt = Σ_{i=1}^k ωi (Xt ♯t Ai). It follows from (1.6) that 0 < m ≤ Xt ≤ M and so

    Φ(Xt) = Σ_{i=1}^k ωi Φ(Xt ♯t Ai)
          ≥ Σ_{i=1}^k ωi K(h^2, t) (Φ(Xt) ♯t Φ(Ai))    (by (3.1))
          = K(h^2, t) Σ_{i=1}^k ωi (Φ(Xt) ♯t Φ(Ai)).

Now if f is defined by f(Y) = Σ_{i=1}^k ωi (Y ♯t Φ(Ai)) for Y > 0, then

    Φ(Xt) ≥ K(h^2, t) f(Φ(Xt)).    (3.4)

Moreover, since (λA) ♯t B = λ^{1−t} (A ♯t B) for λ > 0,

    f(K(h^2, t) f(Φ(Xt))) = Σ_{i=1}^k ωi ((K(h^2, t) f(Φ(Xt))) ♯t Φ(Ai))
                          = K(h^2, t)^{1−t} Σ_{i=1}^k ωi (f(Φ(Xt)) ♯t Φ(Ai))
                          = K(h^2, t)^{1−t} f^2(Φ(Xt)).    (3.5)

Since f is monotone [10, Proposition 3.5], we have

    K(h^2, t)^{-1} Φ(Xt) ≥ f(Φ(Xt))                          (by (3.4))
                        ≥ f(K(h^2, t) f(Φ(Xt)))              (by (3.4))
                        = K(h^2, t)^{1−t} f^2(Φ(Xt))         (by (3.5))

and so Φ(Xt) ≥ K(h^2, t)^{1+(1−t)} f^2(Φ(Xt)). Using the monotonicity of f once more,

    f(Φ(Xt)) ≥ f(K(h^2, t)^{1+(1−t)} f^2(Φ(Xt))).    (3.6)

Hence

    K(h^2, t)^{-1} Φ(Xt) ≥ f(Φ(Xt))                                        (by (3.4))
                        ≥ f(K(h^2, t)^{1+(1−t)} f^2(Φ(Xt)))               (by (3.6))
                        = (K(h^2, t)^{1+(1−t)})^{1−t} f^3(Φ(Xt))           (by (3.5))
                        = K(h^2, t)^{(1−t)+(1−t)^2} f^3(Φ(Xt))

and so Φ(Xt) ≥ K(h^2, t)^{1+(1−t)+(1−t)^2} f^3(Φ(Xt)). Continuing this procedure, we get

    Φ(Xt) ≥ K(h^2, t)^{1+(1−t)+(1−t)^2+···+(1−t)^{k−1}} f^k(Φ(Xt)) = K(h^2, t)^{(1−(1−t)^k)/t} f^k(Φ(Xt)).

Finally, letting k → ∞ and noting that f^k(Y) → Pt(ω; Φ(A)) [10, Proposition 3.5], we conclude that

    Pt(ω; Φ(A)) ≤ K(h^2, t)^{-1/t} Φ(Pt(ω; A)).

Next, suppose that t ∈ [−1, 0). By the Choi inequality (1.1) and (3.2) (applied to the tuple A^{-1}, for which the bounds M^{-1} ≤ Ai^{-1} ≤ m^{-1} give the same ratio h), it follows that

    P−t(ω; Φ(A)^{-1}) ≤ P−t(ω; Φ(A^{-1})) ≤ K(h^2, −t)^{1/t} Φ(P−t(ω; A^{-1})).

By taking inverses of both sides, we have

    P−t(ω; Φ(A)^{-1})^{-1} ≥ K(h^2, −t)^{-1/t} Φ(P−t(ω; A^{-1}))^{-1}
                          ≥ (4mM / (m + M)^2) K(h^2, −t)^{-1/t} Φ(P−t(ω; A^{-1})^{-1})    (by (1.2))

and so

    Pt(ω; Φ(A)) ≥ (4mM / (m + M)^2) K(h^2, −t)^{-1/t} Φ(Pt(ω; A)).
If we put t = −1 in Theorem 3.2, then we have the following corollary:

Corollary 3.3. Let A, ω and Φ be as in Theorem 3.2. Then

    Φ((Σ_{i=1}^k ωi Ai^{-1})^{-1}) ≤ ((m + M)^2 / (4mM)) (Σ_{i=1}^k ωi Φ(Ai)^{-1})^{-1}.
Remark 3.4. If we put t = 1 in Theorem 3.2, then K(h^2, t)^{-1/t} = 1. However, if t → 0, then lim_{t→0} K(h^2, t)^{-1/t} = S(h^2), where the Specht ratio S(h) is defined by

    S(h) = (h − 1) h^{1/(h−1)} / (e log h).

In fact, it follows from [8, (iv) of Theorem 2.54 and Theorem 2.56] that

    K(h^2, t)^{-1/t} = K((h^2)^t, 1/t) → S(h^2)   as t → 0.

On the other hand, it is known from [7, Lemma 2.1] and [9, Lemma 3.2] that

    S(h) ≤ (1 + h)^2 / (4h) = (m + M)^2 / (4mM) ≤ S(h)^2 ≤ S(h^2),

and so the estimate of Theorem 3.2 is not better than that of Theorem 2.1 as t → 0. Thus, we compare the Kantorovich constant of Theorem 2.1 with the generalized Kantorovich constant of Theorem 3.2 in the next section.
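The limiting behaviour in Remark 3.4 is easy to see numerically (our sketch, plain Python; the function names are ours): as t decreases to 0, K(h^2, t)^{-1/t} increases toward S(h^2), and the chain of inequalities between the Specht ratio and the Kantorovich constant can be evaluated for a concrete h.

```python
import math

def K(h, t):
    # generalized Kantorovich constant; not evaluated at t = 0, 1
    return (h**t - h) / ((t - 1) * (h - 1)) * (
        ((t - 1) / t) * (h**t - 1) / (h**t - h)) ** t

def S(h):
    # Specht ratio S(h) = (h - 1) h^{1/(h-1)} / (e log h)
    return (h - 1) * h ** (1 / (h - 1)) / (math.e * math.log(h))

h = 3.0
vals = [K(h**2, t) ** (-1 / t) for t in (0.3, 0.1, 0.02)]    # increases as t -> 0
chain = (S(h), (1 + h) ** 2 / (4 * h), S(h) ** 2, S(h**2))   # the displayed chain
```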
4. Estimations

In this section, we compare Theorem 2.1 with Theorem 3.2. For this, we need some preliminaries. A special case of the following result is due to Yamazaki and Yanagida [18] and is an essential part of our result.

Lemma 4.1. For each h > 1,

    f(t) = log((h^t − 1)/t)

is a convex function on R.

Proof. The case t > 0 is proved by Yamazaki and Yanagida [18]. Suppose that t < 0. Put x(t) = (h^t − 1)/t, so that f(t) = log x(t). Since

    f''(t) = (x(t) x''(t) − x'(t)^2) / x(t)^2,

f''(t) ≥ 0 for t < 0 is equivalent to x(t) x''(t) − x'(t)^2 ≥ 0 for t < 0. We have

    x(t) x''(t) − x'(t)^2 = (1/t^4)(h^t − 1 + t h^{t/2} log h)(h^t − 1 − t h^{t/2} log h).

Now, since (h^t − 1)/t + h^{t/2} log h ≥ 0 for t < 0, we have

    h^t − 1 + t h^{t/2} log h ≤ 0   for all t < 0.

Put y(t) = h^t − 1 − t h^{t/2} log h. Then y(0) = 0 and, for all t < 0,

    y'(t) = h^{t/2} log h (h^{t/2} − 1 − log h^{t/2}) > 0.

Hence y(t) ≤ 0 for t < 0. Therefore f''(t) ≥ 0 for t < 0.
Lemma 4.2. For each h > 1,

    Z(t) = K(h, t)^{-1/t}

is decreasing for 0 < t < 1, where K(h, t) is the generalized Kantorovich constant.

Proof. Put

    G(t) = log Z(t) = −(1/t) log K(h, t)   and   f(t) = log((h^t − 1)/t).

A direct computation from the definition of K(h, t) gives

    G(t) = (f(1) − f(t))/t + ((1 − t)/t)(f(t) − f(t − 1) − log h).

Differentiating G(t) with respect to t, we have

    G'(t) = (1/t^2)[log h − f(1) + f(t − 1) − t^2 f'(t) + (t^2 − t) f'(t − 1)].

Put H(t) = log h − f(1) + f(t − 1) − t^2 f'(t) + (t^2 − t) f'(t − 1). Since f(−1) = f(1) − log h, we have H(0) = 0. By a further calculation,

    H'(t) = 2t(f'(t − 1) − f'(t)) − t^2 f''(t) + (t^2 − t) f''(t − 1).

By Lemma 4.1, f is convex on R. Hence f''(t − 1), f''(t) ≥ 0 and f'(t − 1) − f'(t) < 0 on R, and t^2 − t < 0 for 0 < t < 1. Hence H'(t) < 0 on (0, 1). Therefore H(t) < 0 on (0, 1), and this implies G'(t) < 0 on (0, 1). Consequently Z'(t)/Z(t) = G'(t) < 0 and so Z'(t) < 0.
For 0 < m < M, put h = M/m and

    F(t) = K(h^2, t)^{-1/t}   for t ∈ (0, 1].

Then F(1) = 1,

    F(1/2) = (1 + h)^2 / (4h) = (m + M)^2 / (4mM)   and   F(0) := lim_{t→0} F(t) = S(h^2),

where S(h) is the Specht ratio. By Lemma 4.2, we have the following estimates:

Lemma 4.3. Let h = M/m for 0 < m < M. Then

    1 ≤ K(h^2, t)^{-1/t} ≤ (m + M)^2 / (4mM)   for all 1/2 ≤ t ≤ 1

and

    (m + M)^2 / (4mM) ≤ K(h^2, s)^{-1/s} ≤ S(h^2)   for all 0 < s ≤ 1/2.
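Both Lemma 4.2 and the two-sided estimates of Lemma 4.3 can be confirmed numerically (our sketch, plain Python; F(t) = K(h^2, t)^{-1/t} as in the text, and the bounds m, M below are arbitrary test values):

```python
import math

def K(h, t):
    # generalized Kantorovich constant; t = 0, 1 are removable singularities
    return (h**t - h) / ((t - 1) * (h - 1)) * (
        ((t - 1) / t) * (h**t - 1) / (h**t - h)) ** t

def S(h):
    # Specht ratio
    return (h - 1) * h ** (1 / (h - 1)) / (math.e * math.log(h))

m, M = 1.0, 5.0
h = M / m
F = lambda t: K(h**2, t) ** (-1 / t)

kant = (m + M) ** 2 / (4 * m * M)       # Kantorovich constant; equals F(1/2)
ts = [i / 20 for i in range(1, 20)]     # 0.05, 0.10, ..., 0.95
vals = [F(t) for t in ts]
```

Here vals is strictly decreasing (Lemma 4.2), F(1/2) equals the Kantorovich constant exactly, and every F(t) lies in the bracket given by Lemma 4.3.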
By Theorem 2.1, Theorem 3.2 and Lemma 4.3, we have the following precise estimates:

Theorem 4.4. Let A = (A1, . . . , Ak) be a k-tuple of positive definite matrices with 0 < m ≤ Ai ≤ M (i = 1, . . . , k) for some scalars m ≤ M, and let ω = (ω1, . . . , ωk) be a weight vector. Put h = M/m and let Φ : Mn → Mp be a unital positive linear mapping.

(i) If t ∈ (0, 1/2], then

    Φ(Pt(ω; A)) ≤ Pt(ω; Φ(A)) ≤ ((m + M)^2 / (4mM)) Φ(Pt(ω; A)) ≤ K(h^2, t)^{-1/t} Φ(Pt(ω; A)).

(ii) If t ∈ [1/2, 1], then

    Φ(Pt(ω; A)) ≤ Pt(ω; Φ(A)) ≤ K(h^2, t)^{-1/t} Φ(Pt(ω; A)) ≤ ((m + M)^2 / (4mM)) Φ(Pt(ω; A)).

(iii) If t ∈ [−1, −1/2], then

    (4mM / (m + M)^2) K(h^2, −t)^{-1/t} Φ(Pt(ω; A)) ≤ Pt(ω; Φ(A)) ≤ ((m + M)^2 / (4mM)) Φ(Pt(ω; A)).

(iv) If t ∈ [−1/2, 0), then

    (4mM / (m + M)^2)^2 Φ(Pt(ω; A)) ≤ Pt(ω; Φ(A)) ≤ ((m + M)^2 / (4mM)) Φ(Pt(ω; A)).

Proof. The proofs of (i)-(iii) follow from Theorem 2.1, Theorem 3.2, Lemma 4.3 and the information monotonicity (1.7) of matrix power means. For (iv), suppose
that t ∈ [−1/2, 0). Since −t ∈ (0, 1/2] and M^{-1} ≤ Ai^{-1} ≤ m^{-1} for i = 1, . . . , k, it follows from Theorem 2.1 and (M^{-1} + m^{-1})^2 / (4M^{-1}m^{-1}) = (m + M)^2 / (4mM) that

    P−t(ω; Φ(A^{-1})) ≤ ((m + M)^2 / (4mM)) Φ(P−t(ω; A^{-1})).

By the Choi inequality (1.1) and the monotonicity of the matrix power mean P−t, we have

    P−t(ω; Φ(A)^{-1}) ≤ P−t(ω; Φ(A^{-1})).

By taking the inverses of the two inequalities above, we have

    P−t(ω; Φ(A)^{-1})^{-1} ≥ P−t(ω; Φ(A^{-1}))^{-1}
                          ≥ (4mM / (m + M)^2) Φ(P−t(ω; A^{-1}))^{-1}
                          ≥ (4mM / (m + M)^2)^2 Φ(P−t(ω; A^{-1})^{-1})    (by (1.2))

and so we have (iv).

Remark 4.5. In the case of t ∈ [−1/2, 0), we have

    (4mM / (m + M)^2) K(h^2, −t)^{-1/t} ≤ (4mM / (m + M)^2)^2,

so the lower bound in (iv) improves the estimate (3.3) on this interval.
Acknowledgement. The authors would like to express their cordial thanks to the referee for his/her valuable suggestions. The third author is partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research (C), JSPS KAKENHI Grant Number JP 16K05253.

References

[1] T. Ando, Concavity of certain maps on positive definite matrices and applications to Hadamard products, Linear Algebra Appl., 26 (1979), 203-241.
[2] R. Bhatia, Positive Definite Matrices, Princeton University Press, Princeton, 2007.
[3] R. Bhatia and F. Kittaneh, Notes on matrix arithmetic-geometric mean inequalities, Linear Algebra Appl., 308 (2000), 203-211.
[4] M. D. Choi, A Schwarz inequality for positive linear maps on C*-algebras, Illinois J. Math., 18 (1974), 565-574.
[5] C. Davis, A Schwarz inequality for convex operator functions, Proc. Amer. Math. Soc., 8 (1957), 42-44.
[6] M. Dehghani, M. Kian and Y. Seo, Developed matrix inequalities via positive multilinear mappings, Linear Algebra Appl., 484 (2015), 63-85.
[7] J. I. Fujii and Y. Seo, On the Ando-Li-Mathias mean and the Karcher mean of positive definite matrices, Linear Multilinear Algebra, 63 (2015), no. 3, 636-649.
[8] T. Furuta, J. Mićić Hot, J. Pečarić and Y. Seo, Mond-Pečarić Method in Operator Inequalities, Element, Zagreb, 2005.
[9] S. Kim and Y. Lim, A converse inequality of higher order weighted arithmetic and geometric means of positive definite operators, Linear Algebra Appl., 426 (2007), 490-496.
[10] J. Lawson and Y. Lim, Karcher means and Karcher equations of positive definite operators, Trans. Amer. Math. Soc. Ser. B, 1 (2014), 1-22.
[11] Y. Lim and M. Pálfia, Matrix power means and the Karcher mean, J. Funct. Anal., 262 (2012), 1498-1514.
[12] M. Lin, On an operator Kantorovich inequality for positive linear maps, J. Math. Anal. Appl., 402 (2013), 127-132.
[13] M. Lin, Squaring a reverse AM-GM inequality, Studia Math., 215 (2013), 189-194.
[14] A. W. Marshall and I. Olkin, Matrix versions of Cauchy and Kantorovich inequalities, Aequationes Math., 40 (1990), 89-93.
[15] B. Mond and J. Pečarić, Converses of Jensen's inequality for linear maps of operators, Analele Univ. din Timişoara Ser. Mat.-Inform., XXXI (1993), no. 2, 223-228.
[16] M. S. Moslehian, Recent developments of the operator Kantorovich inequality, Expo. Math., 30 (2012), 376-388.
[17] Y. Seo, Reverses of Ando inequality for positive linear maps, Math. Inequal. Appl., 14 (2011), 905-910.
[18] T. Yamazaki and M. Yanagida, Characterizations of chaotic order associated with Kantorovich inequality, Sci. Math., 2 (1999), 37-50.

Mahdi Dehghani: Department of Pure Mathematics, Faculty of Mathematical Sciences, University of Kashan, Kashan, Iran
E-mail address: [email protected] and [email protected]

Mohsen Kian: Department of Mathematics, Faculty of Basic Sciences, University of Bojnord, P. O. Box 1339, Bojnord 94531, Iran
E-mail address: [email protected] and [email protected]

Yuki Seo: Department of Mathematics Education, Osaka Kyoiku University, Asahigaoka, Kashiwara, Osaka 582-8582, Japan
E-mail address: [email protected]