New entropy formula with fluctuating reservoir

Physica A 417 (2015) 215–220

T.S. Biró∗, G.G. Barnaföldi, P. Ván

Heavy Ion Research Group, MTA Wigner Research Centre for Physics, Budapest, Hungary

Highlights

• We present a mathematical procedure to obtain a deformed entropy function.
• We describe effects due to finite heat capacity and temperature fluctuations in the heat reservoir.
• For the Gaussian fluctuation model the resulting entropy–probability relation recovers the traditional "log" formula.
• Without temperature fluctuations (but at finite heat capacity) we obtain the Tsallis formula.
• For extremely large temperature fluctuations we obtain a new "log(1 − log)" formula.

Article info

Article history: Received 2 June 2014; Received in revised form 3 July 2014; Available online 26 September 2014.

Keywords: Entropy; Fluctuation; Finite reservoir

Abstract

Finite heat reservoir capacity, C, and temperature fluctuation, ∆T/T, lead to modifications of the well known canonical exponential weight factor. Requiring that the corrections depend least on the one-particle energy, ω, we derive a deformed entropy, K(S). The resulting formula contains the Boltzmann–Gibbs, Rényi, and Tsallis formulas as particular cases. For extremely large fluctuations, in the limit C∆T²/T² → ∞, a new parameter-free entropy–probability relation is gained. The corresponding canonical energy distribution is nearly Boltzmannian for high probability, but for low probability approaches the cumulative Gompertz distribution. The latter is met in several phenomena, like earthquakes, demography, tumor growth models, extreme value probability, etc.

© 2014 Published by Elsevier B.V.

1. Introduction

Presenting entropy formulas has a long tradition in statistical physics and informatics. The first, classical 'logarithmic' formula, devised by Ludwig Boltzmann at the end of the nineteenth century, is the best known example, but – often just out of mathematical curiosity – to date a multitude of entropy formulas are known [1,2]. Our purpose is not merely to add one more item to this respectable list; we are after principles which would select entropy formulas for the most effective incorporation of finite reservoir effects into the canonical approach (which usually assumes infinitely large reservoirs). Naturally, this endeavor can be carried out only approximately when restricting to a finite number of parameters (setting k_B = 1). Among the suggestions going beyond the classical Boltzmann–Gibbs–Shannon entropy formula,

S_B = -\sum_i p_i \ln p_i ,   (1)

only a single parameter, q, is contained in the Rényi formula [3],

S_R = \frac{1}{1-q} \ln \sum_i p_i^q .   (2)

∗ Corresponding author. E-mail address: [email protected] (T.S. Biró).
http://dx.doi.org/10.1016/j.physa.2014.07.086
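For concreteness, the two entropy formulas above are easy to evaluate numerically. The following sketch (the example distribution is our own illustration, not from the paper) implements Eqs. (1) and (2) and checks that the Rényi entropy reduces to the Boltzmann–Gibbs–Shannon one as q → 1:

```python
import math

def shannon_entropy(p):
    """Boltzmann-Gibbs-Shannon entropy, Eq. (1): S_B = -sum_i p_i ln p_i."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi_entropy(p, q):
    """Renyi entropy, Eq. (2): S_R = ln(sum_i p_i^q) / (1 - q)."""
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)

p = [0.5, 0.3, 0.2]  # an arbitrary example distribution
# As q -> 1 the Renyi entropy approaches the Shannon entropy.
assert abs(renyi_entropy(p, 1.0001) - shannon_entropy(p)) < 1e-3
```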


Many thoughts have been devoted to the physical meaning and origin of the additional parameter, q, both in the past and recently. The idea of a statistical–thermodynamical origin of power-law tailed distributions of the one-particle energy, ω, out of a huge reservoir with total energy, E, was expressed by using a power-law form for the canonical statistical weight,

w = \exp_q(-\omega/T) := \left( 1 + (q-1)\,\frac{\omega}{T} \right)^{-\frac{1}{q-1}} ,   (3)

instead of the classical exponential exp(−ω/T).¹ Such weights can be derived from a canonical maximization of the Tsallis entropy [4,5],

S_T = \frac{1}{1-q} \sum_i \left( p_i^q - p_i \right) ,   (4)

or from the Rényi entropy, Eq. (2), too. It is straightforward to verify that these two entropy formulas are unique and strictly monotonic functions of each other: using the notation C = 1/(1−q), one easily obtains

S_T = h_C(S_R) := C \left( e^{S_R/C} - 1 \right).   (5)
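The mapping in Eq. (5) between the Tsallis and Rényi entropies can be verified directly; the distribution and the value q = 0.8 below are arbitrary illustrations:

```python
import math

def renyi(p, q):
    """Renyi entropy, Eq. (2)."""
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)

def tsallis(p, q):
    """Tsallis entropy, Eq. (4)."""
    return sum(pi ** q - pi for pi in p) / (1.0 - q)

def h(C, S):
    """Deformation function of Eq. (5): h_C(S) = C (e^{S/C} - 1)."""
    return C * (math.exp(S / C) - 1.0)

p = [0.4, 0.35, 0.25]
q = 0.8
C = 1.0 / (1.0 - q)          # the notation C = 1/(1-q) used in Eq. (5)
# S_T = h_C(S_R) holds exactly:
assert abs(tsallis(p, q) - h(C, renyi(p, q))) < 1e-12
```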

The use of these entropy formulas is exact in the case of an ideal reservoir with energy-independent heat capacity [6]. The correspondence Eq. (5) emerges naturally from investigating a subsystem–reservoir couple of ideal gases [7]. Particle number or volume fluctuations in a reservoir lead to further interpretation possibilities of the parameter q [8–13]. In a recent paper [14] we demonstrated that both effects contribute to the best chosen q if we consider the power-law statistical weight (3) as a second order term in the expansion in ω ≪ E of the classical complement phase–space formula, w ∝ e^S, due to Einstein. A review of an ideal reservoir, with fixed energy, E, and particle number, n, fluctuating according to the negative binomial distribution (NBD), reveals that the statistical power-law parameters are given by T = E/⟨n⟩ and q = 1 + ∆n²/⟨n⟩² − 1/⟨n⟩. The derivation relies on the evaluation of the microcanonical statistical factor, (1 − ω/E)^n, obtained as exp(S(E−ω) − S(E)) for ideal gases. Since each exponential factor grows like x^n, their ratio delivers the (1 − ω/E)^n factor. This factor is averaged over the assumed distribution of n. The parameter q obtained in this way is also named the second factorial moment, F₂, discussed with respect to canonical suppression in Refs. [15,16]. For the binomial distribution of n one gets q = 1 − 1/k, for the negative binomial q = 1 + 1/(k + 1), and Eq. (3) is exact. The theoretical results on q and T depending on the mean multiplicity, ⟨n⟩, and its variance in the reservoir represent a particular case. For non-ideal reservoirs described by a general equation of state, S(E), the parameter q is given (approximately, for small relative variance) by

q = 1 - \frac{1}{C} + \frac{\Delta T^2}{T^2} ,   (6)
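The multiplicity-fluctuation formula q = 1 + ∆n²/⟨n⟩² − 1/⟨n⟩ quoted above can be checked directly; the sketch below, with an arbitrarily chosen binomial reservoir (k trials, success probability prob), recovers q = 1 − 1/k as stated:

```python
import math

def binom_pmf(n, k, p):
    """Binomial probability of n successes out of k trials."""
    return math.comb(k, n) * p ** n * (1 - p) ** (k - n)

k, prob = 20, 0.3
pmf = [binom_pmf(n, k, prob) for n in range(k + 1)]
mean = sum(n * w for n, w in enumerate(pmf))
var = sum((n - mean) ** 2 * w for n, w in enumerate(pmf))

# q = 1 + Dn^2/<n>^2 - 1/<n> for a fluctuating-multiplicity reservoir
q = 1.0 + var / mean ** 2 - 1.0 / mean
# For the binomial distribution this is exactly 1 - 1/k:
assert abs(q - (1.0 - 1.0 / k)) < 1e-12
```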

as it was derived in Ref. [14]. It is important to realize that the scaled temperature variance is meant as the variance of the fluctuating quantity 1/S′(E), while the thermodynamical temperature is set by 1/T = ⟨S′(E)⟩. This effect and the finite heat capacity, C, act against each other. Therefore even in the presence of these finite reservoir effects, q = 1 might be the subleading result, leading back to the use of the canonical Boltzmann–Gibbs exponential. In particular this is the case for the variance calculated in the Gaussian approximation, when it is exactly ∆T/T = 1/√|C| and one arrives at q = 1. It is interesting to note that both parts of this formula, namely q = 1 − 1/C and q = 1 + ∆T²/T², have been derived and promoted in earlier publications [7,17–20].

In this paper we generalize the canonical procedure by using a deformed entropy, K(S) [7]. Postulating a statistical weight, w_K, based on K(S) instead of S, corresponding parameters, T_K and q_K, occur. We construct a specific K(S) deformation function by demanding q_K = 1. This demand can be derived from the requirement that the temperature set by the reservoir, T_K, is independent of the one-particle energy, ω. We call this the Universal Thermostat Independence Principle (UTI) [21]. The final entropy formula contains the Tsallis expression for K(S) and the Rényi one for S as particular cases. The Boltzmann–Gibbs formula is recovered at two special choices of the parameters. Surprisingly there is another limit, that of huge reservoir fluctuations, C∆T²/T² → ∞, when the low-probability tails, canonical to this entropy formula, approach the cumulative Gompertz distribution, exp(1 − e^x) [22–25].

2. Fluctuations and mutual entropy

The description of thermodynamical fluctuations is considered mostly in the Gaussian approximation.
Reflecting the fundamental thermodynamic variance relation, ∆E · ∆β = 1 with β = S′(E), the characteristic scaled fluctuation of the temperature is derived [26–28]. The variance of a well-peaked function of a random variable is related to the variance of the original variable via the Jacobi determinant, ∆f = |f′(a)| ∆x. Applying this to the functions E(T) and β = 1/T, one obtains ∆E = |C| ∆T with the C := dE/dT definition of heat capacity, and ∆β = ∆T/T². Combining these one obtains the classical formula ∆T/T = 1/√|C|.

¹ The traditional exponential is restored in the q → 1 limit.
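This chain of error-propagation steps can be spelled out numerically; C and T below are arbitrary illustrative values:

```python
import math

# Jacobian error propagation, Df = |f'(a)| Dx, applied to E(T) and beta = 1/T.
C, T = 50.0, 2.0                 # illustrative heat capacity and temperature
dT = T / math.sqrt(C)            # claimed result: DT/T = 1/sqrt(|C|)
dE = abs(C) * dT                 # DE = |C| DT
dbeta = dT / T ** 2              # Dbeta = |d(1/T)/dT| DT = DT/T^2
# The thermodynamic variance relation DE * Dbeta = 1 is then satisfied:
assert abs(dE * dbeta - 1.0) < 1e-12
```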


Traditionally statistical physics assumes that the state space is uniformly populated, considering a few constraints on the totals of conserved quantities. But exactly such constraints make expectation values and fluctuations in the subsystem and in the reservoir statistically dependent. Therefore not a product, but a convolution of phase–space factors, ρ, describes such a couple of thermodynamical systems:

\rho(E) = \int_0^E \rho(E-\omega)\, \rho(\omega)\, d\omega ,   (7)

which, together with the form ρ(E) = e^{S(E)}, leads to the normalized ratio

1 = \int_0^E e^{S(E-\omega) + S(\omega) - S(E)}\, d\omega .   (8)
Viewing the integrand as a statistical weight factor, also used for obtaining expectation values of ω- or E-dependent quantities of physical interest, one arrives at the interpretation of the joint probability with the mutual entropy: P = e^{I(ω;E)} with

I(\omega; E) = S(\omega) + S(E-\omega) - S(E).   (9)
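The normalization (8) can be verified numerically for the Gamma-type phase–space densities ρ_n(ω) = ω^{n−1}/Γ(n), which satisfy the convolution rule (7) exactly; the parameters n, m below are illustrative, not from the paper:

```python
import math

def S(n, E):
    """Entropy of the phase-space factor rho_n(E) = E^{n-1}/Gamma(n)."""
    return (n - 1.0) * math.log(E) - math.lgamma(n)

def I(omega, n, m, E):
    """Mutual entropy, Eq. (9), for a subsystem (n) and a reservoir (m)."""
    return S(n, omega) + S(m, E - omega) - S(n + m, E)

# Midpoint-rule check of the normalization Eq. (8): 1 = int_0^E e^I domega.
n, m, E, steps = 4.0, 10.0, 1.0, 100000
dw = E / steps
total = sum(math.exp(I((i + 0.5) * dw, n, m, E)) for i in range(steps)) * dw
assert abs(total - 1.0) < 1e-3
```

The identity holds because the Gamma densities reproduce themselves under convolution, so the Beta-function normalization is exact.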

In the canonical situation the total energy E is fixed and ω fluctuates; so does the reservoir energy, E − ω. In the Gaussian approximation the mutual information factor, I(ω; E), is evaluated in the saddle point approximation, leading to the following general property of the maximal probability state: from I′(ω∗) = 0 one obtains S′(ω∗) = S′(E − ω∗). Assuming small variance near this probability peak, the respective expectation values of the derivatives, defined as the common thermodynamical temperature in equilibrium, are also equal: 1/T := ⟨S′(ω)⟩ ≈ S′(ω∗). The second derivatives, however, lead to an effective heat capacity as the harmonic mean of the subsystem and reservoir heat capacities:

\frac{1}{C^*} := -T^2 I''(\omega^*) = \frac{1}{C(\omega^*)} + \frac{1}{C(E-\omega^*)} .   (10)
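The harmonic-mean rule (10) can be confirmed analytically for ideal-gas-like parts with S(ω) = n ln ω, for which the heat capacities are simply n and m; the numbers below are illustrative:

```python
# Saddle-point check of Eq. (10) for two ideal-gas-like parts with
# S(omega) = n ln(omega), so that C(omega*) = n and C(E - omega*) = m.
n, m, E = 5.0, 12.0, 1.0
omega_star = n * E / (n + m)          # solves S'(omega*) = S'(E - omega*)
T = omega_star / n                    # common temperature at the peak
# I''(omega*) for I = S(omega) + S(E - omega) - const.
Ipp = -n / omega_star ** 2 - m / (E - omega_star) ** 2
# 1/C* = -T^2 I''(omega*) equals the harmonic combination 1/n + 1/m:
assert abs(-T ** 2 * Ipp - (1.0 / n + 1.0 / m)) < 1e-12
```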

This result is dominated by the smaller heat capacity, so there is no use in expanding the one-particle phase–space factor ρ(ω) = e^{S(ω)}. Only the rest can be safely expanded with the canonical assumption, ω ≪ E:

e^I \approx e^{S(\omega)} \left[ 1 - \omega\, S'(E) + \frac{\omega^2}{2} \left( S'(E)^2 + S''(E) \right) + \cdots \right].   (11)

One possibility for going beyond the Gaussian approximation is to investigate finite reservoir effects in the microcanonical treatment [20,29–31]. This is, however, usually quite entangled with a complex microdynamical description of the interaction. It is therefore of interest to find a beyond-Gaussian but still canonical approximation. Our idea is to construct such a K(S) deformed entropy expression which compensates q ≠ 1 effects in the ω ≪ E expansion. In this way the probability weight factor of partitioning the total energy E into a sub-part ω and a rest E − ω, P ∝ e^{S(ω)+S(E−ω)−S(E)}, is replaced by the more general form

P_K \propto e^{K(S(\omega)) + K(S(E-\omega)) - K(S(E))} .   (12)

The one-particle phase–space factor, ρ(ω) ∝ e^{S(ω)}, is generalized to ρ_K(ω) ∝ e^{K(S(ω))} in this formula. The statistical weight factor consists of the rest: w_K = P_K/ρ_K. Demanding now

\frac{d^2}{d\omega^2} \ln w_K = 0 ,   (13)

we appeal to the Universal Thermostat Independence principle: we wish the statistical weight for the one selected particle with energy ω to depend as little as possible on the energy of that particle itself. By annulling the second derivative we reach this beyond the Gaussian level. We compare the traditional assumption, K(S) = S, and the UTI principle, obtaining the optimal K(S) to second order in the canonical expansion. We consider a general system with general reservoir fluctuations. For small ω ≪ E,

w = \left. e^{S(E-\omega) - S(E)} \right|_{\omega \ll E} = \left\langle e^{-\omega S'(E) + \omega^2 S''(E)/2 - \cdots} \right\rangle = 1 - \omega \left\langle S'(E) \right\rangle + \frac{\omega^2}{2} \left\langle S'(E)^2 + S''(E) \right\rangle + \cdots .   (14)

The power-law statistical weight (3) to second order is

w = \left( 1 + (q-1)\frac{\omega}{T} \right)^{-\frac{1}{q-1}} = 1 - \frac{\omega}{T} + q\, \frac{\omega^2}{2T^2} - \cdots .   (15)

Equating term by term, we interpret the statistical power-law parameters as

\frac{1}{T} = \left\langle S'(E) \right\rangle \qquad \text{and} \qquad q = \frac{\left\langle S'(E)^2 + S''(E) \right\rangle}{\left\langle S'(E) \right\rangle^2} .   (16)
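The second-order expansion in Eq. (15) can be checked by finite differences at ω = 0: the first derivative of the cut power-law weight is −1/T and the second derivative is q/T². The parameter values below are illustrative:

```python
def w(omega, q, T):
    """Cut power-law statistical weight of Eq. (3)."""
    return (1.0 + (q - 1.0) * omega / T) ** (-1.0 / (q - 1.0))

q, T, h = 1.2, 1.5, 1e-4
# Central finite differences at omega = 0:
w1 = (w(h, q, T) - w(-h, q, T)) / (2 * h)
w2 = (w(h, q, T) - 2 * w(0, q, T) + w(-h, q, T)) / h ** 2
# Matching the series of Eq. (15): w = 1 - omega/T + q omega^2/(2 T^2) - ...
assert abs(w1 + 1.0 / T) < 1e-6
assert abs(w2 - q / T ** 2) < 1e-4
```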


A relation, ⟨S″(E)⟩ = −1/(CT²), follows from the definition of the heat capacity of the reservoir. The UTI requirement, Eq. (13), when applied to the full form in Eq. (15), leads to q = 1. Summarizing, we acknowledge that the parameter q has opposite sign contributions from ⟨S′²⟩ − ⟨S′⟩² and from ⟨S″⟩. In general q is given by Eq. (6) up to second order. With this formula both q > 1 and q < 1 are possible.

3. Deformed entropy formulas

Techniques to handle the q = 1 case have long been known. For dealing with q ≠ 1 systems the calculations are as a rule involved, but the introduction of a deformed entropy, K(S), instead of S provides more flexibility for handling the subleading term in ω [21,32]. The deformed statistical weight contains an average over the reservoir fluctuations, as follows:

w_K = \left\langle e^{K(S(E-\omega)) - K(S(E))} \right\rangle = 1 - \omega \frac{d}{dE} K(S(E)) + \frac{\omega^2}{2} \left\langle \frac{d^2}{dE^2} K(S(E)) + \left( \frac{d}{dE} K(S(E)) \right)^2 \right\rangle + \cdots .   (17)

Note that \frac{d}{dE} K(S(E)) = K' S' and \frac{d^2}{dE^2} K(S(E)) = K'' S'^2 + K' S''. Comparing this expansion with the expression (15), we obtain the parameters for the deformed entropy. Using previous notations for averages over reservoir fluctuations, but assuming that K(S) is independent of these, we obtain

\frac{1}{T_K} = K' \frac{1}{T} , \qquad \frac{q_K}{T_K^2} = \left( K'' + K'^2 \right) \frac{1}{T^2} \left( 1 + \frac{\Delta T^2}{T^2} \right) - K' \frac{1}{C T^2} .   (18)

By choosing a particular K(S) one manipulates q_K. After a simple division we obtain

q_K = \left( 1 + \frac{\Delta T^2}{T^2} \right) \left( 1 + \frac{K''}{K'^2} \right) - \frac{1}{C} \frac{1}{K'} .   (19)

Finally we gain a novel, general deformed entropy formula including the effect of reservoir fluctuations. Demanding q_K = 1, which is a simple consequence of Eq. (13), one obtains the differential equation

C \frac{\Delta T^2}{T^2} K'^2 - K' + C \left( 1 + \frac{\Delta T^2}{T^2} \right) K'' = 0.   (20)

The solution of Eq. (20) with K(0) = 0, K′(0) = 1 and S-independent C and ∆T/T is given by

K(S) = \frac{C_\Delta}{\lambda} \ln \left( 1 - \lambda + \lambda\, e^{S/C_\Delta} \right).   (21)
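That Eq. (21) indeed solves the UTI equation (20), written with λ = C∆T²/T² and C_∆ = C + λ as λK′² − K′ + C_∆K″ = 0, can be checked by finite differences; the parameter values are illustrative:

```python
import math

def K(S, C, lam):
    """Deformed entropy of Eq. (21); C_Delta = C + lambda."""
    CD = C + lam
    return (CD / lam) * math.log(1.0 - lam + lam * math.exp(S / CD))

C, lam, h = 3.0, 0.7, 1e-4
CD = C + lam
for S in (0.2, 1.0, 2.5):
    Kp = (K(S + h, C, lam) - K(S - h, C, lam)) / (2 * h)
    Kpp = (K(S + h, C, lam) - 2 * K(S, C, lam) + K(S - h, C, lam)) / h ** 2
    # Residual of Eq. (20): lam*K'^2 - K' + C_Delta*K'' = 0
    assert abs(lam * Kp ** 2 - Kp + CD * Kpp) < 1e-5

assert abs(K(1e-9, C, lam)) < 1e-8          # boundary condition K(0) = 0
assert abs(K(1.3, C, 1.0) - 1.3) < 1e-12    # lambda = 1 gives K(S) = S
```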

Here λ := C∆T²/T² and C_∆ = C + λ. The composition rule for this quantity can be decomposed into two simple steps: defining L(S) = C_∆ ( e^{S/C_∆} − 1 ), the formal additivity, K(S₁₂) = K(S₁) + K(S₂), leads to²

L(S_{12}) = L(S_1) + L(S_2) + \frac{\lambda}{C_\Delta} L(S_1) \cdot L(S_2).   (22)
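The equivalence between the formal additivity of K and the non-additive composition rule (22) for L can be tested numerically; S₁, S₂ and the parameters below are arbitrary:

```python
import math

C, lam = 3.0, 0.7
CD = C + lam                     # C_Delta = C + lambda

def K(S):
    """Deformed entropy, Eq. (21)."""
    return (CD / lam) * math.log(1.0 - lam + lam * math.exp(S / CD))

def L(S):
    """L(S) = C_Delta (e^{S/C_Delta} - 1)."""
    return CD * (math.exp(S / CD) - 1.0)

def Linv(x):
    """Inverse of L."""
    return CD * math.log(1.0 + x / CD)

S1, S2 = 0.8, 1.7
# Compose via Eq. (22), then verify the formal additivity of K:
L12 = L(S1) + L(S2) + (lam / CD) * L(S1) * L(S2)
S12 = Linv(L12)
assert abs(K(S12) - (K(S1) + K(S2))) < 1e-10
```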

We point out that the non-additivity parameter in this formula is given by λ/C_∆ = ∆T²/(T² + ∆T²); for Gaussian scaling of the temperature fluctuations it is simply 1/(C + 1). Our main result, Eq. (21), can be elegantly comprised with the help of the default deformation function of the Tsallis entropy, h_C(S), defined in Eq. (5):

K_{C,\lambda}(S) = h^{-1}_{C_\Delta/\lambda} \left( h_{C_\Delta}(S) \right).   (23)

Since h^{-1}_C(X) = C \ln(1 + X/C), we immediately see that the composition law for the quantities L(S) = h_{C_\Delta}(S) coincides with Eq. (22). It is also obvious that K_{C,1}(S) = S for any finite C and K_{\infty,\lambda}(S) = S for any finite λ. Due to h^{-1}_\infty(S) = S one also obtains K_{C,0}(S) = h_C(S).

Once having a K(S) deformation function for the entropy, one argues as follows. The K(S) is constructed to lead to q_K = 1 in the best possible approximation. Therefore K(S(E)) is additive for additive energy, E, to the same approximation. Being additive, the addition can be repeated an arbitrary number of times, with a number N_i for energies E_i, viewed as a statistical ensemble. The occurrence frequencies of a given energy E_i are then well estimated by p_i = N_i/N with N = \sum_i N_i being the total number of occurrences in the ensemble. This quantity, p_i, is the usual approximation to the probability of a state with energy E_i; hence one arrives at the construction formula [7]

K(S) = \sum_i p_i\, K(-\ln p_i).   (24)

² Here S_1 = S(E_1), S_2 = S(E_2), S_{12} = S(E_1 + E_2), and therefore S_{12} ≠ S_1 + S_2; cf. Eq. (9).


Based on this, the following generalized entropy formula arises for an ideal finite heat bath with fluctuations:

K_{C,\lambda}(S) = \frac{C_\Delta}{\lambda} \sum_i p_i \ln \left( 1 - \lambda + \lambda\, p_i^{-1/C_\Delta} \right).   (25)
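The limiting cases of Eq. (25) stated below can be exercised numerically: λ = 1 returns the Boltzmann–Gibbs–Shannon entropy exactly, while λ → 0 approaches the Tsallis form with q = 1 − 1/C. The distribution and C are illustrative:

```python
import math

def K_entropy(p, C, lam):
    """Eq. (25): (C_Delta/lam) sum_i p_i ln(1 - lam + lam p_i^{-1/C_Delta})."""
    CD = C + lam
    return (CD / lam) * sum(
        pi * math.log(1.0 - lam + lam * pi ** (-1.0 / CD)) for pi in p)

def shannon(p):
    return -sum(pi * math.log(pi) for pi in p)

def tsallis(p, q):
    return sum(pi ** q - pi for pi in p) / (1.0 - q)

p = [0.5, 0.3, 0.2]
C = 4.0
# lam = 1: the Boltzmann ("log") formula is recovered for any C_Delta.
assert abs(K_entropy(p, C, 1.0) - shannon(p)) < 1e-12
# lam -> 0: the Tsallis formula with q = 1 - 1/C is approached.
assert abs(K_entropy(p, C, 1e-6) - tsallis(p, 1.0 - 1.0 / C)) < 1e-5
```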

For λ = C∆T²/T² = 1 the deformed entropy expression (25) leads exactly to the Boltzmann entropy, irrespective of the value of C_∆. The same limit is achieved for infinite reservoirs, C → ∞, while keeping λ finite; the entropy formula is the traditional one. Not considering event-by-event fluctuations in the reservoir one assumes λ = 0. With such assumptions, from q_K = 1 we arrive at the original UTI equation [21]:

\frac{K''}{K'} = \frac{1}{C} .   (26)

The solution of Eq. (26) with K(0) = 0 and K′(0) = 1 delivers K_{C,0}(S) = C ( e^{S/C} − 1 ), and upon using K(S) = \sum_i p_i K(-\ln p_i) one obtains the statistical entropy formulas of Tsallis and Rényi with q = 1 − 1/C for λ = 0:

K_{C,0}(S) = \frac{1}{1-q} \sum_i \left( p_i^q - p_i \right) \qquad \text{and} \qquad S = \frac{1}{1-q} \ln \sum_i p_i^q .   (27)

For huge fluctuations, λ = C∆T²/T² ≫ C > 1, Eq. (20) reduces to K″ = −K′² and leads to the parameter-free formula

K_{C,\infty}(S) = h_1^{-1}(S) = \ln(1 + S) = \sum_i p_i \ln(1 - \ln p_i),   (28)

even for arbitrary C(S) dependence. The canonical distribution p_i belonging to this entropy is obtained by maximizing K(S) with the constraints \sum_i p_i = 1 and \sum_i p_i \omega_i = U. This Jaynes principle leads to

\frac{d}{dp_i} K_{C,\infty}(S) = \ln(1 - \ln p_i) - \frac{1}{1 - \ln p_i} = \alpha + \beta \omega_i ,   (29)

having the Lambert-W function, defined as the W(x) satisfying W e^W = x, as part of the solution:

p_i = \exp \left( 1 - \frac{1}{W\left( e^{-(\alpha + \beta \omega_i)} \right)} \right).   (30)
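Solving the extremum condition (29) through Eq. (30) can be reproduced with a simple Newton iteration for the Lambert-W function (the α, β, ω values below are arbitrary illustrations):

```python
import math

def lambert_w(x):
    """Principal branch of W(x) for x > 0, via Newton iteration on w e^w = x."""
    w = math.log(1.0 + x)        # reasonable starting guess for x > 0
    for _ in range(50):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (1.0 + w))
    return w

def p_canonical(omega, alpha, beta):
    """Eq. (30): p = exp(1 - 1/W(e^{-(alpha + beta*omega)}))."""
    return math.exp(1.0 - 1.0 / lambert_w(math.exp(-(alpha + beta * omega))))

alpha, beta = 0.5, 1.0
for omega in (0.1, 1.0, 5.0):
    p = p_canonical(omega, alpha, beta)
    # Each p must satisfy the extremum condition of Eq. (29):
    lhs = math.log(1.0 - math.log(p)) - 1.0 / (1.0 - math.log(p))
    assert abs(lhs - (alpha + beta * omega)) < 1e-9
```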

When only a few states dominate over the others, appreciably high probabilities may occur, and for those the quantity −ln p_i is small. In this approximation the deformed entropy formula, Eq. (25), approaches the traditional Boltzmann–Gibbs–Shannon entropy, and the canonical distribution becomes the familiar exponential. For the opposite extreme, i.e. dealing with very low probability high-energy tails, W is small, and one obtains

p_i \approx e^{-e^{\alpha + \beta \omega_i}} .   (31)

This result resembles the complementary cumulative Gompertz distribution, originally discovered in demographic models [22] and later used as a tumor growth model [23]. This distribution also occurs in studies of extreme value distributions, showing deviations from scaling in the occurrence frequencies of large magnitude earthquakes [24] or in other seismological phenomena [25].

4. Conclusion

In conclusion we have obtained a new class of entropy formulas characterized by two parameters, reflecting finite reservoir effects via the reservoir's heat capacity, C, and the event-by-event fluctuations of the thermodynamical parameter β = 1/T = S′(E) in the relative variance ∆β²/⟨β⟩² ≈ ∆T²/T². Although in particular cases a whole distribution of β can be given via an Euler–Gamma distribution, for general systems only an approximation was presented. This approximation of the cut power-law statistical weight, expanded up to second order in the one-particle energy, ω, defines the q parameter in terms of the heat capacity of the reservoir and the scaled variance of the temperature. Since q ≠ 1 signals non-additivity of the entropy, for the sake of a thermodynamical treatment one wishes to return to q = 1. This can be achieved either by exactly Gaussian temperature fluctuations (the λ = 1 case) or by infinite heat capacity reservoirs (the C → ∞ case). For general finite environments the canonical procedure can only be saved by deforming the basic postulate due to Einstein: instead of e^{S(E)} one assumes an e^{K(S(E))} statistical weight factor. By demanding q_K = 1 the UTI principle is applied, and a determining differential equation, Eq. (20), emerges for the deformation function K(S). The UTI differential equation has an interesting general solution for S-independent C and λ parameters, i.e. for the case of almost-ideal reservoirs. This solution is presented in Eq. (23) in its most compact form, as a to-and-back Tsallis deformation.
For huge fluctuations, λ → ∞, a new, parameter-free entropy formula emerges for an arbitrary C(S) relation describing the reservoir, Eq. (28). Its canonical distribution contains the Lambert-W function and in some limits it resembles the Gompertz distribution, usually considered in quite different contexts.


Finally we briefly remark on the possible correspondence to superstatistics. As analyzed in detail in Ref. [14], the negative binomial distribution (NBD) of the particle number in a relativistic, one-dimensional jet at a fixed total energy, E, leads exactly to the cut power-law weight factor. This – as has long been known – can be represented as an integral over Gibbs exponentials with a scaled β having an Euler–Gamma distribution. In this case q = 1 + 1/(k + 1). On the other hand this also means that the NBD can be represented as an integral over Poissonian distributions; indeed this mathematical relation has been derived, among others, in Ref. [33, Eq. (4.64) on page 84].

Acknowledgments

This work has been supported by Hungarian OTKA grants NK77816, K81161, K104260, NK106119 and NIH TET_12_CN1-2012-0016. Author GGB also thanks the János Bolyai Research Scholarship of the Hungarian Academy of Sciences.

References

[1] R.G. Zaripov, Novije meri i metodi v teorii informatii, Kazan State Tech. Univ. Press, 2005.
[2] T.J. Taneja, Generalized Information Measures and Their Applications, Springer, 2001.
[3] A. Rényi, in: J. Neumann (Ed.), Proc. of 4th Berkeley Symposium on Mathematical Statistics and Probability, 1961, p. 547.
[4] C. Tsallis, J. Stat. Phys. 52 (1988) 479.
[5] C. Tsallis, Introduction to Nonextensive Statistical Mechanics, Springer, New York, 2009.
[6] M.P. Almeida, Physica A 300 (2001) 424.
[7] T.S. Biró, Physica A 392 (2013) 3132.
[8] G. Wilk, Z. Wlodarczyk, Phys. Rev. Lett. 84 (2000) 2770.
[9] G. Wilk, Z. Wlodarczyk, J. Phys. G 38 (2011) 065101.
[10] V.V. Begun, M. Gazdzicki, M.I. Gorenstein, Phys. Rev. C 78 (2008) 024904.
[11] V.V. Begun, M. Gazdzicki, M.I. Gorenstein, Phys. Rev. C 80 (2009) 064903.
[12] M.I. Gorenstein, Phys. Rev. C 84 (2011) 024902.
[13] M.I. Gorenstein, K. Grebieszkow, Phys. Rev. C 89 (2014) 034903.
[14] T.S. Biró, G.G. Barnaföldi, P. Ván, K. Ürmössy, arXiv:1404.1256.
[15] S. Jeon, V. Koch, K. Redlich, X.N. Wang, Nuclear Phys. A 697 (2002) 546.
[16] V.V. Begun, M. Gazdzicki, M.I. Gorenstein, O.S. Zozulya, Phys. Rev. C 70 (2004) 034901.
[17] G. Wilk, Z. Wlodarczyk, Eur. Phys. J. A 40 (2009) 299.
[18] G. Wilk, Z. Wlodarczyk, Cent. Eur. J. Phys. 10 (2012) 568.
[19] G. Wilk, Z. Wlodarczyk, Eur. Phys. J. A 48 (2012) 162.
[20] G.B. Bagci, T. Oikonomou, Phys. Rev. E 88 (2013) 042126.
[21] T.S. Biró, P. Ván, G.G. Barnaföldi, Eur. Phys. J. A 49 (2013) 110.
[22] B. Gompertz, Phil. Trans. R. Soc. 115 (1825) 513.
[23] A.E. Casey, Am. J. Cancer 21 (1934) 760.
[24] J. Lomnitz-Adler, C. Lomnitz, Tectonophysics 49 (1978) 237.
[25] T. Hirose, T. Hiramatsu, K. Obara, J. Geophys. Res. 115 (2010) B10304.
[26] J. Uffink, J. van Lith, Found. Phys. 29 (1999) 655.
[27] B.H. Lavenda, Found. Phys. Lett. 13 (2000) 487.
[28] J. Uffink, J. van Lith, Found. Phys. Lett. 14 (2001) 187.
[29] M. Campisi, F. Zahn, P. Hänggi, Europhys. Lett. 99 (2012) 60004.
[30] K. Ürmössy, G.G. Barnaföldi, T.S. Biró, P. Ván, Phys. Lett. B 701 (2011) 111.
[31] K. Ürmössy, G.G. Barnaföldi, T.S. Biró, P. Ván, Phys. Lett. B 718 (2012) 125.
[32] T.S. Biró, P. Ván, Phys. Rev. E 83 (2011) 061147.
[33] T.S. Biró, Is There a Temperature? Springer, New York, 2011.