Physica A 282 (2000) 450–474
www.elsevier.com/locate/physa
Dynamics of a one-dimensional granular gas with a stochastic coefficient of restitution

Timo Aspelmeier*, Annette Zippelius
Institut für Theoretische Physik, Universität Göttingen, Bunsenstraße 9, D-37073 Göttingen, Germany

Received 26 June 1999
Abstract

Recently, we have modelled inelastic collisions of one-dimensional rods [1,2] by the absorption of translational energy E_tr through internal degrees of freedom, in particular elastic vibrations. We arrived at a stochastic description of collision processes, characterised by a stochastic coefficient of restitution ε. In this paper, we construct an analytic approximation for the transition probability E_tr → E′_tr = ε²E_tr. This allows us to perform much longer simulations of large, strongly inelastic granular systems and study relaxation to the true equilibrium state. If the internal vibrations are undamped, equilibrium is characterised by propagating sound waves. In the case of damping, the system develops towards a final state which consists of one big cluster containing all particles at rest. © 2000 Elsevier Science B.V. All rights reserved.
1. Introduction

We investigate binary collisions between extended particles with internal degrees of freedom. Collisions are modelled by a contact potential which forbids overlapping of the particles. However, they can and in general will be elastically deformed. Thereby, translational energy is transferred to internal degrees of freedom during a collision. Consequently, collisions are in general inelastic (with respect to translational energy E_tr) and can be characterised by a coefficient of restitution ε ≠ 1.

The statistical properties of binary collisions of one-dimensional, elastic rods were analysed in Ref. [1]. One of the main results concerns a separation of timescales: equipartition among the vibrational modes is achieved much faster than the decay of the translational energy. This has led us to model the internal degrees of freedom by a thermalised bath, characterised by a temperature T_B. The binary collision process can then be reduced to a stochastic equation of motion for the relative velocity of the
[Corresponding author. Fax: +49-551 39-9631. E-mail address: [email protected] (T. Aspelmeier)]
two particles. The duration of a collision as well as the relative velocity after the collision are stochastic variables, depending on the state of the bath of each particle. The time evolution of E_tr upon successive collisions can be interpreted as a Markov process. During a collision E_tr changes to a new value E′_tr = E_tr ε² with a transition probability p_{T_B}(E_tr → E′_tr).

The concept of a collision of two particles, each equipped with its own thermal bath, is much more general and may be of interest in various contexts, e.g. in molecular gases as well as in the kinetic regime of granular particles. Hence, it is natural to ask which of our results are specific to the microscopic model of vibrating rods and which results are possibly universal, in the sense that they do not depend on the details of the underlying microscopic mechanism which models the uptake of translational energy. To discuss these questions, we consider the more general case of two one-dimensional particles, each with its own internal bath. During a binary collision the energy of translation changes to a new value with the probability p_{T_B}(E_tr → E′_tr). This probability can, but need not, be calculated from a microscopic model. In general, it has to fulfill the following conditions: (a) it has to be positive, (b) it has to be normalised, and (c) it has to fulfill detailed balance, to guarantee that the system relaxes to equilibrium. These constraints do not determine the transition probability uniquely, in analogy to Glauber dynamics of Ising spins, for which various spin flip probabilities are consistent with the requirement of detailed balance. As we will show in this paper, various analytic expressions for the transition probability can be constructed. This will allow us to address the above-mentioned question of universality. One may or may not impose further constraints on the transition probability in addition to the three listed above.
For example, in the case of elastic rods, we know the deterministic limit, corresponding to no initial excitation of the internal degrees of freedom, as well as the high-temperature limit, when all values of ε are equally likely. We can furthermore deduce rigorous bounds on the coefficient of restitution, giving rise to additional constraints for the transition probability. In this paper we focus on deriving these properties and applying them to the construction of an explicit expression for the transition probability. Analytic expressions for the probability distribution simplify the simulations enormously: we do not have to simulate the stochastic process as was done in Ref. [2]; instead we can simply draw a random number according to an explicitly given probability distribution. This allows us to simulate much larger systems for long times, so that the relaxation to the true equilibrium state can be studied. We discuss two different cases: (1) Overall energy conservation. Once internal degrees of freedom, e.g. vibrations, have been excited, they can return vibrational energy to the translational motion. This does not exclude a discussion of cooling of a granular material, i.e., the decay of the average kinetic energy from a macroscopic value to a very small value of the order of the inverse number of internal degrees of freedom. In that process the transfer of energy from the internal degrees of freedom to the translational motion of the centre of mass is much less likely than the reverse process, until the
Fig. 1. Time evolution of the coarse-grained spatial density of a system of 10 000 particles using p_{T_B}. Dark (bright) regions correspond to higher (lower) than average densities. Stripes with a characteristic slope can be identified, i.e., density fluctuations travel with a certain velocity; this corresponds to sound wave propagation as described in detail in Section 6.
Fig. 2. Time evolution of the coarse-grained spatial density for a system of 1000 particles with damping. Grayscale coding is as described in the caption to Fig. 1. Clusters form quickly and interact completely inelastically when they collide until there is only one cluster left.
system has equilibrated. We also measure, e.g., the dynamic structure factor and the time evolution of the density of a large system in equilibrium (Fig. 1). The result shows distinct sound wave propagation, quite contrary to a one-dimensional gas with elastic hard core interactions, but in complete agreement with hydrodynamic considerations. (2) Net dissipation. Irreversible energy loss is included in a phenomenological way by simply introducing a single relaxation time for the internal degrees of freedom. We show that in the long-time limit a system with net dissipation will develop towards a state with one big cluster, consisting of all particles at rest (see Fig. 2).
The paper is organised as follows. In Section 2, we briefly review the model of inelastically colliding rods which was introduced in Ref. [1]. Subsequently, we discuss in Section 3 some exact properties of the transition probability, like detailed balance and exact upper and lower bounds. In Section 4, we show how to construct an analytical approximation which fulfills all of the above requirements. A comparison of the analytical approximation to the probability distribution as computed from the microscopic model is given in Section 5. Simulations for large systems and long times, based on the analytical approximation, are presented in Section 6. Finally, we draw conclusions and give an outlook in Section 7.
2. Review

Our starting point is the stochastic equation of motion for the relative velocity of two colliding elastic rods, which was derived in Refs. [1,2] and which we briefly review here. Two one-dimensional particles of lengths l₁ and l₂ and masses m₁ and m₂ move freely along a (circular) line until they collide. Their vibrational excitations are described by N_mod normal modes q_i^(ν) (ν = 1, …, N_mod) of wavenumber k_{i,ν} = νπ/l_i and frequency ω_{i,ν} = c k_{i,ν}. The only important material parameter for our model is the sound velocity c. The centre of mass positions are denoted by R₁ and R₂, their distance is R_{2,1} = R₂ − R₁, and the relative velocity is denoted by Ṙ_{2,1}. Throughout this paper, all lengths will be made dimensionless by division through l := 2l₁l₂/(l₁ + l₂), while times will be measured in units of l/c, i.e., τ = tc/l is the dimensionless time.

In Ref. [1] we solved the Hamiltonian dynamics of two harmonically vibrating rods, modelling collisions of the rods by a short-range repulsive potential V(r) = B exp(−κr), which depends on the momentary end-to-end distance r (to be defined below). Thereby, translational and vibrational degrees of freedom are coupled. Of particular interest is the hard core limit, which can be achieved by letting κ → ∞. We showed that equipartition among the vibrational modes is established fast as compared to the decay of the translational energy. Based on this observation, we modelled the internal degrees of freedom by a thermalised bath, characterised by a temperature T_B = E_bath/N_mod, where the vibrational energy of a rod E_bath is given by the sum of the energies of the individual modes. Under these assumptions the dimensionless relative velocity gain w(τ) = 1 − Ṙ_{2,1}(t)/Ṙ_{2,1}(0) obeys a stochastic equation of motion

  (1/κ̃) (d/dτ) w(τ) = exp( κ̃ { τ − τ₀ − w(τ) − Σ_{i=1}^{2} Σ_{ν=1}^{∞} w(τ − ντ_i) + q(τ) } ) ,   (1)

where κ̃ = −κ(l/c)Ṙ_{2,1}(0) and τ₀ = κR_{2,1}(0)/κ̃. The initial distance R_{2,1}(0) is chosen to be so large that the two particles have not touched for times τ ≤ 0. q(τ) is a Gaussian
random noise with zero mean and covariance

  C_q(τ) = ⟨q(τ′)q(τ′ + τ)⟩ = (T_B/2E_tr) Σ_{i=1}^{2} τ_i [ (1/2)( τ/τ_i − Σ_{ν=1}^{∞} Θ(τ − ντ_i) − 1/2 )² − 1/24 ] .   (2)
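Since q(τ) is built from periodic Brownian bridges, it can be sampled directly. The sketch below is a minimal stand-in (grid resolution, step variance and the use of γ = 1/2 are illustrative choices of ours, not the paper's calibration): it generates one pinned random walk per rod, i.e. a discrete bridge with q(0) = q(period) = 0, and sums the two.

```python
import random

def periodic_brownian_bridge(n_steps, sigma, rng):
    """Pinned random walk: a discrete bridge with q[0] = q[-1] = 0."""
    walk = [0.0]
    for _ in range(n_steps):
        walk.append(walk[-1] + rng.gauss(0.0, sigma))
    drift = walk[-1]
    # subtract the linear drift so the path closes on itself (bridge property)
    return [w - drift * i / n_steps for i, w in enumerate(walk)]

rng = random.Random(1)
gamma = 0.5                                   # illustrative length ratio
tau1, tau2 = 1.0 + gamma, 1.0 + 1.0 / gamma   # periods tau_1, tau_2 from the text
n = 300                                       # grid points per period (our choice)
q1 = periodic_brownian_bridge(n, 0.05, rng)   # bridge for rod 1
q2 = periodic_brownian_bridge(n, 0.05, rng)   # bridge for rod 2
q = [a + b for a, b in zip(q1, q2)]           # q(tau) = sum of the two bridges
```

A periodic version would additionally tile each bridge with its own period before summing; the pinned endpoints are what make the tiled process continuous.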
Here, Θ denotes the Heaviside step function and E_tr = μ(Ṙ_{2,1}(0))²/2 is the translational energy of the colliding rods before the collision. The τ_i which appear in Eqs. (1) and (2) are determined by the length ratio γ = l₁/l₂ (we assume l₁ < l₂ without loss of generality) of the rods according to τ₁ = 1 + γ and τ₂ = 1 + 1/γ, and μ = m₁m₂/(m₁ + m₂) is the effective mass. The stochastic process q(τ) is simply the sum of two periodic Brownian bridge processes [3] with periods τ₁ and τ₂, respectively.

We do not want to repeat the derivation of the above equations, which is given in detail in Ref. [1]. Eq. (1) is Newton's equation of motion for the centre of mass distance R_{2,1}(t), with the right-hand side representing the force, derived from the exponential interaction between the two rods. The distance which enters the potential or the force includes all deformations due to the initial conditions and previous interactions: r = R_{2,1}(t) + √2 Σ_{ν=1}^{N_mod} (q₁^(ν)(t) + q₂^(ν)(t)). With the help of the Green function for the one-dimensional harmonic oscillator, we can compute the deformations q_i^(ν)(t) at time t in terms of R_{2,1}(t) and the initial values for the positions and canonical momenta of the vibrations. The latter are modelled as independent Gaussian random variables, whose variance is determined by T_B. The history of the vibrations produced during the collision is reflected in the memory terms (viz. the double sum) in Eq. (1) and the random initial conditions in the stochastic process q(τ).

In the hard-core limit (κ → ∞ or equivalently κ̃ → ∞) the stochastic equation for w(τ), Eq. (1), can be solved by saddle point methods, yielding

  w(τ) = max(0, f(τ)) ,  where  f(τ) = max_{τ′∈[0,τ]} { τ′ − τ₀ − Σ_{i=1}^{2} Σ_{ν=1}^{∞} w(τ′ − ντ_i) + q(τ′) } .   (3)
The duration of the collision as well as the final velocity are stochastic variables. The collision ends when the memory terms in Eq. (3) overcompensate the gain from the other terms τ′ − τ₀ + q(τ′), which are on average increasing. Fig. 3 shows a sample collision process in order to illustrate Eq. (3). The coefficient of restitution is given by

  ε = lim_{τ→∞} w(τ) − 1   (4)

and ε thus also becomes a stochastic variable, depending on the state of the vibrational bath before the collision. The results of the previous paragraphs are interpreted as a Markov process in discrete time, which accounts for transitions of the translational energy upon successive collisions. During a collision, E_tr changes to a new value
Fig. 3. Sample collision process for γ = 1/2, i.e., τ₁ = 3/2 and τ₂ = 3. The stochastic process q(τ) is periodic with period 3. The memory terms (the dotted and short dashed curves) are attached to the curve for w(τ) for illustration. τ* marks the beginning of the collision.
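Eq. (3) is causal (the memory terms only refer to earlier times), so it can be iterated forward in time on a grid. The sketch below does this in the deterministic limit q ≡ 0, τ₀ = 0, where the collision should end with ε = γ (cf. Section 3.1); the grid step and the choice γ = 1/2 are our illustrative assumptions.

```python
gamma = 0.5                       # length ratio l1/l2, i.e. p/s = 1/2
tau1, tau2 = 1.0 + gamma, 1.0 + 1.0 / gamma   # memory periods 3/2 and 3
h = 0.01                          # grid step, chosen so nu*tau_i lies on the grid
n_max = int(round((1 + 2) / h))   # maximum collision time p + s = 3

w = [0.0] * (n_max + 1)
best = 0.0      # running maximum over tau' of the curly braces in Eq. (3)
for n in range(n_max + 1):
    tau = n * h
    mem = 0.0   # memory terms sum_i sum_nu w(tau - nu*tau_i)
    for tau_i in (tau1, tau2):
        nu = 1
        while nu * tau_i <= tau:
            mem += w[int(round((tau - nu * tau_i) / h))]
            nu += 1
    best = max(best, tau - mem)   # q = 0 and tau0 = 0 in the deterministic limit
    w[n] = max(0.0, best)

eps = w[n_max] - 1.0              # coefficient of restitution, Eq. (4)
```

With these parameters the iteration gives w growing linearly until τ₁, staying flat afterwards, and ε comes out equal to γ as claimed.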
E′_tr = E_tr ε². The probability for this transition is determined by the probability density for the coefficient of restitution p_λ(ε) according to

  p_{T_B}(E_tr → E′_tr) = p_λ(ε) · 1/(2√(E_tr E′_tr)) |_{ε = √(E′_tr/E_tr)} .   (5)
Here λ = E_tr/T_B and T_B denotes the bath temperature of both rods, under the assumption that the temperatures are equal. If that is not the case, one would have to replace the index λ by two indices λ₁ and λ₂. Here, we use only one index for notational simplicity. Changes in the bath temperature are not independent, but determined by energy conservation:

  T′_B = T_B + ((1 − ε²)/(2N_mod)) E_tr .   (6)
The stationary state of the Markov process is known: after cooling, the system of two particles, each equipped with an internal bath, evolves into a stationary state with a Boltzmann distribution for E_tr, with the stationary bath temperature T_B = E_tot/(2N_mod + 1), where E_tot is the total energy of the system.
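The bookkeeping of Eqs. (5)-(6) can be exercised directly. In the sketch below, the draw for ε is a uniform stand-in restricted to ε ≤ 1 (so the sketch cools monotonically) rather than the actual p_λ(ε); the point is only that E_tr + 2 N_mod T_B is conserved exactly under Eq. (6).

```python
import random

rng = random.Random(0)
N_mod = 100                 # number of internal modes per rod
eps0 = 1.0 / 3.0            # lower bound 1/(p+s-1), here for gamma = 1/3
E_tr, T_B = 1.0, 0.001      # initial translational energy and bath temperature
E_tot = E_tr + 2 * N_mod * T_B   # total energy of the two-rod system

for _ in range(200):
    eps = rng.uniform(eps0, 1.0)                 # stand-in draw, not p_lambda
    T_B += (1.0 - eps ** 2) * E_tr / (2 * N_mod) # bath update, Eq. (6)
    E_tr *= eps ** 2                             # E'_tr = eps^2 * E_tr
```

Each collision removes (1 − ε²)E_tr from the translational motion and deposits exactly that amount into the 2N_mod bath modes, so the total is an invariant of the update rule.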
3. Characteristics of the transition probability

In this section we derive some exact properties of the transition probability from Eq. (3). The approximation to this probability that we are going to build will be constructed in such a way that it obeys all of these properties.
3.1. Detailed balance

Since the underlying model is microscopically reversible, detailed balance holds for the Markov process: the transition probability for the process E_tr → E′_tr = E_tr ε² is related to the transition probability for the reverse process E′_tr → E_tr = E′_tr/ε² via

  p_{T_B}(E_tr → E′_tr) e^{−E_tr/T_B} = p_{T_B}(E′_tr → E_tr) e^{−E′_tr/T_B} .   (7)

Here we have neglected fluctuations of the temperatures of the baths, which seems justified because changes in the bath temperature are O(1/N_mod) (see Eq. (6)). Detailed balance, Eq. (7), implies the following relation for p_λ(ε):

  p_λ(ε) e^{−λ} = p_{λε²}(1/ε) e^{−λε²} .   (8)

Since this is a property of the equilibrium state of the two particle system, the temperatures of both rods are equal, i.e., λ₁ = λ₂ = λ.

When the temperatures of the baths of oscillators are zero, i.e., q(τ) = 0 for all τ, the collision is deterministic. For this case, it follows from Eq. (3) that the coefficient of restitution is equal to γ, the ratio of the lengths of the rods (see also Refs. [1,4]) (actually, for this to be true it is sufficient that only the temperature of the longer rod is zero). Thus, in the limit of small temperatures, p_λ(ε) should approach a δ-function centred around γ. At large temperatures, on the other hand, p_λ(ε) is a uniform distribution, i.e., p_{λ=0}(ε) = const. This is proved in Appendix D.

3.2. Maximum collision time

If the length ratio γ is a rational number, it is possible to extract some more information about p_λ(ε) from Eq. (3). First, we need to know the maximum duration of a collision before we can deduce an upper and lower bound for ε. The stochastic process q(τ) consists of two periodic Brownian bridge processes with periods τ₁ and τ₂, respectively. If γ is rational, say γ = p/s (where the integers p and s are relatively prime), then τ₁/τ₂ = p/s is rational and thus q(τ) is periodic, the period being given by p + s. Let τ* be the time where τ − τ₀ + q(τ) first equals 0 and is greater than 0 in an (arbitrarily small) interval to the right of τ*. In other words, τ* is the time when the particles first touch and the collision begins. The periodicity of q(τ) now leads to a maximum collision time because, whatever w(τ) is for a particular realisation of q(τ), the memory terms which accumulate overcompensate the gain from the term τ′ − τ₀ + q(τ′) after time τ* + p + s (see Eq. (3) and Fig. 3 for illustration).
This can be seen more clearly by the following argument. By evaluating Eq. (3) at τ = τ* + p + s it is clear that

  w̃ := w(τ* + p + s) ≥ f(τ* + p + s)
     ≥ τ* + p + s − τ₀ − Σ_{i=1}^{2} Σ_{ν=1}^{∞} w(τ* + p + s − ντ_i) + q(τ* + p + s)
     = p + s − Σ_{i=1}^{2} Σ_{ν=1}^{∞} w(τ* + p + s − ντ_i) .   (9)
The last step follows from the definition of τ* and the periodicity of q(τ). The expression inside the curly braces in Eq. (3) can be rewritten in the following way by splitting the double sum in three parts:

  τ′ − τ₀ + q(τ′) − Σ_{i=1}^{2} Σ_{ν=1}^{∞} w(τ′ − ντ_i)
  = (τ′ − (p + s)) − τ₀ + q(τ′ − (p + s)) − Σ_{i=1}^{2} Σ_{ν=1}^{∞} w(τ′ − (p + s) − ντ_i)
    − Σ_{ν=1}^{s} w(τ′ − ντ₁) − Σ_{ν=1}^{p} w(τ′ − ντ₂) + p + s   (10)
(note that sτ₁ = pτ₂ = p + s). The last four terms can be estimated by use of Eq. (9): p + s − Σ_{ν=1}^{s} w(τ′ − ντ₁) − Σ_{ν=1}^{p} w(τ′ − ντ₂) ≤ w̃, at first only for τ′ = τ* + p + s, but since w(τ) is a monotonically increasing function, this also applies for all τ′ > τ* + p + s. From the two sums the last terms could even be dropped because they are zero for τ′ = τ* + p + s. We will drop only one of these last terms, so that we get

  p + s − Σ_{ν=1}^{s−1} w(τ′ − ντ₁) − Σ_{ν=1}^{p} w(τ′ − ντ₂) ≤ w̃   for τ′ ≥ τ* + p + s .   (11)
Thus, the r.h.s. of Eq. (10) can be estimated as follows (again for τ′ ≥ τ* + p + s):

  τ′ − (p + s) − τ₀ + q(τ′ − (p + s)) − Σ_{i=1}^{2} Σ_{ν=1}^{∞} w(τ′ − (p + s) − ντ_i)
    − Σ_{ν=1}^{s} w(τ′ − ντ₁) − Σ_{ν=1}^{p} w(τ′ − ντ₂) + p + s
  ≤ τ′ − (p + s) − τ₀ + q(τ′ − (p + s)) − Σ_{i=1}^{2} Σ_{ν=1}^{∞} w(τ′ − (p + s) − ντ_i)
    − w(τ′ − (p + s)) + w̃
  ≤ w̃ .   (12)

The last inequality again follows from Eq. (3), evaluated at τ = τ′ − (p + s): w(τ′ − (p + s)) (which is the one term we kept) is greater than or equal to τ′ − (p + s) − τ₀ + q(τ′ − (p + s)) − Σ_{i=1}^{2} Σ_{ν=1}^{∞} w(τ′ − (p + s) − ντ_i). Since the l.h.s. of Eq. (10) is just the expression in curly braces from Eq. (3) and w̃ = w(τ* + p + s), this means that no further contributions to w(τ) come from times τ > τ* + p + s, and it therefore follows that p + s is the maximum collision time.
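The period bookkeeping used above is pure arithmetic and can be confirmed with exact rational numbers: for γ = p/s in lowest terms, sτ₁ = pτ₂ = p + s. A small sketch (the helper name is ours):

```python
from fractions import Fraction
from math import gcd

def common_period(p, s):
    """Exact (tau1, tau2, period) for length ratio gamma = p/s (illustrative helper)."""
    assert gcd(p, s) == 1, "p and s must be relatively prime"
    gamma = Fraction(p, s)
    tau1 = 1 + gamma            # tau_1 = 1 + gamma
    tau2 = 1 + 1 / gamma        # tau_2 = 1 + 1/gamma
    assert s * tau1 == p * tau2 == p + s   # s*tau_1 = p*tau_2 = p + s
    return tau1, tau2, p + s

t1, t2, T = common_period(4, 5)   # gamma = 4/5, the ratio used in Section 5
```

For γ = 4/5 this gives τ₁ = 9/5, τ₂ = 9/4 and a common period of 9, so the maximum collision time for that system is 9 dimensionless time units.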
3.3. Upper and lower bound for ε

Since the maximum collision time is p + s, we have ε = w(τ* + p + s) − 1 (cf. Eq. (4)). Bounds on ε can now be derived from Eq. (3), evaluated at τ = τ* + p + s, by neglecting or overestimating the memory terms. Consider first Eq. (3) at τ = τ* + p + s with all memory terms neglected, which yields the following inequality:

  w(τ* + p + s) ≤ max( 0, max_{τ′∈[0,τ*+p+s]} { τ′ − τ₀ + q(τ′) } ) .   (13)

Since τ* is the beginning of the collision, we know that τ′ − τ₀ + q(τ′) ≤ 0 for all τ′ ≤ τ*. Thus, we also know that τ′ − τ₀ + q(τ′) ≤ p + s for all τ′ ≤ τ* + p + s (simply by replacing τ′ by τ′ − p − s and using the periodicity of q(τ)). At time τ′ = τ* + p + s, this inequality even becomes an equality. Therefore, the r.h.s. of Eq. (13) is equal to p + s, so we get

  ε + 1 = w(τ* + p + s) ≤ p + s .   (14)

Next, the at most p + s − 2 non-zero memory terms in Eq. (3) at time τ* + p + s are overestimated by the maximum possible value, namely w(τ* + p + s) = ε + 1:

  w(τ* + p + s) ≥ max( 0, max_{τ′∈[0,τ*+p+s]} { τ′ − τ₀ − (p + s − 2)(ε + 1) + q(τ′) } ) .   (15)

By the same argument as above, the r.h.s. of Eq. (15) is equal to p + s − (p + s − 2)(ε + 1). Thus, we have

  ε + 1 = w(τ* + p + s) ≥ p + s − (p + s − 2)(ε + 1)  ⇔  ε ≥ 1/(p + s − 1) .   (16)
It is easy to see that the upper and lower bounds on ε are also compatible with the detailed balance condition, Eq. (8), as it should be. Because it will be needed later on, we introduce here the parameter ε₀ for convenience:

  ε₀ = 1/(p + s − 1) .   (17)

The bounds calculated in this section are optimal bounds. This follows from the fact that p_{λ=0}(ε) is a uniform distribution between ε₀ and 1/ε₀, which is proved in Appendix D.

3.4. The special case γ = 1

When the length ratio of the rods, γ, is equal to 1, we can deduce an exact solution for p_λ(ε) from the results of the preceding sections. γ = 1 implies p = s = 1, from which
we get for the upper and lower bounds for ε:

  upper limit:  ε ≤ p + s − 1 = 1 ,   (18)
  lower limit:  ε ≥ 1/(p + s − 1) = 1 .   (19)

In other words, when γ = 1, we have

  p_λ(ε) = δ(ε − 1)  for all λ   (20)

exactly.

4. Construction of an approximation for the transition probability

As a short summary, we so far have established the following properties of the probability density p_λ(ε):

  normalisation:  ∫₀^∞ p_λ(ε) dε = 1 ,   (21)
  positivity:  p_λ(ε) ≥ 0 ,   (22)
  low-temperature limit:  lim_{λ→∞} p_λ(ε) = δ(ε − γ) ,   (23)
  high-temperature limit:  lim_{λ→0} p_λ(ε) = const. within bounds ,   (24)
  upper and lower bounds:  p_λ(ε) = 0 for ε < ε₀ and ε > 1/ε₀ ,   (25)
  detailed balance:  p_λ(ε) e^{−λ} = p_{λε²}(1/ε) e^{−λε²} .   (26)
It is desirable to know the functional form of p_λ(ε) explicitly because that would greatly speed up simulations of a many particle system. Up to now, simulations rely on a numerical solution of Eq. (3) with a randomly generated process q(τ) at every collision event. This is a very time-consuming method which does not allow for a many particle system to be simulated over very long times. Therefore, we now want to construct an analytical approximation for p_λ(ε) which fulfills all of the above conditions, Eqs. (21)-(26). We will restrict ourselves to rational γ, since Eq. (25), which will prove to be quite useful in the following, applies only for this case.

4.1. General solution of the detailed balance equation

First, we rewrite the detailed balance equation (26) in terms of the probability density p(x, y) for the variable x = λε²/2, where the parameter y is given by y = λ/2. Then
Eqs. (21) and (26) read:

  ∫₀^∞ p(x, y) dx = 1 ,   (27)
  p(x, y) e^{−2y} = p(y, x) e^{−2x} .   (28)
It is easy to check that the general solution of these two equations together is given by

  p(x, y) = e^{2y} ∂²c(x, y)/∂x∂y ,   (29)

where c(x, y) is an arbitrary real function which is symmetric with respect to interchange of its arguments and with the additional property that

  c(0, y) = (1/2) e^{−2y}  and  lim_{x→∞} c(x, y) = 0  for all y .   (30)

So far, Eq. (29) does not guarantee that p(x, y) is non-negative everywhere, nor does it give any clue as to how to choose c(x, y) such that the limits given in Eqs. (23) and (24) are fulfilled. In order to get an idea how to accomplish this, we look at the known solution of p_λ(ε) for the case γ = 1, see Eq. (20). Expressed in terms of x and y, this gives

  p(x, y) = δ(x − y) .   (31)
It can be checked that this function is generated by using

  c(x, y) = (1/2) e^{−x−y−|x−y|}   (32)

in Eq. (29). This expression suggests a way to reformulate Eq. (29) for arbitrary γ. As will be proved in Appendix A, c(x, y) may be written without loss of generality in the form

  c(x, y) = (1/2) e^{−x−y−f(x,y)} ,   (33)

where f(x, y) is a real, symmetric function of its arguments with the additional constraints f(0, y) = y and lim_{x→∞} (x + f(x, y)) = ∞ in order to satisfy Eq. (30). For γ ≲ 1 we expect this new function f(x, y) to be approximately equal to |x − y|, but we also expect it to be differentiable for all γ < 1. This ansatz gives

  p(x, y) = e^{2y} (1/2) ∂²/∂x∂y [ e^{−x−y−f(x,y)} ]
          = (1/2) e^{y−x−f(x,y)} [ (1 + ∂f/∂x)(1 + ∂f/∂y) − ∂²f/∂x∂y ] .   (34)

Now we need to find a function f(x, y) which generates a p(x, y) with all requirements specified by Eqs. (21)-(26). As shown in Appendix A, the non-negativity of p(x, y) can be guaranteed by choosing f(x, y) such that ∂²f/∂x∂y ≤ 0 for all x, y. It is also shown in Appendix A that this condition implies |∂f/∂x| ≤ 1 for all x, y. Although this may not be the most
general choice of f(x, y), simulations suggest that the "real" f(x, y) behaves in just this way. The upper and lower limits for ε, Eq. (25), require that

  f(x, y) = |x − y|  exactly for  x < ε₀²y  and  x > ε₀⁻²y .   (35)
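The ansatz (29)/(33) can be probed numerically: any smooth, symmetric test function f makes the density of Eq. (34) satisfy detailed balance, Eq. (28), identically (the boundary condition f(0, y) = y only fixes normalisation and is not needed for this check). Below a finite-difference sketch with a test function of our own choosing, a smoothed |x − y|; it is not the paper's f.

```python
import math

def f(x, y):
    # smooth, symmetric test function (our choice, not the paper's f)
    return math.sqrt((x - y) ** 2 + 1.0)

def c(x, y):
    # Eq. (33)
    return 0.5 * math.exp(-x - y - f(x, y))

def p(x, y, h=1e-4):
    # Eq. (29): p = e^{2y} * d^2 c / (dx dy), via central differences
    d2c = (c(x + h, y + h) - c(x + h, y - h)
           - c(x - h, y + h) + c(x - h, y - h)) / (4.0 * h * h)
    return math.exp(2.0 * y) * d2c

# detailed balance, Eq. (28): p(x,y) e^{-2y} = p(y,x) e^{-2x}
lhs = p(1.3, 0.7) * math.exp(-2.0 * 0.7)
rhs = p(0.7, 1.3) * math.exp(-2.0 * 1.3)
```

Both sides reduce to the mixed derivative of the symmetric function c, which is itself symmetric in (x, y); that is the entire content of Eq. (29) as a solution of Eq. (28).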
So all that remains is to construct f(x, y) in the range ε₀² < x/y < ε₀⁻² in such a way that the high- and low-temperature limits are fulfilled.

4.2. Low-temperature limit

First, we compute the average values of x and √x from Eq. (34):

  ⟨x⟩ = (1/2) e^y ∫₀^∞ (1 + ∂f/∂y) e^{−x−f(x,y)} dx ,   (36)
  ⟨√x⟩ = (1/4) e^y ∫₀^∞ (1 + ∂f/∂y) e^{−x−f(x,y)} dx/√x .   (37)

These are related to the average values of ε and ε² by

  ⟨ε⟩ = ⟨√x⟩/√y ,   (38)
  ⟨ε²⟩ = ⟨x⟩/y .   (39)
Thus, for Eq. (23) to hold we must have ⟨ε⟩² = ⟨ε²⟩ = γ² as λ → ∞. Expressed in terms of x and y, this yields

  lim_{y→∞} ⟨√x⟩/√y = γ ,   (40)
  lim_{y→∞} ⟨x⟩/y = γ² .   (41)

It is shown in Appendix B that this condition is satisfied if f(zy, y) has the following property:

  lim_{y→∞} [ y(1 − z) − f(zy, y) ] = { 0 if 0 ≤ z < γ² ;  −∞ if γ² < z ≤ 1 } .   (42)

4.3. High-temperature limit

First, we reformulate Eq. (34) in terms of a new function f̃(z, y) which is defined by

  y f̃(z, y) = f(zy, y) .   (43)
The symmetry condition implies f̃(z, y) = z f̃(1/z, zy). Eq. (35) requires that f̃(z, y) = |1 − z| for z ≤ ε₀² and z ≥ ε₀⁻². This then gives, after translation into an equation for the original variables ε and λ,

  p_λ(ε) = (λε/2) e^{(λ/2)(1−ε²) − (λ/2) f̃(ε², λ/2)} { [1 + ∂f̃/∂z(ε², λ/2)] [1 + f̃(ε², λ/2) − ε² ∂f̃/∂z(ε², λ/2) + (λ/2) ∂f̃/∂y(ε², λ/2)]
    + (2ε²/λ) ∂²f̃/∂z²(ε², λ/2) − ∂²f̃/∂z∂y(ε², λ/2) } .   (44)

As λ → 0, only one term on the r.h.s. of Eq. (44) survives:

  p_{λ=0}(ε) = ε³ ∂²f̃/∂z²(ε², 0) .   (45)

Now this has to be a constant for ε inside the range (ε₀, 1/ε₀) (see Eq. (24) and Appendix D). As shown in Appendix C, this is the case if

  f̃(z, 0) = { 1 − z  for z ≤ ε₀² ;
              ((1 + ε₀²)(z + 1) − 4ε₀√z)/(1 − ε₀²)  for ε₀² < z < ε₀⁻² ;
              z − 1  for z ≥ ε₀⁻² } .   (46)
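The branches of Eq. (46) can be checked directly: they should join continuously at z = ε₀^{±2}, and the middle branch should give a flat p_{λ=0}(ε) = ε³ f̃′′(ε², 0) of height ε₀/(1 − ε₀²), which then normalises to one over (ε₀, 1/ε₀). A finite-difference sketch (the value ε₀ = 1/2 is our example):

```python
def f_tilde_0(z, eps0):
    """f~(z, 0) of Eq. (46)."""
    if z <= eps0 ** 2:
        return 1.0 - z
    if z >= eps0 ** -2:
        return z - 1.0
    return ((1 + eps0 ** 2) * (z + 1) - 4 * eps0 * z ** 0.5) / (1 - eps0 ** 2)

eps0 = 0.5

# the branches should match continuously at z = eps0^2 and z = eps0^-2
gap_lo = abs(f_tilde_0(eps0 ** 2 + 1e-9, eps0) - f_tilde_0(eps0 ** 2, eps0))
gap_hi = abs(f_tilde_0(eps0 ** -2 - 1e-9, eps0) - f_tilde_0(eps0 ** -2, eps0))

def p_high_T(eps, h=1e-4):
    """p_{lambda=0}(eps) = eps^3 * d^2 f~/dz^2 at (eps^2, 0), Eq. (45)."""
    z = eps ** 2
    d2 = (f_tilde_0(z + h, eps0) - 2 * f_tilde_0(z, eps0)
          + f_tilde_0(z - h, eps0)) / (h * h)
    return eps ** 3 * d2

flat = [p_high_T(e) for e in (0.7, 1.0, 1.5)]      # should all equal eps0/(1-eps0^2)
area = (1 / eps0 - eps0) * eps0 / (1 - eps0 ** 2)  # normalisation over (eps0, 1/eps0)
```

The ε-independence of `flat` is exactly the statement that Eq. (46) produces the uniform high-temperature distribution of Appendix D.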
4.4. Approximation for intermediate temperatures

At intermediate temperatures, 0 < y < ∞, we try to construct f̃(z, y) in such a way that Eq. (46) is obtained in the limit y → 0 and Eq. (42) holds in the limit y → ∞. By writing

  f̃(z, y) = √( f̃²(z, 0) + h(z, y) [ (1 − z)² − f̃²(z, 0) ] ) ,   (47)

where we have made use of yet another auxiliary function h(z, y) which is invariant under the transformation z → 1/z and y → zy (in order to satisfy the symmetry condition for f̃(z, y)), and such that h(z, 0) = 0 and 0 ≤ h(z, y) ≤ 1, the y → 0 limit is automatically fulfilled. If h(z, y) has the additional property

  lim_{y→∞} h(z, y) { = 1 for z ≤ p²/s² and z ≥ s²/p² ;  < 1 for p²/s² ≤ z ≤ s²/p² }   (48)

and convergence is sufficiently fast, then also the low-temperature limit Eq. (23) is satisfied (see Section 4.2 and Eq. (42)), since this condition ensures that lim_{y→∞} f̃(z, y) = |1 − z| for z ≤ p²/s² and z ≥ s²/p², and lim_{y→∞} f̃(z, y) > |1 − z| for p²/s² ≤ z ≤ s²/p². Such a function is not hard to construct. One example is

  h(z, y) = 1 − v( 1 + v⁻¹( −ay√z log(zp²/s²) log(zs²/p²) ) ) / ( v(1 + v⁻¹(0)) + by√z ) ,   (49)
where

  v(ξ) = ξ − 1/ξ  for ξ > 0   (50)
and v⁻¹(ξ) is the inverse function of v(ξ). The parameters a and b can be used to fit the resulting expression for p_λ(ε) to data from simulations.

We will not present the full expression for p_λ(ε) here because it is very lengthy indeed. However, it can be generated, e.g., with the help of a symbolic algebra program by putting h(z, y) from Eq. (49) into Eq. (47) and then applying Eq. (44). There is one last thing that has to be checked, namely whether this particular choice of h(z, y) is consistent with the condition ∂²f/∂x∂y ≤ 0. Although we are at present unable to prove this, numerical checks indicate that it holds.

5. Comparison with computer simulations

5.1. Comparison of numerical and analytical probability densities

Since we have constructed a probability density that has only some (but certainly not all) of the properties of the real density, we must compare the analytical function with simulation results and check if the parameters a and b can be adjusted in such a way that the two densities agree. Fig. 4 demonstrates that for the particular case γ = 4/5, a and b can be nicely adjusted to match the simulation data. This is also true for other values of γ which are not shown here. Although this comparison is not very precise, it is sufficient for our purposes. It leads us to believe that the function constructed in this paper captures the important aspects of p_λ(ε). This view is also confirmed by the comparison made below of many particle simulations performed with the old, "exact" method, and the new, approximate method.

5.2. Implementation of a random number generator

In order to implement a random number generator on a computer that generates random numbers distributed according to the probability density that was constructed in the preceding section, we need the inverse of the distribution function D(x, y) (see e.g. Ref. [5, Chapter 7]). The distribution function can be computed from Eq. (34):

  D(x, y) = ∫₀^x p(x′, y) dx′ = 1 − (1/2) e^{y−x−f(x,y)} (1 + ∂f/∂y(x, y)) .   (51)

This can be transformed into the distribution function for ε and λ:

  D_λ(ε) = 1 − (1/2) e^{(λ/2)(1−ε²) − (λ/2) f̃(ε², λ/2)} [ 1 + f̃(ε², λ/2) − ε² ∂f̃/∂z(ε², λ/2) + (λ/2) ∂f̃/∂y(ε², λ/2) ] ,   (52)

where f(x, y) has again been expressed in terms of f̃(z, y) as in Eq. (43).
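The auxiliary functions of Eqs. (49)-(50) are straightforward to implement once v⁻¹ is written out: for v(ξ) = ξ − 1/ξ on ξ > 0, inverting gives ξ = (t + √(t² + 4))/2. The sketch below (with illustrative parameter values a, b of our own) checks h(z, 0) = 0, 0 ≤ h ≤ 1 on a sample grid, and the invariance under z → 1/z, y → zy:

```python
import math

def v(xi):
    # Eq. (50), defined for xi > 0
    return xi - 1.0 / xi

def v_inv(t):
    # inverse of v: the positive root of xi^2 - t*xi - 1 = 0
    return (t + math.sqrt(t * t + 4.0)) / 2.0

def h(z, y, gamma, a=0.05, b=1.0):
    """Interpolating function of Eq. (49); a, b are illustrative fit parameters."""
    # log(z p^2/s^2) * log(z s^2/p^2) written with gamma = p/s
    arg = -a * y * math.sqrt(z) * math.log(z * gamma ** 2) * math.log(z / gamma ** 2)
    num = v(1.0 + v_inv(arg))
    den = v(1.0 + v_inv(0.0)) + b * y * math.sqrt(z)
    return 1.0 - num / den

gamma = 0.8
vals = [h(z, y, gamma) for z in (0.5, 0.9, 1.0, 1.4, 2.0)
                       for y in (0.0, 0.5, 3.0, 30.0)]
```

At y = 0 the numerator and denominator both equal v(2) = 3/2, so h vanishes identically, which is what makes the y → 0 limit of Eq. (47) come out automatically.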
Fig. 4. (a) Simulation data for the probability distribution function for γ = 4/5. (b) Analytic probability distribution function for γ = 4/5. The parameters a and b of Eq. (49) were fitted to best match the data shown in (a). The agreement is quite satisfactory. Note that while it is possible to plot the λ = 0 limit in (b), it is impossible to reach this limit in the simulation (a). This is responsible for the apparent difference between the two graphs at λ = 0.
It is probably impossible to calculate the inverse of this function analytically, but it is easy to invert numerically. The random ε are finally generated by plugging a uniformly distributed random number r from the interval (0, 1) into D_λ⁻¹:

  ε = D_λ⁻¹(r) .   (53)
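Since D_λ is monotone, bisection is all the numerical inversion that Eq. (53) needs. A self-contained sketch, using the flat λ = 0 distribution (whose distribution function is elementary) in place of the full D_λ:

```python
import random

def bisect_inverse(cdf, r, lo, hi, tol=1e-12):
    """Solve cdf(x) = r by bisection; cdf must be monotone on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps0 = 0.5                                       # support (eps0, 1/eps0)
D0 = lambda e: (e - eps0) / (1.0 / eps0 - eps0)  # CDF of the flat lambda = 0 case

rng = random.Random(42)
draws = [bisect_inverse(D0, rng.random(), eps0, 1.0 / eps0) for _ in range(20000)]
mean = sum(draws) / len(draws)   # should be close to (eps0 + 1/eps0)/2 = 1.25
```

In a production code one would cache the bracketing interval per λ, but the scheme is otherwise exactly inverse-transform sampling as described in the text.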
5.3. Comparison of many particle simulations

In order to estimate the quality of our approximation, we have performed runs of systems with 10 000 particles using the old and new methods. By assigning the length l₁ to all even numbered particles and l₂ to all odd numbered particles, the length ratio γ = l₁/l₂ is the same for all collisions. Again, we have used γ = 4/5. We compare the time decay of the translational energy for both runs; Fig. 5 shows the result. The curves agree very well, keeping in mind that they are merely single runs which necessarily show fluctuations. These fluctuations are due to production and breakup of clusters [2]. This agreement supports our claim that the approximated probability distribution captures the essential features of the model.

6. Application to many particle systems

6.1. Undamped equilibrium system

As a first (test-) application of the algorithm we consider a system of 10 000 particles on a ring in equilibrium, i.e. with all internal bath temperatures set to such an initial value that equipartition of the energy over all modes, translational and vibrational, is given. Naively, it could be argued that this state (as it is an equilibrium system of particles coupled to the heat bath of internal oscillators) is trivial and identical to a one-dimensional gas of completely elastic point particles. The simulations show that this is not the case; Fig. 6 shows a comparison of the dynamical structure factors of an ideal gas of elastic particles and of the aforementioned simulation. While the dynamic structure factor for the ideal gas is essentially structureless, the particles with internal modes show a distinct Brillouin line (see e.g. Ref. [6, Chapter 8]) corresponding to sound wave propagation. This can also be verified by eye in Fig. 1, where a certain preferred slope in the time development of the density is evident. The absence of a Rayleigh line centred around ω = 0 in the dynamic structure factor is due to the fact that the prefactor of this line is given by (c_p − c_v)/c_p, where c_p and c_v are the specific heats per particle at constant pressure and volume, respectively [6]. In our present case, each particle is equipped with a large number of internal modes, each of which contributes k_B (Boltzmann's constant) to both c_p and c_v.
This leads to cp =cv ≈ 1 and thus to a vanishing prefactor for the Rayleigh line. Fig. 7 shows the linear relationship between the peak position !max of the Brillouin line and the wave vector k. The slope of this line is the adiabatic sound velocity cs . The
466
T. Aspelmeier, A. Zippelius / Physica A 282 (2000) 450– 474
Fig. 5. Energy decay towards equilibrium using the "exact" method (thin line) and using p(ε) (thick line). The agreement is very good. The equilibrium value of the energy is indicated by the horizontal line. Note that, due to the increased simulation speed, the thick line extends more than one decade further than the thin line.
Fig. 6. Dynamic structure factor of a purely elastic system (a) and a system using p(ε) (b), drawn to the same scale. Note the pronounced Brillouin line in (b), indicating sound-wave propagation, which is also evident in Fig. 1.
agreement with the theoretical result (the sound velocity of an ideal gas with cp/cv = 1 is cs = √(1/2) in our dimensionless units) is very good. The width of the Brillouin line should be proportional to k² [6]; our data (not shown in this paper) are consistent with this result, although there is too much scatter to draw a decisive conclusion.
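The dynamic structure factor underlying Figs. 6 and 7 can be estimated from recorded trajectories by Fourier transforming the collective density modes in time. The sketch below is our own illustration (function and parameter names are not from the paper, and the test signal is an artificial travelling wave rather than simulation output):

```python
import numpy as np

def dynamic_structure_factor(positions, ks, dt):
    """Estimate the spectrum of the density modes rho_k(t) = sum_j exp(i k x_j(t)).

    positions: array of shape (T, N) with trajectories on a ring
    ks: wave vectors (integer multiples of 2*pi/L for a ring of length L)
    Returns angular frequencies and |rho_k(omega)|^2 / (N T) for each k.
    """
    T, N = positions.shape
    omegas = 2 * np.pi * np.fft.fftfreq(T, d=dt)
    spectra = []
    for k in ks:
        rho_k = np.exp(1j * k * positions).sum(axis=1)  # collective density mode
        rho_k = rho_k - rho_k.mean()                    # remove the omega = 0 peak
        spectra.append(np.abs(np.fft.fft(rho_k))**2 / (N * T))
    return omegas, np.array(spectra)

# artificial test signal: particles on a ring of length 2*pi carrying a single
# travelling sound wave of speed c; the Brillouin peak must sit at omega = c*k
N, T, dt, c = 50, 4096, 0.05, np.sqrt(0.5)
x0 = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
t = np.arange(T)[:, None] * dt
k = 1.0                                  # smallest wave vector, 2*pi/L
x = x0 + 0.3 * np.sin(k * (x0 - c * t))  # small-amplitude displacement wave
omegas, S = dynamic_structure_factor(x, [k], dt)
peak = omegas[np.argmax(S[0])]
print(peak, c * k)  # peak matches c*k within the frequency resolution
```

For a travelling wave of speed c the peak of the spectrum sits at ω = ck, up to the frequency resolution 2π/(T dt); in the simulations of Section 6.1 the corresponding slope is the adiabatic sound velocity cs.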
Fig. 7. The peak position ωmax of the Brillouin line from Fig. 6 as a function of wave vector k. The dashed line is the expected result, corresponding to a sound velocity of cs = √(1/2) in our dimensionless units.
Additionally, the static density fluctuations in our system with internal modes are higher than in the ideal gas. This is qualitatively understandable if we keep in mind that velocity fluctuations are coupled to density fluctuations by the following mechanism: in a region where particles collide with a higher than average velocity, the translational energy will temporarily be stored in the vibrations, resulting in a mean coefficient of restitution ε < 1. This in turn leads to clustering and an increased density in that region. Analogously, the case of lower than average relative velocity leads to a decreased density.

6.2. Damped system

Another application of the probability distribution constructed in this paper is a long-time simulation of 1000 particles with damping, i.e. a simulation where we have included exponential decay of the energy stored in the vibrations. This is used to model net dissipation of energy (in contrast to the original model, which has overall energy conservation). Simulations of this extended model tend to last very much longer than simulations of undamped systems, because situations which would lead to an inelastic collapse in simulations with constant ε take longer to break up if damping is present. They are therefore only accessible with the new simulation algorithm. The results of this simulation should be comparable to other one-dimensional simulations of granular systems. However, due to the properties of our model, there is no inelastic collapse, which allows for long runs. Figs. 2 and 8 show some of the results of this simulation. Not surprisingly, the time evolution of the particle density in Fig. 2 shows a clear tendency towards clustering. In one dimension, the shape of clusters is naturally considerably simpler than in two or three dimensions (cf. e.g. the simulation results in Refs. [7,8]). We see that the behaviour on a large scale is dominated by relatively simple cluster dynamics.
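This large-scale picture of clusters that drift, collide and stick can be mimicked by a minimal, momentum-conserving toy model of perfectly inelastic point masses on a line (our own sketch with arbitrary test values; it ignores cluster extent, the internal vibrations of our rods, and damping):

```python
import numpy as np

def merge_until_one(masses, positions, velocities):
    """Event-driven sketch of perfectly inelastic cluster dynamics on a line.

    Clusters (sticky point masses, ordered left to right) drift freely; when
    two neighbours meet they merge, conserving momentum. Returns the kinetic
    energy after each merger, i.e. a stepwise energy decay.
    """
    m = list(masses); x = list(positions); v = list(velocities)
    energies = [0.5 * sum(mi * vi**2 for mi, vi in zip(m, v))]
    while len(m) > 1:
        # time until each neighbouring pair meets (inf if they separate)
        times = []
        for i in range(len(m) - 1):
            dv = v[i] - v[i + 1]
            times.append((x[i + 1] - x[i]) / dv if dv > 0 else np.inf)
        i = int(np.argmin(times)); t = times[i]
        if not np.isfinite(t):
            break  # remaining clusters never meet
        x = [xi + vi * t for xi, vi in zip(x, v)]  # free drift to the event
        # momentum-conserving, perfectly inelastic merger of clusters i, i+1
        M = m[i] + m[i + 1]
        V = (m[i] * v[i] + m[i + 1] * v[i + 1]) / M
        m[i:i + 2] = [M]; x[i:i + 2] = [x[i + 1]]; v[i:i + 2] = [V]
        energies.append(0.5 * sum(mi * vi**2 for mi, vi in zip(m, v)))
    return energies

E = merge_until_one([1.0, 1.0, 1.0], [0.0, 1.0, 3.0], [1.0, 0.0, -1.0])
print(E)  # → [1.0, 0.75, 0.0]
```

Each merger conserves momentum but removes kinetic energy, so the energy decays in steps, one step per cluster collision.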
The clusters form very quickly, then drift, collide, and stick until only one big cluster is left. The energy decay, Fig. 8, shows a succession of steps, each of which corresponds to a cluster collision. During each completely inelastic cluster collision a substantial
Fig. 8. Decay of the translational energy of the system of 1000 particles with damping. The last big step in the curve corresponds to the collision of the last two clusters. In this collision, basically all of the macroscopic kinetic energy is removed and only small "thermal" fluctuations remain.
amount of kinetic energy stored in the motion of the clusters is removed from the system. This goes on until the last two clusters collide, which removes basically all of the remaining kinetic energy, visible in the giant step at around t = 10^4. Naturally, this behaviour is quite distinct from the homogeneous cooling law, Etr ∼ t^(−2) [9].

7. Conclusion

In this paper we have constructed an analytical approximation for the probability density of a collision process between one-dimensional, elastically vibrating rods. The approximation fulfills all of the known properties of the exact probability distribution which were derived in this paper: detailed balance, the high- and low-temperature limits, and upper and lower bounds. The resulting expression was fitted to data from Monte Carlo simulations and was tested on (a) the translational energy decay of a system of particles which are initially non-vibrating, and (b) an equilibrium many-particle system of rods. In both cases, the new method generates the expected results, namely complete agreement of the energy decay curve with previously generated data for (a), and a dynamic structure factor which agrees with standard hydrodynamic calculations for (b). This makes us confident that the approximations that were necessary in the construction of the probability density are not relevant for practical purposes. A first short application of the method consisted in simulating the energy decay and cluster formation in a system with exponential damping of the vibrations. The simulations showed that the cluster dynamics are relatively simple (clusters form, drift, collide, and stick). The energy decay reflected this behaviour by decaying in steps, each one corresponding to a cluster collision, until the last two clusters combined. Since our method agrees with the old, "exact" method, we can now proceed to address topics like the behaviour of cluster dynamics and energy decay when the
probability density is in some way modified. We are of course also interested in extending our approach to more than one dimension [10–12]. Ultimately, we search for the analogue of p(ε) in three dimensions. Progress in this direction has recently been made in Ref. [12] by developing a method to simulate collisions of three-dimensional, elastically vibrating bodies.

Acknowledgements

This work has been supported by the DFG through Grants Zi209/5-1 and Zi209/6-1.

Appendix A. Implications of the non-negativity of the probability density
In Section 4.1, we made use of a real, symmetric function c(x, y) which was written as

c(x, y) = (1/2) e^(−x−y−f(x,y))   (A.1)
with a real, symmetric function f(x, y). This is a priori not the most general ansatz and may even lead to a negative-valued p(x, y). Hence, we will argue in Appendix A.1 that a c(x, y) that has negative values generates a negative-valued p(x, y) and is thus unsuited. We can therefore restrict ourselves to non-negative functions c(x, y), which in turn can be written as in Eq. (A.1). The additional properties of c(x, y), Eq. (30), are satisfied if f(0, y) = y and lim_{x→∞} (x + f(x, y)) = ∞. In Appendix A.2 we show that further restrictions on f(x, y) follow from the non-negativity of p(x, y).

A.1. Proof that c(x, y) is non-negative

Suppose that c(x, y) had negative values, i.e., c(x1, y1) < 0 for some x1, y1. Since c(∞, y) = 0, there exists an x2 > x1 such that ∂c/∂x (x2, y1) > 0. For every rational length ratio p/s we must have c(x, y) = e^(−2y)/2 for all y > γ^(−2) x, where γ is defined in Eq. (17) (by the same argument that was used for Eq. (35)). This implies ∂c/∂x (x2, y2) = 0 for any y2 > γ^(−2) x2. But we already had ∂c/∂x (x2, y1) > 0, from which we see that y1 < y2, and therefore there must exist a y0 ∈ (y1, y2) where ∂²c/(∂x ∂y) (x2, y0) < 0. Eq. (29) shows that this is equivalent to a negative value of the probability density: p(x2, y0) < 0.

A.2. Proof that ∂f/∂x ≥ −1 and ∂f/∂y ≥ −1

It is possible to deduce some restrictions on f(x, y) from the non-negativity of p(x, y). Suppose there were some x0, y0 such that

∂f/∂x (x0, y0) < −1 .   (A.2)
It follows from Eq. (35) that for any ỹ > γ^(−2) x0, ∂f/∂x (x0, ỹ) = −1. Thus, there exists a y* ∈ (y0, γ^(−2) x0] such that for any y from a small neighbourhood of y*

∂f/∂x (x0, y) < −1   for y < y*   (A.3)

and

∂f/∂x (x0, y) ≥ −1   for y > y*   (A.4)

holds. If we suppose that the leading non-constant term of ∂f/∂x (x0, y) as y tends to y* from below is of order n ≥ 1, i.e.,

∂f/∂x (x0, y) = −1 − α (y* − y)^n + O((y* − y)^(n+1))   for y < y* ,   (A.5)

with α > 0 being the coefficient, then we have

∂²f/(∂x ∂y) (x0, y) = n α (y* − y)^(n−1) + O((y* − y)^n) .   (A.6)
Here we see that, at least in a small interval left of y*, ∂²f/(∂x ∂y) (x0, y) > 0. (By inversion of this argument we can conclude that ∂²f/(∂x ∂y) (x, y) ≤ 0 for all x, y implies ∂f/∂x (x, y) ≥ −1 for all x, y. This will be needed below.) Plugging this into Eq. (34) we get

p(x0, y) = (e^(y−x0−f(x0,y))/2) [ (1 + ∂f/∂y (x0, y)) (−α(y* − y)^n + O((y* − y)^(n+1))) − (n α (y* − y)^(n−1) + O((y* − y)^n)) ]
= (e^(y−x0−f(x0,y))/2) (−α (y* − y)^(n−1)) [ (1 + ∂f/∂y (x0, y)) O(y* − y) + n ] ,   (A.7)

which is bound to become negative in a small interval left of y*. So unless ∂f/∂x (x0, y) → −1 faster than any power of y* − y as y → y* (a case which we will ignore), the assumption Eq. (A.2) leads to a non-acceptable p(x, y). Therefore, we must restrict ourselves to those f(x, y) which satisfy ∂f/∂x (x, y) ≥ −1 for all x, y. We will, however, be even a bit more restrictive and demand that ∂²f/(∂x ∂y) (x, y) ≤ 0 for all x, y. As we have seen above, this implies ∂f/∂x (x, y) ≥ −1, and because of the symmetry of f(x, y) we also have ∂f/∂y (x, y) ≥ −1. Thus, all terms on the r.h.s. of Eq. (34) are non-negative, so that this restriction guarantees a non-negative p(x, y). By a similar argument as above it can be seen that this condition also implies the additional properties ∂f/∂x (x, y) ≤ 1 and ∂f/∂y (x, y) ≤ 1 for all x, y. Thus, we have proved that a function f(x, y) that satisfies ∂²f/(∂x ∂y) ≤ 0 not only generates a non-negative probability distribution but also has the useful properties |∂f/∂x| ≤ 1 and |∂f/∂y| ≤ 1.
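As a concrete illustration (our own choice of test function, not one used in the paper, and ignoring the boundary behaviour f(x, y) = x − y at large x), the function f(x, y) = √(x² + y²) is symmetric, satisfies f(0, y) = y and ∂²f/(∂x ∂y) ≤ 0, and a finite-difference check confirms the implied bound |∂f/∂x| ≤ 1:

```python
import numpy as np

# Illustrative test function (our own choice, not from the paper): symmetric,
# f(0, y) = y, and mixed second derivative d2f/dxdy = -xy/(x^2+y^2)^(3/2) <= 0.
# By the argument of Appendix A.2 it should then obey |df/dx| <= 1.
def f(x, y):
    return np.sqrt(x**2 + y**2)

h = 1e-5
x, y = np.meshgrid(np.linspace(0.1, 5.0, 40), np.linspace(0.1, 5.0, 40))

fx = (f(x + h, y) - f(x - h, y)) / (2 * h)                 # df/dx, central difference
fxy = (f(x + h, y + h) - f(x + h, y - h)
       - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)   # d2f/dxdy

print(fxy.max() <= 0.0, np.abs(fx).max() <= 1.0)  # → True True
```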
Fig. 9. Sketch of the possible range of f(x, y) as a function of x for fixed y. f(x, y) = x − y for x > γ^(−2) y and, due to |∂f/∂x| ≤ 1, f(x, y) can only reside in the region between the dashed lines for x < γ^(−2) y. Thus, it follows that f(x, y) ≥ x − y for all x, y.
Appendix B. Evaluation of the low-temperature limit

In order to motivate Eq. (42), let us evaluate Eq. (36) a little further:

⟨x⟩/y = (e^y/2y) ∫_0^∞ (1 + ∂f/∂y (zy, y)) e^(−zy−f(zy,y)) y dz   with z = x/y
= (1/2) ∫_0^1 (1 + ∂f/∂y (zy, y)) e^(y(1−z)−f(zy,y)) dz + (1/2) ∫_1^∞ (1 + ∂f/∂y (zy, y)) e^(y(1−z)−f(zy,y)) dz .   (B.1)

The estimate for the bracket in the second integral in Eq. (B.1) follows from |∂f/∂y| ≤ 1, which implies 0 ≤ 1 + ∂f/∂y (zy, y) ≤ 2. The second integral vanishes as y → ∞; this follows from |∂f/∂x| ≤ 1 and f(x, y) = x − y for x > γ^(−2) y, because these two conditions together ensure that f(x, y) ≥ x − y for all x (see Fig. 9). Thus, the second integral is bounded from above by ∫_1^∞ exp(2y(1 − z)) dz = 1/2y → 0.

In order to evaluate the first integral in Eq. (B.1), we first note that f(zy, y) = y(1 − z) for z ≤ γ² and f(zy, y) > y(1 − z) for z > γ², which implies lim_{y→∞} [y(1 − z) − f(zy, y)] = 0 for z ≤ γ² and lim_{y→∞} [y(1 − z) − f(zy, y)] ≤ 0 for z > γ². Now suppose that there is some ζ ∈ (γ², 1) such that

lim_{y→∞} [y(1 − z) − f(zy, y)] = 0 if 0 ≤ z < ζ ,   −∞ if ζ < z ≤ 1 .   (B.2)
Provided the convergence is fast enough, we can evaluate the first integral in Eq. (B.1) as y → ∞ by interchanging the limit with the integral and with ∂/∂y (note that ∂[y(1 − z)]/∂y = 1 because z = x/y):

lim_{y→∞} ⟨x⟩/y = lim_{y→∞} (1/2) ∫_0^1 (1 + ∂f/∂y (zy, y)) e^(y(1−z)−f(zy,y)) dz
= (1/2) ∫_0^1 lim_{y→∞} (1 + ∂f/∂y (zy, y)) e^(y(1−z)−f(zy,y)) dz
= (1/2) ∫_0^ζ 2 dz = ζ .   (B.3)

Additionally, we can compute ⟨√x⟩/√y according to Eq. (37) in the same way, which gives

lim_{y→∞} ⟨√x⟩/√y = √ζ .   (B.4)

In other words: a sufficient condition for the low-temperature limit, Eq. (23), is just Eq. (B.2) with ζ = γ².
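The elementary tail integral appearing in the bound for the second integral of Eq. (B.1), ∫_1^∞ exp(2y(1 − z)) dz = 1/(2y), vanishes as y grows and can be verified numerically (a quick sketch):

```python
import numpy as np

# numerical check of the tail integral: integral_1^inf exp(2y(1-z)) dz = 1/(2y)
for y in (1.0, 5.0, 25.0):
    z = np.linspace(1.0, 1.0 + 50.0 / y, 200001)  # integrand ~ exp(-100) at cutoff
    fz = np.exp(2 * y * (1 - z))
    val = float(np.sum((fz[1:] + fz[:-1]) * np.diff(z)) / 2)  # trapezoidal rule
    print(y, val, 1 / (2 * y))
```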
Appendix C. Evaluation of the high-temperature limit

In the high-temperature limit, the r.h.s. of Eq. (45) must be a constant. From this and the symmetry condition for f̃(z, y) we can conclude that

f̃(z, 0) = c1 √z + c2 (z + 1)   (C.1)

with some constants c1 and c2. We know that outside this range, i.e., when z ≤ γ² or z ≥ γ^(−2), f̃(z, 0) = |1 − z|. f̃(z, y) must be continuously differentiable with respect to z at least once, otherwise p(ε) would have a δ-peak due to the ∂²f̃/∂z² term that appears in Eq. (44). Thus, we have the requirements that at the endpoint of the range, i.e., at z = γ²,

1 − γ² = c1 γ + c2 (γ² + 1)

and

∂(1 − z)/∂z |_(z=γ²) = −1 = ∂/∂z [c1 √z + c2 (z + 1)] |_(z=γ²) = c1/2γ + c2 .   (C.2)

These two equations fix c1 and c2:

c1 = −4γ/(1 − γ²) ,   c2 = (1 + γ²)/(1 − γ²) .   (C.3)
f̃(z, 0) is thus completely determined by the high-temperature limit:

f̃(z, 0) = 1 − z   for z ≤ γ² ;
f̃(z, 0) = −(4γ/(1 − γ²)) √z + ((1 + γ²)/(1 − γ²)) (z + 1)   for γ² < z < γ^(−2) ;
f̃(z, 0) = z − 1   for z ≥ γ^(−2) .   (C.4)
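With the coefficients c1 and c2 of Eq. (C.3), one can check numerically that the piecewise function of Eq. (C.4) is continuous with a continuous first derivative at both endpoints z = γ² and z = γ^(−2) (a sketch; the value γ = 0.8 below is an arbitrary test value, not taken from our simulations):

```python
import numpy as np

def f_tilde(z, gamma):
    """Piecewise form of Eq. (C.4); gamma is the quantity defined in Eq. (17)."""
    c1 = -4 * gamma / (1 - gamma**2)
    c2 = (1 + gamma**2) / (1 - gamma**2)
    if z <= gamma**2:
        return 1 - z
    if z >= gamma**-2:
        return z - 1
    return c1 * np.sqrt(z) + c2 * (z + 1)

gamma, h = 0.8, 1e-7
for z0 in (gamma**2, gamma**-2):
    jump = abs(f_tilde(z0 + h, gamma) - f_tilde(z0 - h, gamma))    # continuity
    left = (f_tilde(z0, gamma) - f_tilde(z0 - h, gamma)) / h       # one-sided
    right = (f_tilde(z0 + h, gamma) - f_tilde(z0, gamma)) / h      # derivatives
    print(z0, jump, abs(left - right))  # both differences are tiny: C1 matching
```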
Appendix D. "Proof" for a uniform distribution at β = 0

As indicated by the quotation marks in the heading, we will only outline the basic ideas of the proof for the sake of brevity. We start from Eq. (3) and take some realisation of the stochastic process q(τ). Then we vary the amplitude of this realisation by multiplying it with some λ, which we let tend to ∞. This is in accordance with Eq. (2), which states that the covariance of q(τ) goes to infinity as β goes to 0. We also have to scale τ0 with λ in order to assure that τ − τ0 + q(τ) ≤ 0 for all τ < 0. This is simply the condition that at τ = 0 the particles are still well separated and have not touched before in this collision event. Consider now a rescaled version of Eq. (3):

w′(τ) = max(0, f(τ)) ,   where   f(τ) = max_{τ′ ∈ [0,τ]} ( (τ′ − τ0 + q(τ′))/λ − Σ_i w′(τ′ − τ_i) )   (D.1)

with w′(τ) = w(τ)/λ. When the length ratio is a rational number, p/s, we know that q(τ) is periodic with period p + s (see Section 3.2). Hence, there exists a τ̃ in the period from 0 to p + s where q(τ) takes on its maximum value. This will be the only point in this period where q(τ) assumes this value, because the probability for a degenerate maximum is vanishingly small. When λ → ∞, it is clear that the smallest τ* where τ − τ0 + q(τ) crosses the axis (this marks the beginning of the collision) is very close to the position of the maximum of q(τ) in a particular period, i.e. τ* ≈ τ̃ + n(p + s) with some n ∈ N. Also, provided that λ is large enough, the next time the axis is crossed from below will be approximately one period later, namely at around τ̃ + (n + 1)(p + s). Therefore, we see that w′(τ) reaches a plateau very shortly after the beginning of the collision, the height of the plateau being given approximately by z = (τ̃ + n(p + s) − τ0 + q(τ̃))/λ. This height can be varied continuously from 0 to (p + s)/λ by using a different realisation of the stochastic process (but with the same statistical weight), namely q0(τ) = q(τ + Δ). This is because q0(τ) has a different position of the maximum, τ̃0 = (τ̃ − Δ) mod (p + s), possibly a different period n0 = n − 1, n or n + 1 of the beginning of the collision, τ* ≈ τ̃0 + n0(p + s), and it thus leads to the plateau height z = (τ̃0 + n0(p + s) − τ0 + q0(τ̃0))/λ = (τ̃ − Δ + n(p + s) − τ0 + q(τ̃))/λ mod (p + s)/λ.
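The core of this counting argument, namely that the value of τ − τ0 + q(τ) at successive maxima of q increases by exactly p + s per period, so that its first positive value is uniformly distributed for widely distributed τ0, can be mimicked by a toy Monte Carlo (our own sketch; the position and value of the maximum of q and the range of τ0 are arbitrary test values, and the overall factor 1/λ merely rescales the interval):

```python
import numpy as np

rng = np.random.default_rng(0)
p, s = 4, 5                    # rational length ratio p/s, so q has period p + s
period = p + s
tau_tilde, q_max = 2.7, 1.3    # position and value of the maximum of q (arbitrary)

# value of tau - tau0 + q(tau) at the first maximum of q where it is positive;
# it grows by exactly p + s from one maximum to the next, so for widely
# distributed tau0 the first positive value is uniform on (0, p + s]
tau0 = rng.uniform(100.0, 10000.0, size=200000)
first_positive = period - (tau0 - tau_tilde - q_max) % period

counts, _ = np.histogram(first_positive, bins=10, range=(0.0, period))
print(counts.min(), counts.max())  # all bins close to 20000: uniform
```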
Since this is true for any realisation of q(τ), we can conclude that the height z of the initial plateau is uniformly distributed between 0 and (p + s)/λ. We have just seen that τ − τ0 + q(τ) < 0 for τ̃ + n(p + s) + δ < τ < τ* + p + s − δ′ with some small δ, δ′ > 0. Therefore, any additional contributions (if any) to w′(τ) must come from times τ* + p + s − δ′ < τ < τ* + p + s (remember that the maximum collision time is p + s). When we let τ ↗ τ* + p + s, we can evaluate Eq. (D.1), because then all the p + s − 2 non-zero memory terms are equal to z. Thus, we get

w′(τ* + p + s) = max( z, (p + s)/λ − (p + s − 2) z ) .   (D.2)

Here, we must distinguish two cases: z < (1 + γ)/λ and z > (1 + γ)/λ (γ was defined in Eq. (17)). In the former case, the first argument of the max-function in Eq. (D.2) is smaller than the second; in the latter case, it is the second argument that is smaller than the first. This means that for z > (1 + γ)/λ, ε + 1 = w(τ* + p + s) = λ w′(τ* + p + s) = λz ∈ (1 + γ, p + s], and for z < (1 + γ)/λ, ε + 1 = p + s − (p + s − 2) λz ∈ (1 + γ, p + s]. Since z is uniformly distributed over its range, we conclude that ε is also uniformly distributed between 1/(p + s − 1) and p + s − 1.

References

[1] G. Giese, A. Zippelius, Collision properties of one-dimensional granular particles with internal degrees of freedom, Phys. Rev. E 54 (5) (1996) 4828.
[2] T. Aspelmeier, G. Giese, A. Zippelius, Cooling dynamics of a dilute gas of inelastic rods: a many particle simulation, Phys. Rev. E 57 (1) (1998) 857.
[3] P.E. Kloeden, E. Platen, Numerical Solution of Stochastic Differential Equations, Springer, Berlin, 1992, p. 42.
[4] D. Auerbach, Colliding rods: dynamics and relevance to colliding balls, Am. J. Phys. 62 (1994) 522.
[5] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C, 2nd Edition, Cambridge University Press, Cambridge, 1992.
[6] J.-P. Hansen, I.R. McDonald, Theory of Simple Liquids, 2nd Edition, Academic Press, London, 1986.
[7] S. McNamara, W.R. Young, Dynamics of a freely evolving, two-dimensional granular medium, Phys. Rev. E 53 (5) (1996) 5089.
[8] I. Goldhirsch, G. Zanetti, Clustering instability in dissipative gases, Phys. Rev. Lett. 70 (11) (1993) 1619.
[9] P.K. Haff, J. Fluid Mech. 134 (1983) 401.
[10] T. Aspelmeier, F. Gerl, A. Zippelius, A microscopic model of energy dissipation in granular collisions, in: H.J. Herrmann, J.-P. Hovi, S. Luding (Eds.), Physics of Dry Granular Media, Vol. 350 of NATO ASI Series, Kluwer Academic Publishers, Dordrecht, 1998, p. 407.
[11] F. Gerl, A. Zippelius, Coefficient of restitution for elastic disks, Phys. Rev. E 59 (2) (1999) 2361.
[12] T. Aspelmeier, Microscopic models of energy dissipation by internal degrees of freedom in particle collisions, Ph.D. thesis, Universität Göttingen, 2000.