CHAPTER 15

Probabilistic Approach in Thermodynamics
15.1 Introduction

In the vicinity of equilibrium, linear nonequilibrium thermodynamics provides a linear relation between forces and flows (fluxes). Introduction of the concept of internal degrees of freedom allows the description of a wider class of irreversible processes and scales. One class of systems considered is small, driven systems embedded in a heat bath with a well-defined temperature, such as biomolecules. For such a system, the probabilistic approach formulates a first law and an entropy valid along single fluctuation trajectories. Such efforts became particularly useful for determining free-energy differences between various states of biomolecules. These relations are valid for nonequilibrium systems driven by time-dependent forces. This approach of taking both energy conservation and entropy production to the mesoscopic level has a revitalizing effect on the ensemble-level description of chemical nonequilibrium systems. One of the main advantages of the probabilistic approach over the macroscopic phenomenological theory is that a thermodynamically consistent kinetics valid beyond the linear region can be imposed. A particularly interesting class of states are nonequilibrium steady states, characterized by a time-independent distribution and nonvanishing flows (currents). For a system in contact with a heat bath, the symmetry of the probability distribution of entropy production in the steady state is known as the fluctuation theorem. As a variant, a transient fluctuation theorem valid for relaxation toward the steady state is also established. The Jarzynski relation expresses the free-energy difference between two equilibrium states by a nonlinear average over the work required to drive the system in a nonequilibrium process from one state to the other. The Crooks fluctuation theorem compares the probability distribution for the work spent in the original process with that of the time-reversed one. The probabilistic approach has reached a broader appeal due to advances in experimental techniques for tracking and manipulating single particles and molecules. Future research in the field may focus on specific applications, most likely for molecular motors, biomolecular networks, and information processing.

15.2 Statistical thermodynamics

Statistical mechanics describes how reversible microscopic equations of motion can lead to irreversible macroscopic behavior. Statistical thermodynamics predicts the macroscopic properties of a system using information about the microscopic nature of the system, since the large number of molecules in any system allows the use of statistics. For example, the result of a long-time average over many collisions of gas molecules with the container walls is a finite force, or measured pressure. Similarly, other macroscopic properties, such as heat capacity, temperature, and chemical equilibrium constants, can be related to long-time averages of the corresponding molecular processes. A microscopic theory may be developed by using a calculational scheme based on following the trajectories (positions and velocities) of each molecule in the system. At each molecule–molecule or molecule–wall collision, new trajectories would have to be computed. Such calculations can be performed only for a limited number of molecules and short periods of time. They yield the probability distribution of particle velocities or kinetic energies. For example, the temperature of a monatomic gas could then be computed from the average kinetic energy. Therefore, statistical thermodynamics determines probability distributions and average values of properties when considering all possible states of the molecules consistent with the constraints on the overall system.

15.2.1 Microstates

The microscopic approach is based on following the trajectories, that is, the position and velocity of each molecule in the system. The collection of possible states consistent with the constraints is called the ensemble of states. The information about molecular velocities can be represented in terms of probability distributions and average values of properties when considering the ensemble of states. Depending on the constraints, special names are given to these ensembles. For example, the canonical ensemble describes a system with fixed number of particles N, volume V, and temperature T, a specification under which the energy fluctuates. The microcanonical ensemble refers to all states consistent with a fixed number of particles, volume, and total energy. The grand canonical ensemble describes a system with fixed volume, temperature, and chemical potential (partial molar Gibbs energy). A grand ensemble is any ensemble that is more general and particularly applicable to systems in which the number of particles varies, such as chemically reacting systems. In classical mechanics, a molecule can have any possible energy. In quantum mechanics, only certain discrete values are allowed for a molecular state. An energy state is specified by a particular set of quantum numbers. For a single particle, three equivalent quantum number assignments represent three distinguishable energy states with the same energy level, which is referred to as threefold degenerate. The symbol $\omega_j$ denotes the degeneracy of the jth molecular energy level of a single molecule, while $\Omega$ is the degeneracy of an energy level of an ensemble of molecules. Each distinct energy state of an assembly of molecules may be given by a set of occupation numbers; the ith energy state is specified by the vector $\mathbf{n}_i = (n_{i1}, n_{i2}, \ldots)$, where $n_{ij}$ is the occupation number of the jth single-molecule energy state in the ith energy state of the assembly of molecules. The energy of the microstate is

$$E_i = \sum_j n_{ij}\,\varepsilon_j \tag{15.1}$$

where $\varepsilon_j$ is the energy of the jth energy state of a molecule. According to the equal a priori probability principle, all microstates that have the same energy and the same number of particles are equally probable. The ergodic hypothesis states that any experimental measurement is really an average over a time that is long on the molecular timescale. During the measurement, the assembly of molecules passes through a very large, statistically representative number of microstates. Therefore, a statistical average replaces the time average, and a macroscopic property computed from the probability distribution will be equivalent to measurements.

15.2.2 The Boltzmann energy distribution

Consider a heat bath with temperature T containing subsystems A and B that are not affected by each other. The probability of A being in a microstate with energy $E_n$ is $p_A(E_n)$, while $p_B(E_m)$ is the probability that B is in a microstate with energy $E_m$. The probability of occurrence of the composite system AB in a particular microstate is a function of the total energy ($E_{AB} = E_A + E_B$):

$$p_{AB}(E_n + E_m) = p_A(E_n)\,p_B(E_m) \tag{15.2}$$


Any other microstate with the same total energy also has the same probability of occurrence. Assuming that the energy levels are closely spaced, derivatives of Eqn (15.2) with respect to the energies of subsystems A and B yield

$$\frac{d\ln p_A(E_n)}{dE_n} = \frac{d\ln p_B(E_m)}{dE_m} = -\beta \tag{15.3}$$

where $\beta = 1/(k_BT)$ and $k_B$ is the Boltzmann constant ($k_B = 1.38044 \times 10^{-23}$ J/K). Each side of Eqn (15.3) must be independent of both subsystems A and B and can only depend on the properties of the bath. Integration of Eqn (15.3) yields

$$p_A(E_n) = I_A\exp(-\beta E_n) \quad \text{and} \quad p_B(E_m) = I_B\exp(-\beta E_m)$$

The integration constants $I_A$ and $I_B$ are characteristics of their respective subsystems and can be determined from the normalization conditions:

$$\sum_{\text{states }n} p_A(E_n) = 1; \qquad \sum_{\text{states }m} p_B(E_m) = 1$$

Therefore,

$$I_A = 1\Big/\sum_{\text{states }n}\exp(-\beta E_n); \qquad I_B = 1\Big/\sum_{\text{states }m}\exp(-\beta E_m)$$
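As a minimal numerical sketch of this result (not from the text; the two energy spectra are invented for illustration, in units with $k_B = 1$), the following Python snippet builds the normalized Boltzmann distributions of two independent subsystems and checks that their joint distribution depends only on the total energy, as in Eqn (15.2):

```python
import numpy as np

beta = 1.0                        # 1/(kB T), in units with kB = 1
EA = np.array([0.0, 1.0])         # hypothetical microstate energies of subsystem A
EB = np.array([0.0, 0.5, 2.0])    # hypothetical microstate energies of subsystem B

pA = np.exp(-beta * EA); pA /= pA.sum()   # I_A fixed by normalization
pB = np.exp(-beta * EB); pB /= pB.sum()   # I_B fixed by normalization

# Eqn (15.2): the joint probability factorizes and depends only on E_A + E_B
pAB = np.outer(pA, pB)
E_tot = EA[:, None] + EB[None, :]
pAB_direct = np.exp(-beta * E_tot) / np.exp(-beta * E_tot).sum()
print(np.allclose(pAB, pAB_direct))       # True
```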

15.2.3 Partition function

For any system, the canonical partition function $Q(N,V,\beta)$ in terms of states or levels is

$$Q = \sum_{\text{states }i}\exp(-\beta E_i) = \sum_{\text{levels }j}\omega(E_j)\exp(-\beta E_j) \tag{15.4}$$

Here the first summation is over all the energy states of the system. Then the probability of occurrence of a particular microstate i with energy $E_a$, $p_i(E_a)$, is (Sandler, 2010)

$$p_i(E_a) = \frac{\exp(-\beta E_a)}{\sum_{\text{states }j}\exp(-\beta E_j)} = \frac{\exp(-\beta E_a)}{Q} \tag{15.5}$$

The probability of occurrence of the energy level $E_a$, $p(E_a)$, is

$$p(E_a) = \omega(E_a)\,p_i(E_a) = \frac{\omega(E_a)\exp(-\beta E_a)}{Q} \tag{15.6}$$

The probability of finding the macrosystem in any microstate with energy $E_a$ is

$$p(E_a) = \sum_{\substack{\text{states }i\\ \text{with energy }E_a}} p_i(E_a) = p_i(E_a)\,\Omega(E_a) \tag{15.7}$$

where $\Omega$ is the degeneracy, that is, the number of states with energy $E_a$. Since $\beta > 0$, a state of higher energy has a lower probability of occurrence than a state of lower energy. Since the probability of any one microstate is proportional to $\exp(-\beta E)$, a state with lower energy is more probable. On the other hand, the degeneracy $\omega$ increases with the energy level; there are more possible states at higher energy than at lower energy.
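A short sketch of Eqns (15.4)–(15.6) for a hypothetical three-level molecule (the energies and degeneracies are illustrative only) shows how the degeneracy can offset the Boltzmann factor in the level probabilities:

```python
import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # bath temperature, K
beta = 1.0 / (kB * T)

E = kB * T * np.array([0.0, 1.0, 2.5])   # hypothetical level energies, in units of kB*T
omega = np.array([1, 3, 5])              # hypothetical degeneracies of each level

Q = np.sum(omega * np.exp(-beta * E))    # partition function over levels, Eqn (15.4)

p_state = np.exp(-beta * E) / Q          # probability of one microstate, Eqn (15.5)
p_level = omega * p_state                # probability of the whole level, Eqn (15.6)

print("state probabilities:", p_state)   # strictly decreasing with energy (beta > 0)
print("level probabilities:", p_level)   # degeneracy can offset the Boltzmann factor
assert np.isclose(p_level.sum(), 1.0)
```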


The grand canonical partition function $Q(V,T,\mu)$ is

$$Q(V,T,\mu) = \sum_N \exp(N\mu/k_BT)\sum_{\substack{\text{energy states }i\\ \text{for }N\text{ molecules}}}\exp[-E_i(N,V)/k_BT] = \sum_N \exp(N\mu/k_BT)\,Q(N,V,T) \tag{15.8}$$

For pure fluid systems, the average number of particles $N_{av}$ follows from

$$\left(\frac{\partial Q}{\partial \mu}\right)_{V,T} = \frac{\partial}{\partial \mu}\left[\sum_N \exp(N\mu/k_BT)\sum_{\substack{\text{energy states }i\\ \text{for }N\text{ molecules}}}\exp[-E_i(N,V)/k_BT]\right]_{V,T}$$

$$\left(\frac{\partial Q}{\partial \mu}\right)_{V,T} = \frac{1}{k_BT}\sum_N N\exp(N\mu/k_BT)\sum_{\substack{\text{energy states }i\\ \text{for }N\text{ molecules}}}\exp[-E_i(N,V)/k_BT]$$

So

$$N_{av} = k_BT\left(\frac{\partial \ln Q}{\partial \mu}\right)_{V,T}$$

The fluctuation of the particle number satisfies

$$(\Delta N)^2 = \langle N^2\rangle - \langle N\rangle^2$$

where $\langle N\rangle$ is the average particle number and $(\Delta N)^2$ is the variance of the distribution. The internal energy and the pressure are also defined from the grand canonical partition function; by using $(\partial Q/\partial T)_{V,\mu}$ we find

$$U = k_BT^2\left(\frac{\partial \ln Q}{\partial T}\right)_{V,\mu} + \mu k_BT\left(\frac{\partial \ln Q}{\partial \mu}\right)_{V,T}$$

By using $(\partial Q/\partial V)_{T,\mu}$, we find

$$P = k_BT\left(\frac{\partial \ln Q}{\partial V}\right)_{T,\mu}$$

In the grand canonical ensemble, the probability that the system contains N particles with energy E at a fixed system volume V is

$$p(N,V,E) = \frac{\exp(N\mu/k_BT)\exp(-E(N,V)/k_BT)}{Q} \tag{15.9}$$

so that the Gibbs entropy in the grand canonical ensemble is

$$S = -k_B\sum_{\text{states of energy }E} p\ln p \tag{15.10}$$
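As a hedged illustration of Eqn (15.8) and of $N_{av} = k_BT(\partial\ln Q/\partial\mu)_{V,T}$: the sketch below uses a hypothetical fluid of M independent two-state adsorption sites (a toy system, chosen because its grand partition function is known in closed form) and confirms the derivative relations by finite differences:

```python
import numpy as np

kB, T = 1.0, 1.0                 # units with kB = T = 1
beta = 1.0 / (kB * T)
M, eps = 100, 0.5                # hypothetical: M independent sites, binding energy eps

def lnQ_grand(mu):
    # Each independent site contributes (1 + exp[beta(mu + eps)]); this is a
    # closed-form special case of the double sum in Eqn (15.8)
    return M * np.log1p(np.exp(beta * (mu + eps)))

mu, h = -0.2, 1e-5
# N_av = kB T (d lnQ / d mu)_{V,T}, central finite difference
N_av = kB * T * (lnQ_grand(mu + h) - lnQ_grand(mu - h)) / (2 * h)
f = 1.0 / (1.0 + np.exp(-beta * (mu + eps)))     # exact site occupancy
print(N_av, M * f)                               # agree

# (Delta N)^2 = (kB T)^2 d^2 lnQ / d mu^2, the variance of the distribution
h2 = 1e-3
var_N = (kB * T)**2 * (lnQ_grand(mu + h2) - 2 * lnQ_grand(mu) + lnQ_grand(mu - h2)) / h2**2
print(var_N, M * f * (1 - f))                    # binomial variance for independent sites
```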


15.2.4 Partition function and thermodynamic properties

The internal energy U is the average value of the energy $\langle E\rangle$ and is defined by

$$U = \langle E\rangle = \sum_{\text{states }i} E_i\,p(E_i) = \frac{\sum_{\text{states }i} E_i\exp(-\beta E_i)}{Q} \tag{15.11}$$

Using the partition function, various thermodynamic properties can be defined:

$$U = -\left(\frac{\partial \ln Q}{\partial \beta}\right)_{N,V} = k_BT^2\left(\frac{\partial \ln Q}{\partial T}\right)_{N,V}$$

$$P = k_BT\left(\frac{\partial \ln Q}{\partial V}\right)_{N,T} = \frac{k_BT}{Q}\left(\frac{\partial Q}{\partial V}\right)_{N,T}$$

$$S = k_B\ln Q + k_BT\left(\frac{\partial \ln Q}{\partial T}\right)_{N,V}$$

$$H = U + PV = k_BT^2\left(\frac{\partial \ln Q}{\partial T}\right)_{N,V} + k_BTV\left(\frac{\partial \ln Q}{\partial V}\right)_{T,N}$$

$$A = U - TS = -k_BT\ln Q$$

Therefore, the partition function, rather than the probability p(E), can be used to obtain an average value for a thermodynamic property. The relations above are valid for any system.
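These relations can be verified numerically. The sketch below (a toy three-state system with illustrative energies, in units with $k_B = 1$) obtains U and S from finite-difference derivatives of $\ln Q$ and compares them with the defining averages of Eqns (15.11) and (15.12):

```python
import numpy as np

kB = 1.0
E = np.array([0.0, 1.0, 2.0])            # hypothetical nondegenerate state energies

def lnQ(T):
    return np.log(np.sum(np.exp(-E / (kB * T))))

T, h = 1.5, 1e-6
dlnQ_dT = (lnQ(T + h) - lnQ(T - h)) / (2 * h)
U = kB * T**2 * dlnQ_dT                  # U = kB T^2 (d lnQ/dT)_{N,V}
S = kB * lnQ(T) + kB * T * dlnQ_dT       # S = kB lnQ + kB T (d lnQ/dT)_{N,V}
A = -kB * T * lnQ(T)                     # A = -kB T lnQ

# Cross-check against the defining averages, Eqns (15.11) and (15.12)
p = np.exp(-E / (kB * T)); p /= p.sum()
print(U, np.sum(E * p))                  # internal energy as <E>
print(S, -kB * np.sum(p * np.log(p)))    # Gibbs entropy function
print(A, U - T * S)                      # consistency: A = U - TS
```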

15.2.5 The Gibbs entropy function

The Gibbs entropy function is

$$S = k_B\ln Q + k_BT\left(\frac{\partial \ln Q}{\partial T}\right)_{N,V} = -k_B\sum_{\text{states }j} p_j\ln p_j \tag{15.12}$$

The equation above relates the entropy to the probability of occurrence of the possible states of the system. For example, if there are two equally probable states, $p = 1/2 = 0.5$, then

$$S = -k_B(0.5\ln 0.5 + 0.5\ln 0.5) = -k_B\ln(1/2) = k_B\ln 2$$

As the number of probable states available to a system increases, the uncertainty as to which state the system occupies increases, and the entropy defined in terms of probability increases. A statistical interpretation of entropy is thus related to the uncertainty of knowledge about the state of the system. All microstates in the microcanonical ensemble have the same probability. If the degeneracy of the state with fixed N, V, and E is $\Omega(N,V,E)$, the probability of occurrence of any microstate is

$$p = \frac{1}{\Omega} \quad \text{and} \quad \sum_{\text{microstates}} p = \sum_{\text{microstates}} \frac{1}{\Omega} = 1 \tag{15.13}$$

By using the Gibbs entropy $S = -k_B\sum_{\text{states }j} p\ln p$ for the microcanonical ensemble, the entropy becomes

$$S = -k_B\sum_{\text{states }j}\frac{1}{\Omega}\ln\frac{1}{\Omega} = k_B\,\frac{1}{\Omega}\,\Omega\ln\Omega = k_B\ln\Omega \tag{15.14}$$


For the microcanonical ensemble, some other thermodynamic properties can be obtained using the classical thermodynamic relations with $S = S(U,V,N)$:

$$dS = \left(\frac{\partial S}{\partial U}\right)_{V,N} dU + \left(\frac{\partial S}{\partial V}\right)_{U,N} dV + \left(\frac{\partial S}{\partial N}\right)_{V,U} dN$$

$$dS = \frac{1}{T}dU + \frac{P}{T}dV - \frac{G}{T}dN$$

Therefore, using Eqn (15.14), we have

$$\frac{1}{T} = k_B\left(\frac{\partial \ln\Omega}{\partial U}\right)_{V,N}; \qquad \frac{P}{T} = k_B\left(\frac{\partial \ln\Omega}{\partial V}\right)_{U,N}; \qquad \frac{G}{T} = -k_B\left(\frac{\partial \ln\Omega}{\partial N}\right)_{V,U}$$

For a pure fluid of N identical molecules,

$$Q(V,T,N) = \frac{Z(V,T,N)}{\Lambda^{3N}\,N!} \tag{15.15}$$

where $\Lambda = h/(2\pi m k_BT)^{1/2}$ is the de Broglie wavelength, $\Lambda^{-3N}$ carries the translational partition function, h is the Planck constant ($h = 6.6261 \times 10^{-34}$ J s), R is the gas constant ($R = k_BN_{av}$), and Z is the configuration integral $Z = \int_V\cdots\int_V \exp[-u(r_1\ldots r_N)/k_BT]\,dr_1\ldots dr_N$. The number of integrals depends on the number of molecules (Sandler, 2010).

15.2.6 Coarse graining

Coarse graining transforms a probability density in phase space into a coarse-grained density that results from averaging the density over small but finite cells. Coarse graining models the uncontrollable impact of the surroundings on an ensemble of mechanical systems. It is accomplished by lumping microstates into a macrostate and hence affects the emergence of macroscopic variables. For example, one observer may smear over 10⁻⁶ m and another, with better technology, over 10⁻⁸ m. One observer will allocate all carbon into the same grain; another may distinguish ¹²C from ¹⁴C. Many coarse-graining approaches are based on coordinate space, which need not be the only criterion. Coarse graining may also be defined based on varying timescales. States are thrown into the same coarse grain if they are close to each other dynamically; defining a "distance function" provides the basis for the selection of coarse grains (Schulman and Gaveau, 2001). For an underlying dynamics on a discrete set of states following a Markovian master equation, one option for coarse graining is to group several states into new "mesostates" or aggregated states. Typically, the dynamics between these mesostates is then no longer Markovian. One question is whether one can then distinguish a genuine equilibrium from a nonequilibrium steady state if only the coarse-grained trajectory is accessible. Coarse graining of a discrete network becomes systematically possible if states among which the transitions are much faster are grouped together (Parker et al., 2009).

15.3 Stochastic thermodynamics

Stochastic thermodynamics describes small systems, like biomolecules, in contact with a well-defined heat bath at constant temperature and driven out of equilibrium. Based on an individual trajectory, stochastic thermodynamics formulates the first law and identifies the entropy production (Seifert, 2012). Macromolecules of biological systems, like proteins, enzymes, and molecular motors, are embedded in an aqueous solution. Such a biomolecule undergoes thermal Brownian motion driven by collisions with the surrounding fluid molecules. Nonequilibrium states for such systems may be: (1) relaxing toward equilibrium, (2) driven by time-dependent flows or unbalanced chemical reactions, and (3) a nonequilibrium steady state driven by external time-independent forces. The collection of the degrees of freedom makes up the state. The change of the state, either due to the driving or due to the fluctuations, leads to a trajectory of the system. Such trajectories belong to an ensemble that is fully characterized by the distribution of the initial state, by the properties of the thermal noise acting on the system, and by the specified external driving. Thermodynamic quantities like work and heat follow distributions defined along the trajectory. If the states are made up of continuous variables (like position), the dynamics follows a Langevin equation for an individual system and a Fokker–Planck equation for the whole ensemble. A master equation, on the ensemble level, describes discrete states with transition rates governing the dynamics.

15.3.1 Langevin equation

An overdamped motion x(s) of a system with a single continuous degree of freedom can be described by the Langevin equation, the path integral, and the Fokker–Planck equation (Seifert, 2012). The Langevin equation is

$$\dot{x} = \mu F(x,\lambda) + \zeta \tag{15.16}$$

where $\zeta$ is the thermal noise and $\mu$ is the mobility. It is usually assumed that the strength of the noise is not affected by a time-dependent force. The systematic force $F(x,\lambda)$ can arise from a conservative potential $V(x,\lambda)$ and/or be applied to the system directly as a nonconservative force $f(x,\lambda)$:

$$F(x,\lambda) = -\partial_x V(x,\lambda) + f(x,\lambda) \tag{15.17}$$

Both sources may be time-dependent through an external control parameter $\lambda(s)$ varied from $\lambda_0$ to $\lambda_t$. The Langevin dynamics generates trajectories x(s) starting at $x_0$. For an arbitrary number of degrees of freedom, x and F become vectors.
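A minimal simulation sketch of Eqns (15.16) and (15.17): an Euler–Maruyama discretization for an overdamped particle in a harmonic trap (the stiffness k and all units are illustrative; $k_B = 1$). In equilibrium the stationary variance of x should approach T/k, which the run below verifies:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, T = 1.0, 1.0          # mobility and bath temperature (kB = 1)
D = T * mu                # Einstein relation, used for the noise strength
k = 2.0                   # hypothetical trap stiffness, V(x) = k x^2 / 2
dt, nsteps = 1e-3, 200_000

x = 0.0
xs = np.empty(nsteps)
for i in range(nsteps):
    F = -k * x                                   # F(x) = -dV/dx, Eqn (15.17) with f = 0
    x += mu * F * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
    xs[i] = x

# The stationary (equilibrium) variance should approach T/k
print(xs[nsteps // 2:].var(), T / k)
```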

15.3.2 Fokker–Planck equation

The Fokker–Planck equation for the probability p(x,s) of finding the particle at x at time s is

$$\partial_s p(x,s) = -\partial_x J(x,s) \tag{15.18}$$

where J(x,s) is the probability current given by

$$J(x,s) = \mu F(x,s)\,p(x,s) - D\,\partial_x p(x,s) \tag{15.19}$$

The equation comes with a normalized initial distribution $p(x,0) \equiv p_0(x)$. In equilibrium, the diffusion coefficient D and the mobility $\mu$ are related by the Einstein relation $D = T\mu$, where T is the temperature of the surroundings with the Boltzmann constant $k_B$ set to unity to make entropy dimensionless. The dynamics can also be described by assigning a weight to each path or trajectory:

$$p[x(s)|x_0] = \exp\left[-\int_0^t \left(\frac{(\dot{x} - \mu F)^2}{4D} + \frac{\mu F'}{2}\right)ds\right] \tag{15.20}$$

The last term in the exponent comes from the Jacobian $|\partial\zeta/\partial x|$. Path-dependent observables can be averaged in a path integral that requires a path-independent normalization such that the sum of the weights over all paths is one (Seifert, 2012).
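As a numerical cross-check of Eqns (15.18) and (15.19) (a sketch, reusing the same illustrative harmonic-trap parameters as above), an explicit finite-difference integration relaxes an arbitrary initial distribution to the Boltzmann form $p \propto \exp(-V/T)$:

```python
import numpy as np

mu, T = 1.0, 1.0
D = T * mu                              # Einstein relation
k = 2.0                                 # illustrative trap stiffness, V(x) = k x^2 / 2
x = np.linspace(-4, 4, 401)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D                    # small step for explicit-scheme stability
F = -k * x

p = np.exp(-(x - 1.5)**2)               # arbitrary nonstationary initial condition
p /= p.sum() * dx

for _ in range(20000):
    J = mu * F * p - D * np.gradient(p, dx)      # probability current, Eqn (15.19)
    p = p - dt * np.gradient(J, dx)              # ds p = -dx J, Eqn (15.18)
    p = np.clip(p, 0.0, None)
    p /= p.sum() * dx                            # re-normalize against numerical drift

p_eq = np.exp(-0.5 * k * x**2 / T)
p_eq /= p_eq.sum() * dx
print(np.abs(p - p_eq).max())                    # small after relaxation
```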


15.3.3 Generalized Fokker–Planck equation

The local isothermal entropy source strength $\sigma(x,t)$ at time t and temperature T for the intrinsic irreversible processes of the nonequilibrium system gives

$$\frac{d_iS}{dt} = \int \sigma(x,t)\,dx = \frac{1}{T}\int J(x,t)\,X(x,t)\,dx \geq 0 \tag{15.21}$$

The factor (1/T) refers to a force X related to an external field or a chemical affinity. The probability density p(x,t) of the nonequilibrium system evolves as in Eqn (15.18):

$$\frac{\partial p(x,t)}{\partial t} = -\frac{\partial J(x,t)}{\partial x} \tag{15.22}$$

By introducing a positive definite phenomenological coefficient L that may depend on microscopic properties of the system, including the state variable x and the probability density p (Frank, 2002), we have a linear relationship between the flux (current) J and the force X:

$$J(x,t) = L(x,p)\,X(x,t) \tag{15.23}$$

where $L(x,p) = \mu(x,p)\,p(x,t)$ with a positive definite coefficient $\mu(x,p)$. Substitution of the equation above into Eqn (15.21) yields

$$\frac{d_iS}{dt} = \frac{1}{T}\int L(x,p)\,[X(x,t)]^2\,dx \geq 0 \tag{15.24}$$

Meanwhile, substitution of Eqn (15.23) into Eqn (15.22) yields

$$\frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}\left[L(x,p)\,X(x,t)\right] \tag{15.25}$$

For a closed system, using $dS = d_eS + d_iS$, $G = U - TS$, and assuming $dU = T\,d_eS$, we have $dG = -T\,d_iS$:

$$\frac{dG}{dt} = -\int J(x,t)\,X(x,t)\,dx \tag{15.26}$$

Considering a drift force h(x) described by the potential $V(x) = -\int h(x)\,dx$, with the averaged energy of the system (dynamics as an overdamped motion) $U(t) = \int V(x)\,p(x,t)\,dx$, a free-energy functional may be

$$G[p] = \int V(x)\,p(x,t)\,dx - T\,S[p] \tag{15.27}$$

where $S[p] = B\{\int \tilde{S}(p(x,t))\,dx\}$, with B(z) describing a monotonically increasing entropy scale and $\tilde{S}(z)$ the entropy kernel, which is a concave function. Differentiating G[p] with respect to t and comparing with Eqn (15.26) yields the thermodynamic force X(x,t):

$$X(x,t) = T\,r[p]\,\frac{\partial}{\partial x}\left.\frac{d\tilde{S}}{dz}\right|_{z=p} + h(x) \tag{15.28}$$

where $r[p] = \left.\dfrac{dB}{dz}\right|_{z=\int \tilde{S}(p(x,t))dx}$. For the Boltzmann–Gibbs entropy $S[p] = -\int p\ln p\,dx$, the thermodynamic force becomes

$$X(x,t) = -T\,\frac{\partial}{\partial x}\ln p + h(x) \tag{15.29}$$

Using Eqns (15.23) and (15.25) with the force given above, we have

$$\frac{\partial}{\partial t}p(x,t) = -\frac{\partial}{\partial x}\left\{\mu(x,p)\left[h(x)\,p(x,t) - T\,r[p]\,\frac{\partial}{\partial x}\hat{L}\,\tilde{S}(p(x,t))\right]\right\} \tag{15.30}$$

where the operator is $\hat{L}[c(y)] = c - y\,dc/dy$. For $\mu = 1$, we have

$$\frac{\partial}{\partial t}p(x,t) = -\frac{\partial}{\partial x}\left[h(x)\,p(x,t)\right] + T\,\frac{\partial^2}{\partial x^2}p(x,t) \tag{15.31}$$

The equation above is the conventional linear Fokker–Planck equation for stochastic processes with additive noise and the noise strength measured in terms of the temperature T.
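The reduction from Eqn (15.30) to Eqn (15.31) hinges on the identity $\hat{L}[\tilde{S}](p) = p$ for the Boltzmann–Gibbs kernel; a short symbolic check (a sketch using sympy):

```python
import sympy as sp

p = sp.symbols('p', positive=True)
S_tilde = -p * sp.log(p)                                 # Boltzmann–Gibbs entropy kernel
L_hat = sp.simplify(S_tilde - p * sp.diff(S_tilde, p))   # operator L[c](y) = c - y dc/dy
print(L_hat)   # prints p, so Eqn (15.30) collapses to the linear Eqn (15.31)
```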

Example 15.1 Fokker–Planck equation for Brownian motion in a temperature gradient: short-term behavior of the Brownian particles

Discuss the Fokker–Planck equation for Brownian motion in a temperature gradient.

Solution: By applying the nonequilibrium thermodynamics of internal degrees of freedom to Brownian motion in a temperature gradient, the Fokker–Planck equation may be obtained. The Brownian gas has an internal degree of freedom, which is the velocity v of a Brownian particle (Pérez-Madrid et al., 1995). The probability density of the Brownian particles in velocity-coordinate space is $f(v,r,t) = \rho(v,r,t)/m$, where r is the position, t is the time, $\rho$ is the mass density, and m is the mass of a Brownian particle. The mass density of a system consisting of the Brownian gas and a heat bath is

$$\rho = \rho_H + \rho_B = \rho_H + m\int f\,dv$$

For a constant $\rho_H$, the Gibbs equation is

$$d(\rho s) = \frac{1}{T}\,d(\rho e) - \frac{m}{T}\int \mu\,df\,dv \tag{a}$$

where d represents the total differential of a quantity, $\mu(v,r,t)$ is the chemical potential of the Brownian gas component with internal coordinate v, and T(r) is the temperature of the bath at position r. The chemical potential is related to the energy (e) and entropy (s) per unit mass by

$$\rho e - T\rho s + P = \int \mu\,\rho\,dv + \rho_H\mu_H$$

Here, $\mu_H$ is the chemical potential of the heat bath and P is the hydrostatic pressure. The mass, energy, and entropy balance equations are needed. The rate of change of the probability density with time is

$$\frac{\partial f}{\partial t} = -\dot{r}\cdot\frac{\partial f}{\partial r} - \frac{\partial}{\partial v}\cdot J_v = -v\cdot\frac{\partial f}{\partial r} - \frac{\partial}{\partial v}\cdot J_v \tag{b}$$

The conservation of mass for the Brownian particles (B) is obtained by integrating the equation above:

$$\frac{\partial \rho_B}{\partial t} = -\nabla\cdot(\rho_B v_B)$$

where $v_B$ is the average velocity of the Brownian particles, obtained from

$$v_B(r,t) = \frac{1}{\rho_B}\int \rho v\,dv$$

The energy conservation is

$$\frac{\partial(\rho e)}{\partial t} = -\nabla\cdot J_q \tag{c}$$

where $J_q$ is the heat flow in the reference frame in which the heat bath is at rest. The entropy balance equation is derived assuming that the gas is at local equilibrium. We also assume that the suspension of Brownian particles in the heat bath may be treated as a multicomponent ideal solution. Differentiating Eqn (a), using the conservation of mass and energy, and the chemical potential

$$\mu(v,r,t) = \frac{k_BT}{m}\ln f + \frac{1}{2}v^2$$

the rate of change of entropy per unit volume is obtained as

$$\frac{\partial(\rho s)}{\partial t} = -\nabla\cdot J_s + \sigma$$

where the entropy flow $J_s$, the entropy source strength $\sigma$, and the modified heat flow $J'_q$ are obtained from the second law of thermodynamics:

$$J_s = \frac{J'_q}{T} - k_B\int f(\ln f - 1)\,v\,dv \tag{d}$$

$$\sigma = -\frac{J'_q}{T^2}\cdot\nabla T - k_B\int J_v\cdot\frac{\partial}{\partial v}\ln\left(\frac{f}{f_{l,eq}}\right)dv \tag{e}$$

$$J'_q = J_q - \frac{1}{2}m\int v^2\,v\,f\,dv$$

One of the contributions to the modified heat flow is the motion of the Brownian particles. The entropy source strength is due to heat flow and due to diffusion in velocity space (the internal degree of freedom), which is the contribution of the motion of the Brownian particles in the heat bath.

Example 15.2 Phenomenological equations

Derive the phenomenological equations for the system considered in Example 15.1.

Solution: Since the system is isotropic and assuming locality in velocity space, and using the linear nonequilibrium formulations based on the entropy production relation in Eqn (e) of Example 15.1, we have the linear phenomenological equations (Pérez-Madrid et al., 1995):

$$J'_q = -L_{qq}\frac{\nabla T}{T^2} - k_B\int L_{qv}\,\frac{\partial}{\partial v}\ln\left(\frac{f}{f_{l,eq}}\right)dv \tag{f}$$

$$J_v = -L_{vq}\frac{\nabla T}{T^2} - k_BL_{vv}\,\frac{\partial}{\partial v}\ln\left(\frac{f}{f_{l,eq}}\right) \tag{g}$$

The Onsager relations yield $L_{vq} = L_{qv}$. With a heat conduction coefficient expressed by $\lambda = \dfrac{L_{qq}}{T^2}$ and the friction coefficients $\gamma = \dfrac{L_{vq}}{fT}$ and $\beta = \dfrac{mL_{vv}}{fT}$, Eqns (f) and (g) become

$$J'_q = -\lambda\nabla T - m\int \gamma\left(fv + \frac{k_BT}{m}\frac{\partial f}{\partial v}\right)dv \tag{h}$$

$$J_v = -\gamma f\frac{\nabla T}{T} - \beta\left(fv + \frac{k_BT}{m}\frac{\partial f}{\partial v}\right) \tag{i}$$

Assuming that the coefficients $\beta$ and $\gamma$ are independent of v, and using Eqn (i) in Eqn (b), the Fokker–Planck equation for Brownian motion in a heat bath with a temperature gradient is obtained:

$$\frac{\partial f}{\partial t} = -v\cdot\frac{\partial f}{\partial r} + \beta\frac{\partial}{\partial v}\cdot\left(fv + \frac{k_BT}{m}\frac{\partial f}{\partial v}\right) + \frac{\gamma}{T}\frac{\partial}{\partial v}\cdot\left(f\,\frac{\partial T}{\partial r}\right) \tag{j}$$

For times larger than the characteristic time $\beta^{-1}$, the system is in the diffusion and thermal diffusion regime.

Example 15.3 The thermal diffusion regime

Analyze the thermal diffusion regime for the system considered in Example 15.1.

Solution: Conservation of momentum may be used to simplify the equation of motion for the Brownian gas for long-time behavior, $t \gg \beta^{-1}$. In this regime, the Brownian gas will have reached internal equilibrium with the heat bath. Using the mean velocity definition $v_B(r,t) = \frac{1}{\rho_B}\int \rho v\,dv$ and the continuity equation (Eqn (b), Example 15.1), the equation of motion for the mean velocity becomes (Pérez-Madrid et al., 1995)

$$\rho_B\frac{dv_B}{dt} = -\nabla\cdot P_B(r,t) + m\int J_v\,dv \tag{k}$$

where $P_B$ is the pressure tensor given by

$$P_B(r,t) = m\int f\,(v - v_B)(v - v_B)\,dv$$

and the substantial derivative is

$$\frac{d}{dt} = \frac{\partial}{\partial t} + v_B\cdot\frac{\partial}{\partial r}$$

By substituting Eqn (i) into Eqn (k), the equation of motion becomes

$$\frac{dv_B}{dt} + \frac{1}{\rho_B}\nabla\cdot P_B(r,t) + \gamma\frac{\nabla T}{T} = -\beta v_B \tag{l}$$

For the Brownian gas at internal equilibrium, the distribution function is approximated by

$$f(v,r,t) \simeq f_{i,eq} = \exp\left[\frac{m}{k_BT}\left(\mu_B - \frac{1}{2}(v - v_B)^2\right)\right] \tag{m}$$

and the pressure tensor reduces to the gas pressure $P_B$: $\mathbf{P}_B = P_B\,U$ with $P_B = \rho_Bk_BT/m$, where U is the unit tensor. The inertia term on the left side of Eqn (l) can be neglected, and we have

$$J_D = \rho_Bv_B = -D\nabla\rho_B - D_T\frac{\nabla T}{T} \tag{n}$$

where the diffusion coefficient D and the thermal diffusion coefficient $D_T$ are defined by

$$D = \frac{k_BT}{m\beta}; \qquad D_T = \rho_BD\left(1 + \frac{\gamma m}{k_BT}\right)$$

With $f_{l,eq}(v,r,t) = \exp\left[\frac{m}{k_BT}\left(\mu_B - \frac{1}{2}v^2\right)\right]$, Eqn (m), and $m\int J_v\,dv = -\nabla P_B$, the entropy production equation becomes

$$\sigma = -J_q\cdot\frac{\nabla T}{T^2} - J_D\cdot\frac{\nabla P_B}{\rho_BT} \tag{o}$$

Using the relation $P_B = \rho_Bk_BT/m$, the equation above becomes

$$\sigma = -J'_q\cdot\frac{\nabla T}{T^2} - J_D\cdot\frac{(k_B/m)\nabla\rho_B}{\rho_B}$$

where the modified heat flux is

$$J'_q = J_q + \frac{P_B}{\rho_B}J_D = -\lambda'\nabla T - D_TT\frac{k_B}{m}\frac{\nabla\rho_B}{\rho_B}$$

and the heat conduction coefficient $\lambda'$ is defined by $\lambda' = \lambda + \dfrac{k_B}{m}\dfrac{D_T^2}{D\rho_B}$. Eqn (o) identifies the conjugate forces and flows in ordinary space for which the Onsager relations hold, and the linear phenomenological equations become

$$J'_q = -\lambda'\nabla T - D_TT\frac{k_B}{m}\frac{\nabla\rho_B}{\rho_B} = -L_{qq}\nabla T - L_{qD}\nabla\rho_B$$

$$J_D = -D_T\frac{\nabla T}{T} - D\nabla\rho_B = -L_{Dq}\nabla T - L_{DD}\nabla\rho_B$$

15.3.4 Nonequilibrium steady state

A nonequilibrium steady state occurs if time-independent but nonconservative forces F(x) act on the system at a time-independent control parameter $\lambda$. For such a system, the steady current is

$$J^s = \mu F(x)\,p^s(x) - D\,\partial_x p^s(x) = v^s(x)\,p^s(x) \tag{15.32}$$

where $v^s(x)$ is the mean local velocity. The time-independent distribution $p^s(x,\lambda)$ in terms of a nonequilibrium potential $\phi(x,\lambda)$ is

$$p^s(x,\lambda) = \exp[-\phi(x,\lambda)] \tag{15.33}$$

For solving the equation above, quadratures may be used in one dimension and the Fokker–Planck equation (with vanishing right-hand side) for more degrees of freedom. For f = 0, the stationary state is the thermal equilibrium

$$p^s(x,\lambda) = \exp[-(V(x,\lambda) - G(\lambda))/T] \tag{15.34}$$

with the free energy given by

$$G(\lambda) = -T\ln\int \exp[-V(x,\lambda)/T]\,dx \tag{15.35}$$

The nonequilibrium current $J^s$ leads to a mean entropy production rate

$$\frac{\langle\Delta s_{tot}\rangle}{t} = \int dx\,\frac{(J^s)^2}{D\,p^s} \tag{15.36}$$
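A small sketch of Eqns (15.32) and (15.36) for perhaps the simplest nonequilibrium steady state: a particle driven by a constant nonconservative force f around a ring (all parameters illustrative, $k_B = 1$). The stationary distribution is uniform, the current is nonvanishing, and the entropy production rate evaluates to $\mu f^2/T$:

```python
import numpy as np

mu, T, f, L = 1.0, 1.0, 0.8, 2 * np.pi   # hypothetical constant drive f on a ring of length L
D = mu * T                               # Einstein relation

x = np.linspace(0, L, 400, endpoint=False)
dx = x[1] - x[0]
ps = np.ones_like(x) / L                 # stationary distribution: uniform on the ring
Js = mu * f * ps - D * np.gradient(ps, dx)   # steady current, Eqn (15.32)

# Mean entropy production rate, Eqn (15.36): integral of (J^s)^2 / (D p^s)
sigma = np.sum(Js**2 / (D * ps)) * dx
print(sigma, mu * f**2 / T)              # equals mu f^2 / T for this flat potential
```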

A system can be driven from one nonequilibrium steady state $\phi_1(x)$ to another $\phi_2(x)$ by a time-dependent force f(s). In such a transition, the heat splits into two contributions, $q = q_{hk} + q_{ex}$, where $q_{hk}$ is the housekeeping heat dissipated in maintaining the nonequilibrium steady state; for a Langevin dynamics, it is (Seifert, 2012)

$$q_{hk} = \int_0^t \frac{\dot{x}(s)\,v^s(x(s),\lambda(s))}{\mu}\,ds \quad \text{with} \quad \langle\exp[-q_{hk}/T]\rangle = 1 \tag{15.37}$$

The excess heat $q_{ex}$ is the heat associated with changing the external control parameter:

$$q_{ex} = -(D/\mu)\int_0^t \dot{x}(s)\,\partial_x\phi(x,\lambda)\,ds = T\left(-\Delta\phi + \int_0^t \dot{\lambda}\,\partial_\lambda\phi\,ds\right) \tag{15.38}$$

The excess heat satisfies $\langle\exp\{-[q_{ex}/T + \Delta\phi(x,\lambda)]\}\rangle = 1$.

15.3.5 Generalized Jarzynski relation

The work spent in driving the system from an initial equilibrium state at $\lambda_0$ via a time-dependent potential $V(x,\lambda(s))$ for a time t obeys (Jarzynski, 1997)

$$\langle\exp(-W/T)\rangle = \exp(-\Delta G/T) \tag{15.39}$$

where $\Delta G \equiv G(\lambda_t) - G(\lambda_0)$ is the free-energy difference between the equilibrium states corresponding to the final value $\lambda_t$ of the control parameter and the initial state. This relation allows the free-energy difference (an equilibrium property) to be determined from nonequilibrium measurements or simulations. Its validity requires that one starts in the equilibrium distribution, but not that the system has relaxed at time t into the new equilibrium. Within stochastic dynamics, the Jarzynski relation assumes that the noise in the Langevin equation is not affected by the driving force. The Jarzynski relation expresses the free-energy difference between an initial and a final state by an exponential average of the nonequilibrium work spent in such a transition. It does not explicitly require a definition of entropy on the level of a single trajectory, although one obtains a second-law-like inequality for the average work as a mathematical consequence. The concept of entropy of a single trajectory creates an opportunity to derive equalities different from, but related to, the Jarzynski relation for the total entropy change directly (Schmiedl et al., 2007).
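A simulation sketch of Eqn (15.39) for the standard test case of a harmonic trap dragged at constant speed (parameters illustrative, $k_B = 1$). Since translating the trap does not change $G(\lambda)$, $\Delta G = 0$, so the exponential work average should be unity even though the mean work is positive:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, T, k, u = 1.0, 1.0, 1.0, 1.0     # mobility, temperature, trap stiffness, drag speed
D = mu * T
dt, nsteps, ntraj = 1e-3, 2000, 20000

# start in equilibrium in the trap at lambda = 0; V(x, lam) = k (x - lam)^2 / 2
x = rng.normal(0.0, np.sqrt(T / k), ntraj)
W = np.zeros(ntraj)
lam = 0.0
for _ in range(nsteps):
    # work increment (dV/dlambda) * lambda_dot * dt = -k (x - lam) u dt, cf. Eqn (15.41)
    W += -k * (x - lam) * u * dt
    F = -k * (x - lam)
    x += mu * F * dt + np.sqrt(2 * D * dt) * rng.standard_normal(ntraj)
    lam += u * dt

# Jarzynski, Eqn (15.39): <exp(-W/T)> = exp(-dG/T); here dG = 0 for a translated trap
print(np.mean(np.exp(-W / T)), "vs exp(-dG/T) = 1")
print("mean work <W> =", W.mean(), ">= dG = 0")
```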

15.3.6 Stochastic energy and entropy

The Langevin dynamics can be applied to an individual fluctuating trajectory. The convention in the first law, $dW = dU + dq$, is that work applied to the system is positive, as is heat transferred into the environment. For a particle in equilibrium (f = 0 and constant $\lambda$), no work is applied to the system, and hence an increase in internal energy, defined by the position in the potential, $dU = (\partial_xV)\,dx = -dq$, must be associated with heat taken up from the reservoir. Applying work to the particle requires a time-dependent potential $V(x,\lambda(s))$ and/or an external force $f(x,\lambda(s))$. The increment of work applied to the particle becomes $dW = (\partial V/\partial\lambda)\,d\lambda + f\,dx$, where the first term arises from changing the potential at fixed particle position. The heat dissipated into the medium is

$$dq = dW - dV = F\,dx \tag{15.40}$$


This relation shows that in an overdamped system the total force times the displacement corresponds to dissipation. Integrated over a time interval t, we have along individual trajectories (Seifert, 2012)

$$W[x(s)] = \int_0^t \left[(\partial V/\partial\lambda)\,\dot{\lambda} + f\,\dot{x}\right]ds \tag{15.41}$$

$$q[x(s)] = \int_0^t F\,\dot{x}\,ds = \int_0^t \dot{q}\,ds \tag{15.42}$$

and the integrated first law becomes

$$W[x(s)] = q[x(s)] + \Delta V = q[x(s)] + V(x_t,\lambda_t) - V(x_0,\lambda_0) \tag{15.43}$$

The heat dissipated along the trajectory x(s) is

$$q[x(s)] = T\ln\frac{p[x(s);\lambda(s)]}{p[\tilde{x}(s);\tilde{\lambda}(s)]} \tag{15.44}$$

This ratio compares the weight of the trajectory starting at its initial point $x_0$ to the weight of the time-reversed trajectory $\tilde{x}(s) = x(t - s)$ under the reversed protocol $\tilde{\lambda}(s) = \lambda(t - s)$, starting at $x_t$. We can also identify entropy along an individual trajectory. For a simple colloidal particle, the entropy has two contributions. The first is an increase in the entropy of the medium due to the heat dissipated into the environment:

$$\Delta S_m[x(s)] = \frac{q[x(s)]}{T} \tag{15.45}$$

The second is the stochastic- or trajectory-dependent entropy of the system:

$$S(s) = -\ln p(x(s),s) \tag{15.46}$$

where the probability p(x,s) is obtained by first solving the Fokker–Planck equation and then evaluating it along the stochastic trajectory x(s). Thus, the stochastic entropy depends not only on the individual trajectory but also on the ensemble. In equilibrium (for $f \equiv 0$ and constant $\lambda$) and in a quasistatic transition, we have $\Delta S_{tot} = 0$ and thus $\Delta S_m = -\Delta S$. The stochastic entropy S(s) obeys the relation $TS = U - G$ along the fluctuating trajectory at any time in the form

$$TS(s) = V(x(s),\lambda) - G(\lambda) \tag{15.47}$$

with the free energy defined in Eqn (15.35): $G(\lambda) = -T\ln\int \exp[-V(x,\lambda)/T]\,dx$. Along a single stochastic trajectory, the usual thermodynamic relations thus remain valid for ensemble averages in equilibrium (Schmiedl and Seifert, 2007). Using the Fokker–Planck equation and $D = T\mu$, with the Boltzmann constant set to unity, the rate of change of the trajectory-dependent total entropy of the system becomes

$$\dot{S}_{tot}(s) = \dot{S}_m(s) + \dot{S}(s) = -\left.\frac{\partial_s p(x,s)}{p(x,s)}\right|_{x(s)} + \left.\frac{J(x,s)}{D\,p(x,s)}\right|_{x(s)}\dot{x} \tag{15.48}$$

The first term on the right-hand side shows a change in p(x,s), which can be due to relaxation from a nonstationary initial state.
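A sketch of the trajectory-dependent entropy of Eqn (15.46) for force-free diffusion from a Gaussian initial distribution (parameters illustrative, $k_B = 1$). Here $q = 0$, so $\Delta S_{tot} = \Delta S$; the Fokker–Planck solution stays Gaussian, individual trajectories can have $\Delta S_{tot} < 0$, but the average is positive:

```python
import numpy as np

rng = np.random.default_rng(2)
D, t, ntraj, sig0 = 1.0, 1.0, 100_000, 0.5

x0 = rng.normal(0.0, sig0, ntraj)                           # initial ensemble
x = x0 + np.sqrt(2 * D * t) * rng.standard_normal(ntraj)    # exact propagator for F = 0

def ln_p(xv, s):
    var = sig0**2 + 2 * D * s        # Gaussian solution of the Fokker–Planck equation
    return -0.5 * np.log(2 * np.pi * var) - xv**2 / (2 * var)

# stochastic entropy, Eqn (15.46): S(s) = -ln p(x(s), s); here dS_m = q/T = 0
dS_tot = -ln_p(x, t) + ln_p(x0, 0.0)
print(dS_tot.mean(), 0.5 * np.log(1 + 2 * D * t / sig0**2))  # <dS_tot> = (1/2) ln(var_t/var_0)
print((dS_tot < 0).mean())       # a finite fraction of trajectories has dS_tot < 0
```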


15.4 Fluctuation theorems

The laws of conventional thermodynamics involve averages of the physical properties of macroscopic systems but ignore their fluctuations. In particular, the second law for irreversible processes states that the average entropy produced internally in an irreversible process has to be positive. Linear nonequilibrium thermodynamics predicts that there will be spontaneous entropy production in nonequilibrium systems when they are in the vicinity of equilibrium with local equilibrium holding. This entropy production is characterized by the entropy source strength $\sigma$, which defines the rate of entropy production per unit volume: $\int \sigma(r,t)\,dr = \int \sum_i J_i(r,t)\,X_i(r,t)\,dr > 0$, where $J_i$ is a flux (flow or current), $X_i$ is the conjugate thermodynamic force, r is the position, and t is the time. The fluctuation theorem relates the probability $p(\sigma_s)$ of observing a phase-space trajectory with entropy production rate $\sigma_s$ over the time interval s to that of observing a trajectory with entropy production rate $-\sigma_s$: specifically, $p(\sigma_s)/p(-\sigma_s) = \exp(s\sigma_s/k_B)$. This result describes how the probability of violations of the second law of thermodynamics becomes exponentially small as s or the system size increases. Fluctuation theorems exist for the fluctuations of thermodynamic properties in nonequilibrium stationary as well as transient states; they refine the laws of thermodynamics by taking fluctuations into account. One of the main concerns in statistical thermodynamics is to describe how reversible microscopic equations of motion produce irreversible macroscopic behavior. One can study the macroscopic behavior of macroscopic systems by considering just one of the very large number of microstates that satisfy the macroscopic properties and then solving the equations of motion for this single microscopic representative trajectory. For an arbitrarily large ensemble of experiments starting from some initial time t = 0, a consequence of the fluctuation theorem is that the ensemble average of the entropy production cannot be negative for any value of the averaging time t: $\langle\sigma_t\rangle \geq 0$. This inequality is called the second law inequality. It can be proved for systems with time-dependent fields of arbitrary magnitude and time dependence; however, it does not imply that the ensemble-averaged entropy production is nonnegative at all times. Assume that a finite system is in contact with a heat bath at constant temperature and driven away from equilibrium by some external time-dependent force. Many nonequilibrium statistical analyses are available for systems in the vicinity of equilibrium. The exceptions are the fluctuation theorems, which are related to the entropy production and are valid for systems far from global equilibrium. Systems far from global equilibrium are stochastic in nature, with varying spatial scales and timescales. The fluctuation theorem relates the probability distributions of the time-averaged irreversible entropy production $\sigma_t$. The theorem states that, in systems away from equilibrium over a finite time t, the ratio between the probability that $\sigma_t$ takes on a value A and the probability that it takes the opposite value, $-A$, is exponential in At. For a nonequilibrium system over a finite time, the fluctuation theorem thus quantifies the probability that entropy flows in a direction opposite to that dictated by the second law of thermodynamics. Mathematically, the fluctuation theorem is expressed as

$$\frac{p(\sigma_t = A)}{p(\sigma_t = -A)} = e^{At} \tag{15.49}$$

The fluctuation theorem shows the exponentially declining probability of deviations from the second law of thermodynamics as time increases. As the time or the system size increases (since $\sigma$ is extensive), the probability of observing an entropy production opposite to that dictated by the second law decreases exponentially. The fluctuation theorem thus shows that the second law holds for large systems observed for long periods of time, and it provides quantified information on the probability of observing second law violations in small systems observed over short times. The fluctuation theorem depends on the following assumptions. The system is finite and coupled to a set of baths, each characterized by a constant intensive parameter. The dynamics are required to be stochastic, Markovian, and microscopically reversible. The probabilities of the time-reversed paths decay faster than the probabilities of the paths themselves, and the thermodynamic entropy production arises from the breaking of the time-reversal symmetry of the dynamical randomness. Self-organizing processes of biochemical cycles produce less entropy, leading to a higher probability of the opposite values of the fluxes (negative entropy production) and an increased capacity for collective behavior and robustness (Andrieux and Gaspard, 2006, 2007). The fluctuation theorem deals with fluctuations, and since the statistics of fluctuations differ between statistical ensembles, the fluctuation theorem is really a set of closely related theorems. Some theorems consider nonequilibrium steady-state fluctuations, while others consider transient fluctuations. One of the fluctuation theorems states that in a time-reversible dynamical system in contact with a constant-temperature heat bath, the fluctuations in the time-averaged irreversible entropy production in a nonequilibrium steady state satisfy Eqn (15.49) (Evans and Searles, 2002).
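A direct numerical check of Eqn (15.49) (a sketch, with illustrative rates and $k_B = 1$): in a driven jump process with forward rate $w_+ = w_0e^{A/2}$ and backward rate $w_- = w_0e^{-A/2}$, each net forward jump produces entropy A, and the histogram of the accumulated entropy production obeys the fluctuation theorem:

```python
import numpy as np

rng = np.random.default_rng(3)
A, w0, t, ntraj = 1.0, 1.0, 5.0, 200_000
wp, wm = w0 * np.exp(A / 2), w0 * np.exp(-A / 2)   # forward/backward jump rates

# net number of forward jumps in time t; each net jump produces entropy A (kB = 1)
k = rng.poisson(wp * t, ntraj) - rng.poisson(wm * t, ntraj)

# detailed fluctuation theorem, Eqn (15.49): p(sigma = +s) / p(sigma = -s) = exp(s)
vals, counts = np.unique(k, return_counts=True)
for kk in range(1, 4):
    pp = counts[vals == kk][0] / ntraj
    pm = counts[vals == -kk][0] / ntraj
    print(f"s = {A*kk:.1f}: ln[p(+s)/p(-s)] = {np.log(pp/pm):.3f} (expected {A*kk:.1f})")
```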

15.4.1 Transient fluctuation theorems

The transient fluctuation theorem applies to the transient response of a system. It bridges the microscopic and macroscopic domains and links the time-reversible and irreversible descriptions of processes. For transient fluctuations, the time averages are calculated from time zero, with a known initial distribution function, until a finite time. The initial distribution function may be, for example, one of the equilibrium distribution functions of statistical mechanics. So, for arbitrary averaging times, the transient fluctuation theorems are exact. The transient fluctuation theorem describes how irreversible macroscopic behavior evolves from time-reversible microscopic dynamics as either the observation time or the system size increases. It also shows how the entropy production can be related to the forward and backward dynamical randomness of the trajectories or paths of systems, as characterized by the entropies per unit time. We may define the volumetric (total) irreversible entropy production rate by $\omega(t) = \int_V \sigma(r,t)\,dV/k_B$ and denote an average over all fluctuations in which the time-integrated entropy production is negative by $\langle\ldots\rangle_{\omega_t<0}$; the transient fluctuation theorem then shows that the ratio of probabilities for a finite system observed for a finite time satisfies (Evans and Searles, 2002)

$$\frac{p(\omega_t > 0)}{p(\omega_t < 0)} = \langle\exp(-\omega_t t)\rangle_{\omega_t<0} \tag{15.50}$$

The equation above indicates that the second law is satisfied on average and that the ratio increases with increased time of observation or with system size. The corresponding steady-state form of Eqn (15.50) becomes valid only asymptotically, for example in the limit of long averaging times. The transient fluctuation theorem can be derived and applied exactly to transient trajectory segments. The probability of observing a phase-space point a in an infinitesimal phase-space volume $\delta V_a$ is

$$p(\delta V_a(a(t),t)) = f(a(t),t)\,\delta V_a(a(t),t) \tag{15.51}$$

where f is the normalized phase-space distribution function at the phase a(t) at time t, and the reversible Liouville equation reads

$$\frac{\partial f(a,t)}{\partial t} = -\frac{\partial}{\partial a}\cdot\left[\dot{a}\,f(a,t)\right] \tag{15.52}$$

The Lagrangian form of the equation above becomes

$$\frac{df(a,t)}{dt} = -\Lambda(a)\,f(a,t) \tag{15.53}$$

where $\Lambda(a)$ is the phase-space compression factor arising from the presence of the heat bath at constant temperature. Eqn (15.53) shows that the time-reversible equations of motion are consistent with conservation of the number of ensemble members.


The dissipation function is defined by

$$\int_0^t \Omega(a(s))\,ds = \ln\frac{f(a(0),0)}{f(a(t),0)} - \int_0^t \Lambda(a(s))\,ds \equiv \overline{\Omega}_t\,t \tag{15.54}$$

The transient fluctuation theorem can be derived from the probability ratio of observing a certain time-averaged value of the dissipation function, $\overline{\Omega}_t = A$, and its negative, $\overline{\Omega}_t = -A$:

$$\frac{p(\overline{\Omega}_t = A)}{p(\overline{\Omega}_t = -A)} = \exp(At) \tag{15.55}$$

Here the time averages start from t = 0, with an initial distribution f(a,0), and run to some arbitrary later time t. The equation above is valid for any valid combination of ensemble and dynamics, while the precise expression for $\overline{\Omega}_t$ in Eqn (15.54) is a function of the ensemble/dynamics combination (Evans and Searles, 2002).

15.4.2 Steady-state fluctuation theorems

The steady-state fluctuation theorems hold in the limit of long averaging times, where it becomes exponentially likely that the time-averaged entropy production is positive. This exponential likelihood grows linearly with the system size and the length of the averaging time. Therefore, the second law of thermodynamics is recovered either for large systems or under a long observation time. The steady-state fluctuation theorems follow from the asymptotic form of the fluctuation theorem; they apply asymptotically ($t \to \infty$) to nonequilibrium steady states as

$$\lim_{t/t_0\to\infty}\frac{1}{t}\ln\frac{p(\overline{\Omega}(t_0,t) = A)}{p(\overline{\Omega}(t_0,t) = -A)} = A \tag{15.56}$$

Here the time averages are calculated not from t = 0 but from some later time $t_0$ with $t_0 \ll t$. In Eqn (15.56), $\overline{\Omega}(t_0,t)$ indicates that the time averages are calculated after the relaxation of initial transients, for example, to a nonequilibrium steady state. The probabilities are calculated over an ensemble of long trajectories that are initially characterized by the distribution f(a,0) at t = 0. A random process is ergodic in the mean if the mean of the sample average asymptotically approaches the ensemble mean. It is usually assumed that nonequilibrium steady states are ergodic; in this case, steady-state time averages are independent of the initial starting phase at t = 0.

15.4.3 Crooks fluctuation theorem

Consider a system in thermal contact with a constant-temperature heat bath and driven by a time-dependent process. The Crooks fluctuation theorem (Crooks, 1999) holds for stochastic, microscopically reversible dynamics and is given by

$$\frac{p_f(+\omega)}{p_b(-\omega)} = \exp(+\omega) \tag{15.57}$$

where $\omega$ is the entropy production of the driven system observed over some time interval, $p_f(\omega)$ is the probability distribution of the entropy production, and $p_b(\omega)$ is the probability distribution of the entropy production when the system is driven in the time-reversed manner. The dynamics of the system is stochastic and Markovian. The concepts of microscopic reversibility and detailed balance are different: microscopic reversibility relates the probability of a certain path to that of its reverse, while detailed balance refers to the probabilities of changing states regardless of the particular path. Equation (15.57) is valid for systems starting in equilibrium and driven away from equilibrium for a finite time. It is also valid for systems driven into a time-symmetric nonequilibrium steady state. The Jarzynski relation (Eqn (15.39)) is likewise valid for regimes far from equilibrium. It relates the difference in free energies of two equilibrium ensembles, $\Delta G$, and the work W expended in switching between the ensembles in a finite time:

$$\langle\exp(-W/k_BT)\rangle = \exp(-\Delta G/k_BT) \tag{15.58}$$

Here T is the temperature of the bath and $\langle\ldots\rangle$ is an average over many repetitions of the switching process. When Eqn (15.57) is valid, the following holds:

$$\langle\exp(-\omega)\rangle = \int_{-\infty}^{+\infty} p_f(+\omega)\exp(-\omega)\,d\omega = \int_{-\infty}^{+\infty} p_b(-\omega)\,d\omega = 1 \tag{15.59}$$

Consider a system initially in state A and in contact with an environment at temperature $T = 1/\beta$, where the Boltzmann constant is set to unity. The system depends on x, representing all the dynamical, uncontrolled degrees of freedom, and on some external, controlled, time-dependent parameter $\lambda$. The dynamics of the system satisfies the microscopic reversibility condition

$$\frac{p[x(+t)|\lambda(+t)]}{p[x(-t)|\lambda(-t)]} = \exp\{-\beta q[x(+t),\lambda(+t)]\} \tag{15.60}$$

where $p[x(+t)|\lambda(+t)]$ is the probability, for a given $\lambda(t)$, of following the path x(t) through phase space, $p[x(-t)|\lambda(-t)]$ is the probability of the time-reversed path, and q is the heat transferred to the bath. The heat is path-dependent and odd under time reversal: $q[x(-t),\lambda(-t)] = -q[x(+t),\lambda(+t)]$. The entropy of the nonequilibrium ensemble of systems is $S = -\sum_x p(x)\ln p(x)$; the entropy production for a single occurrence of a process between an initial probability distribution $p(x_{-t})$ and a final distribution $p(x_{+t})$ then becomes

$$\omega = \ln p(x_{-t}) - \ln p(x_{+t}) - \beta q[x(t),\lambda(t)] \tag{15.61}$$

In general, for a system driven by a time-symmetric process, the resulting nonequilibrium steady-state ensemble is invariant under time reversal. This symmetry ensures that the forward and backward processes become indistinguishable; the entropy production is odd under time reversal, and the fluctuation theorem is valid for any integer number of cycles:

$$\frac{p(+\omega)}{p(-\omega)} = \exp(+\omega)$$

which is similar to Eqn (15.49). The Crooks stationary fluctuation theorem relates the entropy production to the dynamical randomness of the stochastic processes; it therefore relates the statistics of fluctuations to nonequilibrium thermodynamics through estimates of the entropy production. The theorem predicts that the entropy production will be positive as either the system size or the observation time increases, and that the probability of observing an entropy production opposite to that dictated by the second law of thermodynamics decreases exponentially.
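A sketch of the Crooks relation in its work form (cf. Eqn (15.65) in Section 15.4.5) using Gaussian work distributions, which satisfy the relation exactly when the variance equals $2T(\langle W\rangle - \Delta G)$; all numbers are illustrative. The forward and reverse distributions cross at $W = \Delta G$, and integrating the relation recovers the identity of Eqn (15.59):

```python
import numpy as np

T, dG = 1.0, 2.0
m = 5.0                       # hypothetical mean forward work, <W> > dG (dissipative)
v = 2 * T * (m - dG)          # a Gaussian p(W) consistent with Crooks has this variance

W = np.linspace(-15, 15, 3001)
dw = W[1] - W[0]
pf = np.exp(-(W - m)**2 / (2 * v)) / np.sqrt(2 * np.pi * v)

# Crooks: p_b(-W) = p_f(W) exp[-(W - dG)/T]; reverse the (symmetric) grid to index by +W
pb = (pf * np.exp(-(W - dG) / T))[::-1]

print("p_b normalized:", pb.sum() * dw)          # ~ 1, so p_f is Crooks-consistent
mask = (W > 2 * dG - m) & (W < m)                # window between the two peaks
i = np.argmin(np.abs(pf[mask] - pb[mask]))
print("p_f and p_b cross at W =", W[mask][i], "; dG =", dG)
print("<exp(-(W - dG)/T)> =", (pf * np.exp(-(W - dG) / T)).sum() * dw)   # ~ 1, Eqn (15.59)
```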

15.4.4 Integral fluctuation theorem

Various transient or steady-state relations of the form of Eqn (15.50) are known as integral fluctuation theorems. Fluctuation theorems express universal properties of the probability distribution $p(\Omega)$ for functionals $\Omega[x(s)]$, like work, heat, or entropy change, evaluated along fluctuating trajectories taken from ensembles with well-specified initial distributions $p_0(x_0)$. A nondimensional functional $\Omega[x(s)]$ with probability distribution function $p(\Omega)$ obeys an integral fluctuation theorem if

$$\langle\exp(-\Omega)\rangle = \int p(\Omega)\exp(-\Omega)\,d\Omega = 1 \tag{15.62}$$

The convexity of the exponential function then implies the inequality $\langle\Omega\rangle \geq 0$, which resembles the second law. The integral fluctuation theorem also implies that there are trajectories for which $\Omega$ is negative, "violating" the second law, except in the degenerate case $p(\Omega) = \delta(\Omega)$. Eqn (15.62) is one constraint on the probability distribution $p(\Omega)$: if $p(\Omega)$ is a Gaussian, the integral fluctuation theorem implies the relation $\langle(\Omega - \langle\Omega\rangle)^2\rangle = 2\langle\Omega\rangle$ between the variance and the mean of $\Omega$ (Seifert, 2012).
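A quick Monte Carlo check of the Gaussian case (a sketch; the mean is arbitrary): drawing $\Omega$ from a Gaussian whose variance is twice its mean reproduces $\langle e^{-\Omega}\rangle = 1$ while leaving a finite fraction of negative-$\Omega$ trajectories:

```python
import numpy as np

rng = np.random.default_rng(4)
mean = 1.3                                               # arbitrary positive mean
omega = rng.normal(mean, np.sqrt(2 * mean), 1_000_000)   # Gaussian with var = 2<Omega>
print(np.mean(np.exp(-omega)))   # ~ 1, the integral fluctuation theorem, Eqn (15.62)
print(np.mean(omega < 0))        # finite fraction of 'second law violating' events
```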

15.4.5 Detailed fluctuation theorem

The probability of second law violations diminishes in a large system, and such violations occur exponentially rarely. This observation reconciles the effective validity of thermodynamics on the macroscale with the fact that, in principle, such violations must occur even on macroscopic scales. Detailed fluctuation theorems of the form

$$p(-\Omega) = p(\Omega)\exp(-\Omega) \tag{15.63}$$

imply that the extent of the violations is small. A variable obeying the detailed fluctuation theorem also obeys either a transient fluctuation theorem or a steady-state fluctuation theorem. The generalized Crooks fluctuation theorem compares the probability of the original process with that of the time-reversed process:

$$p'(-\Omega) = p(\Omega)\exp(-\Omega) \tag{15.64}$$

in which $p'$ is normalized. In the Crooks fluctuation theorem, the distribution p(W) of the work spent in the "forward" process is related to the distribution $\tilde{p}(W)$ of the work applied in the reversed process, where the control parameter is driven according to $\tilde{\lambda}(s) = \lambda(t - s)$ and one starts in the equilibrium distribution:

$$\frac{\tilde{p}(-W)}{p(W)} = \exp[-(W - \Delta G)/T] \tag{15.65}$$

The total entropy change along a trajectory is $\Delta S_{tot} = \Delta S_m + \Delta S$, with $\langle\exp(-\Delta S_{tot})\rangle = 1$. For an arbitrary initial distribution p(x,0), arbitrary time-dependent driving $\lambda(s)$, and an arbitrary length t of the process, $\Delta S$ and $\Delta S_m$ are

$$\Delta S = -\ln p(x_t,\lambda_t) + \ln p(x_0,\lambda_0) \quad \text{and} \quad \Delta S_m[x(s)] = q[x(s)]/T \tag{15.66}$$

In the nonequilibrium steady-state fluctuation theorem, with fixed $\lambda$ and for arbitrary length t, the total entropy production obeys

$$\frac{p(-\Delta S_{tot})}{p(\Delta S_{tot})} = \exp(-\Delta S_{tot}) \tag{15.67}$$

If one includes the entropy change of the system, $S(s) = -\ln p(x(s),s)$, the equation above holds even for finite times in the steady state. Another consequence of the fluctuation theorem is the nonequilibrium partition identity (Carberry et al., 2004): $\langle\exp[-\overline{\sigma}_t t]\rangle = 1$ for all times t. The exponential probability ratio given by the fluctuation theorem cancels the negative exponential, leading to an average that is unity. One important implication of the fluctuation theorem is that small machines, such as nanomachines or even mitochondria in a cell, will spend part of their time actually running in "reverse".


15.5 Information theory

The definition and quantification of information (like that of energy) have created broad discussion. The "information system", with its role in living systems, is a constantly evolving field. Information may be defined as the capacity to reduce statistical uncertainty in the communication of messages between a sender and a receiver. Later, information theory introduced "structural information" or "functional information", which leads to the self-organizing capabilities of living systems, and "instructional information", which is a physical array. Linkages with the field of semiotics have established a much more compatible approach to biological information. Within this trend, "control information" is defined as the capacity to control the acquisition, disposition, and utilization of matter, energy, and information flows in purposive processes (Demirel, 2011; Bauer et al., 2012).

15.5.1 Information capacity and exergy

Exergy, the maximum capacity to perform work, also appears as an information capacity; the free energy of the information that a system possesses is $k_BT\ln I$, where I is the information about the state of the system and $k_B$ is the Boltzmann constant. A relation between exergy and information is

$$Ex = k_B(\ln 2)\,T_o\,I \tag{15.68}$$

A Brownian information machine, consisting of an overdamped particle in a time-dependent harmonic trap, extracts work from information processing. Although these machines differ in essential aspects from conventional thermodynamic ones (Bauer et al., 2012), on average per cycle, by using the information I they extract the work $W_s$. One may define an efficiency $\eta$ as

$$\eta = W_s/I \tag{15.69}$$

The transformation of information from one system to another is often an almost entropy-free energy transfer, and the information capacity I in binary units is expressed as a function of the probability p:

$$I = \frac{1}{\ln 2}\left(\sum_{j=1}^{\Omega} p_j\ln p_j - \sum_{j=1}^{\Omega} p_j^o\ln p_j^o\right) \tag{15.70}$$

where $\Omega$ is the number of possibilities, $p^o$ is the probability at equilibrium (i.e. with no knowledge), and p is the probability when some information is available about the system. Information here is used as a measure of order or structure. With a small amount of exergy in the form of information, we can control processes involving large amounts of energy and matter. The exergy carried as information is a structural exergy. Living systems survive and evolve by transforming solar exergy into complex, highly ordered structures directed and controlled by the information of the genes. One generation transfers the information to the next by deoxyribonucleic acid (DNA) replication. The superiority of biological systems relies on the difference in information transfer techniques between biological and physical systems. Since information must be stored and transported safely, in biological systems this takes place with continuous debugging and control. The specific molecular structures and the unique positions of single atoms in DNA molecules make these systems far more efficient than technological ones.

15.5.2 Information and biological systems

The nucleic acids, the "molecules of genetic information", are digitally organized; they consist of chains of four different units (nucleotides). Nucleic acids consist of a sugar-phosphate backbone with purine or pyrimidine bases attached to the sugar molecules. The basic element of a nucleic acid, called a nucleotide, consists of a phosphate, a sugar, and a base. The bases of DNA are the purines guanine and adenine and the pyrimidines cytosine and thymine. Information is stored in DNA as a sequence of base pairs. Ribonucleic acid (RNA) is a nucleic acid polymer consisting of nucleotide monomers that translates genetic information from DNA into protein products; three types of RNA molecules are involved in the translation and the cooperation of its different functions. RNA is very similar to DNA and serves as the template for the translation of genes into proteins, transferring amino acids to the ribosome to form proteins, and translating the transcript into proteins.

Table 15.1 Information Sources in Biological Systems and Thermodynamics

Information Systems (Sources)              | Thermodynamic Potential
Information                                | Free energy (exergy)
Information inflow (replication)           | Exergy inflow
Information outflow (death)                | Exergy outflow (waste exergy)
Information exchange (sorting)             | deS
Internal information processing (growth)   | diS

Source: Demirel (2011).

Biological systems diversify at bifurcation points as the information within the system becomes too complex and random. The bifurcation points are stimulated by intrinsic mechanisms or informational entropy and are sensitive to controlling parameters. Living systems consist of organized structures and processes of informed, self-replicating, and dissipative autocatalytic cycles. They are capable of funneling energy, mass, and information flows into their own growth, development, and reproduction. More developed dissipative structures are capable of degrading more energy and of processing more complex information through developmental and environmental constraints; this establishes mechanisms for energy coupling in the pathways of chemical cycles and transport systems. Table 15.1 illustrates that the source of information through replication may keep the information system away from equilibrium. The unified theory of evolution attempts to explain the origin of biological order as a manifestation of the flows of energy and information on various spatial and temporal scales. For example, the main task of data processing in Escherichia coli is the production of polypeptides at the time they are needed for the metabolism of the cell. The environment of E. coli determines the selection of what is transcribed. The transcription of the enzymes for the cleavage of the sugar lactose into galactose and glucose is controlled; if the environment contains no lactose, a repressor molecule blocks transcription. Genes, therefore, act as specific informational units with addresses and regulatory support systems to form the required enzymes and regulatory and structural proteins. Information storage in DNA and the transformation of this information into protein clusters play the central role in maintaining thermodynamic stability and information processing. The genome is the primary source of cellular information, but other cellular structures, such as lipids and polysaccharides, may also store and transmit information according to information theory. Besides these, thermodynamic forces in the form of transmembrane gradients of H⁺, Na⁺, K⁺, and Ca²⁺ and the consequent electric potential cause significant displacements from equilibrium and are therefore potential sources of information. The genome–protein system may be one component of a large ensemble of cellular structures that store, encode, and transmit information (Gatenby and Frieden, 2007).

15.5.3 Maximum information entropy The maximum information entropy procedure is the derivation of the Gibbs ensemble in equilibrium statistical mechanics, but the information entropy is not defined by a probability measure on phase space, but rather on path space. The path a information entropy is X SI ¼  pa ln pa (15.71) a


The maximum of S_I under the imposed constraints results in

$$p_a = \frac{1}{Z} \exp A_a \qquad (15.72)$$

where A_a is the path action and Z is the partition sum. From Eqns (15.71) and (15.72), the maximum information entropy as a function of the forces X becomes

$$S_{I,\max}(X) = \ln Z(X) - \langle A(X)\rangle \approx \ln W(\langle A(X)\rangle) \qquad (15.73)$$

where W(⟨A(X)⟩) is the density of paths (Bruers, 2007). The fluctuation theorem allows a general orthogonality property of maximum information entropy to be extended to entropy production. The new derivation highlights maximum entropy production and the fluctuation theorem as generic properties of maximum information entropy probability distributions. Physically, maximum entropy production applies to those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration (Dewar, 2003; Lahiri et al., 2012). The constrained maximization of Shannon information entropy is an algorithm for constructing probability distributions from partial information. Maximum information entropy is a universal method for constructing the microscopic probability distributions of equilibrium and nonequilibrium statistical mechanics. The probability distribution of the microscopic phase-space trajectories over a time s satisfies $p_a \propto \exp(s\sigma_s/2k_B)$, where $\sigma_s$ is the entropy production rate (Dewar, 2003).
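As a concrete illustration of the constrained maximization, the following minimal Python sketch constructs a maximum-information-entropy distribution p_a ∝ exp(λA_a) over a set of discrete paths and solves for the Lagrange multiplier λ that enforces a prescribed mean action; the path actions and the target value are illustrative assumptions, not quantities from the text.

```python
import numpy as np

# Minimal sketch of constrained information-entropy maximization: build
# p_a proportional to exp(lambda*A_a) over a discrete set of "paths" and
# solve for the Lagrange multiplier lambda that enforces a prescribed
# mean action <A>.  The path actions and the target are hypothetical.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 5.0, size=200)       # hypothetical path actions A_a
A_target = 3.0                            # imposed constraint <A>

def mean_action(lam):
    w = np.exp(lam * (A - A.max()))       # shifted for numerical stability
    p = w / w.sum()
    return p @ A

lo, hi = -50.0, 50.0                      # bisection on lambda;
for _ in range(100):                      # <A>(lambda) is monotone increasing
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_action(mid) < A_target else (lo, mid)

lam = 0.5 * (lo + hi)
w = np.exp(lam * (A - A.max()))
p = w / w.sum()
S_I = -np.sum(p * np.log(p))              # information entropy, Eqn (15.71)
print(f"lambda = {lam:.3f}, <A> = {p @ A:.3f}, S_I = {S_I:.3f}")
```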

15.6 Applications: biomolecules and biochemical cycles
For biochemical reactions, an embedding heat bath provides the source of stochastic dynamics. An enzyme or molecular motor stochastically undergoes transitions from one state to another. Energy released by a chemical reaction, like the hydrolysis of one molecule of ATP to ADP and a phosphate, may drive such transitions. The externally maintained nonequilibrium concentrations of these molecules also provide a source of chemical energy to the system. In each transition, this chemical energy is converted into mechanical work, dissipated heat, or changes in the internal energy. Chemical reaction networks consist of a number of possible reactions between different types of reactants. Under nonequilibrium conditions, the concentrations (i.e. chemical potentials) of these reactants are either fixed externally or traced along a trajectory. Then, chemical work, dissipated heat, and changes in internal energy can be identified on the level of a single reaction trajectory. Single macromolecules and biomolecular networks constitute a class of systems to which the concepts of stochastic thermodynamics can be applied. Such molecules are embedded in an aqueous solution of well-defined temperature containing different solutes at specified concentrations. Such networks, describing, for example, gene regulation, signal transduction, or molecular motors, typically operate in a nonequilibrium state created and maintained by time-dependent external mechanical and/or chemical stimuli. Stochastic thermodynamics applies to such systems with a few observable degrees of freedom, like the positions of colloidal particles or the gross conformations of biomolecules. The unobserved degrees of freedom, like those making up the aqueous solution, are assumed to be fast and thus always in the constrained equilibrium imposed by the instantaneous values of the observed slow degrees of freedom. This assumption is sufficient to identify a first-law-like energy balance. The entropy change along such a trajectory consists of three parts: the heat exchanged with the bath, the intrinsic entropy of the states, and the stochastic entropy. The stochastic entropy requires the notion of an ensemble from which the trajectory is taken; likewise, the identification of entropy production requires only this ensemble, which determines the stochastic entropy along an individual trajectory.
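The first-law bookkeeping along a single fluctuating trajectory can be made concrete with a minimal Langevin sketch: an overdamped colloidal particle in a harmonic trap whose center is dragged at constant speed, with work accumulated as the trap moves and heat obtained from the first law. All parameter values below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: first-law bookkeeping along one fluctuating trajectory.
# An overdamped particle in a harmonic trap V(x, l) = k*(x - l)^2/2 whose
# center l(t) = v*t is dragged at constant speed v.  Work accumulates as
# dW = (dV/dl)*dl; heat follows from the first law, q = W - dU.
rng = np.random.default_rng(4)
kT, k, gamma, v = 1.0, 1.0, 1.0, 0.5
dt, n_steps = 1e-3, 100_000
D = kT / gamma                       # Einstein relation

x, W = 0.0, 0.0
U0 = 0.5 * k * x**2                  # initial potential energy (trap at 0)
for i in range(n_steps):
    lam = v * i * dt                 # trap center at this step
    W += k * (lam - x) * v * dt      # dW = (dV/dl)*dl = -k*(x - l)*v*dt
    x += -k * (x - lam) / gamma * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
U1 = 0.5 * k * (x - v * n_steps * dt)**2
q = W - (U1 - U0)                    # heat dissipated into the bath
print(f"W = {W:.3f} kT, dU = {U1 - U0:.3f} kT, q = {q:.3f} kT")
```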

15.6.1 Biomolecules
Many biomolecules are large proteins, and each type of protein consists of a precise sequence of amino acids that allows it to fold into a particular three-dimensional shape, or conformation. The lowest level of hierarchy in protein structure is the polypeptide chain. Under the proper conditions, the polypeptide chain can fold back and form secondary structures known as helices and sheets. These helices and sheets pack against each other and form tertiary, or fully folded, structures. Strong chemical bonds strengthen the polypeptide backbone and lead to stable protein structures. The highest level of hierarchy occurs when proteins interact with each other and balance the internal tensions between counteracting tendencies toward order and disorder. The disorder may be represented by the entropy of the backbone chain and the surrounding solvent molecules, while the order is represented by hydrogen bonds, salt bridges, and disulfide links. Folding may be a mechanism of transporting heat (entropy) away from the protein structure as it moves to a more ordered form. The most stable state of the polypeptide chain in solution occurs at a minimum Gibbs free energy of the complete system, that is, protein structure plus solvent, with the larger number of arrangements of surrounding water molecules. This is known as the hydrophobic effect, or entropy stabilization, and may be due to the set of hydrophobic amino acids in proteins (Karsenti, 2008).

The substance that is bound by a protein is called a ligand. Separate regions of the protein surface generally provide binding sites for different ligands, allowing the protein's activity to be specific and regulated. Proteins reversibly change their shapes when ligands bind to their surfaces. One ligand may affect the binding of another, and metabolic pathways are controlled by feedback regulation in which some ligands inhibit, while others activate, enzymes early in a pathway. The ability to bind other molecules enables proteins to act as catalysts, signal receptors, information processors, switches, motors, or pumps. For example, motor proteins walk in one direction by coupling one of their conformational changes to the hydrolysis of an ATP molecule bound to the protein.

The fluctuating conformations of biomolecules in nonequilibrium can be described either by a continuous degree of freedom subject to a Langevin equation or by identifying discrete, distinguishable states between which sudden transitions take place. This enables one to formulate a first law and to derive the entropy production and the corresponding fluctuation theorems on the single-molecule level. The stochastic thermodynamics of biomolecular systems assumes that the rates are constrained by thermodynamic consistency and that each of the states visited along a stochastic trajectory contains many microstates that are unobserved and fast, so that thermal equilibrium is reached within each state. Transitions between the states, however, are slower, observable, and can be driven by external forces, flows, or chemical gradients. Therefore, each of the states carries an intrinsic entropy arising from the coupling to the fast polymeric degrees of freedom and to those of the heat bath (Seifert, 2012). The thermal bath allows biomolecules to exchange energy with the molecules of the solvent through the breakage of weak molecular bonds that trigger the relevant conformational changes. The energies involved in single biomolecules are a few tens of k_BT, small enough for thermal fluctuations to be relevant in many molecular processes over the associated timescales.
In the thermodynamics of small systems, a control parameter may define the system's state; for example, a motor molecule can be described by an internal configuration {x_i} and a control parameter x, with U({x_i}, x) the internal energy of the system. Upon variation of the control parameter x, energy conservation yields

$$dU = \sum_i \left(\frac{\partial U}{\partial x_i}\right)_x dx_i + \left(\frac{\partial U}{\partial x}\right)_{\{x_i\}} dx = dq + dW \qquad (15.74)$$

The total work done on the system is

$$W = \int_0^{x_f} F(\{x_i\}, x)\, dx \qquad (15.75)$$

where x_f is the value reached by the perturbation after a time t_f, and F({x_i}, x) = (∂U/∂x)_{{x_i}} is the fluctuating force acting on the molecule. Since the force is a fluctuating quantity, W, q, and ΔU will also fluctuate for different trajectories, and the amount of heat or work exchanged with the bath will fluctuate in magnitude and in sign. Therefore, random fluctuations dominate the thermal behavior of small systems. The time evolution of {x_i}, and therefore the force, will change from one perturbation to another, and the system will follow different trajectories. A quantity that characterizes the stochastic nonequilibrium process is the probability distribution of work values, p(W), obtained along different trajectories. The average work over all trajectories, ⟨W⟩ = ∫W p(W) dW, is larger than or equal to the reversible work, which equals the free-energy difference ΔG between the equilibrium states defined at x = x_f and x = 0. If we define the dissipated work along a given trajectory as W_dis = W − ΔG, the second law can be written as ⟨W_dis⟩ ≥ 0. The equality occurs only when the perturbation is carried out infinitely slowly, in a quasistatic process that relaxes to equilibrium at each value of the control parameter. Nonequilibrium processes, on the other hand, are characterized by hysteresis, and the average work performed upon the system differs between a given process and its time-reversed one. Under the assumption of microscopic reversibility (detailed balance), fluctuation theorems relate the entropy production along a given forward process to the backward process by

$$\frac{p_f(W)}{p_b(-W)} = \exp\left(\frac{W_{dis}}{k_B T}\right) \qquad (15.76)$$

where p_f(W) and p_b(W) are the work distributions along the forward and backward processes, respectively. Equation (15.76) indicates that, for any finite time, a steady-state system is more likely to deliver heat to the bath (W positive) than to absorb an equal quantity of heat from the bath (W negative).

The microscopic configurational degrees of freedom, collectively denoted by ξ, are subject to a microscopic potential energy Q(ξ, λ) containing the interactions within the molecule and possibly with some of the surrounding solvent and solute molecules. Under nonequilibrium conditions, an external force applied to the molecule may lead to conformational changes such as a changing end-to-end distance. Such a quantity is an example of a mesoscale description involving a certain number of variables denoted by x. Each such state effectively comprises many microstates {ξ} in classes C_x, such that each ξ belongs to exactly one C_x. The dynamics of x is supposed to be slow and observable, whereas equilibration among the microstates making up one state x is fast, and the conditioned probability p(ξ|x, λ) that a microstate is occupied is given by

$$p(\xi|x,\lambda) = \exp\left[-\left(Q(\xi,\lambda) - G(x,\lambda)\right)/T\right] \qquad (15.77)$$

with the constrained free energy

$$G(x,\lambda) = U(x,\lambda) - T S(x,\lambda) = -T \ln \sum_{\xi \in C_x} \exp\left[-Q(\xi,\lambda)/T\right] \qquad (15.78)$$

the constrained intrinsic entropy

$$S(x,\lambda) = -\partial_T G(x,\lambda) = -\sum_{\xi \in C_x} p(\xi|x,\lambda) \ln p(\xi|x,\lambda) \qquad (15.79)$$

and the constrained internal energy

$$U(x,\lambda) = \sum_{\xi \in C_x} Q(\xi,\lambda)\, p(\xi|x,\lambda) \qquad (15.80)$$
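Returning to the work fluctuation relation, Eqn (15.76): it can be checked numerically with a toy model. The sketch below, using assumed parameter values, draws work values from a Gaussian distribution whose variance is tied to the mean dissipated work (the form consistent with the Crooks relation) and recovers the free-energy difference through the Jarzynski estimator ΔG = −k_BT ln⟨exp(−W/k_BT)⟩.

```python
import numpy as np

# Toy check of the Jarzynski relation <exp(-W/kT)> = exp(-dG/kT) using a
# Gaussian work distribution; such a distribution satisfies Eqn (15.76)
# when its variance equals 2*kT*<W_dis>.  All numbers are illustrative.
kT = 1.0                         # k_B*T in reduced units
dG = 2.0                         # assumed free-energy difference
W_dis = 1.5                      # mean dissipated work <W> - dG

rng = np.random.default_rng(1)
W = rng.normal(loc=dG + W_dis, scale=np.sqrt(2.0 * kT * W_dis), size=200_000)

dG_est = -kT * np.log(np.mean(np.exp(-W / kT)))   # Jarzynski estimator
print(f"<W> = {W.mean():.3f} >= dG = {dG}")       # second law on average
print(f"Jarzynski estimate of dG: {dG_est:.3f}")
```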

Traditionally, the efficiency of molecular motors has been studied within ratchet models, where the motor undergoes continuous motion in a periodic potential that depends on the current chemical state. Dissipation then involves both the continuous degree of freedom and the discrete switching of the potential.


Example 15.4 Cyclic isomerization
Discuss a simple model for a unimolecular cyclic isomerization reaction A ⇌ A′, with forward and backward rate constants k_f and k_b.

Solution: The chemical potentials of the components of the reaction system A ⇌ A′ are

$$\mu_A = \mu_A^o + k_B T \ln c_A \quad \text{and} \quad \mu_{A'} = \mu_{A'}^o + k_B T \ln c_{A'}$$

where μ° is the standard-state chemical potential. When this reaction system is applied to cellular metabolic networks, the concentrations in the chemical potential equations for biochemical species should be understood as activities rather than concentrations. Equilibrium is reached after a long time, and we have

$$\frac{c_{A',eq}}{c_{A,eq}} = \frac{k_f}{k_b} = \exp\left[-\left(\mu_{A'}^o - \mu_A^o\right)/k_B T\right] \qquad \text{(a)}$$

Here k_f and k_b are the forward and backward reaction rate constants, respectively, and k_B is the Boltzmann constant. If, however, the concentrations of A and A′ are maintained by an external mechanism, the system reaches a nonequilibrium steady state at which the concentrations do not change with time yet the flux does not vanish:

$$J_r = J_{rf} - J_{rb} \neq 0 \qquad \text{(b)}$$

where J_rf = k_f c_A and J_rb = k_b c_A′ are the forward and backward fluxes. The chemical potential difference for the reaction system becomes

$$\Delta\mu = \mu_{A'} - \mu_A = \mu_{A'}^o - \mu_A^o + k_B T \ln\frac{c_{A'}}{c_A} \qquad \text{(c)}$$

In terms of the forward and backward fluxes, Eqn (c) becomes

$$\Delta\mu = k_B T \ln\left(\frac{J_{rb}}{J_{rf}}\right) \qquad \text{(d)}$$

The dissipation Φ becomes

$$\Phi = -J_r \Delta\mu = k_B T \left(J_{rf} - J_{rb}\right) \ln\left(\frac{J_{rf}}{J_{rb}}\right) \geq 0 \qquad \text{(e)}$$

The dissipation −J_rΔμ is the amount of work necessary to maintain the nonequilibrium steady state by holding the concentrations fixed, that is, by pumping A molecules in and A′ molecules out. This dissipation leaves the system in the form of heat (Qian and Beard, 2005). The equality in Eqn (e) holds only if J_r = 0 and Δμ = 0, that is, when the system is in equilibrium. The inequality indicates that one cannot convert heat into work from a single temperature source, as the second law of thermodynamics states. The dissipated heat is not an enthalpy difference: since the system is cyclic, the heats of reaction in the forward and backward directions balance. As Eqn (e) shows, for an open biochemical network, fluxes and concentrations are the important observable variables. Spectroscopic measurements show that the concentrations of biochemical species in living cells fluctuate. The concentrations and the standard-state chemical potentials μ° yield the nonequilibrium chemical potentials.

Using Eqn (d) near equilibrium, where (J_r/J_rf) ≪ 1, we have

$$\Delta\mu = k_B T \ln\left(\frac{J_{rb}}{J_{rf}}\right) = k_B T \ln\left(1 - \frac{J_r}{J_{rf}}\right) \approx -k_B T \frac{J_r}{J_{rf,eq}} \quad \left[\ln(1 - x) \approx -x \text{ for } x \ll 1\right]$$

or, in terms of a linear flux–force relationship,

$$J_r = -\frac{J_{rf,eq}}{k_B T}\,\Delta\mu$$
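A few lines of Python make this bookkeeping concrete; the rate constants and clamped concentrations below are illustrative choices, not values from the text.

```python
import numpy as np

# Sketch of the cyclic isomerization A <-> A' at a clamped nonequilibrium
# steady state (Example 15.4).  Rate constants and concentrations are
# illustrative assumptions.
kB_T = 1.0            # k_B*T in reduced units
kf, kb = 2.0, 1.0     # forward/backward rate constants
cA, cAp = 1.0, 0.5    # clamped concentrations of A and A'

Jrf = kf * cA         # forward flux
Jrb = kb * cAp        # backward flux
Jr = Jrf - Jrb        # net flux, Eqn (b)

dmu = kB_T * np.log(Jrb / Jrf)   # Eqn (d)
dissipation = -Jr * dmu          # Eqn (e), always >= 0
print(f"Jr = {Jr:.3f}, dmu = {dmu:.3f} kBT, dissipation = {dissipation:.3f} kBT/s")
```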

15.6.2 Enzymes
Consider an enzyme in an aqueous solution that consists of molecules of type i with concentrations {c_i} and chemical potentials {μ_i}, enclosed in a volume V at a temperature T. The enzyme exhibits a set of states such that equilibration among microstates corresponding to the same state is fast, whereas transitions between these states are assumed to be slower and observable. Under these conditions, one can assign to each state n of the enzyme a free energy G_e,n, an internal energy U_e,n, and an intrinsic entropy S_e,n. They obey the usual thermodynamic relation

$$G_{e,n} = U_{e,n} - T S_{e,n} \qquad (15.81)$$

Three possible changes are: (1) pure conformational changes, (2) enzymatic reactions including the binding and release of solutes, and (3) motor proteins.
1. If the enzyme jumps from state m to state n, the change in internal energy must be identified with an amount q of heat exchanged with the surrounding heat bath, since there is no external work involved:

$$\Delta U_e = U_{e,n} - U_{e,m} = -q \qquad (15.82)$$

2. Enzymatic transitions involve the binding of solute molecules A_i, their transformation while bound, and finally their release from the enzyme. A general transition is

$$n_k^- + \sum_i r_i^o A_i \rightleftharpoons n_k^+ + \sum_i s_i^o A_i \qquad (15.83)$$

where 1 ≤ k ≤ N_k labels the possible transitions. Here, n_k^- and n_k^+ denote the states of the enzyme before and after the reaction, respectively; s_i^o = 0 describes pure binding of solutes, and r_i^o = 0 pure release of bound solutes. The free-energy difference in such a transition has two contributions, one for the enzyme and one for the bath solution:

$$\Delta G_k = \Delta U_k - T\Delta S_k = \Delta G_{e,k} + \Delta G_{sol,k} \qquad (15.84)$$

where

$$\Delta G_{e,k} = \Delta U_{e,k} - T\Delta S_{e,k} = G_{e,n_k^+} - G_{e,n_k^-} \qquad (15.85)$$

$$\Delta G_{sol,k} = \Delta U_{sol,k} - T\Delta S_{sol,k} = \sum_i \left(s_i^o - r_i^o\right)\mu_i \equiv \Delta\mu_k \qquad (15.86)$$

One assigns a first-law-type energy balance to each reaction of type k in Eqn (15.83); the heat released in such a transition is the change of internal energy of the combined system. Essentially the same formalism applies to an enzyme acting as a molecular motor, which is often described by such discrete states. Most generally, if the motor undergoes a forward transition of type k as in Eqn (15.83), it may advance a distance l_k in the direction of the applied force f, with the mechanical work W_mech,k = f l_k; a pure chemical step has l_k = 0, and a pure mechanical step has s_i^o = r_i^o = 0. The motor operates in an environment where the concentrations of molecules like ATP, ADP, or Pi are essentially fixed. The first law for a single transition of type k becomes

$$q_k = W_{mech,k} - \Delta U_k \qquad (15.87)$$

With the introduction of a chemical work W_chem,k = −Δμ_k, the first law becomes

$$W_{chem,k} + W_{mech,k} = \Delta U_{e,k} + q_k \qquad (15.88)$$

Here q_k is the dissipated heat, which under steady-state conditions would enter the description as an ensemble average. A trajectory of the enzyme can be characterized by the sequence of jump times and the sequence of reactions in either direction. An ensemble is defined by specifying the initial probability p_n(0) of finding the enzyme in state n and the set of rates for the reactions in Eqn (15.83) in either direction. Both inputs then determine the probability p_n(s) of finding the enzyme in state n at time s. The identification of entropy production along the trajectory requires some input from the rates determining the transitions. For the simple case of pure conformational changes m ⇌ n, the condition on the rates

$$\frac{J_{mn}}{J_{nm}} = \exp\left[-\left(G_{e,n} - G_{e,m}\right)/T\right] \qquad (15.89)$$

is required by thermodynamic consistency, so that the ensemble eventually reaches thermal equilibrium regardless of the initial condition,

$$p_n(s) \rightarrow p_{eq,n} = \exp\left[-\left(G_{e,n} - G_e\right)/T\right] \qquad (15.90)$$

with the free energy of the enzyme

$$G_e = -T \ln \sum_n \exp\left(-G_{e,n}/T\right) \qquad (15.91)$$

The corresponding relation for transitions that involve the enzymatic reactions of Eqn (15.83) is

$$\frac{J_k^+}{J_k^-} = \exp\left[-\Delta G_k/T\right] = \exp\left[-\left(\Delta G_{e,k} + \Delta\mu_k\right)/T\right] \qquad (15.92)$$

and for molecular motors

$$\frac{J_k^+}{J_k^-} = \exp\left[-\left(\Delta G_{e,k} + \Delta\mu_k - W_{mech,k}\right)/T\right] \qquad (15.93)$$

The ratio of the rates can also be written in the form

$$\frac{J_k^+}{J_k^-} = \exp\left(\Delta S_k + q_k/T\right) \qquad (15.94)$$

The equation above shows that this ratio is determined by the change of intrinsic and medium entropy involved in this transition (Seifert, 2012). After summing over all reactions taking place up to time t and adding the change in stochastic entropy, ΔS = −ln p_{n_t}(t) + ln p_{n_0}(0), the total entropy production along a trajectory becomes

$$\Delta S_{tot} = \Delta S + \sum_j n_j \left[\Delta S_{k_j}(s_j) + q_{k_j}(s_j)/T\right] \qquad (15.95)$$


where n_j = ±1 denotes the direction in which the transition k_j takes place at time s_j. The equation above applies to an enzyme modeled by discrete states at a nonequilibrium steady state generated by nonequilibrium solute concentrations and/or, in the case of a motor protein, an applied external force. So far, it has been implicitly assumed that the rates are time independent. Under sufficient conditions, an enzyme mechanism exhibits a multidimensional inflection point around which a set of linear flow–force equations may be valid over an extended range outside of equilibrium. Enzyme-catalyzed reactions approximately obey the Michaelis–Menten rate equation, which can show a high degree of linearity in the chemical affinity for certain values of the substrate concentration.

Example 15.5 Configurational changes of a single enzyme
Discuss the states and configurational changes of a single enzyme (Seifert, 2011).
Solution: States of the aqueous solution: Consider an enzyme placed in an aqueous solution consisting of a set of {N_i} molecules of type i enclosed in a volume V at a temperature T. The microstates of this solution (without the enzyme yet) are labeled collectively by {x_sol}. The configurational energy of the whole solution can be expressed by a potential V_sol(x_sol), leading to the probability and free energy of each microstate x_sol:

$$p(x_{sol}) = \exp\left[-\beta\left(V_{sol}(x_{sol}) - G_{sol}\right)\right], \qquad G_{sol} = -k_B T \ln \sum_{x_{sol}} \exp\left[-\beta V_{sol}(x_{sol})\right]$$

Here β ≡ 1/k_BT is the inverse temperature and k_B is Boltzmann's constant. The (mean) internal energy and entropy of this solution are

$$U_{sol} = \sum_{x_{sol}} p(x_{sol})\, V_{sol}(x_{sol})$$

$$S_{sol} = -k_B \sum_{x_{sol}} p(x_{sol}) \ln p(x_{sol}) = \left(U_{sol} - G_{sol}\right)/T$$

All these quantities depend on T, V, and {N_i}. Moreover, we assume that the solution is large enough that the chemical potential for species i becomes

$$\mu_i(c_i, T) = \partial_{N_i} G_{sol} \quad \text{with} \quad \{c_i\} = \{N_i/V\}$$

Solution and enzyme: We add a single enzyme to this solution and distinguish different (mesoscopic) states of the enzyme. Equilibration among microstates corresponding to the same state is fast, whereas transitions between these states are assumed to be slower and observable. Under these conditions, we can assign to each state n a free energy G_enz,n, an internal energy U_enz,n, and an intrinsic entropy S_enz,n. We denote the microscopic configurational degrees of freedom of an enzyme with fixed position of its center of mass collectively by {x_enz}. The configurational energy of the system consisting of enzyme and solution becomes

$$V_{tot}(x_{enz}, x_{sol}) = V_{sol}(x_{sol}) + V(x_{enz}, x_{sol}) \equiv V_{tot}(x)$$


where V_tot(x_enz, x_sol) contains both the interactions within the enzyme and the interactions between enzyme and solution. We now partition all microstates {x} = {(x_enz, x_sol)} of the combined system into a set of state configurations {C_n} such that each microstate x of the combined system occurs in exactly one set C_n. For any specific state n, the probability p(x|n) of finding an allowed microstate of the combined system then follows from the assumption of fast equilibration as

$$p(x|n) = \exp\left[-\beta\left(V_{tot}(x) - G_n\right)\right]$$

with the constrained free energy in state n

$$G_n = -k_B T \ln \sum_{x \in C_n} \exp\left[-\beta V_{tot}(x)\right]$$

and proper normalization 1 = Σ_{x∈C_n} p(x|n). The (mean) internal energy in state n is

$$U_n = \sum_{x \in C_n} p(x|n)\, V_{tot}(x)$$

and the (intrinsic) entropy becomes

$$S_n = -k_B \sum_{x \in C_n} p(x|n) \ln p(x|n) = \left(U_n - G_n\right)/T$$

The free energy, internal energy, and (intrinsic) entropy so defined for each state of the combined system depend on T, V, and {N_i}. For a finite range of the interaction potential V(x_enz, x_sol), the free energy, internal energy, and intrinsic entropy of the enzyme become

$$G_{enz,n}(\{c_i\}) \equiv G_n(\{N_i\}) - G_{sol}(\{N_i\})$$

$$U_{enz,n}(\{c_i\}) \equiv U_n(\{N_i\}) - U_{sol}(\{N_i\})$$

and

$$S_{enz,n}(\{c_i\}) = S_n(\{N_i\}) - S_{sol}(\{N_i\})$$

In the thermodynamic limit of the solution, these quantities become independent of system size and depend only on the concentrations {c_i}. Since we keep T and V fixed, we suppress the dependence on these quantities, and often on {c_i} as well. The free energy is G_enz,n = U_enz,n − T S_enz,n. The thermodynamic properties of the enzyme (G_enz,n, U_enz,n, and S_enz,n) depend on the concentrations {c_i}, which refer primarily to properties of the solution (Seifert, 2011).

Example 15.6 Stochastic Michaelis–Menten kinetics
Discuss the three-state Michaelis–Menten kinetics for a single enzyme in a steady state.
Solution: Consider a single enzyme E with three conformational states in a cyclic reaction at steady state, following Michaelis–Menten kinetics:

$$E + S \underset{k_{b1}}{\overset{k_{f1}}{\rightleftharpoons}} ES \underset{k_{b2}}{\overset{k_{f2}}{\rightleftharpoons}} EP \underset{k_{b3}}{\overset{k_{f3}}{\rightleftharpoons}} E + P$$

where f and b refer to forward and backward, respectively. Usually substrate S and product P are not at chemical equilibrium, although their concentrations are approximately constant with a single enzyme in an experimental setup. The three-state model is the simplest model capable of steady-state nonequilibrium kinetics, and the three-state Markov process becomes (Qian and Elson, 2002)

$$\frac{dp_E}{dt} = -\left(k_{f1}[S] + k_{b3}[P]\right)p_E + k_{b1}\,p_{ES} + k_{f3}\,p_{EP} \qquad \text{(a)}$$

$$\frac{dp_{ES}}{dt} = k_{f1}[S]\,p_E - \left(k_{b1} + k_{f2}\right)p_{ES} + k_{b2}\,p_{EP} \qquad \text{(b)}$$

$$\frac{dp_{EP}}{dt} = k_{b3}[P]\,p_E + k_{f2}\,p_{ES} - \left(k_{b2} + k_{f3}\right)p_{EP} \qquad \text{(c)}$$

where p_E, p_ES, and p_EP are the probabilities of the enzyme being in the E, ES, and EP states at time t, respectively. The nonzero eigenvalues of the coefficient matrix arising from Eqns (a)–(c) are

$$\lambda_{1,2} = -\frac{1}{2}\left[\left(k_{f1}[S] + k_{b1} + k_{f2} + k_{b2} + k_{f3} + k_{b3}[P]\right) \mp \sqrt{\Delta}\right] \qquad \text{(d)}$$

where Δ = (k_f1[S] + k_b1 − k_f2 − k_b2 − k_f3 + k_b3[P])² − 4(k_f2 − k_b3[P])(k_f3 − k_b1). For measurements with [P] ≈ 0 and k_b1 ≪ k_f3, the eigenvalues become nonreal (Δ < 0) for substrate concentrations in the range

$$\frac{k_{b2} + \left(\sqrt{k_{f2}} - \sqrt{k_{f3} - k_{b1}}\right)^2}{k_{f1}} < [S] < \frac{k_{b2} + \left(\sqrt{k_{f2}} + \sqrt{k_{f3} - k_{b1}}\right)^2}{k_{f1}} \qquad \text{(e)}$$

Therefore, the enzyme kinetics becomes oscillatory, depending on the ratio between the imaginary and real parts of the eigenvalues, for a range of substrate concentrations. The oscillatory kinetics reflects the nonequilibrium nature of the single-enzyme kinetics. The chemical energy released in the transformation S → P becomes heat dissipated into the aqueous solution of the heat bath. The steady-state probabilities of the three states are

$$p_{s,E} = \left(k_{f2}k_{f3} + k_{f3}k_{b1} + k_{b1}k_{b2}\right)/\lambda_1\lambda_2$$

$$p_{s,ES} = \left(k_{f1}[S]\left(k_{f3} + k_{b2}\right) + k_{b2}k_{b3}[P]\right)/\lambda_1\lambda_2$$

$$p_{s,EP} = \left(k_{f1}k_{f2}[S] + \left(k_{f2} + k_{b1}\right)k_{b3}[P]\right)/\lambda_1\lambda_2$$

A trajectory of a single-enzyme molecule is a stochastic process in which the protein jumps among the three states in random fashion. A single-enzyme trajectory contains information on both steady-state and transient kinetics in terms of the probabilities p_ij(t, t + s), where p_ij is the probability of the enzyme being in state j at time t + s given that it is in state i at time t. However, when an enzyme is in a steady state, the transition probabilities are independent of time.
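The steady state and relaxation spectrum of Eqns (a)–(c) are easy to compute numerically. The sketch below builds the generator matrix for assumed, illustrative rate constants and clamped concentrations, and reads off the stationary distribution and the (possibly complex) relaxation eigenvalues.

```python
import numpy as np

# Sketch: steady state and relaxation eigenvalues of the three-state
# Michaelis-Menten scheme in Example 15.6.  Rate constants and the
# clamped concentrations [S], [P] are illustrative assumptions.
kf1, kf2, kf3 = 10.0, 5.0, 4.0
kb1, kb2, kb3 = 1.0, 0.5, 0.2
S, P = 1.0, 0.1

# Generator K of the Markov process d p/dt = K p, states (E, ES, EP);
# columns sum to zero, matching Eqns (a)-(c).
K = np.array([
    [-(kf1*S + kb3*P),  kb1,          kf3        ],
    [  kf1*S,          -(kb1 + kf2),  kb2        ],
    [  kb3*P,           kf2,         -(kb2 + kf3)],
])

eigvals, eigvecs = np.linalg.eig(K)
i0 = np.argmin(np.abs(eigvals))      # the (numerically) zero eigenvalue
p_ss = np.real(eigvecs[:, i0])
p_ss /= p_ss.sum()                   # normalized steady state
print("steady state (pE, pES, pEP):", np.round(p_ss, 4))
# The nonzero eigenvalues control relaxation; a complex pair signals the
# oscillatory kinetics discussed in the text.
print("relaxation eigenvalues:", np.round(np.delete(eigvals, i0), 4))
```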

Example 15.7 Hydrolysis of ATP
Investigate the enzymatic reaction of ATP.
Solution: Consider the binding and subsequent hydrolysis of ATP in a solution containing ATP, ADP, and Pi at chemical potentials μ_T, μ_D, and μ_P, respectively. The enzyme undergoes the transition

$$\underbrace{E_k + \text{ATP}}_{1} \;\rightleftharpoons\; \underbrace{(m;\,\text{ATP})}_{2} \;\rightleftharpoons\; \underbrace{(n;\,\text{ADP},\,\text{Pi})}_{3} \;\rightleftharpoons\; \underbrace{E_k + \text{ADP} + \text{Pi}}_{4}$$


In state m an ATP and in state n an ADP and a Pi are tightly bound to the enzyme. The overall reaction is E_k + ATP ⇌ E_k + ADP + Pi. For simplicity, we have constrained the enzyme to be in the same state k (without bound molecules) before and after the reaction. The free-energy difference involving the binding of the ATP becomes

$$G_2 - G_1 = G_{enz,m} - \left(\mu_T + G_{enz,k}\right)$$

The free-energy difference upon release of ADP and Pi becomes

$$G_4 - G_3 = G_{enz,k} - \left(G_{enz,n} - \mu_D - \mu_P\right)$$

The overall free-energy difference is

$$\Delta G = G_4 - G_1 = \mu_D + \mu_P - \mu_T$$

The free-energy difference between the two intermediate states is

$$G_3 - G_2 = \left(G_3 - G_4\right) + \left(G_4 - G_1\right) + \left(G_1 - G_2\right) = G_{enz,n} - G_{enz,m}$$

The free-energy difference between these two intermediate states should not depend strongly on the concentrations of the solutes. On the state level, the internal energy, intrinsic entropy, and free-energy relations are based on an assumed timescale separation between transitions within each state and the slower, observable transitions between the states. The second law for any time-dependent ensemble leads to both the stochastic entropy and the local detailed balance condition for the rates. The main difference of enzymes and motors compared with colloidal systems, which have no relevant hidden internal degrees of freedom, is the crucial role played by the intrinsic entropy of the states. It prevents a direct inference of the dissipated heat from a measurement of the ratio of the rates, even on the level of a complete cycle (Hayashi et al., 2010). Molecular motors transduce chemical energy released from the hydrolysis of ATP into mechanical work exerted against an external force. They operate under nonequilibrium conditions due to the unbalanced chemical potentials of molecules like ATP or ADP involved in the chemical reactions accompanying the motor steps. In contrast to macroscopic engines, fluctuations allow for backward steps even in a directed motion. Generally, such molecular motors are modeled either in terms of continuous flashing ratchets or by a chemical master equation on a discrete state space. The efficiency at maximum power can increase when the system is driven further out of equilibrium. Biological systems are generically out of equilibrium mainly because of mechanical or chemical stimuli provided by external forces or imbalanced chemical reactions (Schmiedl et al., 2007). At a nonequilibrium steady state, a net flux in the species occurs if it is possible to adjust the concentrations. Hence, the stationary state violates the detailed balance condition p_n J_nm = p_m J_mn, where J is the rate of transformation and p is the probability. For such nonequilibrium steady states, a detailed fluctuation theorem holds:

$$\frac{p(-\Delta S_{tot})}{p(\Delta S_{tot})} = \exp\left[-\Delta S_{tot}\right]$$

Here the entropy is defined for a trajectory of any length, extending the earlier results, which are valid only in the long-time limit.
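The overall driving force ΔG = μ_D + μ_P − μ_T above can be evaluated from concentrations via μ_i = μ_i° + RT ln c_i. The short sketch below does this with an approximate standard value for ATP hydrolysis and assumed, illustrative cellular concentrations.

```python
import numpy as np

# Sketch: free-energy difference driving the cycle, dG = mu_D + mu_P - mu_T,
# evaluated from concentrations via mu_i = mu_i0 + RT*ln(c_i).  The standard
# value and the concentrations are illustrative assumptions.
R, T = 8.314e-3, 310.0           # kJ/(mol K), K
dG0 = -30.5                      # kJ/mol, approximate standard value for ATP hydrolysis
ATP, ADP, Pi = 1e-3, 1e-4, 1e-3  # assumed molar concentrations

dG = dG0 + R * T * np.log(ADP * Pi / ATP)
print(f"dG = {dG:.1f} kJ/mol")   # strongly negative: hydrolysis is downhill
```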


15.6.3 Stochastic model equations of biochemical cycles
Networks of coupled chemical reactions in a dilute solution should be described by a chemical master equation whenever fluctuations are relevant due to small numbers of at least one of the involved species. The master equation contains the rate constants of all possible reactions. The solution of the chemical master equation gives the dynamics of the probability of finding a certain number of molecules of each species at a given time for a given initial condition. This leads to the stochastic trajectory of the network, obtained by recording the time at which each particular reaction took place together with its concomitant change of the number of molecules. For biochemically driven reactions, the embedding heat bath provides the source of stochastic dynamics. The stochastic model, in the form of the chemical master equation, is an infinite system of mathematically coupled ordinary differential equations (Vellela and Qian, 2009). Assume that n_a and n_b are the numbers of substrate molecules, which are fixed for a fixed volume, and let p_n(t) be the probability of having n molecules of X at time t. The stochastic model equations are

$$\frac{dp_0(t)}{dt} = a_1 p_1 - b_0 p_0, \qquad \frac{dp_n(t)}{dt} = a_{n+1} p_{n+1} + b_{n-1} p_{n-1} - \left(a_n + b_n\right) p_n, \quad n = 1, \ldots, N \qquad (15.96)$$

where

$$b_n = \hat{k}_{2f}\, n_b + \hat{k}_{1f}\, n_a\, n(n-1) \qquad (15.97)$$

$$a_n = \hat{k}_{2b}\, n + \hat{k}_{1b}\, n(n-1)(n-2) \qquad (15.98)$$

Here b_n and a_n are the supply and consumption rates, respectively. The rate constants k̂_i are related to the number of reactants involved in the ith reaction; for example, for a reaction involving m reactants, k̂_i = k_i/V^(m-1). The steady-state probability distribution is obtained from the mathematical detailed balance condition

$$b_{n-1}\, p_{n-1,s} = a_n\, p_{n,s}$$

in which the probability fluxes of the forward and backward reactions are equal at each state. The stationary probabilities then become

$$p_{n,s} = p_{0,s} \prod_{i=0}^{n-1} \frac{b_i}{a_{i+1}}, \qquad p_{0,s} = 1 - \sum_{j=1}^{N} p_{j,s} \qquad (15.99)$$

In the chemical master equation, the steady-state probability distribution of the equilibrium steady state is a Poisson distribution. For Schlögl's model, the equilibrium steady-state probability distribution becomes

$$p_{n,s} = \frac{q^n}{n!} e^{-q}, \qquad q = \frac{c_S k_{1f} V}{k_{1b}} \qquad (15.100)$$

When the detailed balance condition k_1f k_2b c_S = k_1b k_2f c_P is not satisfied, the system reaches a nonequilibrium steady state in which J_rif ≠ J_rib, and the reaction forms a cycle from a substrate (S) to a product (P),

$$S \rightarrow X \rightarrow P \rightarrow S$$

which may be quantified by a cycle flux. In biochemical cycles, the concentrations of the substrates can vary in different situations, while the rate constants, which depend on the type of substrates involved, do not change. Biological systems are capable of reaching unique fluctuating, stochastic, nonequilibrium stationary states with stationary probability distributions (Ge and Qian, 2009). Under equilibrium conditions, there is only one real root, and the steady-state behavior of the stochastic system is the same as that of the deterministic system. However, when the system is bistable, the behavior of the deterministic model depends on the initial condition, while the steady-state behavior of the stochastic model is independent of the initial condition. In the long term, the stochastic model predicts that the system spends almost all its time in the two functional cellular attractor states that correspond to the stable states of the deterministic model, and the proportion of time spent in each is dictated by the ratio of the transition rates between them. The key behavior of a bistable system is the ability to make transitions between the functional cellular attractors. Schlögl's model exhibits multiple timescales: on a fast scale the system relaxes to one of the functional cellular attractors, while on a slow scale it makes transitions from one attractor to another (Schlögl, 1972). Modeling and analysis of biochemical systems need to consider multiple scales, which requires linking and integrating models that operate at different temporal and spatial scales. To improve multiscale modeling of data, new and existing strategies for coupling micro- and macrolevel models must be developed and tested (Vellela and Qian, 2009). For a large but finite volume, the stochastic model predicts that a bistable system will occupy the more stable functional cellular attractor and spend most of its time there. However, for small volumes, the functional cellular attractors may exchange relative stability with changes solely in volume. The long-term behavior of bistable systems is determined by the transition rates between the functional cellular attractors. The chemical master equation is the most appropriate mathematical model to describe nonequilibrium systems at the microscopic and mesoscopic scales. The mesoscopic theory of stochastic macromolecular mechanics and macroscopic nonequilibrium thermodynamics are consistent and can have applications in cellular and molecular biology. Still, a proper mathematical framework would be based on a stochastic approach, which may be explicit and discrete, of the master equation type, or continuous in time and state space, with the Fokker–Planck equation.
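The stationary distribution in Eqn (15.99) can be evaluated directly by the product recursion. The Python sketch below does this for Schlögl-type supply and consumption rates; the rate constants are illustrative values chosen to place the model in a bistable regime, so the stationary distribution develops two peaks.

```python
import numpy as np

# Sketch: stationary distribution of a Schlogl-type birth-death model from
# the recursion behind Eqn (15.99): p_{n,s} = p_{0,s} * prod_i b_i/a_{i+1}.
# The rate values are illustrative choices that make the model bistable.
k1f, k1b = 1.5e-2, 1.0 / 60000.0
k2f, k2b = 2.0e2, 3.5
na = nb = 1.0                  # fixed substrate numbers, absorbed into rates
N = 900                        # truncation of the state space

def b(n):                      # supply rate b_n, Eqn (15.97)
    return k2f * nb + k1f * na * n * (n - 1)

def a(n):                      # consumption rate a_n, Eqn (15.98)
    return k2b * n + k1b * n * (n - 1) * (n - 2)

logp = np.zeros(N + 1)
for n in range(1, N + 1):
    logp[n] = logp[n - 1] + np.log(b(n - 1)) - np.log(a(n))
p = np.exp(logp - logp.max())
p /= p.sum()                   # normalized stationary distribution

# Bistability appears as two local maxima of p_{n,s}.
peaks = [n for n in range(1, N) if p[n] > p[n - 1] and p[n] > p[n + 1]]
print("local maxima of the stationary distribution at n =", peaks)
```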

15.6.4 Biochemical network dynamics
For such driven dynamics, a thermodynamically consistent description of biochemical networks rests on two essential assumptions: (1) a first-law-like balance relates external work, internal energy, and exchanged heat dynamically along each fluctuating trajectory; (2) the heat dissipated into the environment leads to a well-defined entropy increase of the surrounding medium. For such systems, an exact relation is the Jarzynski relation, which allows one to extract free-energy differences from experiments or simulations performed under nonequilibrium conditions, together with the fluctuation theorem, which quantifies the probability of observing entropy-annihilating trajectories in the steady state (Schmiedl and Seifert, 2007; Demirel, 2010). Consider an n-component network. The n components may be the relevant numbers of metabolites in a metabolic network, the relevant proteins in a gene regulatory pathway, or other quantities specifying the network. The n-dimensional vector with transpose q^T = (q_1, q_2, ..., q_n) is the state variable of the network, and the value of the jth component is denoted by q_j. Let f_j(q) be the deterministic nonlinear force on the jth component, which includes both the effects from other components and from itself, and let ζ_j(q, t) be the random force. For simplicity, assume that f_j is a smooth function explicitly independent of time. The network dynamics may then be modeled by a set of standard stochastic differential equations, assuming the noise to be Gaussian and white (Ao, 2005):

$$\frac{dq_j}{dt} = f_j(q) + \zeta_j(q, t) \qquad (15.101)$$

In a complex biochemical network, more complicated noises, such as non-Gaussian and colored noise, can exist. If an average over the stochastic force ζ is performed, Eqn (15.101) reduces to the deterministic equation of dynamical systems theory. Equation (15.101) does not by itself specify how the stochastic force ζ is related to the deterministic force f. To do that, it can be transformed into the following form:

$$\left[S(q) + T(q)\right] \frac{dq}{dt} = -\nabla\phi(q) + \zeta(q, t) \qquad (15.102)$$

with the semipositive definite symmetric matrix S, the antisymmetric matrix T, and the single-valued scalar function φ. The symmetric matrix term produces "degradation", q̇^T S(q) q̇ ≥ 0, while the antisymmetric part is nondissipative, q̇^T T(q) q̇ = 0. The degradation is represented by the symmetric friction matrix S, and the transverse force by the antisymmetric matrix T (mass, magnetic fields, rotation). φ(q) is an emergent property of the network, called the network potential with respect to the network state variable q. It is not the usual free energy, although it plays the same role as an energy function in thermodynamics. The potential φ(q) describes what the network would eventually like to be under all thermodynamic and other constraints, and it determines the robustness of the network dynamics. Consequently, the tendency toward optimization of the network requires the minimum of the potential φ(q); without the stochastic effect in Eqn (15.101), no unique potential function can be determined. Eqn (15.102) identifies the dynamical structure and is suitable for analyzing metabolic networks directly. In order to have a unique identification, a constraint is imposed on the stochastic force so that the symmetric matrix in Eqn (15.102) is semipositive definite:

$$\left\langle \zeta(q, t)\, \zeta^T(q, t')\right\rangle = 2\, S(q)\, \delta(t - t') \quad \text{and} \quad \left\langle \zeta(q, t)\right\rangle = 0 \qquad (15.103)$$

where δ(t) is the Dirac delta function. This constraint is consistent with the Gaussian white noise assumption for ζ in Eqn (15.101). The matrix [S + T] on the left-hand side of Eqn (15.102) makes the network tend toward the minimum of the potential function φ(q), leading to an optimization. The stationary distribution function P_0(q) for the state variable is of Boltzmann–Gibbs type, P_0 = (1/Z) exp[−φ(q)], with the partition function Z = ∫d^n q exp[−φ(q)]. This allows a direct comparison with stochastic experimental data at steady states. The stochastic model, Eqn (15.102), involves two timescales: the very short one characterizing the stochastic force ζ(q, t) and the timescale on which the smooth functions φ(q), the degradation (friction) matrix S(q), and the transverse matrix T(q) are well defined. The stochastic force ζ(q, t) can arise from the environmental influence on the network, or from approximations such as the continuous representation of a discrete process. The friction matrix S(q) expresses the tendency of the network to approach a steady state. The present dynamical structure theory requires that the friction always be associated with the noise according to the relation in Eqn (15.103). The networks thus have the ability to adapt through friction and to optimize through noise (Nigam and Liang, 2007).
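A minimal Euler–Maruyama integration of Eqn (15.101) illustrates the interplay of a gradient (friction-like) part and a transverse (rotational) part of the deterministic force; the two-component force field and the noise amplitude below are hypothetical illustrations, not a model from the text.

```python
import numpy as np

# Euler-Maruyama sketch of the network dynamics in Eqn (15.101),
# dq/dt = f(q) + zeta(q,t), for a hypothetical two-component network
# with Gaussian white noise; f and the noise amplitude are illustrative.
rng = np.random.default_rng(2)

def f(q):
    # toy deterministic force: relaxation toward a point plus a
    # rotational (transverse) contribution
    grad_phi = q - np.array([1.0, 0.5])       # gradient of a quadratic phi
    rot = np.array([-q[1], q[0]])             # antisymmetric part
    return -grad_phi + 0.3 * rot

dt, n_steps = 1e-3, 50_000
noise_amp = 0.2                               # sqrt(2*S) for scalar S
q = np.zeros(2)
traj = np.empty((n_steps, 2))
for k in range(n_steps):
    q = q + f(q) * dt + noise_amp * np.sqrt(dt) * rng.standard_normal(2)
    traj[k] = q

print("long-time mean state:", traj[n_steps // 2 :].mean(axis=0).round(3))
```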

15.6.5 Molecular motors
Two systems should be distinguished: a single enzyme or motor, and biochemical reaction networks. An enzyme or molecular motor stochastically undergoes transitions from one state to another, creating either rotary or directional action. In such a transition, a chemical reaction may be involved, like hydrolysis, which transforms one molecule of ATP into ADP and a phosphate. These three molecular species are externally maintained at nonequilibrium conditions, thereby providing a source of chemical energy (work) to the system. In each transition, this work is transformed into mechanical work, dissipated heat, or changes in the internal energy. A protein structure called ATP synthase, or FoF1, couples the energy of the proton electrochemical potential gradient to ATP synthesis, or the energy of ATP hydrolysis in F1 to proton translocation through rotation of the Fo subunit. Many motor proteins can generate directional movement, including the muscle motor protein myosin and the kinesin proteins of microtubules (Sambongi et al., 2000). Muscle fibers lengthen and contract, with small volume changes, to perform mechanical work as they utilize the chemical energy released by the hydrolysis of ATP. Many single biochemical molecules of large proteins, such as enzymes (ranging from 2 to 100 nm), are characterized by length scales in the nanometer-to-micrometer range and dissipation rates of 10–1000 k_BT per second. A single macromolecule with n modification sites can exist in 2^n microscopic key states. However, understanding protein functions requires knowledge and consideration of their multiprotein complex or metabolic network. This needs bridging many orders of magnitude in spatial and temporal dimensions. Enzymes are highly specific and efficient, accelerating biochemical reaction rates by orders of magnitude. Under physiological conditions, the ATP hydrolysis reaction is energetically favorable but slow; enzyme catalysts, however, dramatically accelerate the rate of hydrolysis and harness the energy to repeat a cyclical sequence of catalytic events capable of carrying out useful functions (Baker, 2004). The free energy from ATP hydrolysis depends on the ratio [ATP]/([ADP][Pi]). The amount of energy required to fulfill a function, the shape and conformational structure of the enzyme, and the information encoded determine the enzyme dynamics. Biological systems, including molecular motors, in nonequilibrium steady states have net flows and require a continuous input of material, energy, and information to maintain their self-organized steady state as they continuously dissipate net energy. Molecular motors, over the course of their enzymatic cycle, perform work as they move along a track a distance Δx against a constant force F. In some motor models, the enzymatic mechanisms are explicitly different from the work-related mechanisms; for example, in the Huxley–Hill model, motor force is generated within the biochemical step, and work is subsequently performed when a motor relaxes within the potential well of a biochemical state. According to the fluctuating thermal ratchet model, a motor force is generated when a ratchet potential is switched on, and work is subsequently performed when the motor relaxes. On the other hand, some recent studies support a chemical motor model in which the reaction and space coordinates are intimately linked: force is generated and/or work is performed within a thermally activated biochemical transition. For example, a motor structural change induced by ligand binding or by other effects might directly perform work. Most chemical motor models assume that it is the external work (W_ext = FΔx), i.e. the work of moving the track, that is coupled to the free energy for that step. Internal work, on the other hand, may involve pulling out compliant elements in the motor and is performed in stretching these internal elastic elements that are coupled to the free energy ΔG. Motor enzymes like myosin and kinesin, which move along a track while catalyzing the hydrolysis of ATP, are self-consistent mechanochemical systems, in which the reaction mechanism starts and ends with the free enzyme, while the free enzyme binds substrates and releases products in some random order. For a reaction at isobaric and isothermal conditions, the affinity A characterizes the distance from equilibrium. Chemical reactions are usually far from global equilibrium. The fundamental equation for a chemical reaction system operating in a steady state, such as B ⇌ B′ with rate constants k_f and k_b, is

$$\frac{J_{rf}}{J_{rb}} = \exp\left(\frac{A}{RT}\right) \quad \text{with} \quad A = -\sum_i \nu_i \mu_i, \quad J_{rf} = k_f[B], \quad J_{rb} = k_b[B'], \quad J_r = J_{rf} - J_{rb} \qquad (15.104)$$

Here J_rf and J_rb are the forward and backward reaction rates, respectively, J_r is the net reaction rate, and ν_i is the stoichiometric coefficient, which is positive for products and negative for reactants. Eqn (15.104) provides a relation between affinities, metabolite concentrations, and reaction flows, and is a generalization of the chemical equilibrium condition (A = 0 and J_rf = J_rb) to a chemical system in a nonequilibrium steady state. Most biochemical reactions, however, involve many simultaneous elementary steps, and the change of concentration of a species is a sum of the rates of change due to those elementary steps in which the species takes part (Demirel, 2010). In a stochastic description of a transition, the probability of finding the system in state B at time t is given by the master equation

$$\frac{dp(B, t)}{dt} = \sum \left(-k_f p_B + k_b p_{B'}\right) \qquad (15.105)$$

The entropy for a stochastic macromolecular mechanics becomes S = −k_B ∫p ln p dB. In nonequilibrium stationary states, where the probabilities are time independent (dp_s/dt = 0), the entropy production is

$$\sigma_s = \frac{1}{2T} \sum_B \left(J_{rB} A_B\right) \qquad (15.106)$$

where J_rB is the flux and A_B is the affinity:

$$J_{rB} = k_f p_B - k_b p_{B'}, \qquad A_B = k_B T \ln\left(\frac{k_f p_B}{k_b p_{B'}}\right) \qquad (15.107)$$

The movement of the motor protein can be modeled by the Smoluchowski equation, treating the center of mass of the motor protein as a Brownian particle in a periodic energy potential:

$$\frac{\partial p(x, t)}{\partial t} = D \frac{\partial^2 p(x, t)}{\partial x^2} - \frac{\partial}{\partial x}\left[\frac{F(x)}{b}\, p(x, t)\right] \qquad (15.108)$$

where p(x, t) is the probability density function of the motor protein at position x and time t, D and b are the diffusion and friction coefficients, respectively, and F(x) is the force of the potential, representing the molecular interaction between the motor protein and its track. The driving force for a motor protein comes from the hydrolysis of ATP:

$$\text{ATP} + \text{H}_2\text{O} \underset{k_b}{\overset{k_f}{\rightleftharpoons}} \text{ADP} + \text{Pi}$$

This reaction is well characterized by a two-state Markov process (or, more generally, by m discrete states):

$$\frac{dp_{ATP}}{dt} = -k_f p_{ATP} + k_b p_{ADP}, \qquad \frac{dp_{ADP}}{dt} = k_f p_{ATP} - k_b p_{ADP} \qquad (15.109)$$

Introducing internal conformational states for the Brownian particle and coupling the hydrolysis of ATP, Eqn (15.109), to the motor protein movement leads to the following reaction–diffusion system for the movement of a Brownian particle with internal structure and dynamics:

$$\frac{\partial p(x, n, t)}{\partial t} = D \frac{\partial^2 p(x, n, t)}{\partial x^2} - \frac{\partial}{\partial x}\left[\frac{F(x)}{b}\, p(x, n, t)\right] - k_f^{nk}(x)\, p(x, n, t) + k_b^{kn}(x)\, p(x, k, t) \qquad (15.110)$$

where p(x, n, t) is the probability of a motor protein with internal state n at external position x, and k_f^{nk} is the transition rate constant from internal state n to state k when the protein is located at x. Transitions between the states n and k, such as attached and detached states, driven by the ATP hydrolysis lead to a biased motion of the motor protein, in which chemical energy is converted into the mechanical motion of the motor protein. The reaction rate of a single-enzyme molecule fluctuates, which is a general feature of enzymes. The single-molecule turnover time, the time for one enzyme molecule to complete a reaction cycle, also fluctuates. Since these fluctuations are random, their effects average to zero over a long period of time or for a large number of molecules, and Michaelis–Menten kinetics describes some enzymatic reactions well. Considerable experimental and modeling work exists on the dynamics of linear molecular motors, such as actin–myosin or the kinesin–microtubule system powered by ATP, as well as on rotary motors, such as FoF1-ATPase and the bacterial flagellum powered by proton flow across a membrane. The myosin protein uses the chemical energy released by the hydrolysis of ATP to create a directed mechanical motion. All the myosin motor proteins share the same biochemical reaction pathway when hydrolyzing ATP. They operate far from equilibrium, dissipate energy continuously, and make transitions between steady states. The enzyme reactions are also coupled to other processes, such as transport processes characterized by longer timescales. It is also known that the catalytic activity of an enzyme molecule is very sensitive to its molecular conformational transitions, which may occur on longer timescales (seconds) compared with the timescales of the enzymatic reactions (milliseconds). The thermodynamic driving force of an enzymatic cycle, Δμ, can be extracted from the nonequilibrium turnover time traces of single-enzyme molecules in living cells, which may be measurable experimentally. From chemical master equations in a nonequilibrium steady state, the ratio between the probability of M forward turnovers, p(δn_t = M), and that of M backward turnovers, p(δn_t = −M), is

$$\frac{p(\delta n_t = M)}{p(\delta n_t = -M)} = \exp\left(M \frac{\Delta\mu}{k_B T}\right) \qquad (15.111)$$

where M is a positive integer. The equation above is general as long as the enzyme completes a full cycle, even when the enzyme molecules exhibit more complex kinetic pathways. The kinesin cycle is an asymmetric hand-over-hand walk along a microtubule. At some point in the kinesin cycle, both kinesin heads are attached to the microtubule. Then one of the heads detaches and has many more available states than when docked, and so it also has a larger entropy. Therefore, docking must be accompanied by an entropy decrease and a corresponding heat loss Q to the environment. Because kinesin is in contact with an environment at constant temperature, the free-energy change poses an upper bound on the work W that may be extracted from kinesin, W ≤ (−ΔF), with equality obtained under ideal conditions. If we regard a backward step as the time reversal of a forward step, then ΔF can be related to the ratio of the probabilities for forward and backward steps through the Crooks fluctuation theorem. Since the whole process is isothermal, the work W extracted is bounded by the free-energy drop when kinesin moves one step forward. The efficiency of the ideal kinesin cycle is therefore equal to the ratio of the free-energy drop to the input energy. The Crooks fluctuation theorem has been used to estimate the free-energy difference associated with the unfolding of an RNA molecule; in that experiment, a single molecule was repeatedly folded and unfolded, and the periodic folding of a single molecule is analogous to the cycles of the kinesin displacement (Collin et al., 2005). The Crooks theorem has also been applied to the linear motor kinesin cycle (Calzetta, 2009). Distinct forward and backward trajectories may have different probability weights if the system is out of equilibrium. For example, the probability for a driven Brownian particle to follow a trajectory from point a to point b differs from that for the same trajectory reversed, from b to a. The entropy production arises from the breaking of time-reversal symmetry in the probability distribution of the statistical description of the nonequilibrium steady state (Andrieux and Gaspard, 2007). In some systems, experiments have verified an overall type of reversibility; ATP synthase, for example, can either produce ATP driven by a proton gradient or hydrolyze ATP to pump protons, depending on conditions. Similarly, a tethered kinesin motor protein has been shown to hydrolyze or synthesize ATP depending on the concentrations of the reactants and products. Closer examination of some reversible processes suggests, however, that the forward and reverse mechanisms may not always coincide. Many biological systems are driven by coupling to a secondary reaction, such as ATP hydrolysis. It is not the hydrolysis that limits the symmetry, since the cell drives the process by synthesizing ATP through an unrelated mechanism. Symmetry could apply to a system that couples binding and catalysis when the entire system is analyzed. However, if part of the system is excluded, then the fact that the rate constants differ for pre- and postcatalytic processes destroys the symmetry. In practical cases of interest, where a system's degrees of freedom are coupled stochastically to a thermal environment, achieving symmetry for a nonequilibrium process does not seem possible in general. For example, the mechanism of a peptide's insertion into a lipid bilayer differed dramatically from its exit mechanism. This is not a violation of microscopic reversibility; rather, it reflects the altered driving "force" applied in order to observe the insertion and exit processes separately. Although systems in equilibrium exhibit forward–reverse symmetry by detailed balance, experimental observations of pathways are almost always made out of equilibrium (Bhatt and Zuckerman, 2011).
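Eqn (15.111) is easy to verify in a toy model. In the sketch below, a single enzyme's forward and backward cycle completions are treated as independent Poisson processes with rates whose ratio is fixed by exp(Δμ/k_BT); the net turnover statistics then satisfy the fluctuation relation exactly. The rates and the observation time are illustrative assumptions.

```python
import numpy as np

# Sketch: checking the turnover fluctuation theorem, Eqn (15.111), for a
# single enzyme whose forward/backward cycle completions are independent
# Poisson processes with k_plus/k_minus = exp(dmu/kBT).
rng = np.random.default_rng(3)
dmu_over_kT = 1.0
k_minus = 1.0
k_plus = k_minus * np.exp(dmu_over_kT)

t_obs, n_traj = 3.0, 200_000
# Net turnovers in time t_obs: difference of two Poisson counts.
n_fwd = rng.poisson(k_plus * t_obs, n_traj)
n_bwd = rng.poisson(k_minus * t_obs, n_traj)
dn = n_fwd - n_bwd

for M in (1, 2, 3):
    pM = np.mean(dn == M)
    pmM = np.mean(dn == -M)
    print(f"M={M}: ln[p(M)/p(-M)] = {np.log(pM / pmM):.3f}, "
          f"M*dmu/kBT = {M * dmu_over_kT:.3f}")
```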


Example 15.8 Molecular motors
Discuss molecular motors operating close to equilibrium.
Solution: Assume that molecular motors are isothermal and in local equilibrium. Motion of the motor/filament system is induced by an external force f_ext, such as the viscous friction between the motor and the bath solvent or the viscous load of an object being carried (Julicher et al., 1997). The chemical potential difference Δμ is a generalized force and measures the free-energy change per consumed fuel molecule in the hydrolysis of ATP to ADP, ATP ⇌ ADP + Pi:

$$\Delta\mu = \mu_{ATP} - \mu_{ADP} - \mu_P$$

The dissipation Φ is

$$\Phi = f_{ext} v + J\Delta\mu \geq 0 \qquad \text{(a)}$$

Equation (a) identifies the independent fluxes and forces. These forces cause motion and ATP consumption, characterized by the fluxes (currents) average velocity v(f_ext, Δμ) and average rate of ATP hydrolysis J(f_ext, Δμ). Molecular motors mostly operate far from equilibrium (Δμ ≈ 10 k_BT), where the fluxes do not depend linearly on the forces. Within the linear regime (Δμ ≪ k_BT), however, linear flux–force relationships hold:

$$v = L_{11} f_{ext} + L_{12} \Delta\mu \qquad \text{(b)}$$

$$J = L_{21} f_{ext} + L_{22} \Delta\mu \qquad \text{(c)}$$

Here L_11 and L_22 are the mobility coefficients, while L_12 and L_21 are the mechanochemical coupling coefficients for polar filaments (Onsager's reciprocal relation holds: L_12 = L_21). The inequality in Eqn (a) is satisfied when L_ii > 0 and

$$L_{11} L_{22} - L_{12} L_{21} > 0$$

Various driven and driving processes can be identified. Thermal equilibrium (Δμ = 0, f_ext = 0) represents a singular point. When f_ext v < 0, mechanical work is performed by the motor and the chemical reaction is the driving process, while JΔμ < 0 requires that chemical energy be generated, with mechanical work as the driving process. When f_ext v > 0 and JΔμ > 0, there is neither a single driving process nor a driven one, and the dissipation appears as heat in the thermal bath; this may be a passive system (Julicher et al., 1997). A system with f_ext v < 0 and JΔμ < 0 would perform chemical and mechanical work using energy from the thermal bath, which is against the second law of thermodynamics, as Eqn (a), requiring Φ ≥ 0, shows. When two or more processes occur simultaneously in a system, they may couple, i.e. interact, and induce new effects. Coupling implies an interrelation between flow i and flow j, so that a flow (e.g. heat, mass flow, or a chemical reaction) occurs without its own thermodynamic driving force, or in opposition to the direction imposed by its own driving force. In sodium pumping, for example, the ions can flow against the direction imposed by their electrochemical potential gradients only by coupling to the hydrolysis of ATP, which releases energy. The four active coupled processes are:
1. f_ext v < 0 (driven) and JΔμ > 0 (driving): the motor uses the hydrolysis of ATP to generate work.
2. f_ext v < 0 (driven) and JΔμ > 0 (driving): the motor uses the hydrolysis of ADP to generate work.
3. f_ext v > 0 (driving) and JΔμ < 0 (driven): the system produces ATP by using mechanical work.
4. f_ext v > 0 (driving) and JΔμ < 0 (driven): the system produces ADP by using mechanical work.


The energy-coupling efficiency: The efficiency of energy coupling, $\eta$, is defined as the ratio of the output power to the input power. When there is no heat effect, we have the dissipation equation

\[
\Psi = J_p X_p + J_o X_o = \text{output power} + \text{input power} \geq 0
\]

and the efficiency becomes

\[
\eta = -\frac{J_p X_p}{J_o X_o}
\]

In terms of the normalized flow ratio $j$ and the normalized force ratio $x$, the energy-coupling efficiency becomes

\[
\eta = -jx = -\frac{x + q}{q + 1/x}, \qquad \text{where } j = \frac{J_p}{J_o Z}, \quad x = \frac{X_p Z}{X_o}
\]

and $Z$ is the phenomenological stoichiometry, defined by $Z = \sqrt{L_p/L_o}$ (Caplan and Essig, 1999; Demirel and Sandler, 2002). Thus, the efficiency depends on the force ratio $x$ and the degree of coupling $q$. The ratio $J_p/J_o$ is the conventional phosphate-to-oxygen consumption ratio P/O. The energy-coupling efficiency is zero when either $J_p$ or $X_p$ is zero. Therefore, at intermediate values of $J_p$ and $X_p$, the efficiency passes through an optimum (maximum) given by

\[
\eta_{opt} = \left(\frac{q}{1 + \sqrt{1 - q^2}}\right)^2
\]

Here, $q$ represents a lumped quantity for the various individual degrees of coupling of the different processes of oxidative phosphorylation. This equation shows that the optimal efficiency depends only on the degree of coupling and increases with increasing values of $q$. When oxidative phosphorylation progresses with a load $J_L$, such as the active transport of ions, the total dissipation becomes

\[
\Psi_c = J_p X_p + J_o X_o + J_L X_p
\]

Here $J_L$ is the net rate of ATP utilization, and it is assumed that the phosphate potential $X_p$ is the driving force: $J_L = L X_p$, where $L$ is called the conductance matching of oxidative phosphorylation and is defined by $L = L_p\sqrt{1 - q^2}$ (Stucki, 1980). The efficiencies for processes 1 and 2 are

\[
\eta_{mec} = -\frac{f_{ext} v}{J\Delta\mu} \tag{d}
\]

and for processes 3 and 4

\[
\eta_{chem} = -\frac{J\Delta\mu}{v f_{ext}} \tag{e}
\]

The maximum efficiency depends on the degree of coupling, $q = L_{ij}/(L_{ii}L_{jj})^{1/2}$:

\[
\eta_{max} = \frac{q^2}{\left(1 + \sqrt{1 - q^2}\,\right)^2} \qquad \text{and} \qquad \eta_{opt} = \left(\frac{q}{1 + \sqrt{1 - q^2}}\right)^2
\]
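As a numerical check of the coupling efficiency and its optimum, the following short Python sketch (not from the source; the value $q = 0.95$ is an assumed example) evaluates $\eta(x,q) = -(x+q)/(q+1/x)$ over the working range of the force ratio and compares the numerical maximum with the analytical $\eta_{opt}$:

```python
import numpy as np

def efficiency(x: np.ndarray, q: float) -> np.ndarray:
    """Energy-coupling efficiency as a function of the force ratio x."""
    return -(x + q) / (q + 1.0 / x)

def eta_opt(q: float) -> float:
    """Optimal efficiency, depending only on the degree of coupling q."""
    return (q / (1.0 + np.sqrt(1.0 - q**2)))**2

q = 0.95
x = np.linspace(-0.99, -0.01, 9999)   # force ratio in the working range
eta = efficiency(x, q)
i = np.argmax(eta)
print(f"numerical maximum: eta = {eta[i]:.4f} at x = {x[i]:.3f}")
print(f"analytical optimum: eta_opt = {eta_opt(q):.4f}")
```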


Example 15.9 Modeling molecular motors
Stochastic modeling of molecular motors.
Solution: Motor molecules play a key role in muscular contraction, cell division, and cell transport. Molecular motors are microscopic systems that move along one-dimensional periodic structures. They undergo transitions among several states, within each of which the motor operates at local equilibrium on timescales small compared with the exchange rates between these states. For example, for the transient response of muscles, the fastest characteristic times of the motors are in the range of milliseconds, whereas thermal equilibrium on length scales of around 10 nm is reached after about 10 to 100 nanoseconds. The states of the proteins during muscular contraction can therefore be taken to be in local equilibrium. Up to five or six different states may be involved (Julicher et al., 1997). Typically, the motion of a point-like particle is described by the Langevin equation:

\[
\xi \frac{dx}{dt} = -\partial_x W(x) + F(t) \tag{a}
\]

where $\xi$ is a constant friction coefficient, $x$ the position of the particle, and $W(x)$ the potential energy the particle experiences. The fluctuating force $F(t)$ has zero average value, $\langle F(t)\rangle = 0$, yet may have richer correlation functions than simple Gaussian white noise. The characteristic crossover time between underdamped and overdamped behavior is of the order of a few picoseconds on the 10-nm scale. Consider a point-like particle placed in a periodic, asymmetric, and time-dependent potential:

\[
\xi \frac{dx}{dt} = -\partial_x W(x,t) + f(t) \tag{b}
\]

Here $W$ depends explicitly on time, and the random force $f(t)$ is Gaussian white noise obeying a fluctuation-dissipation theorem:

\[
\langle f(t)\rangle = 0, \qquad \langle f(t)f(t')\rangle = 2\xi T\,\delta(t - t')
\]

Eqn (b) corresponds to the motion of a particle fluctuating between different states for which the transition rates between states are constant. For a particle fluctuating between well-defined states, we have

\[
\xi \frac{dx}{dt} = -\partial_x W_i(x) + f_i(t) \tag{c}
\]

where the index $i$ refers to the state considered, $i = 1,\dots,N$, and $f_i(t)$ satisfies

\[
\langle f_i(t)\rangle = 0, \qquad \langle f_i(t)f_j(t')\rangle = 2\xi_i T\,\delta(t - t')\,\delta_{ij}
\]

The dynamics of the transitions between the states have to be considered separately, which is most conveniently done in a Fokker-Planck formulation. For a two-state system, the stochastic dynamics is described by the probability density $p_i(x,t)$ for the motor to be at position $x$ at time $t$ in state $i$. The system is periodic with period $l$. The evolution of the system can be described by two Fokker-Planck equations with source terms:

\[
\partial_t p_1 + \partial_x J_1 = -\omega_1(x)\,p_1 + \omega_2(x)\,p_2 \tag{d}
\]
\[
\partial_t p_2 + \partial_x J_2 = \omega_1(x)\,p_1 - \omega_2(x)\,p_2 \tag{e}
\]


where the currents, arising from diffusion, the interaction with the filament, and the action of a possible external force $f_{ext}$, are

\[
J_i = \mu_i\left[-k_B T\,\partial_x p_i - p_i\,\partial_x W_i + p_i f_{ext}\right] \tag{f}
\]

The source terms are determined by the rates $\omega_i(x)$ at which the motor switches from one state to the other. The functions $\omega_i(x)$ have the symmetry properties of the filament. The set of Eqns (d)-(f) can illustrate the motion of molecular motors, as well as how this motion and force generation emerge, in terms of an effective one-dimensional equation. The steady-state particle current is $J = J_1(x) + J_2(x)$ for $l$-periodic $p_i(x)$. Using $p = p_1 + p_2$ and $\lambda(x) = p_1(x)/p(x)$, the current becomes

\[
J = \mu_{eff}\left[-k_B T\,\partial_x p - p\,\partial_x W_{eff} + p f_{ext}\right] \tag{g}
\]

with an effective mobility and an effective potential

\[
\mu_{eff} = \mu_1 \lambda + \mu_2 (1 - \lambda)
\]
\[
W_{eff}(x_0) - W_{eff}(0) = \int_0^{x_0} \frac{\mu_1 \lambda\,\partial_x W_1 + \mu_2 (1-\lambda)\,\partial_x W_2}{\mu_1 \lambda + \mu_2 (1-\lambda)}\,dx + k_B T \ln\!\left[\frac{\mu_{eff}(x_0)}{\mu_{eff}(0)}\right] \tag{h}
\]

With periodic boundary conditions, $\lambda(x)$ has the symmetry of the potential. Thus, if the potential is symmetric, the integrand in Eqn (h) is antisymmetric and the effective potential is periodic: $W_{eff}(nl) = W_{eff}(0)$ for integer $n$. The effective potential is then flat on large scales and the motor cannot generate directed motion. For asymmetric potentials, the effective potential generically has a nonzero average slope $[W_{eff}(l) - W_{eff}(0)]/l$ on large scales (Figure 15.1(b)), whereas $W_1$ and $W_2$ are flat on large scales (Figure 15.1(a)). This average slope corresponds to an average force, which is able to generate motion against external forces $f_{ext}$, provided that the motor consumes chemical energy. If no chemical energy is provided, detailed balance is satisfied:

\[
\omega_1(x) = \omega_2(x)\exp\!\left[\frac{W_1(x) - W_2(x)}{k_B T}\right] \tag{i}
\]

For spontaneous motion, the detailed balance must be broken, for example by ATP hydrolysis.

FIGURE 15.1 (a) Schematic of two l-periodic asymmetric potentials W1 and W2 with transition rates ω1(x) and ω2(x). Motion is possible when the ratio of the transition rates ω1/ω2 is driven away from the equilibrium value given in Eqn (i); (b) Schematic of the effective potential Weff acting on the particle when the transition rates between the two states are away from equilibrium.
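The two-state dynamics of Eqns (c)-(e) can be simulated directly. The following is a minimal stochastic sketch (not from the source; all parameter values, the sawtooth shape of $W_1$, and the flat $W_2$ are assumed for illustration) of a "flashing ratchet": an overdamped particle switches at constant rates between an asymmetric $l$-periodic potential and a flat one, which rectifies Brownian motion into a net drift:

```python
import numpy as np

rng = np.random.default_rng(0)
kBT, xi, l = 4.1e-21, 6.0e-8, 10e-9        # J, kg/s, m (10-nm period)
U0, asym = 20 * kBT, 0.2                   # barrier height, asymmetry of W_1
w1, w2 = 5e3, 5e3                          # switching rates 1->2, 2->1 (1/s)
dt, nsteps = 1e-7, 200_000

def force_W1(x):
    """Piecewise-constant force of an l-periodic sawtooth potential W_1."""
    s = np.mod(x, l) / l
    return np.where(s < asym, -U0 / (asym * l), U0 / ((1 - asym) * l))

x, state = 0.0, 1
for _ in range(nsteps):
    f = force_W1(x) if state == 1 else 0.0          # W_2 taken flat
    x += (f / xi) * dt + np.sqrt(2 * kBT * dt / xi) * rng.standard_normal()
    rate = w1 if state == 1 else w2
    if rng.random() < rate * dt:                     # constant switching rates
        state = 3 - state                            # toggle 1 <-> 2
print(f"mean velocity ~ {x / (nsteps * dt) * 1e9:.1f} nm/s")
```

With symmetric potentials the measured drift averages to zero, in agreement with the symmetry argument above; the asymmetry of $W_1$ is what produces a nonzero mean velocity.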


Example 15.10 Stochastic fluctuation theorem
Use the master equation to derive the entropy production.
Solution: Technological advancements have made it possible to observe the dynamics of single biomolecules acting as linear motors, such as actin-myosin or kinesin-microtubule, as well as rotary motors, such as the FoF1-ATPase and bacterial flagellar motors. These nanometer-sized motors are powered by the energy released from the hydrolysis of adenosine triphosphate (ATP) or by proton currents across a membrane. As part of cellular metabolism, these molecular motors are stochastic: they work under nonequilibrium conditions and are exposed to molecular fluctuations. Therefore, their motions are unidirectional only on average; random steps in the direction opposite to their mean motion are possible. Because of the chemical and mechanical effects, the motor is driven out of thermodynamic equilibrium; its net directed motion stops at equilibrium. For macroscopic systems, nonequilibrium chemical reactions are characterized by their affinities, known as thermodynamic forces. For molecular motors, the affinities of the chemical reactions driving the motor can be determined from the fluctuations of the motion of the motor (Andrieux and Gaspard, 2006). Consider the reaction system:

\[
a_1 \;\underset{k_{b1}}{\overset{k_{f1}}{\rightleftharpoons}}\; a_1', \qquad
a_2 \;\underset{k_{b2}}{\overset{k_{f2}}{\rightleftharpoons}}\; a_2', \qquad \dots, \qquad
a_n \;\underset{k_{bn}}{\overset{k_{fn}}{\rightleftharpoons}}\; a_n'
\]

For the stochastic description, the probability $p(a,t)$ of finding the system in a state $a$ at time $t$ obeys the master equation:

\[
\frac{dp(a,t)}{dt} = \sum_{f,a'} \left[J_{rf}\,p(a',t) - J_{rb}\,p(a,t)\right] \tag{a}
\]

where $J_{rf}$ and $J_{rb}$ are the forward and backward reaction rates. This equation describes molecular fluctuations down to the nanoscale, as well as the time evolution of the entropy defined by

\[
S(t) = \sum_a p(a,t)\,S^0(a) - k_B \sum_a p(a,t)\ln p(a,t) \tag{b}
\]

The time derivative of this entropy, $dS/dt$, can be split into an entropy flux and an entropy production, and the H-theorem states that this entropy production is positive for nonequilibrium systems. At the stationary state, however, the probabilities become time-independent, $dp(a)/dt = 0$, and the entropy production becomes

\[
\frac{d_i S}{dt} = \frac{1}{2}\sum_{f,a,a'} J_r(a,a')\,A(a,a') \geq 0 \tag{c}
\]

where $J_r$ is the mesoscopic current and $A$ the mesoscopic affinity:

\[
J_r(a,a') = p_s(a)\,J_{rf} - p_s(a')\,J_{rb}
\]


\[
A(a,a') = \ln\frac{p_s(a)\,J_{rf}}{p_s(a')\,J_{rb}}
\]

The entropy production vanishes when the detailed balance

\[
p_{eq}(a)\,J_{rf} = p_{eq}(a')\,J_{rb}
\]

holds and thermodynamic equilibrium is reached.
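The stationary entropy production of Eqn (c) is easy to evaluate numerically. The following minimal sketch (not from the source; the three-state cycle and all rate values are assumed for illustration) solves the master equation for the stationary distribution and sums current times affinity over the transitions; when the rates satisfy detailed balance, the production vanishes:

```python
import numpy as np

def stationary_entropy_production(kf, kb):
    """d_iS/dt (in units of k_B) for a unicyclic 3-state Markov process."""
    n = len(kf)
    W = np.zeros((n, n))                       # W[j, i]: rate i -> j
    for i in range(n):
        W[(i + 1) % n, i] = kf[i]
        W[i, (i + 1) % n] = kb[i]
    W -= np.diag(W.sum(axis=0))                # master-equation generator
    # stationary state: kernel of W with the normalization sum(p) = 1
    A = np.vstack([W, np.ones(n)])
    p = np.linalg.lstsq(A, np.r_[np.zeros(n), 1.0], rcond=None)[0]
    sigma = 0.0
    for i in range(n):
        j = (i + 1) % n
        Jr = p[i] * kf[i] - p[j] * kb[i]              # mesoscopic current
        Aff = np.log(p[i] * kf[i] / (p[j] * kb[i]))   # mesoscopic affinity
        sigma += Jr * Aff
    return sigma

print(stationary_entropy_production(kf=[2.0, 3.0, 1.5], kb=[1.0, 0.5, 2.0]))
print(stationary_entropy_production(kf=[2.0, 3.0, 1.5], kb=[2.0, 3.0, 1.5]))
```

The first call (a driven cycle) returns a positive value; the second (rates obeying detailed balance) returns zero to numerical precision.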

Example 15.11 Discrete state model
Discuss the discrete-state model of a rotary motor.
Solution: The stator of the rotary F1 motor is composed of six proteins. Three of them catalyze the hydrolysis of ATP, which drives the rotation of a shaft. The shaft of this F1 complex is attached to a proton turbine called Fo, which is located in the inner membrane of mitochondria. The whole FoF1-ATPase synthesizes ATP using the proton flow across the inner membrane. The F1 protein complex can also function in reverse and serve as a motor performing mechanical work. These motors are modeled as stochastic systems with random jumps between chemical states. If the rotation follows discrete steps and substeps, the shaft moves between well-defined orientations corresponding to the chemical states of the motor, leading to a stochastic system based on discrete states. The key quantities are then the transition rates of the random jumps between the discrete states; these transition rates obey the mass-action law of chemical kinetics. A molecular motor functions on a cycle of transformations between different mechanical and chemical states corresponding to different conformations of the protein complex. For rotary motors, these states form a cycle of periodicity $L$ over a full revolution of 360°; for linear motors, the states undergo reinitialization after each step. The transitions between the states $a$ are caused by the chemical reactions of binding a substrate S and releasing the product P:

\[
\mathrm{S} + a \;\underset{k_{b1}}{\overset{k_{f1}}{\rightleftharpoons}}\; a_1 \;\underset{k_{b2}}{\overset{k_{f2}}{\rightleftharpoons}}\; a_2 + \mathrm{P} \tag{a}
\]

where $k_{if}$ and $k_{ib}$ denote the forward and backward reaction rate constants, respectively. The backward reactions allow the system to reach a state of thermodynamic equilibrium if the nonequilibrium constraints are relaxed. For the F1 rotary motor, the overall reaction is the hydrolysis of ATP: the substrate is ATP and the products are ADP and Pi. For transmembrane motors such as Fo or the bacterial flagellar motors, the substrate is H⁺ on one side of the membrane and the product is H⁺ on the other side. The master equation describes the probability of finding the motor in state $a$:

\[
\frac{dp(a,t)}{dt} = J_{rf2}\,p(a-1,t) + J_{rb1}\,p(a+1,t) - \left(J_{rf1} + J_{rb2}\right)p(a,t) \qquad (a \text{ odd}) \tag{b}
\]
\[
\frac{dp(a,t)}{dt} = J_{rf1}\,p(a-1,t) + J_{rb2}\,p(a+1,t) - \left(J_{rb1} + J_{rf2}\right)p(a,t) \qquad (a \text{ even}) \tag{c}
\]

The forward and backward transition rates are

\[
J_{rf1} = k_{f1}[\mathrm{S}], \qquad J_{rb1} = k_{b1}, \qquad J_{rf2} = k_{f2}, \qquad J_{rb2} = k_{b2}[\mathrm{P}] \tag{d}
\]


and the stationary probability distributions become

\[
p_s(a) = \frac{J_{rb1} + J_{rf2}}{L\left(J_{rf1} + J_{rf2} + J_{rb1} + J_{rb2}\right)} \qquad (a \text{ odd}) \tag{e}
\]
\[
p_s(a) = \frac{J_{rf1} + J_{rb2}}{L\left(J_{rf1} + J_{rf2} + J_{rb1} + J_{rb2}\right)} \qquad (a \text{ even}) \tag{f}
\]

The steady-state current (flow) is

\[
J_r = \frac{J_{rf1}J_{rf2} - J_{rb1}J_{rb2}}{L\left(J_{rf1} + J_{rf2} + J_{rb1} + J_{rb2}\right)} \tag{g}
\]

Along the cycle, the macroscopic affinity is

\[
A_c = L \ln\frac{k_{f1}k_{f2}[\mathrm{S}]}{k_{b1}k_{b2}[\mathrm{P}]}
\]

The mean entropy production is the product of the mean current with the macroscopic affinity:

\[
\frac{d_i S}{dt} = \frac{J_r A_c}{T}
\]

At equilibrium, detailed balance is satisfied:

\[
\frac{[\mathrm{S}]_{eq}}{[\mathrm{P}]_{eq}} = \frac{k_{b1}k_{b2}}{k_{f1}k_{f2}}
\]
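The steady-state current (g) and the cycle affinity can be evaluated in a few lines. The sketch below is ours (all rate constants and concentrations are assumed for illustration, not taken from the source):

```python
import numpy as np

kf1, kb1, kf2, kb2 = 1.0e7, 1.0e3, 1.0e3, 1.0e6   # assumed rate constants
S, P = 1.0e-3, 1.0e-4                             # substrate/product, M
L = 2                                             # states per period

Jrf1, Jrb1, Jrf2, Jrb2 = kf1 * S, kb1, kf2, kb2 * P
Jr = (Jrf1 * Jrf2 - Jrb1 * Jrb2) / (L * (Jrf1 + Jrf2 + Jrb1 + Jrb2))
Ac = L * np.log(kf1 * kf2 * S / (kb1 * kb2 * P))  # macroscopic affinity

print(f"steady-state current  J_r = {Jr:.3e} cycles/s")
print(f"cycle affinity        A_c = {Ac:.2f} (k_B units)")
print(f"entropy production    J_r*A_c = {Jr * Ac:.3e} k_B/s")
```

The current and the affinity carry the same sign, so the computed entropy production is nonnegative, and both vanish together at the detailed-balance condition above.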

Example 15.12 Application to the FoF1-ATPase molecular motor
Discuss the fluctuation theorem for rotary motor action.
Solution: Adenosine triphosphate (ATP) is synthesized by rotational catalysis in the F1 domain of the mitochondrial FoF1-ATPase. The Fo domain consists of one subunit a, two subunits b, and a ring of 9-15 subunits c, depending on the species. The ring of c subunits is connected to the F1 domain via the subunit ε, then γ, and the two subunits b and δ. The water-soluble F1 domain has the subunit composition α₃β₃γδε. Catalytic nucleotide-binding sites are formed by each of the three β subunits. The chirality (handedness) of the molecular complex is essential for its unidirectional rotation (Tsumuraya et al., 2009). In a condensed phase at equilibrium, the rotation of the motor has equal probabilities of forward and backward motion, based on the principle of detailed balance. Therefore, unidirectional motion results only when the motor is in a nonequilibrium state because of some chemical or electrochemical force; hence the motion of the motor is a dissipative process taking place at the nanoscale and affected by thermal fluctuations. The cycle of the motor corresponds to a full revolution (360°) with s = 6 substeps and to the hydrolysis of three ATP molecules:

\[
\mathrm{ATP} + M_i \;\underset{k_{b1}}{\overset{k_{f1}}{\rightleftharpoons}}\; M_{i+1} \;\underset{k_{b2}}{\overset{k_{f2}}{\rightleftharpoons}}\; M_{i+2} + \mathrm{ADP} + \mathrm{P_i}, \qquad i = 1, 3, 5, \quad M_7 = M_1
\]

where $M_i$ denotes the six successive states of the hydrolytic motor.




The chemical affinity generates fluctuating flows, which can be the rate of a chemical reaction, the velocity of a linear molecular motor, or the rotation rate of a rotary motor. According to the fluctuation theorem, the probability of s backward substeps is related to that of s forward substeps by

\[
P(-s) = P(s)\exp\!\left[-\frac{sA}{6 k_B T}\right]
\]

where the affinity $A$ is

\[
A = 3\Delta G^o + 3 k_B T \ln\frac{[\mathrm{ATP}]}{[\mathrm{ADP}][\mathrm{P_i}]}
\]

with the standard free enthalpy of hydrolysis $\Delta G^o = \Delta G^o_{ATP} - \Delta G^o_{ADP} - \Delta G^o_{P_i} \simeq 50\ \mathrm{pN\,nm}$ at pH = 7 and T = 23 °C. Equilibrium concentrations obey

\[
\left(\frac{[\mathrm{ATP}]}{[\mathrm{ADP}][\mathrm{P_i}]}\right)_{eq} = \frac{k_{b1}k_{b2}}{k_{f1}k_{f2}} = e^{-\Delta G^o/k_B T} = 4.89 \times 10^{-6}\ \mathrm{M^{-1}}
\]

Under physiological conditions, the concentrations are about [ATP] ≈ 10⁻³ M (compared with [ATP]eq ≈ 4.89 × 10⁻¹³ M), [ADP] ≈ 10⁻⁴ M, and [Pi] ≈ 10⁻³ M; hence the motor runs in a highly nonlinear regime, that is, far from equilibrium, with an affinity A ≳ 40 kBT. In this regime, the fluctuation theorem shows that backward steps are rare, and unidirectional motion can overwhelm erratic Brownian motion. During unidirectional motion, the motor undergoes a cycle of intramolecular transformations in which its three-dimensional structure changes with time, leading to a temporal ordering as the system is driven far from equilibrium. Future applications of the fluctuation theorem to molecular machines may include single-molecule pulling experiments on RNA, DNA, proteins, and other polymers to determine their free-energy landscapes. The fluctuation theorem is satisfied both near and far from equilibrium, and it shows that the ratio of the probability of a forward rotation of the shaft to the probability of a backward rotation determines the thermodynamic force (the affinity), which is the key information for the nonequilibrium thermodynamics of molecular motors.
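The quoted numbers can be checked directly. The following sketch (ours; only the values quoted above are used) computes the affinity per revolution and the forward/backward substep probability ratio predicted by the fluctuation theorem:

```python
import numpy as np

kBT_pNnm = 4.1                     # k_B T at ~296 K, in pN nm
dG0 = 50.0                         # standard hydrolysis free enthalpy, pN nm
ATP, ADP, Pi = 1e-3, 1e-4, 1e-3    # molar concentrations

# A = 3*dG0 + 3*kBT*ln([ATP]/([ADP][Pi])), expressed in units of k_B T
A = 3.0 * dG0 / kBT_pNnm + 3.0 * np.log(ATP / (ADP * Pi))
print(f"affinity A = {A:.1f} k_B T per revolution")

# Fluctuation theorem: P(-s)/P(s) = exp(-s A / (6 k_B T))
for s in (1, 6):
    print(f"P(-{s})/P({s}) = {np.exp(-s * A / 6.0):.2e}")
```

The result, roughly 60 kBT per revolution, is consistent with the A ≳ 40 kBT quoted above, and the backward-substep probability is suppressed by several orders of magnitude.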

Example 15.13 Fluctuation in the molecular motor kinesin
Discuss the fluctuation theorem for linear motor action.
Solution: Kinesin is a large protein that can attach to a load on one end and has two heads on the other end. It performs an asymmetric hand-over-hand walk along a microtubule, dragging the load against an external force F and the viscous drag from the environment. Each step in this walk corresponds to a cycle in which kinesin converts the chemical energy released by the hydrolysis of one ATP molecule into useful work. The energy released by the hydrolysis of one ATP molecule is around 25 kBT, where kB is the Boltzmann constant and T = 300 K is the bath (environment) temperature. When a head of kinesin is free, it has many more available states than when it is docked, and so it also has a larger entropy. Therefore, docking must be accompanied by an entropy decrease, leading to spatial order and a heat loss q to the environment. The distance between the attachment sites on the microtubule is Δl = 8 nm. Kinesin can exert a constant force although the free head is subject to the thermal Brownian motion generated by the environment. The free head can dock at the required site, or it may be dragged back to its initial position. The ratio of the probability of a successful forward step to that of a backward step is (Bier, 2008; Calzetta, 2009)

\[
\frac{p_f}{p_r} = \exp\!\left[\frac{\Delta l}{2 k_B T}\left(F_{st} - F\right)\right] \tag{a}
\]


where $F_{st} \approx 7$ pN is the stalling force and F is the external force. Eqn (a) shows that the maximum work kinesin can do against the external force is $F_{st}\Delta l \approx 13.3\,k_BT$, which is close to half of the input energy of 25 kBT. To obtain the free-energy change associated with one step from observable data, the Crooks fluctuation theorem can be used (Calzetta, 2009). Assume that states 1 and 2 are the macroscopic initial and final states, respectively, of the kinesin cycle, and that the external parameter λ measures the progress of the molecule from one pair of docking sites to the next. A backward step implies that the forward work ($W = F\Delta l$) is reversed. For the free energy G, the Clausius inequality implies $W \geq \Delta G$. Initially the system is in state 1. Let $p_f$ be the probability that the system ends up in state 2, giving out work W, and let $p_b$ be the probability that the system, now starting from state 2, ends up in state 1, giving out work $-W$ when the evolution of λ is reversed. The Crooks fluctuation theorem states that

\[
\frac{p_f}{p_b} = \exp\!\left[\frac{1}{k_B T}\left(\Delta G + W\right)\right] \tag{b}
\]

This equation, together with the probability ratio given in Eqn (a), implies that

\[
\Delta G = -\frac{\Delta l}{2}\left(F_{st} + F\right) \tag{c}
\]

where $-\Delta G$ is the maximum work kinesin performs at constant temperature. Ideally, all the energy available to kinesin at the start of the cycle, about $2F_{st}\Delta l = 26.6\,k_BT$, is either dissipated or goes into the reversible work (ΔG). For isothermal docking we have

\[
S_{free} - S_{dock} = \frac{q_{dock}}{T}
\]

where $S_{free}$ and $S_{dock}$ are the entropies of the free-head and docked states, respectively. Part of the available work is left to be dissipated as heat, for example by opposing the viscous drag, or as excess kinetic energy to be absorbed by the docking site. Besides that, the cycle may fail, with kinesin stepping backward rather than forward. Therefore, the actual average work is $\langle W\rangle = F\langle\Delta l\rangle$, where $\langle\Delta l\rangle$ is the average displacement given by

\[
\langle \Delta l \rangle = \Delta l \tanh\!\left[\frac{\Delta l}{4 k_B T}\left(F_{st} - F\right)\right] \tag{d}
\]

This analysis illustrates the estimation of the free-energy change of the nonequilibrium dynamics of kinesin by using the Crooks fluctuation theorem.
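The step statistics above are straightforward to tabulate. The sketch below (ours; only the quoted values Δl = 8 nm, Fst ≈ 7 pN, and T = 300 K are used) evaluates Eqns (a), (c), and (d) over a range of loads:

```python
import numpy as np

kBT = 4.14          # pN nm at T = 300 K
dl, Fst = 8.0, 7.0  # step size (nm) and stalling force (pN)

for F in (0.0, 2.0, 5.0, 7.0):                          # external load, pN
    ratio = np.exp(dl / (2 * kBT) * (Fst - F))          # Eqn (a): p_f / p_r
    dG = -dl / 2 * (Fst + F) / kBT                      # Eqn (c), in k_B T
    mean_dl = dl * np.tanh(dl / (4 * kBT) * (Fst - F))  # Eqn (d)
    print(f"F = {F:3.1f} pN: p_f/p_r = {ratio:8.2f}, "
          f"dG = {dG:6.2f} k_B T, <dl> = {mean_dl:5.2f} nm")
```

At F = Fst the step ratio drops to 1 and the mean displacement to zero, reproducing stalling; at zero load, forward steps outnumber backward ones by nearly three orders of magnitude.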

Example 15.14 Discrete state model for molecular motors
Discuss the discrete-state model of a rotary motor.
Solution: The F1 protein complex is composed of three large α and three large β subunits arranged around a smaller rotating γ subunit, forming the complex α₃β₃γ. The three β subunits are the reactive sites for the hydrolysis of ATP, corresponding to a rotation of 360° in total. Binding of an ATP induces a rotation of 90°, followed by the release of ADP and Pi with a rotation of about 30°, hence totaling 120° at each β subunit. A representative reaction system is

\[
\mathrm{ATP} + a \;\underset{k_{b1}}{\overset{k_{f1}}{\rightleftharpoons}}\; a_1 \;\underset{k_{b2}}{\overset{k_{f2}}{\rightleftharpoons}}\; a_2 + \mathrm{ADP} + \mathrm{P_i}
\]


The forward and backward transition rates are

\[
J_{rf1} = k_{f1}[\mathrm{ATP}], \qquad J_{rb1} = k_{b1}, \qquad J_{rf2} = k_{f2}, \qquad J_{rb2} = k_{b2}[\mathrm{ADP}][\mathrm{P_i}]
\]

The standard free-energy change of hydrolysis is

\[
\Delta G^o = \Delta G^o_{ATP} - \Delta G^o_{ADP} - \Delta G^o_{P_i}
\]

The equilibrium concentrations of the reactants and products satisfy

\[
\frac{[\mathrm{ATP}]_{eq}}{[\mathrm{ADP}]_{eq}[\mathrm{P_i}]_{eq}} = \frac{k_{b1}k_{b2}}{k_{f1}k_{f2}} = \exp\!\left(-\Delta G^o/k_B T\right)
\]

Under physiological conditions, the concentrations are about [ATP] ~ 10⁻³ M, [ADP] ~ 10⁻⁴ M, and [Pi] ~ 10⁻³ M, while the equilibrium concentration [ATP]eq ~ 4.89 × 10⁻¹³ M, which verifies that the system is typically far from equilibrium. Without the products, the rotation velocity is observed to follow Michaelis-Menten kinetics:

\[
J = \frac{J_{r,max}[\mathrm{ATP}]}{[\mathrm{ATP}] + K_M}
\]

where $J_{r,max} = k_{f2}/3$ and $K_M = (k_{f2} + k_{b1})/k_{f1}$. The affinity of the cycle is

\[
A = 3 \ln\frac{k_{f1}k_{f2}[\mathrm{ATP}]}{k_{b1}k_{b2}[\mathrm{ADP}][\mathrm{P_i}]}
\]

The affinity is the thermodynamic force and vanishes at equilibrium. The fluctuation theorem states that the ratio of the probability of a forward rotation of the shaft to the probability of a backward rotation determines the affinity of the process (Andrieux and Gaspard, 2006).
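The Michaelis-Menten saturation and the affinity of the cycle can be illustrated numerically. The sketch below is ours (all rate constants and concentrations are assumed for illustration, not taken from the source):

```python
import numpy as np

kf1, kb1, kf2, kb2 = 2.0e7, 1.0e2, 3.0e2, 1.0e4   # assumed rate constants
Jmax, KM = kf2 / 3.0, (kf2 + kb1) / kf1            # per the expressions above

for ATP in (1e-7, 1e-6, 1e-5, 1e-3):
    J = Jmax * ATP / (ATP + KM)
    print(f"[ATP] = {ATP:.0e} M -> rotation rate J = {J:6.2f} rev/s")

ADP, Pi = 1e-4, 1e-3
A = 3.0 * np.log(kf1 * kf2 * 1e-3 / (kb1 * kb2 * ADP * Pi))
print(f"affinity at [ATP] = 1 mM: A = {A:.1f} (k_B units); A = 0 at equilibrium")
```

The rotation rate saturates at J_max once [ATP] exceeds K_M, while the affinity remains large and positive, confirming operation far from equilibrium.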

15.7 Statistical rate theory

Onsager's reciprocal rules are valid for systems that are sufficiently close to global equilibrium, where the flows and forces are independent and are identified from the rate of entropy production or the dissipation function. It is crucial to determine under what conditions the assumption of linearity holds. Statistical rate theory may help in verifying Onsager's reciprocal rules and in understanding the linearity criteria. Statistical rate theory is not based on the assumption of near equilibrium, and it leads to rate equations consisting of experimental and thermodynamic variables that may be measured or controlled. Statistical rate theory is based on local thermodynamic equilibrium. It is derived from the quantum-mechanical probability that a single molecule will be transferred between phases or across an interface, or that a forward chemical reaction will occur in a single reaction step. Therefore, it should be modified to apply to systems in which simultaneous multiple molecular events are significant. For a transport process or a chemical reaction involving single molecular events at some timescale, the statistical rate theory equation for the net rate of flow J is

\[
J = J_{eq}\left[\exp\!\left(\frac{\Delta S_f}{k_B}\right) - \exp\!\left(\frac{\Delta S_b}{k_B}\right)\right] \tag{15.112}
\]

where $J_{eq}$ is the equilibrium exchange rate of molecules between the phases, and $\Delta S_f$ and $\Delta S_b$ are the entropy changes in the isolated system as a result of a single molecule being transferred forward and backward,


respectively, and $k_B$ is the Boltzmann constant. In statistical rate theory, the microscopic transition rates between any two quantum-mechanical states of molecular configurations that differ by a single molecule having been transferred between phases (or having undergone a chemical reaction) are equal. This means that the average of these rates does not change, and $J_{eq}$ is a constant throughout the process, equal to the equilibrium exchange rate. When the entropy changes are large, Eqn (15.112) cannot be linearized; chemical reactions and interfacial transport between two phases, for example, yield large entropy changes. Statistical rate theory leads to well-defined coefficients that can be measured or controlled, and hence the criteria for linearization may be explicitly expressed.
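The breakdown of linearization is easy to quantify. The following minimal sketch (ours, with an arbitrary unit exchange rate) compares the exact rate of Eqn (15.112), with $\Delta S_b = -\Delta S_f$, against its first-order expansion:

```python
import numpy as np

def srt_rate(J_eq, dSf_over_kB, dSb_over_kB):
    """Net rate from statistical rate theory, Eqn (15.112)."""
    return J_eq * (np.exp(dSf_over_kB) - np.exp(dSb_over_kB))

J_eq = 1.0
for x in (0.01, 0.1, 1.0, 5.0):          # entropy change per molecule, in k_B
    exact = srt_rate(J_eq, x, -x)
    linear = J_eq * 2 * x                # first-order (linear) expansion
    print(f"dS/kB = {x:4.2f}: exact = {exact:9.3f}, linear = {linear:8.3f}, "
          f"error = {abs(exact - linear) / exact:6.1%}")
```

For entropy changes of order 0.01 kB per molecule the linear form is excellent, while for changes of several kB (as in chemical reactions) the linearization error approaches 100%, in line with the statement above.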

Example 15.15 Transport in biological cells: osmotic- and pressure-driven mass transport across a biological cell membrane
Consider the compartmental system shown in Figure 15.2. Here a biological cell containing a dilute solute-water solution is immersed in a solution of the same solute and water. The cell is placed in a thermal reservoir with temperature $T_R$. The cell exchanges the solute and water across its wall and therefore undergoes osmotic shrinkage or swelling. We assume that both the water and the solute are incompressible and that the saturation concentration of the solute in water does not depend on pressure. The cell is in mechanical equilibrium, although the water concentration or pressure inside and outside the cell is different. The pressure difference between the inside and outside of the cell causes, and is balanced by, a tension in the cell membrane. The cell and its surroundings are at constant temperature (Elliott et al., 2000).
The derivation of the transport equations starts with the formulation of the entropy production rate. A differential change of the entropy of the isolated system, $dS_{sys}$, is

\[
dS_{sys} = dS_o + dS_i + dS_m + dS_R \tag{a}
\]

where $S_o$, $S_i$, $S_m$, and $S_R$ are the entropies of the fluid outside the cell, the fluid inside the cell, the cell membrane, and the reservoir, respectively. The differential entropy of the fluid outside the cell is

\[
dS_o = \frac{1}{T}dU_o + \frac{P_o}{T}dV_o - \frac{\mu_{w,o}}{T}dN_{w,o} - \frac{\mu_{s,o}}{T}dN_{s,o} \tag{b}
\]

where $U_o$, $P_o$, and $V_o$ are the internal energy, the pressure, and the volume, respectively, of the fluid outside the cell, $\mu_{w,o}$ and $N_{w,o}$ are the chemical potential of the water and the number of moles of water

FIGURE 15.2 Schematic of mass transport in a biological cell in a thermal reservoir at temperature T_R; w and s denote the water and solute exchanged between the inside (i) and the outside (o) of the cell.


outside the cell, and $\mu_{s,o}$ and $N_{s,o}$ are the chemical potential of the solute and the number of moles of solute outside the cell. For the fluid inside the cell, we have

\[
dS_i = \frac{1}{T}dU_i + \frac{P_i}{T}dV_i - \frac{\mu_{w,i}}{T}dN_{w,i} - \frac{\mu_{s,i}}{T}dN_{s,i} \tag{c}
\]

The subscript i indicates the properties of the fluid inside the cell. For the membrane, we have

\[
dS_m = \frac{1}{T}dU_m - \frac{\gamma_m}{T}dA_m - \sum_k \frac{\mu_{m,k}}{T}dN_{m,k} \tag{d}
\]

where $U_m$, $\gamma_m$, and $A_m$ are the internal energy, the tension, and the surface area, respectively, of the cell membrane. Here, the cell membrane is treated as a two-dimensional phase, $\mu_{m,k}$ is the chemical potential of the kth molecular species in the membrane, and $N_{m,k}$ is the number of molecules of the kth species in the membrane. For a quasistatic heat transfer in the reservoir, we have

\[
dS_R = -\frac{1}{T}dU_o - \frac{1}{T}dU_i - \frac{1}{T}dU_m \tag{e}
\]

After substituting Eqns (b) to (e) into Eqn (a) and applying the constraints

\[
dV_o = -dV_i, \qquad dN_{w,o} = -dN_{w,i}, \qquad dN_{s,o} = -dN_{s,i}, \qquad dN_{k,m} = 0
\]

we have

\[
dS_{sys} = -\frac{\gamma_m}{T}dA_m + \frac{(P_i - P_o)}{T}dV_i + \frac{(\mu_{w,o} - \mu_{w,i})}{T}dN_{w,i} + \frac{(\mu_{s,o} - \mu_{s,i})}{T}dN_{s,i} \tag{f}
\]

By assuming that mechanical equilibrium holds for the membrane and that the cell is spherical with radius r, we have

\[
P_i - P_o = \frac{2\gamma_m}{r}
\]

Substituting this into Eqn (f), we obtain the rate of entropy production:

\[
\frac{dS_{sys}}{dt} = \frac{(\mu_{w,o} - \mu_{w,i})}{T}\dot N_{w,i} + \frac{(\mu_{s,o} - \mu_{s,i})}{T}\dot N_{s,i} \tag{g}
\]

where $\dot N_{w,i}$ and $\dot N_{s,i}$ are the rates of change of the numbers of water and solute molecules inside the cell, respectively. The forces in this equation are related by the Gibbs-Duhem relation and are not independent. For a dilute solution, the difference in the chemical potentials of the incompressible solvent across the membrane is

\[
\mu_{w,o} - \mu_{w,i} = \bar V_w\left(P_o - P_i\right) - k_B T\left(x_{s,o} - x_{s,i}\right)
\]

where $\bar V_w$ is the partial molecular volume of water, $k_B$ is the Boltzmann constant, and $x_s$ is the mole fraction of solute, approximately given by $x_s = c_s/\bar c_w$, where $c_s$ is the concentration of the solute and $\bar c_w$ is the concentration of pure water. For an incompressible solute with a pressure-independent saturation concentration, the difference in the chemical potentials of the solute across the membrane is

\[
\mu_{s,o} - \mu_{s,i} = \bar V_s\left(P_o - P_i\right) + k_B T\left(\ln x_{s,o} - \ln x_{s,i}\right)
\]


where $\bar V_s$ is the partial molecular volume of the solute, and

\[
\ln x_{s,o} - \ln x_{s,i} = \frac{c_{s,o} - c_{s,i}}{\bar c_s}, \qquad \text{where} \quad \bar c_s = \frac{c_{s,o} - c_{s,i}}{\ln x_{s,o} - \ln x_{s,i}}
\]

Substituting the differences in the chemical potentials into Eqn (g) and rearranging yields

\[
\frac{dS_{sys}}{dt} = \frac{1}{T}\left(\bar V_s \dot N_{s,i} + \bar V_w \dot N_{w,i}\right)\left(P_o - P_i\right) + \frac{1}{T}\left(\frac{\dot N_{s,i}}{\bar c_s} - \frac{\dot N_{w,i}}{\bar c_w}\right)k_B T\left(c_{s,o} - c_{s,i}\right)
\]

We can identify the flows and forces from this equation and establish the following phenomenological equations:

\[
\bar V_s \dot N_{s,i} + \bar V_w \dot N_{w,i} = L_{11}\left(P_o - P_i\right) + L_{12}\,k_B T\left(c_{s,o} - c_{s,i}\right) \tag{h}
\]
\[
\frac{\dot N_{s,i}}{\bar c_s} - \frac{\dot N_{w,i}}{\bar c_w} = L_{21}\left(P_o - P_i\right) + L_{22}\,k_B T\left(c_{s,o} - c_{s,i}\right) \tag{i}
\]

On the other hand, from statistical rate theory, we have

\[
\dot N_{s,i} = J_{s,eq}\left[\exp\!\left(\frac{\Delta S_f}{k_B}\right) - \exp\!\left(\frac{\Delta S_b}{k_B}\right)\right] \tag{k}
\]

where $J_{s,eq}$ is the equilibrium exchange rate of solute molecules across the membrane. The forward entropy change is

\[
\Delta S_f = \Delta S_o + \Delta S_i + \Delta S_m + \Delta S_R
\]

Each phase is a simple system, and we may write the appropriate Euler relations:

\[
\Delta S_o = \frac{1}{T}\Delta U_o + \frac{P_o}{T}\Delta V_o - \frac{\mu_{w,o}}{T}\Delta N_{w,o} - \frac{\mu_{s,o}}{T}\Delta N_{s,o}
\]
\[
\Delta S_i = \frac{1}{T}\Delta U_i + \frac{P_i}{T}\Delta V_i - \frac{\mu_{w,i}}{T}\Delta N_{w,i} - \frac{\mu_{s,i}}{T}\Delta N_{s,i}
\]
\[
\Delta S_m = \frac{1}{T}\Delta U_m - \frac{\gamma_m}{T}\Delta A_m - \sum_k \frac{\mu_{m,k}}{T}\Delta N_{m,k}
\]
\[
\Delta S_R = -\frac{1}{T}\Delta U_o - \frac{1}{T}\Delta U_i - \frac{1}{T}\Delta U_m
\]

We formulate $\Delta S_b$ in a similar manner. Using the equations above and the constraints

\[
\Delta V_o = -\Delta V_i, \qquad \Delta N_{w,o} = \Delta N_{w,i} = 0, \qquad \Delta N_{s,o} = -1, \qquad \Delta N_{s,i} = 1, \qquad \Delta N_{k,m} = 0
\]

in Eqn (k), we obtain

\[
\dot N_{s,i} = J_{s,eq}\left[\exp\!\left(\frac{\mu_{s,o} - \mu_{s,i}}{k_B T}\right) - \exp\!\left(\frac{\mu_{s,i} - \mu_{s,o}}{k_B T}\right)\right]
\]


\[
\dot N_{w,i} = J_{w,eq}\left[\exp\!\left(\frac{\mu_{w,o} - \mu_{w,i}}{k_B T}\right) - \exp\!\left(\frac{\mu_{w,i} - \mu_{w,o}}{k_B T}\right)\right]
\]

These equations are the formulations of nonequilibrium thermodynamics and describe the osmotic transport of solute and water across the membrane. They can be linearized for small chemical potential differences, and we obtain

\[
\dot N_{s,i} = \frac{2 J_{s,eq}}{k_B T}\left(\mu_{s,o} - \mu_{s,i}\right), \qquad \dot N_{w,i} = \frac{2 J_{w,eq}}{k_B T}\left(\mu_{w,o} - \mu_{w,i}\right)
\]

Combining these with Eqns (h) and (i), we have

\[
\bar V_s \dot N_{s,i} + \bar V_w \dot N_{w,i} = \frac{2}{k_B T}\left(J_{s,eq}\bar V_s^2 + J_{w,eq}\bar V_w^2\right)\left(P_o - P_i\right) + \frac{2}{k_B T}\left(\frac{J_{s,eq}\bar V_s}{\bar c_s} - \frac{J_{w,eq}\bar V_w}{\bar c_w}\right)k_B T\left(c_{s,o} - c_{s,i}\right)
\]
\[
\frac{\dot N_{s,i}}{\bar c_s} - \frac{\dot N_{w,i}}{\bar c_w} = \frac{2}{k_B T}\left(\frac{J_{s,eq}\bar V_s}{\bar c_s} - \frac{J_{w,eq}\bar V_w}{\bar c_w}\right)\left(P_o - P_i\right) + \frac{2}{k_B T}\left(\frac{J_{s,eq}}{\bar c_s^2} + \frac{J_{w,eq}}{\bar c_w^2}\right)k_B T\left(c_{s,o} - c_{s,i}\right)
\]

Comparing these statistical rate theory equations with Eqns (h) and (i), we obtain the following phenomenological coefficients:

\[
L_{11} = \frac{2}{k_B T}\left(J_{s,eq}\bar V_s^2 + J_{w,eq}\bar V_w^2\right)
\]
\[
L_{12} = L_{21} = \frac{2}{k_B T}\left(\frac{J_{s,eq}\bar V_s}{\bar c_s} - \frac{J_{w,eq}\bar V_w}{\bar c_w}\right)
\]
\[
L_{22} = \frac{2}{k_B T}\left(\frac{J_{s,eq}}{\bar c_s^2} + \frac{J_{w,eq}}{\bar c_w^2}\right)
\]

These expressions show that Onsager's reciprocal rules hold. $J_{s,eq}$ and $J_{w,eq}$ have a microscopic definition, represented by perturbation matrix elements, and a macroscopic definition, represented by the equilibrium exchange rate. As long as the criteria of linearization are satisfied, statistical rate theory may also be used to describe systems with temperature differences at an interface, in addition to the driving forces of pressure and concentration differences.
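The reciprocity and positive-definiteness of the coefficient matrix derived above can be checked numerically. The sketch below is ours; the exchange rates, volumes, and concentrations are assumed order-of-magnitude values, not data from the source:

```python
kB, T = 1.380649e-23, 300.0
kBT = kB * T
Js, Jw = 1.0e6, 5.0e8              # assumed equilibrium exchange rates, 1/s
Vs, Vw = 2.0e-28, 3.0e-29          # assumed partial molecular volumes, m^3
cs, cw = 6.0e25, 3.3e28            # mean concentrations, molecules/m^3

L11 = 2.0 / kBT * (Js * Vs**2 + Jw * Vw**2)
L12 = 2.0 / kBT * (Js * Vs / cs - Jw * Vw / cw)
L21 = 2.0 / kBT * (Js * Vs / cs - Jw * Vw / cw)
L22 = 2.0 / kBT * (Js / cs**2 + Jw / cw**2)

print(f"L12 = {L12:.6e}")
print(f"L21 = {L21:.6e}  -> Onsager symmetry holds by construction")
print(f"determinant L11*L22 - L12*L21 = {L11 * L22 - L12 * L21:.3e} > 0")
```

The cross-coefficients agree identically, and the determinant works out to $J_{s,eq}J_{w,eq}(\bar V_s/\bar c_w + \bar V_w/\bar c_s)^2 (2/k_BT)^2 \ge 0$, so the dissipation is nonnegative for any choice of forces.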

15.7.1 Diffusion in inhomogeneous and anisotropic media

A macroscopic diffusion model is based on an underlying microscopic dynamics and should reflect the microscopic properties of the diffusion process. A single diffusion equation with a constant diffusion coefficient may not represent inhomogeneous and anisotropic diffusion on macro- and microscales. The diffusion equation follows from the continuity equation:

\[
\frac{\partial P}{\partial t} = -\nabla\cdot\mathbf{J} \tag{15.113}
\]


where P and J are the density (probability or number density) and the diffusion flow of the particles. A definition of the diffusion flow J is (Christensen and Pedersen, 2003)

\[
\mathbf{J} = -\left(P\hat\mu\nabla V + \hat D\nabla P\right) \tag{15.114}
\]

where V is an external potential, $\hat\mu$ is the mobility, and $\hat D$ is the diffusion tensor, given by the Einstein relation:

\[
\hat D = \hat\mu\,k_B T
\]

In Eqn (15.114), the first term represents the drift in the potential force field V and the second the diffusional flow given by Fick's law. Combining Eqn (15.113) with Eqn (15.114), we have

\[
\frac{\partial P}{\partial t} = \nabla\cdot\left(P\hat\mu\nabla V + \hat D\nabla P\right) \tag{15.115}
\]

Since the equation above cannot represent systems with inhomogeneous temperatures, we may consider the following alternative equation:

\[
\frac{\partial P}{\partial t} = \nabla\cdot\left(P\hat\mu\nabla V + \nabla\cdot(\hat D P)\right) = \nabla\cdot\left(P\left[\hat\mu\nabla V + \nabla\cdot\hat D\right] + \hat D\nabla P\right) \tag{15.116}
\]

Equations (15.115) and (15.116) differ by the drift term $\nabla\cdot(P\,\nabla\cdot\hat D)$, which is sometimes called a "spurious" drift term. These diffusion equations have different equilibrium distributions and are two special cases of a more general diffusion equation.

15.7.2 Van Kampen's hopping model for diffusion

The hopping model was originally introduced to describe electron transport in solid materials, but it may also serve as a general model for diffusive motion. In a one-dimensional diffusion equation based on the hopping model, the diffusion medium is modeled by a large number of wells (traps) in which the particles can be temporarily caught. The effective trap density $\rho_t$ is the density times the cross-section of the traps and may change throughout the medium. In solvents, for example, the density represents the capability of the solvent molecules to form a cage around the suspended particle. The rate of escape of particles, $\alpha$, is controlled by the local energy barrier F of the trap and the local temperature T:

\[
\alpha = a\exp\left(-F/k_B T\right)
\]

Here, a sets the global timescale for escape out of the traps; the spatial variation of the escape rate $\alpha$ is incorporated in the potential barrier F. Large values of $\alpha$ signify shallow wells and hence fast diffusion, while large values of $\rho_t$ signify small mean free paths and hence slow diffusion. Inhomogeneities in the medium may cause spatial dependencies of $\alpha$ and $\rho_t$, for example in micelles, or through the interaction of two diffusing molecules. The isotropic diffusion equation based on van Kampen's one-dimensional hopping model may be extended to three dimensions using Cartesian coordinates in flat Euclidean space:

\[
\frac{\partial P}{\partial t} = a\,\nabla\cdot\!\left[P\,\frac{e^{-F/k_B T}}{\rho_t^2}\left(\frac{\nabla\rho_t}{\rho_t} + \frac{\nabla V}{k_B T}\right) + \nabla\!\left(\frac{e^{-F/k_B T}}{\rho_t^2}\,P\right)\right] \tag{15.117}
\]

This equation implies that the isotropic diffusive motion along the coordinate axes is independent. Here, $\nabla V/k_B T$ is the drift due to the external potential force field V, while $\nabla\rho_t/\rho_t$ represents an internal drift caused by a concentration gradient of the traps. The term $P\nabla\!\left(e^{-F/k_B T}/\rho_t^2\right)$ is the "spurious" drift term. Equation (15.117) allows spatial variations of all the parameters T, V, F, and $\rho_t$, including an inhomogeneous temperature, and the diffusion coefficient becomes

\[
D = a\,\frac{e^{-F/k_B T}}{\rho_t^2}
\]


The stationary solution of Eqn (15.117) for systems with a uniform temperature is

\[
P_s = C\,\frac{\rho_t \exp\left(-V/k_B T\right)}{\exp\left(-F/k_B T\right)}
\]

where C is the normalization constant. The stationary distribution depends on the local value of the macroscopic diffusion coefficient D and on the local value of one of the microscopic trap parameters, $\rho_t$ or F. Consider three special cases of Eqn (15.117):

1. $\rho_t \propto \exp\left(-F/k_B T\right)$:

\[
\frac{\partial P}{\partial t} = \nabla\cdot\left(DP\,\frac{\nabla V}{k_B T} + D\nabla P\right)
\]

This is the traditional diffusion model given in Eqn (15.115), with the diffusion coefficient D proportional to $1/\rho_t$. In this case, the so-called "spurious" drift term vanishes because the effects of $\alpha$ and $\rho_t$ cancel each other in the stationary state. The stationary distribution is proportional to the Boltzmann distribution $\exp(-V/k_B T)$ and is independent of D.

2. $\rho_t = \text{constant}$:

\[
\frac{\partial P}{\partial t} = \nabla\cdot\left(DP\,\frac{\nabla V}{k_B T} + \nabla(DP)\right) = \nabla\cdot\left(DP\left[\frac{\nabla V}{k_B T} + \frac{\nabla D}{D}\right] + D\nabla P\right)
\]

which is similar to the relation given in Eqn (15.116). The stationary solution is proportional to $\exp(-V/k_B T)/D$; for example, the particles would experience very slow diffusion in regions of low mobility.

3. $F = \text{constant}$:

\[
\frac{\partial P}{\partial t} = \nabla\cdot\left(DP\left[\frac{\nabla V}{k_B T} + \frac{1}{2}\frac{\nabla D}{D}\right] + D\nabla P\right)
\]

in which the internal drift does not vanish; this differs from both Eqns (15.115) and (15.116). The stationary solution is

\[
P_s = \frac{\exp\left(-V/k_B T\right)}{\sqrt{D}}
\]

For isotropic systems, the diffusion equations for these three cases are mathematically equivalent, since they can be transformed into each other by introducing effective potentials. Equation (15.115) has been used widely to model diffusion in liquids, but the discussion above shows that it is valid only where $\rho_t \propto \exp(-F/k_B T)$. Equation (15.116) is valid when the concentration of traps is constant, a situation that is more realistic. In all other cases, the diffusion equation is a combination of Eqns (15.115) and (15.116).

15.7.3 Anisotropic diffusion

The general diffusion equation based on the hopping model is

\[
\frac{\partial P}{\partial t} = -\nabla\cdot(\mathbf{v}P) + a\,\nabla\cdot\!\left[P\,\frac{e^{-\hat F/k_B T}}{\rho_t^2}\left(\frac{\nabla\rho_t}{\rho_t} - \frac{\mathbf{F}}{k_B T}\right) + \nabla\cdot\!\left(\frac{e^{-\hat F/k_B T}}{\rho_t^2}\,P\right)\right]
\]

where $\mathbf{F}$ is an external force and $\mathbf{v}$ is the velocity field of the medium. If we assume that the parameter a is isotropic while the trap potential is anisotropic and represented by the tensor $\hat F$, then

\[
\hat D = a\,\frac{\exp\left(-\hat F/k_B T\right)}{\rho_t^2}
\]

The tensor $\hat F$ is required to be symmetric because of its relation to the diffusion tensor. Of course, $\rho_t$ can also be anisotropic. The above equation covers most physical systems and can be used on curved manifolds as well.


The anisotropy introduces several new features: (1) Equations (15.115) and (15.116) cannot, in general, be transformed into each other, as the drift term $\nabla\cdot\hat D$ may not be a gradient field. Equation (15.116) can describe systems where the directions of the principal axes depend on the spatial position. (2) Detailed balance implies that the diffusion flow J vanishes everywhere in the stationary state. However, this is not automatically satisfied for anisotropic systems, and one needs to exercise extra care in the modeling of such systems. Inhomogeneity alone does not affect the detailed balance. (3) The diffusive part of the diffusion flow must be represented by $\mathbf{J} = -\nabla\cdot(\hat D P)$, while the drift is represented by $P\,\nabla\cdot\hat D$. In general, the diffusion equation depends on all the microscopic parameters. The microscopic parameters of van Kampen's model are the local values of the effective trap density $\rho_t$, which is the density times the cross-section, and the work function F. The traditional diffusion relation of Eqn (15.115) is valid only for isotropic diffusion and under the restrictive condition that $\rho_t \propto \exp(-F/k_B T)$; it may be unsatisfactory even in a homogeneous system with nontrivial geometry. Eqn (15.116) is valid when the effective trap concentration is constant, which is more realistic for liquids.

15.8 Mesoscopic nonequilibrium thermodynamics

The linear nonequilibrium thermodynamics theory applies to a coarse-grained description of systems that ignores their molecular nature and assumes that they behave as a continuum medium; the description does not depend on the size of the system. However, in small structures such as clusters or biomolecules, fluctuations may become the dominant factor in the evolution. The functioning of molecular motors, the small engines present in many biological systems, may be formulated by mesoscopic nonequilibrium thermodynamics by taking into account their nonlinear nature and fluctuations. Small systems evolve in time by adopting different nonequilibrium configurations, as in the kinetics of nucleation and growth of small clusters, in noncovalent association between proteins, and in active transport through biological membranes. The small time and length scales of a system usually increase the number of nonequilibrium degrees of freedom, denoted by $\gamma$, which may be, for example, the velocity of a colloidal particle, the size of a macromolecule, or any coordinate or order parameter whose values define the state of the system in a phase space (Qian, 2001; Rubi, 2008). The probability density $p(\gamma,t)$ is the probability of finding the system in the mesoscopic state $\gamma$ at time t. The minimum reversible work (excluding electric, magnetic, surface, etc., contributions) needed to bring the system to a state characterized by a certain value of the degree of freedom $\gamma$ is

\[
\Delta W = \Delta U - T\Delta S + P\Delta V - \mu\Delta N \tag{15.118}
\]

where U is the internal energy, S the entropy, V the volume, N the number of moles, and $\mu$ the chemical potential. In $\gamma$-space, the entropy variation from the Gibbs entropy is

\[
\Delta S = S_{eq} - k_B \int p(\gamma,t)\ln\!\left(\frac{p(\gamma,t)}{p_{eq}(\gamma)}\right)d\gamma \tag{15.119}
\]

where $S_{eq}$ is the entropy at equilibrium and $p_{eq}$ is the equilibrium probability density, $p_{eq} \propto \exp\left[-\Delta W(\gamma)/(k_B T)\right]$. The variations of the entropy and of the equilibrium entropy then become

\[
\delta S = -k_B \int \delta p(\gamma,t)\ln\!\left(\frac{p(\gamma,t)}{p_{eq}(\gamma)}\right)d\gamma, \qquad \delta S_{eq} = -\frac{1}{T}\int \mu_{eq}\,\delta p(\gamma,t)\,d\gamma \tag{15.120}
\]

Comparison of Eqns (15.119) and (15.120) identifies the generalized chemical potential:

\[
\mu(\gamma,t) = k_B T \ln\!\left(\frac{p(\gamma,t)}{p_{eq}(\gamma)}\right) + \mu_{eq} \qquad \text{or} \qquad \mu(\gamma,t) = k_B T \ln p(\gamma,t) + \Delta W \tag{15.121}
\]


The entropy production $\sigma$ is obtained by using the thermodynamic force $X = -\dfrac{1}{T}\dfrac{\partial\mu}{\partial\gamma}$ in the space of the mesoscopic variable $\gamma$ and the generalized flow (flux) J:

\[
\sigma = -\frac{1}{T}\int J\,\frac{\partial\mu}{\partial\gamma}\,d\gamma \tag{15.122}
\]

In terms of the chemical potential expressed through the probability density, the entropy production is

\[
\sigma = -k_B \int J(\gamma,t)\,\frac{\partial}{\partial\gamma}\!\left(\ln\frac{p(\gamma,t)}{p_{eq}(\gamma)}\right)d\gamma \tag{15.123}
\]

Based on the equation above, the linear flow-force equation becomes

\[
J(\gamma,t) = -k_B L(\gamma,p)\,\frac{\partial}{\partial\gamma}\!\left(\ln\frac{p(\gamma,t)}{p_{eq}(\gamma)}\right) \tag{15.124}
\]

where L is the Onsager coefficient, which depends on the mesoscopic coordinate and on the state variable $p(\gamma)$. The flow is used in the continuity equation $\partial p(\gamma,t)/\partial t = -\partial J/\partial\gamma$ to give the diffusion equation:

\[
\frac{\partial p(\gamma,t)}{\partial t} = \frac{\partial}{\partial\gamma}\!\left(D\,p_{eq}(\gamma)\,\frac{\partial}{\partial\gamma}\frac{p(\gamma,t)}{p_{eq}(\gamma)}\right) \tag{15.125}
\]

where D is the diffusion coefficient

\[
D(\gamma) = \frac{k_B L(\gamma,p)}{p}
\]

Using $p_{eq} \propto \exp\left[-\Delta W(\gamma)/(k_B T)\right]$, Eqn (15.125) becomes

\[
\frac{\partial p}{\partial t} = \frac{\partial}{\partial\gamma}\!\left(D\,\frac{\partial p}{\partial\gamma} + \frac{D}{k_B T}\,p\,\frac{\partial \Delta W}{\partial\gamma}\right)
\]

The equation above is the Fokker-Planck equation for the evolution of the probability density in $\gamma$-space. Various forms of the Fokker-Planck equation result from the various expressions for the work done on the system, and they are used in diverse applications, such as reaction-diffusion problems and polymer solutions (Rubi, 2008; Bedeaux et al., 2010; Rubi and Perez-Madrid, 2001). A process may lead to variations in the conformation of macromolecules that can be described by nonequilibrium thermodynamics. The extension of this approach to the mesoscopic level is called mesoscopic nonequilibrium thermodynamics; it has been applied to transport and relaxation phenomena and to polymer solutions (Santamaria-Holek and Rubi, 2003).
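Equation (15.125) makes the relaxation toward $p_{eq}$ explicit, and a few lines of finite differences suffice to see it. The following sketch is ours (the bistable work profile $\Delta W(\gamma) = (\gamma^2-1)^2$ and all numerical parameters are assumed for illustration); it integrates (15.125) in conservative form with no-flux walls, so the distribution relaxes to $p_{eq}$:

```python
import numpy as np

n = 200
g = np.linspace(-2.0, 2.0, n)
dg = g[1] - g[0]
dW = (g**2 - 1.0)**2                   # assumed bistable DW(g), k_B T units
p_eq = np.exp(-dW)
p_eq /= p_eq.sum() * dg                # normalized equilibrium density
D = 1.0

p = np.exp(-(g + 1.0)**2 / 0.05)       # initial packet in the left well
p /= p.sum() * dg
dt = 0.2 * dg**2 / D                   # explicit-scheme stability limit
J = np.zeros(n + 1)                    # flux on cell edges; J = 0 at the walls
for _ in range(200_000):
    J[1:-1] = -D * 0.5 * (p_eq[1:] + p_eq[:-1]) * np.diff(p / p_eq) / dg
    p -= dt * np.diff(J) / dg          # continuity: d_t p = -d_g J
print("L1 distance from p_eq:", np.abs(p - p_eq).sum() * dg)
```

The flux is discretized directly from the form $J = -D\,p_{eq}\,\partial_\gamma(p/p_{eq})$, which conserves probability exactly and vanishes identically when $p = p_{eq}$, mirroring the structure of Eqn (15.125).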

References

Adami, C., 2004. Phys. Life Rev. 1, 3-22.
Andrieux, D., Gaspard, P., 2006. Phys. Rev. E 74, 011906.
Andrieux, D., Gaspard, P., 2007. Phys. Rev. Lett. 98, 150601.
Ao, P., 2005. Comput. Chem. Eng. 29, 2297.
Baker, J.E., 2004. J. Theor. Biol. 228, 467.
Bedeaux, D., Pagonabarraga, I., Ortiz de Zarate, J.M., Sengers, J.V., Kjelstrup, S., 2010. Phys. Chem. Chem. Phys. 12, 12780.
Bhatt, D., Zuckerman, D.M., 2011. J. Chem. Theor. Comput. 7, 2520.
Bauer, M., Abreu, D., Seifert, U., 2012. J. Phys. A: Math. Theor. 45, 162001.


Bier, M., 2008. Eur. Phys. J. B 65, 415.
Bruers, S., 2007. J. Phys. A 40, 7441.
Calzetta, E.A., 2009. Eur. Phys. J. B 68, 601.
Caplan, S.R., Essig, A., 1999. Bioenergetics and Linear Nonequilibrium Thermodynamics. The Steady State, second ed. Harvard University Press, Cambridge.
Carberry, D.M., Reid, J.C., Wang, G.M., Sevick, E.M., Searles, D.J., Evans, D.J., 2004. Phys. Rev. Lett. 92, 140601.
Christensen, M., Pedersen, J.B., 2003. J. Chem. Phys. 119, 5171.
Collin, D., Ritort, F., Jarzynski, C., Smith, S.B., Tinoco Jr., I., Bustamante, C., 2005. Nature 437, 231.
Crooks, G.E., 1999. Phys. Rev. E 60, 2721.
Demirel, Y., Sandler, S.I., 2001. Int. J. Heat Mass Transfer 44, 2439-2451.
Demirel, Y., Sandler, S.I., 2002. Biophys. Chem. 97, 87.
Demirel, Y., 2010. J. Non-Newtonian Fluid Mech. 165, 953.
Demirel, Y., 2011. Information and living systems. In: Terzis, G., Arp, R. (Eds.), Philosophical and Scientific Perspectives. MIT Press, Cambridge.
Demirel, Y., 2013. 12th Joint European Thermodynamics Conference, Brescia, July 1-5.
Dewar, R.C., Juretic, D., Zupanovic, P., 2006. Chem. Phys. 30, 177-182.
Dewar, R.C., 2003. J. Phys. A: Math. Gen. 36, 631.
Elliott, J.A.W., Elmoazzen, H.Y., McGann, L.E., 2000. J. Chem. Phys. 113, 6573.
El-Hani, C.N., Queiroz, J., Emmeche, C., 2006. Semiotica 160, 1-68.
Evans, D.J., Searles, D.J., 2002. Adv. Phys. 51, 1529.
Frank, T.D., 2002. Physica A 310, 397.
Gatenby, R.A., Frieden, B.R., 2007. Bull. Math. Biol. 69, 635.
Ge, H., Qian, H., 2009. Phys. Rev. Lett. 103, 148103.
Hayashi, K., Ueno, H., Iino, R., Noji, H., 2010. Phys. Rev. Lett. 104, 218103.
Jarzynski, C., 1997. Phys. Rev. E 56, 5018.
Jaynes, E.T., 2003. Probability Theory: The Logic of Science. In: Bretthorst, G.L. (Ed.). Cambridge University Press, Cambridge.
Julicher, F., Ajdari, A., Prost, J., 1997. Rev. Mod. Phys. 69, 1269.
Lahiri, S., Rana, S., Jayannawar, A.M., 2012. J. Phys. A: Math. Theor. 45, 065002.
Nigam, R., Liang, S., 2007. Comput. Biol. Med. 37, 126.
Paquette, G.C., 2011. J. Phys. A 44, 368001.
Parker, D., Bryant, Z., Delp, S.L., 2009. Cell. Mol. Bioeng. 2, 366.
Pérez-Madrid, A., Rubí, J.M., Mazur, P., 1995. Phys. A: Stat. Mech. Appl. 4371, 90329.
Qian, H., 2001. Phys. Rev. E 65, 016102.
Qian, H., Elson, E.L., 2002. Biophys. Chem. 101, 565.
Qian, H., Beard, D.A., 2005. Biophys. Chem. 114, 213.
Rubi, J.M., Perez-Madrid, A., 2001. Physica A 298, 177.
Rubi, J.M., 2008. AAPP Phys. Math. Nat. Sci. 86 (Suppl. 1), 1.
Sagawa, T., Ueda, M., 2012. Phys. Rev. Lett. 109, 180602.
Sambongi, Y., Ueda, I., Wada, Y., Futai, M., 2000. J. Bioenerg. Biomembr. 32, 441.
Sandler, S.I., 2010. An Introduction to Applied Statistical Thermodynamics. Wiley, New York, NY.
Santamaria-Holek, I., Rubi, J.M., 2003. Physica A 326, 284.
Schlögl, F., 1972. Z. Physik 253, 147.
Schmiedl, T., Seifert, U., 2007. J. Chem. Phys. 126, 044101.
Schmiedl, T., Speck, T., Seifert, U., 2007. J. Stat. Phys. 128, 77.
Schulman, L.S., Gaveau, B., 2001. Found. Phys. 31, 713.
Seifert, U., 2011. Eur. Phys. J. E 34, 26.
Seifert, U., 2012. Rep. Prog. Phys. 75, 126001.
Shew, W.L., Yang, H., Yu, S., Roy, R., Plenz, D., 2011. J. Neurosci. 31, 55-63.
Shin, Y.S., Remacle, F., Fan, R., Hwang, K., Wei, W., Ahmad, H., Levine, R.D., 2011. Biophys. J. 100, 2378-2386.
Stucki, J.W., 1980. Eur. J. Biochem. 109, 269.
Tsumuraya, M., Furuike, S., Adachi, K., Kinosita Jr., K., Yoshida, M., 2009. FEBS Lett. 583, 1121.
Vellela, M., Qian, H., 2009. J. R. Soc. Interface 1, 16.

