Chapter 2

Brief Review of Thermodynamic Regularities

CHAPTER OUTLINE
1. Preliminary Notes
2. The First Law: Energy Conservation Principle
3. The Second Law of Thermodynamics
4. Carnot Cycle
5. Formulation of the Notion of Entropy
6. Statistical Substantiation of Entropy

1. PRELIMINARY NOTES
Heat engines were mentioned for the first time in the remote past. More than 2000 years ago, Heron, working at the Alexandria library, designed a device transforming the energy of vapor into mechanical work. However, in that era of slavery his discovery remained unnoticed, as did his scroll "Automata," the first work dealing with robots. Meanwhile, his other invention, the chain drive, is widely used to this day. We can only conjecture about the level of science reached in Alexandria. After all, scientists such as Hippocrates, Archimedes, Democritus, Euclid, whose geometry has been studied for more than 23 centuries, and Eratosthenes, who measured the Earth's circumference, worked there. This scientific institution functioned for more than 600 years and was destroyed by Christian fanatics because most of its leaders and scientists were not Christians. The modern stage of the development of thermodynamics coincided with two important events: the industrial revolution and the beginning of the modern stage of scientific insight into nature. By convention, the industrial revolution is dated to 1769, when James Watt obtained a patent for his steam engine. As we understand today, this event was not only the starting point of a technical, industrial, and military revolution, but also the beginning of a drastic acceleration of the entire human civilization.

Gibbs' Entropic Paradox and Problems of Separation Processes. http://dx.doi.org/10.1016/B978-0-444-63919-6.00002-9
Copyright © 2017 Elsevier B.V. All rights reserved.

Until
then, mechanisms used by humans had been driven by wind, water, or the muscle strength of humans or animals. Heat engines aroused great interest in the connection of fire with chemical processes and in the transformation of heat into mechanical work. To answer these questions, a new science, thermodynamics, started developing. At first, mathematical models developed for describing relations between the thermal and mechanical properties of substances had to be adapted to the coarse experiments carried out by that time. A certain number of ungrounded assumptions and gross mistakes were made, and therefore the theory needed repeated radical revisions. Initially, the theory dealt only with macroscopic properties of matter, such as temperature, pressure, and amount of heat, and their interrelations, irrespective of the underlying structure of the matter. Relations between these parameters were found in the form of regularities allowing the prediction of all thermal properties of a substance on the basis of macroscopic notions. All these relations rested on the strict mathematical foundations developed by that time. The beginning of modern science is usually related to the creation of classical mechanics by Newton, Lagrange, and Hamilton. It was outstandingly successful and continues to develop today, for instance in the field of space flight. One of its characteristic features is strict determinism. Classical mechanics can predict the behavior of a system with either a small number of elements or many symmetrically located elements. Owing to classical mechanics, it was possible to solve the problem of the motion of bodies in celestial mechanics, but it becomes completely helpless in the general case of the motion of three bodies. Newton's works had such high authority that insistent attempts were later made to reduce all laws of nature known by that time to classical mechanics. In the end, all these attempts failed.
However, in one field of science these attempts led to excellent results: in the theory of thermal phenomena. This did not happen immediately, but only in the middle of the 19th century. It was known from early on that fire caused chemical transformations, as well as such processes as melting and evaporation. It was known that the burning of fuel releases heat, which leads to an increase in the volume of any system (liquid, solid, or gaseous). As a result, burning performs work. It became necessary to establish a relation between heat and work. The appearance of thermodynamics marked a new trend in science, differing from that of Newton. However, they had something in common. Namely,
during the 17th and 18th centuries it was realized that, even if the divine creation of the world had most probably taken place, nature obeys simple universal laws that are cognizable and can be expressed in simple mathematical language. This cognition is based on experiments, the quantitative study of physical parameters, and mathematical relations between them. Heat studies started with the improvement of the thermometer in the times of Galileo Galilei. The most interesting discovery made with this device belongs to Newton's contemporary Black, who managed to make a clear-cut distinction between temperature and amount of heat. In the beginning of the 19th century, Fourier suggested a mathematical description of heat propagation inside a body, proving that the heat flow is proportional to the temperature gradient. In this period, two important notions, energy and work, arose in thermodynamics. The disclosure of the physical content of these notions was the most important component of the scientific and technical premises for the discovery of the law of energy conservation and transformation. At first, the notion of work remained within the framework of mechanics. Carnot was the first to show the possibility of calculating the work performed by heat as the product of the gas pressure and its volume change. Soon it became clear that when heat transforms into mechanical motion, or decomposes some chemical compound, or turns into electricity in a thermoelectric column, or when electricity releases heat in resistors, in all these forms of motion the energy performs work in an amount corresponding to its initial amount. Starting with Thompson's experiments, the interconnection of heat and work became a generally accepted fact. However, until the mid-19th century, debates persisted on the nature of heat (whether it is a caloric fluid or one of the forms of energy that can transform into other forms).
As already mentioned, the proof that mechanical energy transforms into heat belongs to Thompson (Count Rumford). He submerged half-finished artillery barrels into water and drilled holes in them. The heat released by mechanical friction made the water boil. This shows that mechanical energy can be transformed into heat and vice versa. The released amount of heat is related to the work by a strict relationship

A = iQ

where A is the work, kgf·m; Q is the amount of heat, kcal; and i is a coefficient equal, in this case, to 427 kgf·m/kcal. The equivalence principle is only a particular case of a more general principle: the energy conservation law. The discovery of the heat-work equivalence principle on an experimental basis was, in fact, the completion of the
synthesis of two trends in the development of classical physics. One of them originated in the debates between Leibniz (E ∝ mv²) and Descartes (E ∝ mv) regarding the measure of the energy of motion. The other trend is connected with the struggle of corpuscular ideas regarding the nature of heat against the phlogiston theory. Finally, it led to the appearance of the molecular kinetic theory of heat. Starting from the mid-19th century, both constituents of physics have been developing on a scientific foundation based on experimental facts, excluding any speculative conjectures and hypotheses.
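The heat-work equivalence A = iQ quoted above is simple arithmetic; the following minimal Python sketch (not from the book, added here for illustration) applies it in the same engineering units the text uses, with i = 427 kgf·m/kcal.

```python
# Mechanical equivalent of heat, in the engineering units used in the
# text: i = 427 kgf*m per kcal (about 4187 J per kcal in SI units).
I_KGFM_PER_KCAL = 427.0

def work_from_heat(q_kcal):
    """Work (kgf*m) equivalent to an amount of heat Q (kcal): A = i*Q."""
    return I_KGFM_PER_KCAL * q_kcal

def heat_from_work(a_kgfm):
    """Heat (kcal) equivalent to a mechanical work A (kgf*m): Q = A/i."""
    return a_kgfm / I_KGFM_PER_KCAL

# As in Thompson's friction experiment: producing 1 kcal of heat
# requires 427 kgf*m of mechanical work, and vice versa.
print(work_from_heat(1.0))    # 427.0
print(heat_from_work(854.0))  # 2.0
```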
2. THE FIRST LAW: ENERGY CONSERVATION PRINCIPLE
For a moving body, the conservation of the sum of kinetic and potential energies follows from Newton's laws. However, this regularity was not yet applied to the physical phenomena known at the turn of the 19th century, which were characterized by specific kinds of energy, such as electric, magnetic, thermal, and chemical. It was revealed that each of these kinds of energy could be transformed into the others. It was established that electric current could generate chemical reactions and serve as a heat and light source. Chemical reactions generating electric current were discovered. It was also established that electric current generates a magnetic field, and the conditions of heat transformation into electric current (thermoelectricity) were determined. All these discoveries interconnected thermal, chemical, magnetic, and electrical phenomena and put on the agenda the global problem of energy correlations in all of them. Soon it became clear that all these correlations reflect transformations of one and the same substance: energy. In the course of the development of thermodynamics, relationships between these transformations were found. It was established that despite the diversity of the forms of energy and their everlasting transformations, the total amount of energy remains unchanged. Scientists realized this while clarifying the dependence of the macroscopic properties of substance on the behavior of the microscopic particles composing it. At that time most scientists adhered to a mechanistic approach to nature, reducing any energy to the kinetic and potential energy of particles. It was far from proven at that time, but some scientists believed that substance in any state (gaseous, liquid, or solid) consists of minuscule particles. Depending on the energy value, these particles acquire a certain velocity of linear or oscillatory motion.
The total kinetic energy of these particles manifests itself macroscopically as the body's thermal energy. Such an interpretation of gas properties
was suggested by Bernoulli. Newton also adhered to this point of view of the nature of heat. Thus, in this case, the energy conservation law implies that the total amount of the kinetic and potential energy is conserved for all the particles constituting the substance. In the first half of the 19th century, when the belief in the molecular structure of substance was only taking shape, the first successes in this area were attained as applied to gases. The final achievement of this approach is the kinetic theory of gases developed in the fundamental works of Boltzmann and Gibbs. Within this theory, created at a speculative level, statistical relations were established between the molecular structure of substance in the gaseous, liquid, or solid state and its macroscopic behavior. This theory made it possible to express pressure, temperature, and other macroscopic parameters of the substance through an averaged characteristic: the mean kinetic energy of its molecules. Although the existence of molecules and atoms was not unambiguously proven at that time, this theory was amazingly successful. For about 150 years, its speculative conclusions have been confirmed by the entire array of available experimental data. Most surprisingly, these conclusions were drawn from the laws of classical mechanics and made it possible to create a novel statistical mechanics, whose peculiarities will be considered later. For now we emphasize the following: the main laws of classical mechanics are reversible in time; statistical mechanics, however, contains the feature of irreversibility. Surely, the expanding knowledge of atoms and molecules has made it possible to refine the foundations of statistical mechanics and to extend its application to a broader scope of thermotechnical, material science, and other phenomena.
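The link between the mean kinetic energy of molecules and temperature can be made concrete with a short sketch (not from the book; the standard kinetic-theory relations ⟨E_k⟩ = (3/2)k_BT and v_rms = √(3RT/M) are assumed here).

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
R = 8.314462618      # gas constant, J/(mol*K)

def temperature_from_mean_ke(mean_ke):
    """Temperature implied by the mean translational kinetic energy
    per molecule of an ideal gas: <E_k> = (3/2) k_B T."""
    return 2.0 * mean_ke / (3.0 * K_B)

def rms_speed(temp, molar_mass):
    """Root-mean-square molecular speed: v_rms = sqrt(3 R T / M)."""
    return math.sqrt(3.0 * R * temp / molar_mass)

# Nitrogen (M = 0.028 kg/mol) at 293 K: molecules move at about 511 m/s
# on average, although the gas as a whole is macroscopically at rest.
print(rms_speed(293.0, 0.028))
print(temperature_from_mean_ke(1.5 * K_B * 293.0))  # ~293 K
```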
Statistical mechanics connects the thermal energy of a substance with the mechanical energy of the material particles or molecules forming it and substantiates the energy conservation law in this case as well. However, although the energy conservation law seems clear and simple, it is in fact far from trivial and self-evident. As a rule, it expresses the constancy of the sum of the following three terms:

- kinetic energy, depending on the velocity of motion;
- potential energy, depending on the position of the system or body;
- internal molecular energy of the body or system, in thermal, chemical, electrical, magnetic, and other forms.
Sometimes it is difficult to distinguish these components. For example, in the case of electrically charged bodies, their electrostatic energy depends not
only on the charge value, but also on the body's velocity and its position with respect to other bodies. Besides, according to the relativity theory, a substance having a mass should be considered a certain form of energy, and any energy possesses a certain mass. Einstein expressed the connection between the mass m and the energy E by the relation

E = mc²
where c is the speed of light. In the general case, the conservation law should cover not only energy but also mass. Besides matter, there exist fields. It is established that electromagnetic radiation is a process connected with energy transfer. In the beginning of the 20th century, the notions of particles and fields were combined by the quantum theory. It tells us that all particles excite fields, and the electromagnetic field is connected with particles, photons, of both wave and corpuscular nature. It is established that the fields of nuclear forces also have respective particles. In all these cases, the energy conservation law connecting fields with particles is valid. In this situation it is hard to understand what exactly is conserved: it is neither substance nor field, neither particle nor wave. It is a certain mathematical function whose physical sense is not intuitively clear. The belief in the conservation law was shaken only once. It was discovered when studying β-decay that the energy of the decay products, according to Einstein's formula, does not equal the initial energy of the nucleus. Pauli, who believed in the conservation law, suggested the hypothesis that the missing energy is taken away by an unknown particle interacting so weakly with other particles that it could not be revealed. The new particle, the neutrino, was detected experimentally only in 1956. Since then, the belief in the conservation law has strengthened and remains unshakable. A specific kind of energy is therefore usually determined only to within an additive constant: only the change in the energy of a system is fixed, since it is impossible to determine the absolute energy.
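The scale of the mass-energy equivalence is worth a quick numerical check; the following sketch (added for illustration, not part of the original text) evaluates E = mc² directly.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def rest_energy(mass_kg):
    """Rest energy of a mass according to E = m c^2, in joules."""
    return mass_kg * C ** 2

# One gram of any substance corresponds to roughly 9e13 J,
# which illustrates why mass changes in ordinary processes
# are far too small to measure.
print(rest_energy(1e-3))  # about 8.99e13 J
```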
The most general form of the first law of thermodynamics can be worded as follows: "When a substance or a system undergoes transformations, the algebraic sum of the various energy changes is independent of the path of the transformation and depends only on the initial and final states, while the total amount of energy is conserved while passing from one kind into another."
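The path independence stated above can be demonstrated numerically. The sketch below (an illustration added here, not from the book) takes an ideal monatomic gas between the same two states along two different paths: the work W and heat Q differ between the paths, but Q − W, the internal-energy change, does not.

```python
def delta_u(p1, v1, p2, v2):
    """Internal-energy change of an ideal monatomic gas between two
    states: U = (3/2) P V, so dU = (3/2)(P2 V2 - P1 V1)."""
    return 1.5 * (p2 * v2 - p1 * v1)

def path_isobaric_then_isochoric(p1, v1, p2, v2):
    """Expand at constant P1 from V1 to V2, then change pressure at
    constant V2. Returns (work done by the gas, heat absorbed)."""
    w = p1 * (v2 - v1)               # work only on the isobaric leg
    q = delta_u(p1, v1, p2, v2) + w  # first law: Q = dU + W
    return w, q

def path_isochoric_then_isobaric(p1, v1, p2, v2):
    """Change pressure at constant V1 first, then expand at P2."""
    w = p2 * (v2 - v1)
    q = delta_u(p1, v1, p2, v2) + w
    return w, q

P1, V1, P2, V2 = 2.0e5, 1.0e-3, 1.0e5, 3.0e-3  # Pa, m^3 (sample values)
w1, q1 = path_isobaric_then_isochoric(P1, V1, P2, V2)
w2, q2 = path_isochoric_then_isobaric(P1, V1, P2, V2)
# W and Q each depend on the path, but Q - W does not:
print(w1, q1, w2, q2)
print(q1 - w1, q2 - w2)  # both equal delta_u(P1, V1, P2, V2)
```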
It should be emphasized that in thermodynamics, all determining parameters caused by microscopic-level reasons have a macroscopic expression. No one ever tries to determine temperature and pressure in a gas system by measuring molecular velocities. For this purpose, simple devices are used that characterize the state of said parameters on the whole: thermometers and manometers of different kinds and designs.
3. THE SECOND LAW OF THERMODYNAMICS
In contrast to most physical laws, the principal thermodynamic regularities are worded in the negative form. The first law can be formulated as follows: it is impossible to create or eliminate energy. Up to now, not a single physical phenomenon contradicting this law has been found. All new data obtained in the course of the development of physics are explained in agreement with it, which contributes to a clearer insight into new phenomena. The second law of thermodynamics is also worded in the negative form. It denies the possibility of inventing a method that could make heat spontaneously flow from a colder body to a hotter one. There are many wordings of the second law, but this one is the simplest and, at the same time, the most general. Meanwhile, one has to admit that today thermodynamics is something greater than the study of relationships between pressure, volume, and temperature. Most likely, it is a totality of novel approaches and methods in various fields of science, often far from the relations between the listed parameters, e.g., in classical and quantum statistics, crystallography, astronomy, etc. The second law imposes more limitations on heat transfer than the first one. For example, the first law does not forbid heat flow to a body with a higher temperature, since energy is conserved in that case. However, according to the second law, this cannot occur. As was established, thermodynamic parameters are ultimately determined by the behavior of atoms or molecules, i.e., they are based on the microscopic level of the substance. However, these parameters characterize the behavior and the state of the substance as a whole, as a uniform system, i.e., at the macroscopic level. The correlation between these levels can be demonstrated by the following simple example.
Experiment shows that if a gas in a closed vessel is initially in a turbulent regime, after a certain period of time it passes into one of the equilibrium states, in which the temperature and pressure become uniform over the entire volume. In general, it is known that a nonequilibrium system always tends to a macroscopically equilibrium state. However, in this case only the macroscopic parameters characterizing the system as a whole are in equilibrium,
while at the microscopic level total chaos in the motions of the gas particles is observed. They move at random in all possible directions. One of the initial statements of thermodynamics is that an isolated macroscopic system passes with time into a state of thermodynamic equilibrium and cannot leave it spontaneously. Here we usually distinguish equilibrium and nonequilibrium, reversible and irreversible processes. It is assumed that equilibrium processes do not contain time in the equations describing them, i.e., changes in thermodynamic functions depend on other parameters only. Such processes are sometimes called stationary. Any relaxations of systems leading to an equilibrium state are regarded as nonequilibrium, since their parameters depend on time. The subdivision of processes into reversible and irreversible ones, according to Planck, is stipulated by the second law of thermodynamics. A system transition from an equilibrium state A into an equilibrium state B can be considered reversible if its return from B to A is realized without any changes in external bodies. Otherwise, the process is irreversible. It is absolutely clear that real processes connected with heat transfer are always irreversible, while reversible processes are their idealized version. The comparison of these processes sometimes causes confusion. Thus, Leontovich (1983) identifies equilibrium processes with reversible ones and nonequilibrium with irreversible ones. However, this is not true. As early as the beginning of the 20th century, Carathéodory convincingly proved that the reversibility of a process by no means follows from its equilibrium character. He also showed that equilibrium processes model real processes in their extreme, infinitely slow interpretation, where at any moment of time the internal parameters of a system can be assumed to be in equilibrium.
At the same time, the existence of equilibrium states, to which a substance spontaneously tends if left on its own, does not logically follow from thermodynamics. This also refers to thermodynamic parameters that are unambiguously determined by equilibrium states. Although all this has been confirmed at a purely empirical level, without a single exception, by all the experimental results obtained over more than 200 years, it has obviously acquired the character of a global law. A more careful examination of the origin and physical nature of these laws reveals essential difficulties that have not been overcome to this day. Here a striking contradiction between theory and experiment arises. According to the second law of thermodynamics, the Universe should have reached the state of statistical equilibrium long ago. Meanwhile, the properties of nature have nothing in common with the properties of an
equilibrium system: the Universe constantly evolves. Besides, some general properties of nature have been discovered that are beyond any doubt. First, since the time of Newton it has been clear that the laws of nature can be described by rather simple mathematical formulas. Even when it is difficult to interpret some phenomenon, for instance the law of gravity, its mathematical formula provides comprehensive information about all nuances of the phenomenon. Obviously, this is why mathematics was called the language of nature. Second, the laws of nature possess another universal property: nature is inclined to optimization, i.e., economy. Natural phenomena often proceed so that a certain physical quantity spontaneously reaches a characteristic extreme value. For example, when a beam of light passes through various media, it is refracted on their boundaries so as to minimize the time of passage. The main extremum principle of thermodynamics consists of the fact that all isolated systems spontaneously evolve to an equilibrium state where the entropy reaches its maximal value. At the same time, at constant volume and entropy, any system evolves to a state with minimal energy. Einstein demonstrated a very subtle perception of the universality of thermodynamics, writing that it is very remarkable that the two basic laws of thermodynamics are formulated so simply while they cover so numerous and various parameters and have such a broad scope of applications.
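The least-time refraction example above can be checked numerically. The sketch below (added here as an illustration; the geometry and the ternary-search minimizer are assumptions of this sketch, not from the book) minimizes the travel time of a light ray crossing an interface and recovers Snell's law, sin θ₁/v₁ = sin θ₂/v₂, as the condition of the minimum.

```python
import math

def travel_time(x, v1, v2, h1, h2, width):
    """Time for light to go from (0, h1) to the interface point (x, 0)
    and on to (width, -h2), with speed v1 above and v2 below."""
    return math.hypot(x, h1) / v1 + math.hypot(width - x, h2) / v2

def crossing_point(v1, v2, h1, h2, width, steps=200):
    """Find the least-time crossing point by ternary search
    (travel_time is unimodal in x on [0, width])."""
    lo, hi = 0.0, width
    for _ in range(steps):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if travel_time(m1, v1, v2, h1, h2, width) < travel_time(m2, v1, v2, h1, h2, width):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# Light entering a denser medium where it travels at 0.75 of its speed:
v1, v2, h1, h2, width = 1.0, 0.75, 1.0, 1.0, 2.0
x = crossing_point(v1, v2, h1, h2, width)
sin1 = x / math.hypot(x, h1)
sin2 = (width - x) / math.hypot(width - x, h2)
# The least-time path satisfies Snell's law: sin1/v1 == sin2/v2
print(sin1 / v1, sin2 / v2)
```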
4. CARNOT CYCLE
In the beginning of the 19th century, the 28-year-old engineer Sadi Carnot made a decisive contribution to the development of thermodynamics. He analyzed the work performed by a heat engine owing to a heat flow and concluded that there exists a basic limit to the work that can be obtained. He was not at all embarrassed by the fact that he was formulating the law that would afterward be called the second law of thermodynamics before the first law had been formulated. The most important point in Carnot's discovery is the assertion that the work of a heat engine does not depend on the mechanism or manner of performing it, but only on the temperature difference causing the heat flow. According to Carnot, the driving force can be realized only in the presence of a temperature difference. A heat engine can accomplish mechanical work by transferring heat from a hot reservoir to a cold one, the temperature change being stipulated solely by the volume increase.
Heat transfer does not require a material medium, as happens, e.g., between the Sun and the Earth. However, it is possible to accomplish work at the expense of heat only in the presence of a certain material medium capable of receiving energy on heating and giving it away when producing work. Usually, a gas or vapor is used as such a medium. In all heat engines, the net power is delivered on a rotating shaft. The latter rotates due to the action of the elastic medium on blades, plates, or a piston displaced by the pressure difference or by the use of kinetic energy, but not due to direct heat transfer. Hence, the heat is first transferred to the elastic medium and then turns into work. After that, the elastic medium, which is often called the working medium, should return to its initial state, thus passing through a certain cycle of changes. The word "cycle" implies a closed sequence of thermodynamic transformations. The idea of a cycle was developed by Carnot. He showed that after performing work owing to a heat flow, the engine returns to its initial state and then repeats the entire cycle. Moreover, it is impossible to realize the whole cycle on the basis of heat exchange with one source only, i.e., with one medium at a certain temperature. The heat obtained by the working medium from a hot source is not totally transformed into work; the rest of the heat should be transferred to the cooler. In the course of performing work, the elastic medium expands, so that its volume exceeds the initial volume at which it received the heat. The return to the initial state requires a volume decrease, i.e., power consumption, which reduces the share of usable work. Thus, the second law expresses the necessity of spending energy to return the working medium to its initial state, which allows the renewal of the closed thermodynamic cycle.
The limitations imposed by the second law on the transformation of heat into work are not connected with some secret qualities of heat. They are explained simply by the fact that such a transformation necessarily requires an increase in the volume of the elastic medium. The second law asserts that there exists an upper boundary on the transformation of an amount of heat into work. The requirements imposed on the processes forming the cycle provide for their reversibility (Chapter 3). Clearly, this condition is never realized ideally in practice. The irreversibility of heat exchange can be attributed to two factors. First, heat exchange always occurs at finite temperature differences and is accompanied by heat losses. Second, any displacement of the elements of a mechanism is always accompanied by friction. Hence, reversible processes never happen in reality. They represent an abstract construction meant for establishing a limit that can be theoretically achieved, putting aside all imperfections of existing heat engines.
This assertion is connected with the fact that it is impossible to draw a certain amount of heat out of a heat reservoir and totally transform it into work. Burning chemical or nuclear fuel can serve as an approximate equivalent of a heat reservoir with a constant temperature. In the course of heat withdrawal, the burning of a new portion of fuel compensates the loss in the reservoir. According to the second law, only a limited share of the withdrawn heat can be transformed into work. To obtain useful work from the consumed heat, the cycle must include an element whose temperature is below that of the source. Otherwise, if the temperature is everywhere the same, no useful work is produced.
5. FORMULATION OF THE NOTION OF ENTROPY
All the notions introduced by Carnot were profoundly analyzed, and their role was understood after the generalizations performed by Clausius, who introduced in 1865 a new physical quantity: entropy. Clausius paid sufficient attention to the mathematical aspect of the theory and tried to substantiate his theoretical conclusions rigorously. He showed that the Carnot cycle is characterized by such a correlation between any elementary amount of heat dQ and the temperature T at which this amount has been taken from the source that the integral of dQ/T over the entire cycle for a reversible process equals zero, i.e.,

∮ dQ/T = 0   (1)

The integrand itself characterizes a certain parameter of the system state called the entropy of said system:

dH = dQ/T   (2)

For irreversible processes this integral over a closed cycle is

∮ dQ/T < 0   (3)

therefore

dH ≥ dQ/T   (4)

where the equality is observed for reversible processes, and the inequality for irreversible ones. In the process of spontaneous heat transfer in a thermally isolated system, the entropy always grows. If the process is organized so that in some
system the entropy decreases, then in another system connected with the former the entropy grows to a greater extent. Thus, it is accepted that in the course of an irreversible process the entropy of the Universe always grows. If the temperature of a system is higher, its entropy is lower than in the case of the same amount of heat supplied at a lower temperature. Therefore, entropy is a function of the system state, since it shows different changes in the system at the same heat consumption. In this respect, entropy characterizes the extent of the system's disorganization. Irreversible processes intensify disorder and thus increase entropy. Reversible processes simply transfer entropy from one body to another, keeping its value constant. The fact that in all practical applications the Clausius integral obeys an inequality of one and the same sign has been causing disputes among researchers to this day. The operation of any real machine is accompanied by entropy growth, since entropy growth inevitably accompanies irreversible processes. Meanwhile, irreversibility takes place in all natural physical phenomena, since friction, heat losses, etc., are always present in them. Hence, the sum of the entropies of the bodies participating in these phenomena always grows. Clausius extended this conclusion to the Universe as a whole, considering it a closed isolated system. As can be seen from Eqs. (1)-(4), entropy has the same dimension as specific heat capacity: it is an amount of heat per unit temperature. Owing to this dimension, entropy has for many years, up to the present, been considered a special kind of energy. Usually, the specific heat capacity C has a definite value if the way of heat transfer to the gas or vapor is indicated (at constant pressure or constant volume). It is assumed that entropy is a function of the system state, but its value is independent of the state of the gas or vapor in the vicinity of the specified parameters.
Therefore, heat capacity is considered a variable quantity and entropy a static one. All this made it possible to formulate the physical meaning of entropy as follows: "Entropy represents the energy required for a reversible return of the working medium into the initial state after an adiabatic process, completed at the temperature corresponding to this initial state." It is stipulated that the minimal such energy is meant. From this standpoint, it is assumed that the meaning of entropy becomes clearer if Berthelot's principle is taken into account. He established that in a system allowing several chemical transformations, the one releasing the greatest amount of heat actually takes place. Berthelot's principle proves to be approximate at normal temperatures, while with decreasing temperature its precision grows, and at very low temperatures it completely agrees with experimental data.
Considering the entropy increment dH during a short period of time dt, two components are usually distinguished:

dH = d_eH + d_iH

The quantity d_eH has an external cause, manifesting itself as mass or heat exchange with the surroundings. This part of the entropy is reversible in the sense that it can be both positive and negative. The quantity d_iH is the increment of internal entropy produced by the irreversible processes taking place in real heat engines due to friction, heat losses, diffusion, etc. This part of the entropy, internal with respect to the system, never changes its sign. In the long run, this component reflects irreversible changes inside the system, corresponding to its spontaneous evolution. The main feature of the overall function is that it changes in one direction only and always grows, i.e.,

dH = d_eH + d_iH > 0
Even if, for some reason, d_eH decreases, it is assumed that the overall dH value is always positive. It is established that the efficiency of a reversible (ideal) thermal cycle is independent of the physical and chemical nature of the engine. It is a function of the temperatures T1 and T2 of the hot and cold reservoirs (Fig. 2), determined by the relation
n FIGURE 2 Distribution of heat flows in a heat engine: the hot reservoir supplies Q1, the work W is extracted, and Q2 is rejected to the cold reservoir.
h = 1 − Q2/Q1 = 1 − T2/T1
where h is the efficiency; Q1 is the amount of heat released by the hot reservoir; and Q2 is the amount of heat transferred to the cold reservoir (cooler). It is accepted that if heat is consumed by a system, its increment is positive for the system, i.e., dQ > 0, and if heat is released by the system, its value is negative, i.e., dQ < 0. As noted, for a closed reversible Carnot cycle the following relation is valid:

∮ dQ/T = 0
Another interesting property of entropy is that if this integral is calculated between two states I and II of the heat engine, its value is independent of the way the parameters change between I and II. It depends only on the values of the parameters at the initial point I and the final point II. Hence, for a reversible process we can write

H_II − H_I = ∫_I^II dQ/T
If the process is isothermal, i.e., proceeds at a constant temperature, the entropy change for a reversible cycle amounts to

ΔH = Q/T
Carnot's theorem becomes the assertion that in a reversible cycle the sum of entropy changes at a cyclic return to the initial point is always zero, i.e.,

Q1/T1 − Q2/T2 = 0
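These reversible-cycle relations are easy to verify numerically. In the sketch below the reservoir temperatures and the heat Q1 are illustrative assumptions, not values taken from the text:

```python
# Numerical check of the Carnot relations, with assumed (hypothetical)
# reservoir temperatures and heat drawn from the hot reservoir.
T1, T2 = 600.0, 300.0          # hot and cold reservoir temperatures, K
Q1 = 1000.0                    # heat taken from the hot reservoir, J

eta = 1 - T2 / T1              # ideal (Carnot) efficiency, 1 - T2/T1
Q2 = Q1 * (1 - eta)            # heat rejected to the cold reservoir
W = Q1 - Q2                    # mechanical work extracted per cycle

# For the reversible cycle the entropy sum Q1/T1 - Q2/T2 closes to zero.
balance = Q1 / T1 - Q2 / T2
print(eta, W, balance)         # 0.5 500.0 0.0
```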
In an irreversible cycle with a lower efficiency, a smaller part of Q1 turns into mechanical work. This means that the amount of released heat, i.e., the share transferred to the cooler Q′2, is greater than Q2. Therefore,

Q1/T1 − Q′2/T2 < 0
Meanwhile, the amounts of heat transferred to the reservoirs and to an irreversible engine have, as agreed, opposite signs. Therefore, the total change of the thermal reservoirs' entropy is a positive value, because
−Q1/T1 + Q′2/T2 > 0
Thus, for the internal entropy of a system we always obtain

∮ dQ/T ≤ 0
For the surroundings with which the system exchanges heat, the entropy change is always nonnegative:

∮ dQ/T ≥ 0
At the end of each cycle, either reversible or irreversible, there are no entropy changes inside the system, since the cycle starts and finishes at the same parameters. In the case of an irreversible real cycle, a system gives a part of its heat to the environment at the expense of both heat losses and mechanical friction, whose energy also turns into heat. This heat dissipates in space, i.e., it leads to an increase in the entropy of the environment. As a result, any real system that undergoes a cycle of operations and returns into the initial state can function only by increasing the entropy of the environment it is in contact with. The analysis of thermodynamic laws allowed Clausius to make three fundamental conclusions regarding entropy and energy:

n The sum of entropy changes in the environment cannot decrease.
n Energy of the Universe is constant.
n Entropy of the Universe tends to a maximum.
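The bookkeeping described above for a real (irreversible) engine can be sketched numerically in the same way; the efficiency of 0.35 is an assumed value below the Carnot limit, not a figure from the text:

```python
# Irreversible case: a real engine with efficiency below the Carnot limit
# rejects extra heat Q2' > Q2, so the entropy of the surroundings grows.
T1, T2 = 600.0, 300.0
Q1 = 1000.0
eta_real = 0.35                      # assumed, below the Carnot value 0.5
Q2p = Q1 * (1 - eta_real)            # 650 J rejected instead of 500 J

dH_system = Q1 / T1 - Q2p / T2       # Clausius sum for the working medium
dH_reservoirs = -Q1 / T1 + Q2p / T2  # entropy change of the surroundings
print(dH_system < 0, dH_reservoirs > 0)   # True True
```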
If a system obtains only energy, but no substance, from an external source, the following relation is valid:

deH = dQ/T = (dU + pdV)/T and diH ≥ 0     (5)

where dQ is the amount of heat obtained by the system during a certain time dt; dU is the total change in the closed system's energy; and pdV is the work. In the general case, for open systems that exchange substance and energy with the environment,

dU + pdV ≠ dQ
In this case,
deH = (dU + pdV)/T + deH′ and diH ≥ 0     (6)

where deH′ is the entropy change stipulated by the substance flow. This quantity is usually calculated using the chemical potential μ. In any case, diH ≥ 0. This is the most general formulation of the second law of thermodynamics. Thus, for closed systems, the first and the second thermodynamic laws can be expressed by the relationships

dU = dQ + dA     (7)

dH = dQ/T + diH     (8)
If the system state is transformed only through a reversible process, the entropy change is caused only by heat flows:

dU = TdH + dA = TdH − pdV
It is noteworthy that the above wording of the second law allows one to calculate only the entropy change, i.e., to determine entropy to within an additive constant. The notion of chemical potential was introduced by Gibbs. He examined a heterogeneous system consisting of several homogeneous parts S1, S2, …, Sn with the masses m1, m2, …, mn. These substances do not react chemically with each other (no chemical transformations occur), and only the material exchange between the homogeneous parts was taken into account. Assuming that the energy change dU of a certain homogeneous part must be proportional to the changes in the masses of the substances, dm1, dm2, …, dmn, he suggested an equation that is true in any homogeneous part of the system:

dU = TdH − pdV + μ1dm1 + μ2dm2 + … + μndmn     (9)

He called the coefficients μk chemical potentials. Gibbs took into account the classical definition of entropy, according to which the system had to be in an equilibrium state, and transformations between equilibrium states had to be reversible, i.e., such that

dQ = TdH

The amount of substance can be expressed in grams, moles, or, in the case of gases, in numbers of particles:

dU = TdH − pdV + μ1dN1 + μ2dN2 + … + μndNn
This expression can be rewritten as

dU = TdH − pdV + Σ_{k=1}^n μkdNk     (10)

It can be obtained from Eq. (10) that

(∂U/∂H)_{V,Nk} = T;  (∂U/∂V)_{H,Nk} = −p;  (∂U/∂Nk)_{H,V,Nj≠k} = μk
As Prigogine has shown, using Euler's theorem one can prove that entropy in the nondifferential form is represented by the relation

H = U/T + pV/T − Σk μkNk/T     (11)
Other generalized characteristics are also used in thermodynamics. The relation

F = U − TH     (12)

is the Helmholtz free energy. If the temperature and the volume are maintained constant, this parameter tends to a minimum, which follows from the second law of thermodynamics. Another generalized parameter is called the Gibbs free energy:

G = U + pV − TH     (13)
At constant pressure and temperature, this parameter also tends to a minimum. It is also noteworthy that the entropy parameter introduced by Clausius is an invariant specifically characterizing a thermotechnical system. To clarify this issue, one should first look into the specific heat capacity of a gas and examine the correlation between the amount of heat and the work. As is known, any expanding system displaces the surrounding bodies, i.e., performs a certain work. If an expanding gas shifts a piston by the distance dh, it performs the work

dA = F·dh     (14)

where F is the force acting on the piston on the part of the gas,

F = p·S     (15)

where p is the gas pressure and S is the piston area. Substituting Eq. (15) into Eq. (14) gives
dA = pSdh = pdV     (16)

where dV is the change in the gas volume. This dependence is simple but important. At the expansion of the gas from volume V1 to V2 at p = const, the work amounts to

A = p(V2 − V1)
At T = const, for 1 g-molecule,

p = RT/V

Therefore,

dA = pdV = RT dV/V

and the work of expansion amounts to

A = RT ln(V2/V1)
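As a numerical illustration of the isothermal-expansion formulas (the temperature and volumes below are assumed for the example, and R is taken with the rounded value used in the text):

```python
import math

# Isothermal expansion of 1 mol of ideal gas: work and entropy change.
R = 8.3             # J/(deg mol), the rounded value used in the text
T = 300.0           # K, assumed temperature
V1, V2 = 1.0, 2.0   # assumed initial and final volumes (same units)

A = R * T * math.log(V2 / V1)    # work, A = RT ln(V2/V1)
# At T = const the internal energy of an ideal gas does not change, so the
# absorbed heat equals the work, and the entropy change is Q/T = R ln(V2/V1):
dH = A / T
print(round(A, 1), round(dH, 3))
```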
If a system does not obtain energy (heat) from the outside, the work is performed at the expense of the consumption of the system's internal energy E:

dE = dQ − pdV     (17)

where dE is the change in the internal energy, which is a function of state. If at the consumption of the amount of heat dQ the body temperature increases by dT, then the ratio

C = dQ/dT

is called the heat capacity of the body. We distinguish CV, the heat capacity at a constant volume, and Cp, the heat capacity at a constant pressure. If in Eq. (17) dV = 0, then dQ = dE, i.e., the entire heat is spent on increasing the internal energy of the body; therefore, we can write

CV = (dE/dT)V     (18)

If p = const, the heat is spent not only on increasing the internal energy, but also on performing work:

dQ = dE + pdV = d(E + pV)
Hence,

W = E + pV     (19)

This function is called enthalpy. It is also a function of the state of the body. Hence, at p = const, the heat capacity amounts to

Cp = (dQ/dT)p

From the comparison of Eqs. (18) and (19) it is clear that always

Cp > CV
It may seem that this inequality is connected just with the work produced by the system at its expansion with heating. However, this is not the case: the inequality equally refers to the few bodies whose volume decreases on heating. Here the so-called Le Chatelier principle becomes valid. Its essence is that external impacts driving a system out of thermodynamic equilibrium induce processes in it that tend to weaken the result of this impact. Now we examine 1 g-molecule of gas, with the respective molar heat capacities CV and Cp. Owing to the equation pV = RT, the thermal function of 1 mol of gas is connected with its internal energy by the relation

W = E + pV = E + RT

Differentiating this equality with respect to the temperature, we obtain

Cp = CV + R,  i.e., Cp − CV = R,  with R = 8.3 J/(deg·mol) ≈ 2 cal/(deg·mol)
It is easy to find the heat capacity of a monoatomic gas. In this case, the internal energy of the gas is just the sum of the kinetic energies of the translational motion of its particles. This energy equals (3/2)kT per particle, so the internal energy of 1 mol of gas is

E = (3/2)N0kT = (3/2)RT

Hence,

CV = (3/2)R = 12.5 J/(deg·mol)
Cp = 20.8 J/(deg·mol)
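These figures can be reproduced directly from the two relations above (the more precise value R = 8.314 J/(deg·mol) is assumed here, which the text rounds to 8.3):

```python
# Molar heat capacities of a monoatomic ideal gas, reproducing the figures
# quoted in the text. R = 8.314 J/(deg mol) is assumed (text rounds to 8.3).
R = 8.314
CV = 1.5 * R        # E = (3/2)RT  =>  CV = (dE/dT)_V = 3R/2
Cp = CV + R         # Mayer's relation, Cp - CV = R
print(round(CV, 1), round(Cp, 1))   # 12.5 20.8
```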
These quantities are practically independent of temperature. This theory was supplemented by Planck, who showed that the general expression of entropy in the form

H = ∫_0^T (C/T) dT

where C is the heat capacity, does not contain any integration constant. In the low-temperature region, the change of the specific heat capacity with temperature obeys the law established by Einstein on the basis of quantum theory. He represented each atom of a solid as an elementary oscillator whose energy varies not continuously, but in portions that are multiples of the product hν (ν being the eigenfrequency of an elementary oscillator and h the Planck constant). The average classical energy in the ensemble of oscillators equals kT, where k is the Boltzmann constant. At very low temperatures, kT can become smaller than the quantum hν, so that the greatest part of the oscillators is at rest. This explains the vanishing of the heat capacity. At the same time, even according to the theory of heat capacity suggested by Einstein, the mechanism of energy absorption by molecules remained, in his opinion, not clear enough as yet.
6. STATISTICAL SUBSTANTIATION OF ENTROPY
Classical thermodynamics operates with directly measurable quantities: pressure, temperature, volume, amount of heat, etc. It represents a completed system that fully serves the respective technological processes and the needs of various fields of industry. Starting from the second half of the 19th century, it has allowed a detailed estimation of furnaces, boilers, turbines, various heat exchangers, and, somewhat later, vapor, diesel, and petrol engines. These methods have retained their topicality until today and are widely used without any significant modernization. At that time there was no social need for a deeper insight into the theoretical foundations of these regularities. However, scientists are always more interested in finding an answer to the question "why?" than to the question "how much?" Despite the perfection of the existing thermodynamics, this interest required a deeper insight into its mechanism in order to understand the causes of the observed phenomena. Such an insight became possible thanks to Boltzmann's works. His approach was based on the identification of heat with the motion of molecules. The kinetic theory of gases, the simplest such theory, was developed first. It was a purely speculative model, which
could be devoid of any real sense, were it not confirmed by the entire body of experimental material accumulated to date. It is difficult to imagine today how bold the idea was, in the second half of the 19th century when the existence of atoms and molecules was not yet proven, to identify heat with their motion. This theory connects thermal energy with the mechanical energy of material particles. Its application to an ideal gas is the most illustrative. Gas molecules are considered as vanishingly small balls moving at high linear velocities. They permanently collide with each other and with the vessel walls. The kinetic energy of the molecules is macroscopically revealed as the thermal energy of the gas under study. Thus, all the internal energy of the gas is reduced to the kinetic energy of its molecules. However, it is impossible to take into account the contributions of individual molecules to the gas parameters. These contributions can only be the object of a certain averaging, i.e., of statistical generalization. The merit of a successful solution of this problem belongs to Boltzmann. He introduced the notions of discreteness and probability into science, making a revolution in the physics of his time. From the standpoint of the statistical approach, he established rather simple relations between the mechanical energy of molecules and all thermodynamic functions, in particular entropy. This approach was caught up and developed by such representatives of the new physics of the beginning of the 20th century as Gibbs, Poincaré, Lorentz, Planck, and Einstein. The general result of their works that can be considered fully established is the existence of a connection between the entropy of a certain state and the probability of this state. This connection is accepted a priori, because these two quantities characterizing a system always vary in the same direction.
In fact, according to the principle of Clausius, any system evolves in such a way that its entropy grows, and at the same time, this evolution is always directed toward more probable states. This can be illustrated by a simple example. Imagine a certain closed space (Fig. 3) divided by an imaginary partition into two parts. Introduce a
n FIGURE 3 A box with a partition; both halves are characterized by the same parameters p, V, T.
great number of gas molecules N, whose mean kinetic energy characterizes the gas temperature, into the left-hand part of the space. Let these molecules move spontaneously and observe their positions after a certain time: there will be N1 molecules in one part and N2 in the other. Evidently, each molecule taken separately has the same chance of being located in either part of the space. For this case, the number of outcomes of a random distribution, or the number of complexes, according to Landau, for N = N1 + N2 is

φ = N!/(N1!N2!)     (20)
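Eq. (20) is easy to explore numerically; the number of molecules below is a small, illustrative assumption:

```python
import math

# Counting the "complexes" phi = N!/(N1! N2!) of Eq. (20) for N molecules
# distributed over the two half-spaces; N = 20 is assumed for illustration.
N = 20

def phi(N1):
    # number of ways to place N1 of the N molecules in the left half
    return math.comb(N, N1)   # N! / (N1! (N - N1)!)

counts = [phi(N1) for N1 in range(N + 1)]

# The uniform split N1 = N2 = N/2 gives by far the largest number of complexes:
print(phi(0), phi(5), phi(10))   # 1 15504 184756
```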
An ensemble, or complex, of systems simultaneously represents all possible states of the system; their community is connected with the fact that all of them consist of the same number of molecules N and possess the same definite energy. Boltzmann, who was the first to discover the essence of entropy as a measure of molecular chaos, came to the conclusion that the law of entropy growth reflects an increasing disorganization. With such disorganization, the most probable distribution is an approximate equality of the numbers of molecules in both parts, i.e., in the long run, N1 = N2 = N/2. In the process of evolution, φ grows, and for N1 = N2 the value of φ reaches its maximum. If we connect the value of φ with the probability of the state measured by the number of complexes, entropy growth corresponds to the evolution to the most probable state of the system, i.e., to equilibrium. The parameter φ is usually called the probability of the state. Strictly speaking, this is not the case. In fact, this parameter represents the number of dynamic states corresponding, for example, to energy constancy. The true probability is the ratio of this number to the number of all possible dynamic states of the system. Boltzmann realized that the irreversible entropy increase can be considered as a manifestation of increasing molecular chaos, whereas asymmetry of the distribution leads to a decrease in the number of complexes φ. He identified entropy with the number of complexes using the formula

H = k log φ     (21)
where k is a universal constant determining the mean kinetic energy of a molecule. Irrespective of the initial distribution, the evolution finally leads to the uniform distribution (N1 = N2). The value of k was obtained numerically, with the purpose of conserving a common dimension with Clausius' entropy, from the relation
k = R/NA
where R is the gas constant for a gram-molecule and NA is the Avogadro number. Such an ensemble of systems can be visually represented by a distribution function expressing the probability Pf that a system selected out of the ensemble is in some specific state. The main relations connecting the distribution function Pf with the thermodynamic properties of a macrostate were formulated, after Boltzmann, in a more general form by Planck as the relationship

H = −k Σf Pf ln Pf     (22)
under the condition that Σf Pf = 1. The higher the φ, the greater k ln φ. Hence, this dependence satisfies the ideas of the behavior of entropy. It allows us to measure more precisely the disorder, which in this sense represents the absence of information about the exact state of the system. It should be emphasized that for systems consisting of a large number of particles, all states appreciably different from the uniform distribution are improbable. The collision equations derived by Boltzmann look quite natural. Therefore, it seems surprising that they lead to an extremely paradoxical consequence, namely, to the conclusion that thermodynamic entropy can only grow. Speaking about the most probable behavior of a system, one should keep in mind that the probability of a transition to a state with greater entropy is so overwhelmingly high in comparison with the probability of its noticeable decrease that the latter can actually never be observed in nature, excluding small fluctuations. In this connection, a question arises: how and why did our Universe start with such a small entropy, which has been constantly growing ever since? The discussion of this issue leads to the theory of the Big Bang, which is identical to the act of creation. However, that is another problem, leading us away from the main topic. In the middle of the 20th century, the Boltzmann ratio led Shannon to a connection between entropy and information. It was an unexpected discovery, although the notion of thermodynamic entropy was already based on some aspects of the informational approach. The interpretation of entropy is that it is a measure of disorder of a complicated system. Hence, the greater the entropy, the less we know about the system. In his fundamental works, Shannon formulated the informational entropy in the form of a relation analogous to the Boltzmann formula
H = −k ln P     (23)

where k is a proportionality coefficient and P is the event probability. Some authors perceive in this dependence a connection between information theory and thermodynamics. To this end, the coefficient k in Eq. (23) is assumed to be equal to the coefficient k in Boltzmann's formula. In this case, it turns out that the dimensions of thermodynamic and informational entropy are the same. This can be substantiated by a seemingly undeniable fact: the production, transformation, transfer, and receipt of information always require energy consumption. This consumption can differ under different conditions. Therefore, instead of the real amount of the spent energy, the respective entropy is applied, because information is measured in entropy units, assuming that entropy is, in a certain sense, energy. However, it is quite obvious that this connection is artificial, since the transferred information is in no way connected, for example, with the surrounding temperature. After all, in practical applications, informational entropy assumes a dimensionless form and is expressed in units that are in no way connected with energy, namely, in bits. Clausius formulated the notion of entropy resorting only to macroscopic parameters of thermodynamics. Boltzmann gave its definition using only microscopic parameters. There is at least one field where these two aspects of entropy manifest themselves simultaneously: the theory of mixing and separation of gases.
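As a closing illustration of the statistical formulas discussed above, the sketch below evaluates the sum −Σ Pf ln Pf with k = 1 and its dimensionless analogue in bits; the two distributions are assumed for the example:

```python
import math

# Entropy of a discrete distribution: the Planck-type form -k*sum(Pf ln Pf),
# here with k = 1 (natural log), and the dimensionless Shannon form in bits.
def entropy(P, log=math.log):
    assert abs(sum(P) - 1.0) < 1e-9        # the probabilities must sum to one
    return -sum(p * log(p) for p in P if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]         # maximal disorder over four states
peaked = [0.97, 0.01, 0.01, 0.01]          # an almost certainly known state

# The uniform distribution maximizes the entropy, as argued in the text:
print(entropy(uniform) > entropy(peaked))  # True

# In bits: four equiprobable states carry exactly two bits of information.
print(entropy(uniform, log=math.log2))     # 2.0
```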