Thermodynamics is the systematic study of the relationship between heat, work, temperature, and energy, now encompassing the general behaviour of physical systems in a condition of equilibrium or close to it. It is a fundamental part of all the physical sciences.

Historically, the term energy, which may be defined as the capacity to produce an effect, was used as early as the 17th century in the study of mechanics. The transfer of energy in the form of heat was not correctly associated with mechanical work, however, until the middle of the 19th century, when the first law of thermodynamics, or the principle of the conservation of energy, was properly formulated. In 1824 the French military engineer Sadi Carnot introduced the concept of the heat-engine cycle and the principle of reversibility, both of which greatly influenced the development of the science of thermodynamics. Carnot's work concerned the limitations on the maximum amount of work that can be obtained from a steam engine operating with a high-temperature heat transfer as its driving force. Later that century, his ideas were developed by Rudolf Clausius, a German mathematician and physicist, into the second law of thermodynamics, which introduced the concept of entropy. Ultimately, the second law states that every process that occurs in nature is irreversible and unidirectional, with that direction being dictated by an overall increase in entropy. It, together with the first law, forms the basis of the science of classical thermodynamics.

Classical thermodynamics does not involve the consideration of individual atoms or molecules. Such concerns are the focus of the branch of thermodynamics known as statistical thermodynamics. This field attempts to express macroscopic thermodynamic properties in terms of the behaviour of individual particles and their interactions. It has its roots in the latter part of the 19th century, when atomic and molecular theories of matter began to be generally accepted.
The 20th century has seen the emergence of the field of nonequilibrium, or irreversible, thermodynamics. Unlike classical thermodynamics, in which it is assumed that the initial and final states of the substance being studied are states of equilibrium (i.e., there is no tendency for a spontaneous change to occur), nonequilibrium thermodynamics investigates systems that are not at equilibrium. Early developments in nonequilibrium thermodynamics by the Norwegian-American chemist Lars Onsager concerned systems near, but not at, equilibrium. The subject has since been expanded to include systems far away from equilibrium.

A central consideration of thermodynamics is that any physical system, whether or not it can exchange energy and material with its environment, will spontaneously approach a stable condition (equilibrium) that can be described by specifying its properties, such as pressure, temperature, or chemical composition. If the external constraints are changed (for example, if the system is allowed to expand), then these properties will generally alter. The science of thermodynamics attempts to describe mathematically these changes and to predict the equilibrium conditions of the system. The subject was developed in the 19th century, when much interest centred on the question of the efficiency of heat engines, in which heat is converted into useful work. In all such devices there is an irreversible dissipation of useful energy because heat can never be converted to work with 100 percent efficiency. The key concept in understanding these transformations is not energy but entropy.
The law of conservation of energy, known as the first law of thermodynamics, ensures that, whenever energy is converted in form, its total quantity remains unchanged. In thermodynamics, interest centres on the usefulness of energy. Roughly speaking, ordered energy is useful, whereas disordered energy cannot be harnessed to do work. Thus, the energy stored in a battery is useful, whereas the energy from a fire that has dissipated into the environment is effectively lost. Entropy provides a measure of the degree of disorder of energy. Disordered energy has a high entropy. When the entropy of a system reaches a maximum, the system is totally disordered and no further large-scale change will spontaneously occur. This is the condition of equilibrium.

The second law of thermodynamics states that, in a closed system, the entropy does not decrease. That is, if the system is initially in a low-entropy (ordered) state, its condition will tend to slide spontaneously toward a state of maximum entropy (disorder). For example, if two blocks of metal at different temperatures are brought into thermal contact, the unbalanced temperature distribution (which represents a partial ordering of the energy) rapidly decays to a state of uniform temperature as energy flows from the hotter to the colder block. Having achieved this state, the system is in equilibrium. The approach to equilibrium is therefore an irreversible process. The tendency toward equilibrium is so fundamental to physics that the second law is probably the most universal regulator of natural activity known to science.

An insight into entropy and the second law can be provided by the study of the microscopic constituents of matter (atoms and molecules). The temperature and pressure of a gas are traced to the agitation of the gas molecules. The entropic progress from order to disorder may then be viewed as an example of the general tendency of chaotic disruptions to disturb organization and structure.
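The two-block example lends itself to a quick numerical sketch. The heat capacities and starting temperatures below are illustrative assumptions, not values from the text; the simulation transfers energy from the hotter to the colder block step by step, and the accumulated entropy change is always positive:

```python
import math

# Two metal blocks in thermal contact: energy flows from the hotter to
# the colder block, and total entropy rises until the temperatures are
# uniform. Heat capacities and temperatures are illustrative values.
C = 100.0                    # heat capacity of each block, J/K
T_hot, T_cold = 400.0, 300.0

dS_total = 0.0
while T_hot - T_cold > 1e-9:
    # transfer a small amount of heat, clamped so we never overshoot
    dQ = min(1.0, C * (T_hot - T_cold) / 2)
    dS_total += dQ * (1.0 / T_cold - 1.0 / T_hot)   # always positive
    T_hot -= dQ / C
    T_cold += dQ / C

# For constant heat capacity the exact result is C * ln(Tf^2 / (Th*Tc)).
dS_exact = C * math.log(350.0**2 / (400.0 * 300.0))
print(dS_total, dS_exact)   # both close to 2.06 J/K
```

Because each parcel of heat dQ leaves the hot block at a high temperature and enters the cold block at a low one, each step contributes dQ(1/Tc - 1/Th) > 0, which is exactly the irreversibility the second law describes.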
Any system that is subject to random agitations will eventually attain its most disordered condition. The concept of temperature enters into thermodynamics as a precise mathematical quantity that relates heat to entropy. The interplay of these three quantities is further constrained by the third law of thermodynamics, which deals with the absolute zero of temperature and its theoretical unattainability. Absolute zero (approximately -273 °C) would correspond to a condition in which a system had achieved its lowest energy state. The third law states that, as this minimum temperature is approached, the further extraction of energy becomes more and more difficult.
Classical thermodynamics

Thermodynamic relations

A very low-density gas has little intermolecular potential energy.
Its behaviour may therefore be described by the ideal gas model, in which all the internal energy is possessed by the individual molecules. Internal energy or enthalpy changes can be calculated using equation (12), and entropy changes using equation (36) (or an appropriate integral of the temperature term if specific heat is not constant). From this viewpoint, the thermodynamic properties of any real substance (gas, liquid, or solid) may be considered to comprise the ideal gas contributions plus those due to the real intermolecular forces. To evaluate the latter, one must develop the appropriate thermodynamic relations with which to calculate these contributions.

Consider a gas at a very low pressure P* (state 1 on Figure 11), such that it has only ideal-gas properties. If the gas is compressed, the pressure increases to P2, a state at which real-gas contributions also exist. Further increases in pressure at this constant temperature add to these real-gas properties until state 3, the saturation pressure, is reached. At this point the gas is termed a saturated vapour, and further compression does not increase the pressure but instead causes condensation to the liquid phase. When all the vapour is condensed to liquid, the substance is termed a saturated liquid (state 4). Now further compression results in increased pressure, for example to P5. It is seen that to calculate changes in thermodynamic properties from the ideal-gas state 1 to any of the real states 2 through 5 requires different strategies. For any real-gas state up to the saturated vapour state 3, it is necessary to develop a set of thermodynamic relations for a homogeneous real single phase (gaseous, in this case), in which the properties to be calculated need to be expressed as continuous functions of the appropriate independent properties, either T and P or T and v. Thus, it is necessary to have an equation of state for the real gas in order to evaluate properties in this region.
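For constant specific heat, the ideal-gas entropy change described above reduces to the familiar relation s2 - s1 = cp ln(T2/T1) - R ln(P2/P1). A minimal sketch, with assumed air-like property values chosen purely for illustration:

```python
import math

# Ideal-gas entropy change per unit mass with constant specific heat:
#   s2 - s1 = cp * ln(T2/T1) - R * ln(P2/P1)
# The property values below are assumed, air-like numbers.
R = 0.287     # specific gas constant, kJ/(kg*K)
cp = 1.005    # specific heat at constant pressure, kJ/(kg*K)

def delta_s(T1, P1, T2, P2):
    """Entropy change s2 - s1 in kJ/(kg*K); T in K, P in kPa."""
    return cp * math.log(T2 / T1) - R * math.log(P2 / P1)

# Isothermal compression lowers entropy; isobaric heating raises it.
print(delta_s(300.0, 100.0, 300.0, 500.0))   # negative
print(delta_s(300.0, 100.0, 600.0, 100.0))   # positive
```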
The change from saturated vapour to saturated liquid (i.e., from state 3 to state 4), on the other hand, requires a different type of mathematical relationship. Moving into the compressed liquid region (as, for example, to state 5) results in the same type of mathematical relationship as before, although in this region it will be necessary to have an equation of state for the real-liquid phase.

Maxwell relations

In developing thermodynamic relations to represent the homogeneous phase calculations, one useful mathematical device is the Maxwell cross-partial derivatives. The exact differential of a variable z that is a continuous function of x and y can be written in the form dz = Mdx + Ndy, where M = (∂z/∂x)_y is the partial derivative of z with respect to x (the variable y being held constant) and N = (∂z/∂y)_x is the partial derivative of z with respect to y (the variable x being held constant). Because it does not matter in what order a second partial differentiation of the function z is performed, ∂²z/∂y∂x = ∂²z/∂x∂y, or (∂M/∂y)_x = (∂N/∂x)_y. From this equation, expressions relating P, v, T, and s can be derived. Consider the thermodynamic property relation, equation (24), rewritten here per unit mass as du = Tds - Pdv. This equation expresses internal energy as a function of the variables entropy and specific volume. The Maxwell derivative for this expression states that

(∂T/∂v)_s = -(∂P/∂s)_v.     (41)

This equation is termed a Maxwell relation, an expression among thermodynamic properties. It is not a particularly useful expression, because entropy, held constant on the left side, is one of those properties that cannot be measured directly but must instead be calculated from properties that can be measured. In a similar manner, the property relation expressed in terms of the enthalpy, equation (25), rewritten per unit mass as dh = Tds + vdP, yields a second Maxwell relation:

(∂T/∂P)_s = (∂v/∂s)_P.     (42)

Like equation (41) and for the same reason, equation (42) is not particularly useful.
In order to develop more useful Maxwell relations, it is necessary to be able to write the property relation in terms of variables that are measurable properties. One such form results from the definition of a thermodynamic property termed the Helmholtz function, or A, where

A = U - TS.     (43)

Rewriting equation (43) per unit mass, differentiating, and substituting equation (24) gives

da = -sdT - Pdv.     (44)

This form of the property relation yields the Maxwell relation

(∂s/∂v)_T = (∂P/∂T)_v.     (45)

Equation (45) gives a relation for calculating the entropy change along an isotherm in terms of the equation of state, the right side of the expression. This results from the form of equation (44), which expresses the Helmholtz function in terms of the independent properties temperature and volume, both of which are measurable properties. Another useful Maxwell relation results from the definition of the Gibbs function G as

G = H - TS.     (46)

The specific Gibbs function is then g = h - Ts. Differentiating and substituting equation (25) into this gives

dg = -sdT + vdP.     (47)

This is a particularly useful form of the property relation, as it is written in terms of the independent properties temperature and pressure. Equation (47) yields the Maxwell relation

(∂s/∂P)_T = -(∂v/∂T)_P,     (48)

another expression for calculating entropy change along an isotherm in terms of the equation of state for the substance. Note that equation (45) is particularly useful for an equation of state (such as the virial equation) in which pressure is explicit, while equation (48) is particularly useful for an equation of state in which specific volume is explicit. It is also necessary to develop expressions for calculating changes of internal energy or enthalpy for real substances along an isotherm.
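As an aside, a Maxwell relation such as (∂s/∂v)_T = (∂P/∂T)_v can be checked numerically from any assumed Helmholtz function a(T, v). The sketch below uses a simple ideal-gas-like form for a(T, v) (an illustrative assumption, with air-like constants) and central finite differences:

```python
import math

# Numerical check of the Maxwell relation (ds/dv)_T = (dP/dT)_v.
# The Helmholtz function a(T, v) below is an assumed ideal-gas form,
# purely for illustration; R and c are air-like constants.
R, c = 0.287, 0.718

def a(T, v):
    return -R * T * math.log(v) - c * T * math.log(T)

def s(T, v):
    """Specific entropy s = -(da/dT)_v by central difference."""
    d = 1e-5
    return -(a(T + d, v) - a(T - d, v)) / (2 * d)

def P(T, v):
    """Pressure P = -(da/dv)_T by central difference."""
    d = 1e-5
    return -(a(T, v + d) - a(T, v - d)) / (2 * d)

d = 1e-4
T0, v0 = 300.0, 0.8
ds_dv = (s(T0, v0 + d) - s(T0, v0 - d)) / (2 * d)
dP_dT = (P(T0 + d, v0) - P(T0 - d, v0)) / (2 * d)
print(ds_dv, dP_dT)   # both sides equal R/v0 for this a(T, v)
```

For this a(T, v) the pressure is P = RT/v, so both sides of the relation reduce analytically to R/v, which the finite differences reproduce.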
Rearranging equation (24), rewriting it per unit mass and differentiating, then substituting equation (45) gives

(∂u/∂v)_T = T(∂P/∂T)_v - P.     (49)

Similarly, using equations (25) and (47) yields

(∂h/∂P)_T = v - T(∂v/∂T)_P.     (50)

In addition, from the definition of enthalpy,

h = u + Pv,     (51)

resulting in a set of three equations that can be used to calculate changes in the two properties u and h in terms of the equation of state. In practice, either equation (49) or equation (50) is used, depending on the form of the equation of state, along with equation (51).

Nonequilibrium thermodynamics

Not long after Clausius and Kelvin formulated the principle now known as the second law of thermodynamics, scientists began to search for mechanical explanations of entropy. This search was beset with difficulties, because the mechanical equations predict reversible motions that can run both forward and backward in time, while the second law of thermodynamics is irreversible. Indeed, in the words of Clausius, the second law was simply that "heat cannot, of itself, move from a cold to a hot body"; i.e., it moves irreversibly from hot to cold. While this statement of the second law seemed simple enough to understand, the entropy function that is derived from it was not. Convincing investigations of the mechanical theory of heat were initiated by the Scottish physicist James Clerk Maxwell in the 1850s. Maxwell argued that the velocities of point particles in a gas were distributed over a range of possibilities that increased with temperature, which led him to predict, and then verify experimentally, that the viscosity of a gas is independent of its pressure. In the following decade Boltzmann began his investigation of the dynamical theory of gases, which ultimately placed the entire theory on firm mathematical ground. Both men had become convinced that the entropy reflected molecular randomness.
Maxwell expressed it this way: "The second law has the same degree of truth as the statement that, if you throw a tumblerful of water into the sea, you cannot get the same tumblerful of water out again." These were the seminal notions that in the 20th century led to the field of nonequilibrium, or irreversible, thermodynamics.

Phenomenological theory: systems close to equilibrium

Despite its molecular origin, the first consistent formulation of nonequilibrium thermodynamics was phenomenological, i.e., independent of the existence of molecules. In this formulation the definition of the entropy function is extended to apply to systems that are close to, but not actually at, equilibrium. Called the local equilibrium assumption, it means that the time derivative of the entropy S can be written

dS/dt = (1/T)(dE/dt) + (P/T)(dV/dt) - Σi (μi/T)(dNi/dt),

where the variables that are proportional to the size of the system (the extensive variables) are E, the internal energy; V, the volume; and Ni, the number of molecules of kind i. The temperature T and the other intensive variables (the pressure P and the chemical potentials μi) retain their usual meanings. This is the first phenomenological assumption. The second assumption is based on the second law, namely, that the time rate of change of the entropy consists of two terms: one due to changes that are reversible and another due to irreversible processes, called the entropy production or dissipation function (Φ), which can never be negative. In the phenomenological theory, the dissipation function can be written as a sum over the thermodynamic fluxes J and their conjugate forces X, or

Φ = Σi JiXi ≥ 0.

Examples of fluxes include the heat or mass flowing through unit area in unit time in a fluid, an electric current, and the time rate of change of the number of molecules or atoms in a chemical reaction. These various fluxes all can be thought of as the time rate of change of an extensive variable and can be written Ji = dni/dt, with ni representing one of the extensive variables.
The conjugate forces are identified from phenomenological equations, such as Fourier's law of heat transport (where the thermodynamic force is the temperature gradient) or Ohm's law, in which the current in a resistor is proportional to the voltage across it. Close enough to equilibrium, these, and all the other familiar kinetic and transport laws, can be expressed in the linear form

Ji = Σk LikXk,     (161)

where the constant coefficients Lik are called phenomenological coupling coefficients. Written in this fashion, the thermodynamic forces are differences between the instantaneous and equilibrium value of an intensive variable (or their gradients). Thus, for example, 1/T - 1/Te is the thermodynamic force conjugate to the internal energy E, and the resulting linear law is a form of Newton's law of cooling. Equations such as (161) are referred to as linear laws. In addition to the kinetic and transport equations that were established experimentally in the 19th century, the linear laws suggested new types of transport phenomena. Indeed, in the hands of the chemist Lars Onsager and later the German physicist Josef Meixner, this rewriting led to a deep connection between the phenomenological coupling coefficients and thermodynamics. Because the dissipation function is positive, the coupling coefficients are not arbitrary. Consider, for example, the case of energy and charge transport in a thermocouple. This device, illustrated in Figure 32, involves two lengths of metal wire of different composition connected in a loop with the metal junctions immersed in thermal reservoirs at different temperatures (say, ice and boiling water). According to the linear laws, the flux of energy Je and the flux of electrons Jq in the wire can be written as

Je = LeeΔ(1/T) + LeqΔ(-v/T)     (162)

Jq = LqeΔ(1/T) + LqqΔ(-v/T),     (163)

where v is the (electrochemical) potential difference and Δ(-v/T) is the thermodynamic force for the electron flux.
The first term in equation (162) is a form of Newton's law of cooling, and the second term in equation (163) is a form of Ohm's law, so Lee and Lqq are related to the heat and electrical conductivity, respectively. Restrictions on the coefficients follow from the fact that the entropy production cannot be negative. This means that both of the direct coupling coefficients, Lee and Lqq, must be positive. Furthermore, the cross coupling coefficients Leq and Lqe must satisfy the inequality (Leq + Lqe)² ≤ 4LeeLqq. Thus, the second law constrains the size of the cross coupling coefficients. The thermocouple illustrates another aspect of the cross coupling of fluxes and forces: namely, that a temperature difference might give rise not only to a heat flux but also to an electric current, while a potential difference can give rise to an electric current and a heat flux. Experimental measurements demonstrate not only that these phenomena, known as the Seebeck and Peltier effects, exist but that the two cross coupling coefficients, Lqe and Leq, are equal within experimental error. By introducing statistical ideas into nonequilibrium thermodynamics, Onsager was able to prove for irreversible processes that Lik = Lki, a fact now referred to as the Onsager reciprocal relations. Another restriction on the cross coupling coefficients, called the Curie principle, dictates, for example, that the thermodynamic force for a chemical reaction in a fluid, which has no directional character, does not contribute to the heat flux, which is a vector. Experiments to test the validity of the reciprocal relations are not always easy. Some of the most accurate deal with mass diffusion at uniform temperature.
For example, in a solution of table salt (NaCl) in water, there is only a single thermodynamic force for diffusion (the gradient of the chemical potential of NaCl), but in an aqueous solution of NaCl and potassium chloride (KCl), gradients in the chemical potentials of both salts act as thermodynamic forces for diffusion. Within the limits of experimental uncertainty, the cross coupling coefficients for diffusion in this and other three-component solutions have been shown to be equal. The magnitude of the cross coupling coefficients is frequently comparable to the direct coupling coefficients, which has made cross coupling a significant phenomenon in electrochemistry, geophysics, and membrane biology. During World War II, thermal diffusion, a cross coupling in which a temperature gradient causes a diffusion flux, was used to separate fissionable isotopes of uranium.

In 1931 Onsager enunciated another principle of nonequilibrium thermodynamics that applies to certain systems at steady state. A steady state is a condition in which none of the extensive variables change in time but in which there is a net flux of some quantity. This contrasts with thermal equilibrium, in which all the fluxes vanish. For example, if no current is drawn from the thermocouple in Figure 32, a steady state is attained with a heat flux Je, given by equation (162), and a flux of electrons Jq = 0. According to equation (163) this implies that the difference in the temperature of the two reservoirs ΔT maintains a steady potential difference determined by the condition

LqeΔ(1/T) + LqqΔ(-v/T) = 0.     (164)

Onsager discovered that this type of steady state condition was implied by a property of the dissipation function Φ. The steady state in equation (164) turns out to be the state of least dissipation when the temperatures of the two reservoirs are held fixed.
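The open-circuit steady state can be illustrated numerically. With assumed coupling coefficients obeying the Onsager symmetry Leq = Lqe (the numerical values below are arbitrary illustrative choices), setting Jq = 0 fixes the electrical force, and the dissipation Φ = JeXe + JqXq is smallest exactly there:

```python
# Thermocouple linear laws:
#   Je = Lee*Xe + Leq*Xq,   Jq = Lqe*Xe + Lqq*Xq
# with forces Xe = Delta(1/T) and Xq = Delta(-v/T). The coefficient
# values are arbitrary illustrative numbers obeying Leq = Lqe.
Lee, Lqq = 4.0, 2.0
Leq = Lqe = 1.0
Xe = 0.05                        # fixed temperature force

def fluxes(Xq):
    Je = Lee * Xe + Leq * Xq
    Jq = Lqe * Xe + Lqq * Xq
    return Je, Jq

def dissipation(Xq):
    Je, Jq = fluxes(Xq)
    return Je * Xe + Jq * Xq     # Phi = sum of flux * force

# Open circuit (Jq = 0) fixes the force: Xq* = -(Lqe/Lqq)*Xe
Xq_star = -(Lqe / Lqq) * Xe

# The dissipation is smallest at the steady state (fixed Xe).
candidates = [Xq_star + d for d in (-0.01, -0.001, 0.0, 0.001, 0.01)]
best = min(candidates, key=dissipation)
print(best, Xq_star, dissipation(Xq_star))
```

Because Φ is quadratic in Xq with symmetric coefficients, its minimum over Xq coincides with the Jq = 0 condition, which is the content of the least-dissipation principle in this simple setting.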
The proof of this requires that the linear laws and the reciprocity relations are valid; it is referred to as the Rayleigh-Onsager principle of least dissipation or the principle of minimum entropy production.

Statistical thermodynamics

Thermodynamics is the study of the various properties of macroscopic systems that are in equilibrium and, particularly, the relations between these various properties. Having been developed in the 1800s before the atomic theory of matter was generally accepted, classical thermodynamics is not based on any atomic or molecular theory, and its results are independent of any atomic or molecular models. This character of classical thermodynamics is both a strength and a weakness: classical thermodynamic results will never need to be modified as scientific knowledge of atomic and molecular structure improves or changes, but classical thermodynamics gives no insight into the physical properties or behaviour of physical systems at the molecular level. With the development of atomic and molecular theories in the late 1800s and early 1900s, thermodynamics was given a molecular interpretation. This field is called statistical thermodynamics, because it relates average values of molecular properties to macroscopic thermodynamic properties such as temperature and pressure. The goal of statistical thermodynamics is to understand and to interpret the measurable macroscopic properties of materials in terms of the properties of their constituent particles and the interactions between them. Statistical thermodynamics can thus be thought of as a bridge between the macroscopic and the microscopic properties of systems. It provides a molecular interpretation of thermodynamic quantities such as work, heat, and entropy. Research in statistical thermodynamics varies from mathematically sophisticated discussions of general theories to semiempirical calculations involving simple, but nevertheless useful, molecular models.
An example of the first type of research is the investigation of the question of whether statistical thermodynamics, as it is formulated today, is capable of predicting the existence of a first-order phase transition. General questions like this are by their nature mathematically involved and require rigorous methods. For many scientists, however, statistical thermodynamics merely serves as a tool with which to calculate the properties of physical systems of interest.

The Boltzmann factor and the partition function

Two central quantities in statistical thermodynamics are the Boltzmann factor and the partition function. To understand what these quantities are, consider some macroscopic system such as a litre of gas, a litre of some solution, or a kilogram of some solid. From a mechanical point of view, such a system can be described by specifying the number N of constituent particles, the volume V of the system, and the forces between the particles. Even though the system contains on the order of Avogadro's number of particles, one can still consider the Schrödinger equation for this N-body system,

H_N ψ_j = E_j ψ_j,

where H_N is the Hamiltonian operator; ψ_j are its associated wave functions, which depend on the coordinates of all the particles; and E_j are the allowed energies of the system. The energies depend on both N and V and may therefore be written Ej(N,V). For the special case of an ideal gas, the total energy Ej(N,V) will simply be a sum of the individual molecular energies εi,

Ej(N,V) = Σi εi,     (75)

because the molecules of an ideal gas are independent of one another. For example, for a monatomic ideal gas, if one ignores the electronic states and focuses only on the translational states, then the εi are just the energies of a particle in a three-dimensional box:

ε = h²(nx² + ny² + nz²)/8ma²,  nx, ny, nz = 1, 2, 3, . . . ,     (76)

where h is Planck's constant, m is the mass of the particle, and a is the length of the box. It should be noted that Ej(N,V) depends on N through the number of terms in equation (75) and on V through the fact that a = V^1/3 in equation (76).
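Equation (76) can be evaluated directly. The sketch below computes the lowest translational levels for an atom in a macroscopic box (the argon-like mass and 1 cm box edge are illustrative assumptions) and compares them with kT, showing how extraordinarily closely spaced translational states are:

```python
# Lowest translational levels from equation (76) for an atom in a
# cubic box. The mass (argon-like) and box edge (1 cm) are assumed
# values chosen to show how dense the translational levels are.
h = 6.626e-34            # Planck's constant, J*s
k = 1.3807e-23           # Boltzmann constant, J/K
m = 6.63e-26             # kg, roughly one argon atom
a = 0.01                 # m, box edge length

def energy(nx, ny, nz):
    """e(nx,ny,nz) = h^2 (nx^2 + ny^2 + nz^2) / (8 m a^2), in joules."""
    return h**2 * (nx**2 + ny**2 + nz**2) / (8 * m * a**2)

ground = energy(1, 1, 1)
gap = energy(2, 1, 1) - ground
kT = k * 300.0
print(ground, gap, kT)   # the spacing is utterly negligible next to kT
```

The gap between adjacent levels is many orders of magnitude smaller than kT at room temperature, which is why enormous numbers of translational states contribute to the sums that follow.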
For a more general system in which the particles interact with each other, the E_j(N,V) cannot be written as a sum of individual particle energies, but the allowed macroscopic energies E_j(N,V) can still be considered, at least conceptually. Now consider a system with N constituent particles in a volume V and at a temperature T. Thus, from a thermodynamic point of view, the system is specified by N, V, and T. What is the probability that the (macroscopic) system is in the jth quantum state with an energy E_j(N,V)? To answer this question, it is necessary to construct a mental collection of identical systems, essentially infinite in number, each with N, V, and T fixed. Generally, a mental collection of identical systems is called an ensemble, and a canonical ensemble in particular if N, V, and T are fixed for each system. The probability p_j(N,V,T) that a system is in the quantum state j with energy E_j(N,V) is related to that energy by

p_j(N,V,T) ∝ e^(−E_j(N,V)/kT), (77)

where the quantity k is a fundamental constant called the Boltzmann constant, whose numerical value is 1.3807 × 10^−23 joule per kelvin. The Boltzmann constant is the molar gas constant R (in the equation PV = nRT) divided by Avogadro's number. The factor e^(−E_j/kT), which occurs throughout the equations of chemistry and physics, is called the Boltzmann factor. Proportionality (77) can be converted to an equation by virtue of the fact that the sum of p_j(N,V,T) over all values of j must equal unity (because the system must be in some state). The resulting equation is

p_j(N,V,T) = e^(−E_j(N,V)/kT) / Q(N,V,T), (78)

where

Q(N,V,T) = Σ_j e^(−E_j(N,V)/kT), (79)

and the summation is carried over all values of j, or over all possible quantum states. The quantity Q(N,V,T) is called a (canonical) partition function and is a central quantity of statistical thermodynamics.
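The normalization argument behind equations (78) and (79) can be checked directly. A short Python sketch, using a hypothetical three-level system (the energies are illustrative, not taken from any real substance):

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # assumed temperature, K
# Hypothetical three-level spectrum, energies in joules (illustrative only)
E = [0.0, 1.0e-21, 3.0e-21]

# Partition function, eq. (79): sum of Boltzmann factors over all states
Q = sum(math.exp(-Ej / (k * T)) for Ej in E)
# Probabilities, eq. (78): each Boltzmann factor divided by Q
p = [math.exp(-Ej / (k * T)) / Q for Ej in E]

print(round(sum(p), 12))    # 1.0 — the probabilities are normalized
print(p[0] > p[1] > p[2])   # True — lower-energy states are more probable
```

Dividing by Q is exactly what converts the proportionality (77) into the equality (78): the probabilities sum to unity by construction.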
The partition function Q(N,V,T) is related to the Helmholtz energy A by the equation

A = −kT ln Q(N,V,T). (80)

This equation is remarkable in that the right-hand side depends on molecular properties through the quantum mechanical energies E_j(N,V), whereas the left-hand side is a macroscopic, classical thermodynamic quantity. Thus, equation (80) serves as a bridge between classical thermodynamics and statistical thermodynamics. It allows thermodynamic properties to be interpreted and calculated in terms of molecular properties. As a concrete, simple example, consider the partition function of a monatomic ideal gas, such as argon, given by

Q(N,V,T) = (1/N!)(2πmkT/h²)^(3N/2) V^N, (81)

where m is the mass of the atom. Substituting equation (81) for Q(N,V,T) in equation (80) and then using the thermodynamic formula

P = −(∂A/∂V)_(N,T)

gives PV = NkT = nRT, which is the ideal gas equation of state. Furthermore, the thermodynamic energy can be calculated by means of the equation

U = kT²(∂ ln Q/∂T)_(N,V) (84)

to obtain the well-known result from the kinetic theory of gases, U = (3/2)NkT = (3/2)nRT. The molar heat capacity C_V = (∂U/∂T)_(N,V) is then (3/2)R. The entropy S can be expressed in terms of Q(N,V,T) by using the fact that A = U − TS, where A is obtained from equation (80) and U from equation (84):

S = U/T + k ln Q(N,V,T).

Using equation (81) for Q gives

S = (5/2)Nk + Nk ln[(2πmkT/h²)^(3/2) V/N],

which is called the Sackur-Tetrode equation. The calculated value for the standard molar entropy of argon at 298.2 K is 154.7 joules per kelvin per mole (J/(K·mol)), compared with the experimental (calorimetric) value of 154.8 J/(K·mol). In general, the statistical thermodynamic entropies are in excellent agreement with experimental (calorimetric) values. The summation in equation (79) is carried over all possible quantum states of the N-body system. If W(E_j) is the degeneracy, or the number of quantum states with energy E_j, then the value exp(−E_j/kT) occurs W(E_j) times in the summation. Rather than listing exp(−E_j/kT) separately W(E_j) times, one can simply write W(E_j)exp(−E_j/kT) and then sum over the different values of E.
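The quoted entropy of argon can be reproduced numerically. A minimal Python sketch of the Sackur-Tetrode result in the per-mole form S = R[ln((2πmkT/h²)^(3/2) · kT/P) + 5/2], assuming a 1 atm standard-state pressure (the article does not state the pressure used):

```python
import math

# Physical constants (CODATA values, rounded)
h = 6.62607e-34            # Planck's constant, J·s
k = 1.380649e-23           # Boltzmann constant, J/K
R = 8.314462               # molar gas constant, J/(K·mol)
m = 39.948 * 1.66054e-27   # mass of one argon atom, kg
T = 298.2                  # temperature from the text, K
P = 101325.0               # 1 atm in Pa (assumed standard state)

# Per-particle volume of an ideal gas: V/N = kT/P
thermal = (2 * math.pi * m * k * T / h**2) ** 1.5   # (2πmkT/h²)^(3/2)
S = R * (math.log(thermal * k * T / P) + 2.5)       # Sackur-Tetrode, molar form

print(round(S, 1))   # 154.7 J/(K·mol), matching the value quoted in the text
```

The agreement with the calorimetric value of 154.8 J/(K·mol) illustrates how equation (80) lets a purely molecular quantity, Q, reproduce a measured macroscopic property.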
Equation (79) can then be written in the form

Q(N,V,T) = Σ_E W(E) e^(−E/kT). (87)

In equation (79) the summation is over the quantum mechanical states of the system; in equation (87) it is over energy levels.
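The equivalence of the two forms is easy to verify for a toy spectrum. A Python sketch, using a hypothetical two-level system with a threefold-degenerate excited level and working in units where kT = 1 (all values illustrative):

```python
import math

kT = 1.0  # work in units where kT = 1 (illustrative)

# Hypothetical spectrum: energy 0 occurs once, energy 1 occurs three times
state_energies = [0.0, 1.0, 1.0, 1.0]
# Eq. (79): sum over individual quantum states
Q_states = sum(math.exp(-E / kT) for E in state_energies)

# Eq. (87): sum over levels, weighting each energy E by its degeneracy W(E)
levels = {0.0: 1, 1.0: 3}
Q_levels = sum(W * math.exp(-E / kT) for E, W in levels.items())

print(abs(Q_states - Q_levels) < 1e-12)   # True — the two forms agree
```

Grouping states into levels changes nothing in the value of Q; it is purely a bookkeeping convenience that becomes essential when W(E) grows astronomically large, as it does for macroscopic systems.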

# THERMODYNAMICS

Britannica English vocabulary. 2012