# The Second Law of Thermodynamics

Thermodynamics is an extremely powerful framework for making *quantitative* predictions about the behavior of macroscopic parameters for systems with huge numbers of interacting components. The problem, of course, lies in justifying this thermodynamic framework from more fundamental physical principles like Newton’s equations or quantum mechanics. The first law of thermodynamics easily lends itself to a more fundamental interpretation, since

$$dE = \delta W + \delta Q$$

is nothing more than a bookkeeping method to classify changes in the total energy into (i) work $\delta W$ arising from changes to the macroscopic parameters and (ii) heat $\delta Q$ absorbed or emitted by the remaining microscopic degrees of freedom.
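As a concrete illustration of this bookkeeping (using a gas in a cylinder as an assumed example, where the volume $V$ plays the role of the macroscopic parameter):

```latex
% First-law bookkeeping for a gas in a cylinder (illustrative example):
% the macroscopic parameter is the volume V.
dE \;=\; \underbrace{-P\,dV}_{\substack{\text{work: change in the}\\ \text{macroscopic parameter } V}}
\;+\; \underbrace{\delta Q}_{\substack{\text{heat: energy exchanged with the}\\ \text{microscopic degrees of freedom}}}
```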

The second law (which, in one common form, states that “the entropy of an isolated system cannot decrease with time”) is a far trickier beast. Its origins are thought to arise from statistical considerations of the chaotic nature of the system’s time evolution through the microstate space. However, to the best of my knowledge, a truly satisfying derivation has not yet been found. Nevertheless, the experimental evidence in favor of the second law is overwhelming, and so I find it very reassuring to know that the concept of entropy and the second law can be rigorously derived completely within the macroscopic framework of thermodynamics using only two additional empirical observations.

The first of these is Kelvin’s postulate that

> There is no process that can convert heat into work with perfect efficiency.

In other words, it is impossible to transfer all the energy in the microscopic degrees of freedom to the macroscopic degrees of freedom — there is some qualitative difference between the two.
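As an illustrative consequence (a standard textbook statement, not derived in the text above), Kelvin’s postulate forces any cyclic heat engine to reject some waste heat, capping its efficiency strictly below unity:

```latex
% A cyclic engine absorbing heat Q_h per cycle must reject waste heat Q_c > 0
% (otherwise it would convert heat into work with perfect efficiency),
% so its efficiency satisfies
\eta \;=\; \frac{W}{Q_h} \;=\; \frac{Q_h - Q_c}{Q_h} \;=\; 1 - \frac{Q_c}{Q_h} \;<\; 1.
```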

From this postulate, we can show that for any cyclic transformation of a system,

$$\oint \frac{\delta Q}{T} \leq 0,$$

with equality in the case of a reversible process. This implies that for a reversible process, the fraction $\delta Q/T$ is an *exact* differential, which we denote by

$$dS = \frac{\delta Q}{T},$$

and so there exists a function of state $S$ with the property that

$$S(B) - S(A) = \int_A^B \frac{\delta Q}{T}$$

for any reversible path from state $A$ to state $B$. This function $S$ is known as the *entropy*.
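As a concrete check of this definition (a standard worked example, assuming an ideal gas with the equation of state $PV = nRT$), consider a reversible isothermal expansion from volume $V_1$ to $V_2$:

```latex
% Reversible isothermal expansion of n moles of ideal gas from V_1 to V_2:
% at constant T, dE = 0, so \delta Q = P\,dV = \frac{nRT}{V}\,dV, and hence
S(B) - S(A) \;=\; \int_A^B \frac{\delta Q}{T}
\;=\; \int_{V_1}^{V_2} \frac{nR}{V}\,dV
\;=\; nR \ln\!\frac{V_2}{V_1}.
```

The result depends only on the endpoint volumes, as it must for a function of state.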

Now consider a cycle formed by a (possibly irreversible) transformation from $A$ to $B$ followed by a reversible transformation back to state $A$. We combine the two expressions given above to obtain

$$\int_A^B \frac{\delta Q}{T} \leq S(B) - S(A).$$

For an isolated system, $\delta Q$ must vanish everywhere along the path, and hence the left-hand side of this equation vanishes as well. Thus, we obtain the statement that

> There exists a function of state $S$ such that in an isolated system, if a state $B$ is adiabatically accessible from an initial state $A$, then $S(B) \geq S(A)$.

This statement gives us lots of information about the entropy landscape, but it still does not contain any information about the dynamics of the system. After all, just because a change *can* happen doesn’t necessarily mean that it *will* happen. To obtain predictive statements about the dynamics of the system, we must appeal to a second empirical assumption, which is that

> All systems are constantly subjected to tiny fluctuations in their macroscopic degrees of freedom.

From this assumption, we conclude that there are no adiabatically accessible states for an isolated system in equilibrium — otherwise these small fluctuations would drive the system into a different state and the equilibrium condition would be violated. Combining this result with the entropy inequality implies that an isolated system in equilibrium sits at a local entropy maximum.
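As a sketch of what this maximization principle buys us (using the standard thermodynamic definition $1/T = \partial S/\partial E$, which is not introduced in the text above), consider two bodies exchanging energy inside an isolated system:

```latex
% Two bodies exchanging energy at fixed total E = E_1 + E_2.
% At equilibrium, S(E_1) = S_1(E_1) + S_2(E - E_1) is maximized over
% fluctuations in E_1:
\frac{\partial S}{\partial E_1}
\;=\; \frac{\partial S_1}{\partial E_1} - \frac{\partial S_2}{\partial E_2}
\;=\; \frac{1}{T_1} - \frac{1}{T_2} \;=\; 0
\quad\Longrightarrow\quad T_1 = T_2.
```

The familiar condition of equal temperatures at thermal equilibrium thus falls out of the entropy maximum.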

Thus, we see that the existence of the entropy function and its maximization principle follow naturally from two rather simple empirical postulates.