
### FAQ About the Second Law of Thermodynamics

> Entropy is a physical concept often casually described as disorder, but it differs significantly from the conventional use of the word.
>
> John Rennie, Scientific American, 287(1), 82, 2002.

1. To what kinds of systems does the second law apply?
Some form of the second law applies to all systems, but the particular form that applies depends on how the system can interact with its surroundings. Interactions may be limited by a boundary that separates the system from its surroundings. For example, the boundary may prevent matter from entering or leaving the system; when this is so, the system is said to be closed.
2. What other kinds of systems are important here?
The boundary might be insulated and shielded, so no heat can enter or leave the system; some people call this an athermal boundary. Processes occurring in these systems are said to be adiabatic.
Other boundaries can prevent any form of work from being done on or by the system; processes in these systems are said to be workfree. This means all forms of work are excluded, including work to change the system volume, surface work, electrical work, etc. If the boundary prevents all forms of matter and energy from entering or leaving the system, then the system is isolated. Thus, an isolated system is closed, adiabatic, and workfree.
3. What's a simple statement of the second law?
Consider a system that is closed and surrounded by an athermal boundary, so no mass or heat can enter or leave the system. Then, during any process taking place in such a system, the second law asserts that the system entropy can only increase or remain constant:
ΔS ≥ 0 (1)
Here, S represents the entropy of the system; it has dimensions of (energy/temperature). Typical units of S are joules per kelvin (J/K) or BTU per degree Rankine. The Δ-operator indicates the change produced by a process; that is, ΔS is the difference between the entropy at the end of the process (S2) and that at the beginning (S1),
ΔS = S2 - S1 (2)
4. Does this mean that entropy always increases?
No. The form of the second law in (1) applies only to closed systems having athermal boundaries. If the boundary is not athermal, then we can easily reduce the system entropy … simply remove heat. For example, I have an unopened can of my favorite beverage; initially it's at room temperature. The beverage is the system; the can is its boundary. To start a process, I place the can in a refrigerator. Initially, the temperature of the beverage is higher than that of the air in the refrigerator; since the boundary (the aluminum can) conducts heat, heat is removed from the beverage, and it cools. During this process, the entropy of the beverage decreases. However, this does not violate the second law: since the boundary is not athermal, the form of the second law in (1) simply does not apply to this situation.
5. For the example in Question 4, how do you know S decreases?
The response of S to a change in temperature T is related to a thermodynamic quantity called a heat capacity. In our example, the volume of the liquid beverage is very nearly constant during the cooling, so we can appeal to a constant-volume heat capacity, Cv, which is related to entropy by
Cv = T (∂S/∂T)v (3)
The partial derivative on the right tells us how the entropy responds when the temperature is changed at constant volume v.
The important point here is that heat capacities can be measured; for example, Cv can be measured by placing a sample of our beverage in a constant-volume calorimeter. The only other restriction on (3) is that the system must be a single homogeneous phase.
Now, experimentally we find that heat capacities are always positive, and, in fact, Willard Gibbs proved that heat capacities must be positive if the system is thermally stable. The temperature T in (3) must be an absolute temperature, so it too is always positive. Hence, the derivative in (3) is always positive,
(∂S/∂T)v = Cv/T > 0 (4)
This means that for a closed system at constant volume, S must increase when T increases, and S must decrease when T decreases. This must be true regardless of the nature of the substance that forms our system—it could be water, olive oil, peanut butter, whatever. In the example in Question 4, we are cooling the beverage in the refrigerator, its temperature decreases, and therefore, by (4), its entropy must also decrease.
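The cooling in Question 4 can be put in numbers. Below is a minimal sketch that integrates (3) at constant volume, assuming a constant heat capacity and treating the beverage as 0.355 kg of water; all of the specific values are illustrative choices of mine, not numbers from the text.

```python
import math

# Entropy change of the beverage in Question 4 as it cools at (nearly)
# constant volume. Assumes a constant heat capacity and treats the drink
# as 0.355 kg of water; the numbers are illustrative.
Cv = 0.355 * 4186.0   # J/K: mass (kg) times specific heat of water (J/(kg K))
T1 = 295.0            # K, roughly room temperature
T2 = 277.0            # K, roughly refrigerator temperature

# Integrating dS = (Cv / T) dT from T1 to T2 gives Cv * ln(T2 / T1)
dS = Cv * math.log(T2 / T1)
print(dS)  # negative: cooling lowers the entropy, consistent with (4)
```

Because T2 < T1 the logarithm is negative, so the entropy change is negative no matter what positive heat capacity we pick, which is exactly the point made above.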
6. The form of the second law in (1) contains both an equality and an inequality; what's the distinction?
The equality applies to reversible processes, while the inequality applies to irreversible ones. A reversible process is a model process in which the driving forces are infinitesimally small, so that no energy is wasted in overcoming friction. Common examples of friction include the energy converted to waste heat when solid surfaces rub together and the energy that creates turbulence in certain fluid flows. During a reversible process on a closed system with an athermal boundary, the entropy remains constant: the equality in (1) applies.
However, a reversible process is a model—a limiting case. All real processes are irreversible: they are driven by finite driving forces and some energy is necessarily wasted to friction. Hence, during any real process on a closed system with athermal boundaries, the entropy increases: the inequality in (1) applies.
Further, ΔS measures the degree of irreversibility: the more energy wasted to friction, the larger the value of ΔS. The amount of energy wasted in a process is reflected by an efficiency: small efficiencies imply large amounts of energy wasted and therefore large increases in entropy. In many engineering design situations, we try to improve efficiencies by keeping entropy changes small.
7. My thermo textbook says the second-law form (1) applies to isolated systems. Is that wrong?
No, that's correct. The isolated system is closed, adiabatic, and workfree; therefore, it is a subset of those systems that are closed and adiabatic. The second-law form (1) applies to any closed, adiabatic system, including those that are isolated.
But for engineers, isolated systems are not particularly interesting, because no mass or energy can enter or leave an isolated system. However, if a system is only closed and adiabatically insulated, then we can still interact with it via all kinds of work modes. These are important in many applications, and the second law (1) applies.
8. Since the second law in (1) applies to only certain situations, it isn't very useful, is it?
I think you would get arguments about this statement, but I'll avoid those and, instead, show you that (1) generalizes to situations that you should accept as useful.
Recall that we have restricted the form (1) to closed systems. If the mass of the boundary is negligible (it often is, compared to the mass of the system), then the closed system is one of constant mass. Now let's generalize to open systems, that is, ones that can exchange matter with their surroundings. In particular, let's consider an open system in which matter is flowing into and out of the system at constant rates. Further, the total mass flow into the system exactly balances the total mass flow out of the system: this is called a steady-state mass flow.
The system still has an athermal boundary, so no heat transfer occurs. But there may be work modes acting between the system and its surroundings; the energy flows via these work modes are also restricted to steady flows.
Since the system is undergoing a steady-state process, the total mass in the system remains constant, and the form (1) for the second law still applies:
ΔS ≥ 0 (5)
The distinctions between (1) and (5) are (a) in (5) S now represents a rate, so it has dimensions of entropy/time, and (b) the Δ-operator now means the net difference in entropy rates between outlets and inlets,
ΔS = Sout - Sin (6)
The second-law form (5) states that in any steady-state, adiabatic flow system, the entropy carried out by the streams must equal or exceed the entropy carried in: the net entropy rate cannot be negative.
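The balance in (6) can be sketched in a few lines; the stream values below (mass rate in kg/s, specific entropy in J/(kg K)) are made up for illustration.

```python
def net_entropy_rate(inlets, outlets):
    """Net entropy rate per (6): entropy carried out by the outlet
    streams minus entropy carried in by the inlet streams.
    Each stream is a (mass_rate, specific_entropy) pair."""
    return (sum(m * s for m, s in outlets)
            - sum(m * s for m, s in inlets))

# One steady stream through an adiabatic device (illustrative numbers):
dS_rate = net_entropy_rate(inlets=[(2.0, 6500.0)], outlets=[(2.0, 6650.0)])
print(dS_rate >= 0)  # the second law (5) requires this: prints True
```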
9. Why is the steady-state form (5) important?
Because many practical devices operate adiabatically and at steady-state. Examples are turbines, compressors, and pumps. In addition, the second law (5) implies an important limitation on the operation of heat engines, which are devices that convert heat into useful work. Examples of heat engines include the internal combustion engine in your car, gas turbines in many airplanes, and the power stations that generate electricity.
10. What limitation does the second law impose on heat engines?
A defining characteristic of any heat engine is that it operates in a cycle. For example, at a coal-fired electric-generating station we take heat from a furnace and introduce it into a boiler, to convert water to steam. The steam is then directed onto the blades of a turbine, rotating a shaft. The turbine shaft is coupled to the shaft of a generator, to create electricity.
We have two principal objectives: (1) we want to extract as much work as possible from the steam and (2) we want to operate in a cycle. The second requirement means we want the water exhausted from the turbine to be pumped back to the boiler, so we can use the same water to again create steam.
In other words, we want the heat engine to operate in a steady-state cycle, wherein the water continuously and repeatedly passes through the same thermodynamic states. After any one complete cycle, the water returns to the same thermodynamic state; in particular, the values for the entropy at the start and at the end of one cycle are the same. Hence S2 = S1, and
ΔS = S2 - S1 = 0 (7)
Now consider what happens in the turbine. The turbine is well insulated, so the expansion of the steam against the turbine blades is adiabatic, and by the second law (5),
ΔS > 0 (8)
The entropy of the water increases at the turbine. Therefore, to satisfy (7) and close the cycle, we must decrease the entropy of the water after it leaves the turbine. This means we must remove heat from the water: hence, we place a heat exchanger downstream from the turbine, but upstream of the boiler.
The presence of the heat exchanger means that not all the heat taken from the furnace can be converted into useful work at the turbine. Thus, no heat engine can operate at 100% efficiency: some heat must be removed to close the cycle. The upper bound on the efficiency is called the Carnot efficiency.
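The Carnot bound itself is a one-line formula, 1 - Tc/Th with absolute temperatures; here is a sketch with illustrative temperatures of my own choosing, not values from the text.

```python
def carnot_efficiency(T_hot, T_cold):
    """Upper bound on heat-engine efficiency: 1 - Tc/Th.
    Both temperatures must be absolute (kelvins or rankines)."""
    return 1.0 - T_cold / T_hot

# Illustrative: a boiler at 800 K rejecting heat to cooling water at 300 K
eta_max = carnot_efficiency(800.0, 300.0)
print(eta_max)  # 0.625: even an ideal (reversible) cycle must reject some heat
```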
11. Ok, we need heat exchangers to generate electricity, so what?
The quantities involved are nontrivial. For example, ten miles from where I sit are three nuclear reactors, each capable of generating 1500 MW of electricity. Let's consider just one.
The typical heat engine for generating electricity operates at about 30% efficiency; that is, only 30% of the heat taken from the reactor is converted to electricity. If we create 1500 MW of electricity, then the heat to the boiler must be

Qin = 1500 MW/0.3 = 5000 MW (9)

To close the cycle, we must remove the difference; i.e., by the first law,

Qout = 5000 - 1500 = 3500 MW (10)

Note that the amount of heat removed is more than twice the amount converted to useful work. What do we do with all that excess heat? It has to go into the environment.
Usually, a lake or river provides cooling water to the heat exchangers that remove the excess heat. To protect the environment, we only allow the cooling water to be heated by (say) 15 Fahrenheit degrees above its inlet temperature. Then, to remove 3500 MW of heat, we need 1.6 million gallons of water flowing through the heat exchanger every minute. That's a volume of water that is 60 feet long, 60 feet wide, and 60 feet deep—every minute.
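The arithmetic in (9) and (10) and the cooling-water estimate can be reproduced in a few lines. The 30% efficiency and 15-degree rise come from the text; the water properties and unit conversions are standard handbook values.

```python
# Reproducing the plant arithmetic: (9), (10), and the cooling-water flow.
W_electric = 1500e6            # W of electricity generated
eta = 0.30                     # typical heat-engine efficiency (from the text)
Q_in = W_electric / eta        # (9): heat drawn from the reactor, 5000 MW
Q_out = Q_in - W_electric      # (10): heat rejected, 3500 MW, by the first law

dT = 15.0 / 1.8                # allowed rise of 15 Fahrenheit degrees, in kelvins
c_water = 4186.0               # J/(kg K), specific heat of water
m_rate = Q_out / (c_water * dT)          # cooling water needed, kg/s
vol_per_min = m_rate * 60.0 / 1000.0     # m^3 per minute (water ~1000 kg/m^3)

gallons_per_min = vol_per_min * 264.17           # US gallons per cubic meter
side_ft = (vol_per_min * 35.315) ** (1.0 / 3.0)  # side of an equivalent cube
print(round(gallons_per_min / 1e6, 1), round(side_ft))  # 1.6 million gal/min, 60 ft
```

Running this confirms the figures quoted above: roughly 1.6 million gallons per minute, equivalent to a 60-foot cube of water every minute.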
To supply cooling water, the power company that built the generating station near me also built a lake, which covers over 18,000 acres to an average depth of about 54 feet (full pool) and that has over 300 miles of shoreline. This lake wasn't built for environmental reasons or to provide recreation for the local inhabitants—it was necessary to satisfy the second law.
12. But what is entropy?
In thermodynamics, entropy is defined by this differential equation,
dS = dQrev / T (11)
where T is the absolute temperature and Qrev is the heat transferred during a reversible process. In other disciplines, such as statistical mechanics and theories of information, the entropy is defined in other ways.
The significance of (11) is that it relates an inexact differential (dQrev) to an exact differential (dS). Whenever we must solve an ordinary differential equation, we first determine whether the differentials involved are exact or not. If they are exact, then we can immediately integrate; for example,
∫ dS = S2 - S1 = ΔS (12)
If they are not exact, we look for ways to convert them into exact differentials. One way is to find an integrating factor. The definition (11) identifies (1/T) as the integrating factor that converts dQrev into an exact differential. The variable S in that differential is named the entropy.
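The integrating-factor claim can be checked numerically with textbook ideal-gas formulas (an illustration I am adding, not the article's own example): the reversible heat depends on the path taken between two states, but the integral of dQrev/T does not.

```python
import math

# One mole of a monatomic ideal gas taken from (T1, V1) to (T2, V2)
# along two different reversible paths. Textbook formulas; the state
# values are illustrative.
R = 8.314          # J/(mol K)
Cv = 1.5 * R       # monatomic ideal gas (assumption)
T1, V1 = 300.0, 0.010   # K, m^3
T2, V2 = 400.0, 0.020

# Path A: heat at constant volume V1, then expand isothermally at T2
Q_A = Cv * (T2 - T1) + R * T2 * math.log(V2 / V1)
# Path B: expand isothermally at T1, then heat at constant volume V2
Q_B = R * T1 * math.log(V2 / V1) + Cv * (T2 - T1)

# Entropy change, integrating dQ_rev / T along each path:
S_A = Cv * math.log(T2 / T1) + R * math.log(V2 / V1)
S_B = R * math.log(V2 / V1) + Cv * math.log(T2 / T1)

print(Q_A != Q_B)              # True: dQ_rev is inexact (path-dependent)
print(math.isclose(S_A, S_B))  # True: dS is exact (path-independent)
```

The two paths transfer different amounts of heat, yet dividing each increment of heat by T before integrating yields the same entropy change, which is precisely what the definition (11) exploits.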
Quantities that form exact differentials have many mathematical and physical features that simplify problem analysis and problem solving; so, we prefer to deal with those quantities whenever we can.
13. But what is the physical interpretation for entropy?
I hate to disappoint you, but there really isn't one. Thermodynamic entropy is an abstract mathematical concept that was invented to economize our thinking about certain physical situations and to simplify problem-solving for those situations. This is no different from other abstractions that we have invented. Take temperature as an example. Temperature is an abstract concept—not a physical thing. You cannot see temperature, or touch it, or hear it. You probably cannot even define temperature—I'll bet you cannot write down a defining equation for temperature analogous to the definition (11) that I gave you for entropy. (The equation defining temperature is given in the article on temperature measurements, elsewhere on this website.)
And yet, you can deal successfully with temperature and use it to advantage, even though you are probably unsure as to what it really is. The only significant difference between temperature and entropy, as concepts, is that we can measure temperatures, but we can't measure values for entropy. (For some situations, we can measure changes in entropy.) But whether or not a quantity is measurable is separate from your ability to use that concept to advantage.
You don't question your lack of physical understanding of temperature because you are familiar with it and you can manipulate it successfully. You may be uncomfortable with entropy because of a lack of familiarity, not because of a lack of some physical interpretation.
14. But doesn't entropy have something to do with order?
Some people like to say that entropy measures the disorder of a system, but these kinds of statements really only trade a well-defined concept (entropy) for a poorly defined one (order). Ever notice that people who talk like this rarely define what they mean by "order"? Usually, they don't write down an equation that defines order and they don't tell us how their order could be measured or otherwise how numbers could be assigned to their order.
For some processes, order parameters can be defined and the scales for those parameters can be arranged so that increasing values of the order parameter imply more disorder. Then such scales can be made to correspond to increases in entropy. Some kinds of mixing processes can be described in this way. But these situations are special cases that fail to generalize.
I'm sure that attempts to assign a physical interpretation to entropy are well-intended—they are attempts to help novices. But I think that such attempts are misguided and have the potential to mislead, thereby doing more harm than good. If you equate entropy with disorder, and you think that mixing always imparts more disorder, then you must believe that any mixing process must increase the entropy. But in fact, even for mixing ideal gases, the entropy may increase, decrease, or remain constant, depending on how the mixing is done. None of these possibilities violates the second law.
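For the special case of mixing distinct ideal gases at fixed temperature and total pressure, the standard textbook formula does give an entropy increase; this sketch is mine, not the article's, and it covers only that one way of mixing, which is exactly why "mixing always increases entropy" fails as a general rule.

```python
import math

R = 8.314  # J/(mol K)

def ideal_mixing_entropy(moles):
    """Entropy of mixing distinct ideal gases at constant T and total P:
    dS = -R * sum(n_i * ln(x_i)). Standard textbook result; it does NOT
    apply when the 'mixed' gases are actually the same species."""
    n_total = sum(moles)
    return -R * sum(n * math.log(n / n_total) for n in moles if n > 0)

# Mixing 1 mol each of two different ideal gases at the same T and P:
dS = ideal_mixing_entropy([1.0, 1.0])
print(round(dS, 2))  # 11.53 J/K: entropy increases for this kind of mixing
```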
If your mental development is sophisticated enough to be reading this, then it's sophisticated enough to accept that a concept can be useful even though it lacks a physical interpretation.
15. Is the entropy of the universe increasing?
If the universe is an isolated system and if our laws of science apply throughout the universe, then the total entropy of the universe is increasing, in accordance with the second law. Most scientists accept the two premises, though there is no firm evidence for or against either one.