The Laws of Thermodynamics PART 1 click here
The Laws of Thermodynamics PART 2 click here
The Laws of Thermodynamics PART 3 click here
The Third Law of Thermodynamics
The last in this four-part series of articles explores the third law, which focuses on the properties of systems as they approach absolute zero. The eerie quantum behaviour of matter becomes observable in extremely cold systems. At much warmer temperatures, molecules and atoms have so much kinetic energy they cannot form the chemical bonds that hold liquids or solids together. As absolute zero approaches, however, random particle motion quiets down enough to begin to reveal matter's underlying quantum nature. A tremendous amount of research is underway to explore how ultracold matter behaves. New and exotic behaviours, such as superfluidity, emerge as a system's thermal energy approaches zero. The third thermodynamic law is all about the energy of matter as it approaches absolute zero.
This law can be stated a few different ways. Put most simply, the entropy of a pure substance approaches zero as its temperature approaches absolute zero (0 K, -273.15°C, -459.67°F). It is a statement about the limits of temperature and entropy in a system. What is absolute zero? A substance, if it could reach exactly 0 K, would have no thermal energy left at all. Absolute zero is a well-defined theoretical limit, but no real system ever reaches it, and this law explores why that is so. The coldest measured object in the universe is the Boomerang Nebula, which contains gases cooled to just 1 K. These gases are even colder than the empty space around them, and that initially presented a puzzle: background microwave radiation warms empty space to 2.73 K, so how can an object be colder than the space around it? Scientists now think that the Boomerang Nebula was created when a dying red giant star exploded as a smaller-mass companion star crashed into it, creating an exceptionally powerful explosion that ejected stellar gases outward at such velocity (10 times faster than a single exploding red giant of comparable mass) that the gas expanded adiabatically into an ultra-cold gas. The gas cloud will gradually absorb heat from the microwave radiation bathing the space it occupies, until it reaches equilibrium with it at 2.73 K.
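In symbols, the law can be sketched using Boltzmann's statistical definition of entropy (a standard textbook formulation, not notation specific to this series):

```latex
% The third law for a perfect crystal of a pure substance:
\lim_{T \to 0} S(T) = 0
% where entropy counts the accessible microstates W (Boltzmann's relation):
S = k_B \ln W
% A unique ground state (W = 1) gives S = k_B \ln 1 = 0; any leftover
% multiplicity (W > 1) at absolute zero shows up as residual entropy.
```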
As matter approaches 0 K, atoms lose thermal energy, and their electrons eventually settle into their lowest energy states. Does zero thermal energy mean that a system has zero entropy as well? Not necessarily, and this is the key point. When many substances freeze, they settle into a three-dimensional lattice-like arrangement. Crystallography is the science that explores the arrangements atoms assume when a substance freezes into a crystal. Only a perfect crystal of a pure substance could actually reach zero entropy at absolute zero. Only in such a flawless atomic arrangement could atoms lock into a perfect alignment with zero thermal energy. Atoms in it would have no kinetic energy: they would not jiggle about in the lattice or undergo any translational or rotational movements. Even at 0 K, however, each atom would still vibrate (in its lowest energy state) about its equilibrium position within the crystal, but this zero-point motion is not transferable as heat. Individual molecules and atoms can never be totally frozen. They are quantum objects whose residual motion, required by the uncertainty principle, never ceases, and electrons by their nature are never stationary. None of this motion is thermal motion.
A perfect crystal would also have to contain only one kind of atom or molecule. Otherwise there would be entropy associated with mixing: more than one microscopic configuration would be possible. We are reminded that entropy is also about the multiplicity of possible states in a system.
A perfect crystal, though impossible to create, is useful because it offers a benchmark for describing what happens to entropy as real substances are cooled toward absolute zero. Microscopic imperfections always form when substances crystallize, no matter how controlled the procedure is. Any imperfection in a crystal lattice adds disorder and therefore entropy. Even the most flawless diamond, for example, is never perfect. Defects that pull the lattice pattern out of alignment inevitably get frozen into a crystal. If just a single boron atom replaces a carbon atom in a diamond crystal, it shifts the alignment microscopically, reducing the microscopic order and increasing the crystal's entropy. All pure substances that in theory would condense into perfect crystals have, in reality, point defects. Such a defect could be a single substitution (like boron for carbon) or a hole created by a missing atom; it could also be a linear defect or a planar defect. These are all called crystallographic defects. Every crystallographic defect adds residual entropy to a crystal and prevents its entropy from ever reaching zero.
Some pure substances, even if they could be entirely defect-free, will still never settle into an absolutely perfect lattice formation. These substances cannot reach zero entropy because of their atomic makeup. They will always retain a certain amount of residual entropy. A classic example is carbon monoxide (CO). A CO molecule has a small dipole moment, which means it has a slightly lopsided charge (it has "a left" and "a right"). When the gas is cooled down far enough it will eventually form crystals. Those crystals will not be perfect because each CO molecule can be oriented CO or OC. The dipole moment is too small to force all the molecules to align in one direction, so CO can crystallize in a pattern such as CO:OC:CO instead of CO:CO:CO, for example. We can estimate this contribution to the crystal's entropy by calculating how many possible microstates a crystal sample of a certain number of molecules can achieve. For carbon monoxide there are two ways each molecule can sit in the lattice, so we can give a value w = 2. For N CO molecules in the crystal, there are w^N ways, or 2^N ways in this case, that the molecules can be arranged. There are 2^N different microstates possible, and entropy, we now know, is all about the number of possible states a system can be in.
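A few lines of Python make this concrete (a sketch, assuming exactly two orientations per molecule and one mole of CO; the constants are the standard values):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro's number, molecules per mole

w = 2    # orientations available to each CO molecule (CO or OC)
N = N_A  # one mole of molecules

# Boltzmann entropy: S = k_B * ln(w^N) = N * k_B * ln(w)
S_residual = N * k_B * math.log(w)
print(f"Residual molar entropy: {S_residual:.2f} J/(mol*K)")  # ~5.76 J/(mol*K)
```

This back-of-envelope value, R ln 2 ≈ 5.76 J/(mol·K), is close to the residual entropy actually measured for solid carbon monoxide.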
When carbon monoxide freezes into a crystal, it becomes locked into one of many possible ground states, each of which corresponds to a specific microscopic configuration. It therefore must have residual entropy even at 0 K. Put more scientifically, carbon monoxide has a degenerate, or asymmetrical, ground state. Asymmetrical ground states are very interesting to study. Time crystals, mentioned earlier, show that an asymmetrical ground state can be one in space (as with the carbon monoxide crystal) or one in time and space (as in a time crystal). These crystals have a very unusual property. While ordinary crystals, such as carbon monoxide, have an atomic structure that repeats in space, time crystals repeat in time as well: they maintain a constant, perfectly regular atomic-level oscillation while in their ground energy state. Researchers thought this was impossible - the atoms of a substance in its ground state should be locked in place, with no energy available to change anything. Yet a time crystal changes from moment to moment, leading researchers to wonder if it is doing work without any input of energy, a microscopic sort of perpetual motion machine that (gasp!) breaks the second law of thermodynamics.
Technically a time crystal is indeed a system with no thermal energy available to do work. It was also assumed to be an isolated system, one into which no thermal energy is transferred. Upon close study, however, physicists discovered that a time crystal system is not isolated. Even though it is in its ground state and remains in its ground state, it is actually an open non-equilibrium system (and the first non-equilibrium ground state system ever discovered).
Proposed theoretically in 2012, time crystals were recently created in the lab. One way to create a time crystal is to laser-cool a line of ytterbium atoms, all with quantum-entangled electrons, into its ground state. The line is then deliberately kept out of equilibrium by hitting the atoms with two alternating lasers. It is open to the environment, and energy is continuously being supplied to the crystal; if the lasers are turned off, the crystal stops oscillating. The fine point here is that this is not a transfer of thermal energy, because the atoms stay in their ground state. If even very slight changes were made to the magnetic field or to the laser pulses, the ytterbium line "melted," changing its phase of matter - a clear signal that thermal energy was absorbed by the atoms. The time crystal therefore does not convert thermal energy into mechanical work. To make the time crystal "tick," one laser sets up a magnetic field that the electrons respond to and the other flips the spins of some of the atoms. Because all the atoms are entangled, they settle into a perfectly stable repetitive spin-flipping pattern that, strangely, runs at exactly half the rate of the laser pulses. There is no heat exchange or change in entropy because the system occupies only one ground state at a time. Rather than being a thermodynamic process, this motion, without thermal or kinetic energy, appears to be the first example of broken time translation symmetry observed in nature. Time translation symmetry is the fundamental assertion that the laws of physics don't change over time, and the conservation of energy in nature depends on it. Scientists make the distinction that if time symmetry were broken explicitly, the laws of nature could no longer be counted on. In a time crystal, however, the symmetry is broken spontaneously, which means that, only in a specific case, nature chooses a state that doesn't obey time translation symmetry. This exception to the rule is analogous to the breaking of CP and time symmetry by the weak force, mentioned earlier.
Conclusion
The laws of thermodynamics have an all-encompassing scope. They preside over every other science - biology, geology, chemistry, astrophysics and quantum physics, to name just a few. They outline the basic rules that every process, from the scale of the vast universe down to the subatomic realm, must follow, which means that these rules must also explain the extremes of nature, such as black holes. Every discipline from engineering to the applied chemistries to quantum computing must take thermodynamics into account when systems are designed. The laws can also be harnessed as a tool to probe the mysteries of space-time and quantum behaviour.
Thermodynamics impacts us all personally. It underlies everything we do in life. It is essential to understanding how all life processes work and how life evolved on this planet. It will be an essential component in our search for efficient energy sources of the future. It will even help us look for and recognize life on other worlds, as scientists imagine all the ways living systems could utilize exotic sources of energy. Any time energy is converted from one form into another, or into work and vice versa, thermodynamics determines what can happen, what can't happen, and why.
The Laws of Thermodynamics PART 3
The Laws of Thermodynamics PART 1 click here
The Laws of Thermodynamics PART 2 click here
The Second Law of Thermodynamics
The concepts of heat, internal energy and thermal energy we just explored can easily be confused. We still casually but incorrectly talk about heat as if it is something that an object contains, or as a property of that object. Internal energy and thermal energy are sometimes incorrectly used interchangeably even in textbooks (they are interchangeable only in a theoretical ideal gas, where there is no potential energy - the particle-particle interactions considered are perfectly elastic collisions between atoms treated as small hard spheres). Despite these challenges, the second law is no doubt the most misunderstood law in thermodynamics, even by experts. We could look at it as something to be feared, but we can also see this law as a source of high-quality mental fun. Borrowing from physicist Lidia del Rio once again: this is where the village witch lives, and it is where she does her best magic.
The second law introduces another term: entropy. This law states that entropy can never decrease over time in an isolated system. Unlike energy, entropy is not conserved in an isolated system. Remember, an isolated system is one in which neither energy nor matter can ever enter or leave. One possible consequence of the second law is that the universe, as an isolated system, might eventually suffer an ultimate fate called heat death. The universe's entropy would continue to increase until it reached a state of maximum possible entropy, or thermal equilibrium, in which heat exchange between molecules and atoms is no longer possible. All thermodynamic processes would cease as a result. Entropy is an interesting and far-reaching concept, and it does not always relate specifically to the internal energy of a system. Sometimes entropy is (too broadly) defined as the level of disorder in a system. It can also be defined as the number of possible microstates within a system, or as a measure of the amount of information in a system.
Is the universe actually hurtling toward heat death? Is it truly isolated? Considering that highly organized structures evolve in the universe over time, the notion of its ever-increasing entropy presents some controversy. Wikipedia's brief "controversies" entry on heat death offers arguments against even assigning entropy to the universe as a whole, and there are many more arguments for and against to be found online. One question I find intriguing is whether a gravitational field itself has entropy. Gravitational fields have a way of keeping objects out of thermal equilibrium with the space around them. This idea is explored in the controversial theory of entropic gravity. Here, gravity is treated as an emergent and entropic force: at the macroscopic scale it is homogeneous, but at the quantum scale it is subject to quantum disorder, that is, disorder arising from the quantum entanglement of bits of space-time information. This disorder is expressed as the force we define as gravity. The theory agrees with both Newtonian gravity and general relativity, and it offers an explanation for "dark energy" as a kind of positive vacuum energy within space-time.
A common way of phrasing the second law is to say that a system always tends toward disorder over order, but this statement can be misinterpreted. In physics, entropy is closely related to the concept of multiplicity, which I think is probably the easiest way to think about entropy. Take as an example a system of 20 books. There are many more ways (a multiplicity of ways) to arrive at a jumbled pile (no rules) than at a neat stack (specific rules). We can say that the randomly jumbled mess of books has higher entropy than the neat stack does. What happens when we come along and straighten up the books? Does the book "system" go against entropy's natural tendency toward disorder? No: because we were involved in the process, we became part of the system. We did work on the system to decrease its entropy, and overall, the entropy of the "us plus the books" system increased.
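The counting can be sketched in a few lines of Python (an illustrative toy, not a physical calculation: "neat" is treated as exactly one allowed ordering, "jumbled" as any ordering of the 20 books):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

n_books = 20
omega_neat = 1                           # one arrangement counts as "neatly stacked"
omega_jumbled = math.factorial(n_books)  # any ordering counts as "jumbled"

# Boltzmann entropy: S = k_B * ln(omega)
S_neat = k_B * math.log(omega_neat)      # ln(1) = 0, so zero entropy
S_jumbled = k_B * math.log(omega_jumbled)

print(f"Jumbled arrangements: {omega_jumbled:.3e}")  # ~2.4e18 orderings
print(f"S_neat = {S_neat} J/K, S_jumbled = {S_jumbled:.3e} J/K")
```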
The second law also states that systems tend toward a state of equilibrium. To explain this, let's take a different example. We have two volumes of gas separated by a movable wall inside a really good Thermos bottle, a bottle so well made we can consider the system isolated. The wall itself is also made of a perfect insulating material, so no heat transfer can happen through it. We can think of this arrangement as two systems in mechanical contact with each other. To start, one gas is hot and the other is cold. The hot gas will exert greater pressure and push the wall into the cold side. We will assume the wall's movement is frictionless. The hot gas will expand and cool, and the cold gas will compress and warm up (this is another example of an adiabatic process). The system does mechanical work (the expansion and compression of the gases as well as the movement of the wall). The wall will stop moving when both gases reach the same pressure. If the same type of gas is on each side, the compartments will also reach the same temperature. The two systems will come to rest at a state of thermodynamic equilibrium, and in this state, no part of the two-gas system can do any more work on the other part. The system still has thermal energy, but that energy can't be used to do any work within the system. This state has the highest entropy it can have under these circumstances. In this case, it is not self-evident that the system has reached a state of maximum disorder, or even that it has achieved the greatest multiplicity of possible states. We do know, however, that it has evolved toward an end state of thermodynamic equilibrium, which is also an evolution toward maximum achievable entropy.
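Here is a minimal numerical sketch of that thought experiment. It assumes an ideal diatomic gas (gamma = 1.4) on both sides, reversible adiabatic expansion and compression (P·V^gamma constant on each side), and illustrative starting pressures; the wall's resting place is found by simple bisection:

```python
gamma = 1.4    # heat capacity ratio for a diatomic ideal gas
V_total = 2.0  # total volume, m^3, split evenly at the start

# Initial states: hot/high-pressure gas on the left, cold on the right.
P_hot0, V_hot0 = 200_000.0, 1.0    # Pa, m^3
P_cold0, V_cold0 = 100_000.0, 1.0  # Pa, m^3

def pressures(V_hot):
    """Pressures on each side after the wall moves (adiabatic: P*V**gamma constant)."""
    V_cold = V_total - V_hot
    P_hot = P_hot0 * (V_hot0 / V_hot) ** gamma
    P_cold = P_cold0 * (V_cold0 / V_cold) ** gamma
    return P_hot, P_cold

# Bisection: find the wall position where the two pressures match.
lo, hi = V_hot0, V_total - 1e-6  # the hot side can only expand
for _ in range(100):
    mid = 0.5 * (lo + hi)
    P_hot, P_cold = pressures(mid)
    if P_hot > P_cold:
        lo = mid  # hot side still pushes: the wall moves further right
    else:
        hi = mid

print(f"Equilibrium: V_hot = {lo:.4f} m^3, P = {pressures(lo)[0]:.0f} Pa")
```

The wall settles where the two pressures match - the mechanical equilibrium described above.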
As we can see, there is more than one way to look at entropy, like looking at a magic trick from different angles. No one angle gives it completely away.
The system's movement toward equilibrium is an irreversible process. It can't return to its original state of disequilibrium unless work is done on it. We could set up a real system of two gases as close as we can to the one just described, and we would discover why this move toward equilibrium is irreversible. It might seem that we could restart the experiment over and over forever, each time doing exactly the same amount of work to heat and cool our gases to their starting temperatures. But even the best insulating material is imperfect, and only superfluids are truly frictionless. Some heat will be lost to the outside of the Thermos, and some energy will be lost as frictional heat from the wall as it moves. If the experiment were repeated over and over using the same Thermos and the same initial work input, the overall thermal energy available to the system would gradually decrease, and the entropy of the system (which now includes the room into which the heat dissipates and the mechanism we use to heat/cool the gases) would increase.
Another way of defining an irreversible process considers the role of chaos in systems. Any system of interacting molecules will include interactions that are chaotic in nature. We often think of chaos as what our kids do to their rooms, but it is also a scientific theory. Chaotic systems are very sensitive to initial conditions, so even tiny deviations in conditions at the beginning of a process can result in significant differences in how that system progresses over time. A hurricane is a chaotic system, which is why its strength and path are impossible to predict with certainty even a few days out. Almost every real process contains one or more chaotic elements. If we try to reverse a process back to its initial conditions, we cannot rely on a series of predictable step-by-step transformations to arrive at exactly the same starting state. The specific process is irreversible, and the outcome is not repeatable.
Is there any truly reversible process in thermodynamics? Even an atomic clock, which relies on extremely stable microwave cavity oscillations, shows minute frequency drift over time. In these clocks, laser-cooled atoms "tick" back and forth between an excited state and ground state. The energy difference between the states is always perfectly precise because it is quantum in nature. However, even the NIST-F1 cesium clock loses a second every 300 million years. Rather than heat loss per se, this system loses energy because the field transitions dissipate energy (they do work at the quantum scale just to keep going). The clock's entropy increases, albeit very slowly.
A process that doesn't generate entropy is a reversible process. The second law of thermodynamics (that entropy tends to increase in systems) is a consequence of the irreversibility of processes. Maxwell's demon was once held up as a theoretical system in which entropy could hold steady during a thermodynamic process. But under close scrutiny, even this famous thought experiment, devised by James Clerk Maxwell, is not a reversible process. The idea itself is intriguing: a microscopic demon guards the gate between two halves of a room. He lets only slow molecules into one half and fast molecules into the other. Eventually you expect one half of the room to be warmer than the other half. This reduces the randomness of the molecular arrangement of the gas and therefore reduces the entropy of the room (the system). Looked at another way, it takes a system at equilibrium out of equilibrium. The argument seems rock-solid until you realize that the demon itself (all that sorting work it is doing) increases the system's overall entropy more than the sorting decreases it. In nature, many systems appear to have decreasing entropy. In all living systems, molecules are ordered into, and maintained as, intricate arrangements. Highly ordered galaxies appear to form from disordered clouds of gas. Disordered water vapour molecules flash-freeze into beautiful crystal patterns on a winter window. The key to all of these systems, living and non-living, is that they are open to some extent to their surroundings, like any real system is. In each case, the entropy of the surroundings increases by an even greater amount, leading to a net entropy increase overall.
Scientists can only approximate a perfectly reversible process. An example is a process that is reversed by introducing an extremely tiny change to some property of a system that starts at equilibrium with its surroundings. If the tweak is small enough, the system remains so close to equilibrium that any deviation from it cannot be accurately measured, yet the process itself does reverse.
In an ideal system designed to do work, such as a theoretical engine with no irreversible losses, none of the work performed would be lost to wasteful heat transfer such as friction. The efficiency of a real engine, however, is always well below the ideal. If it is a piston engine or a steam engine, its efficiency can be analyzed quite easily by plotting a pressure-volume curve as it goes through a compression/expansion cycle. The area bound by the curve is the work done. The more efficient the engine is, the more closely that curve follows the ideal equilibrium curve for that engine; an efficient engine never deviates far from its equilibrium state. A Carnot heat engine, mentioned earlier, is a theoretically ideal reversible cycle in which there is no net increase in entropy. It represents the maximum possible efficiency for any engine operating between a given hot and cold reservoir: 1 - Tc/Th, which would reach 100% only in the unattainable limit of a cold reservoir at absolute zero. An irreversible (real) process always strays away from that ideal equilibrium curve.
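That Carnot ceiling takes only a couple of lines to compute (the reservoir temperatures here are illustrative, chosen roughly for a steam engine):

```python
T_hot = 500.0   # hot reservoir temperature, K (illustrative)
T_cold = 300.0  # cold reservoir temperature, K (illustrative)

# Carnot bound: eta = 1 - T_cold / T_hot, the best any engine
# operating between these two reservoirs can do.
eta_carnot = 1.0 - T_cold / T_hot
print(f"Maximum (Carnot) efficiency: {eta_carnot:.1%}")  # 40.0%
```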
An example of a process that strays far from equilibrium is the carbon dioxide fire extinguisher once again. When you trigger the fire extinguisher, the carbon dioxide gas sprays out of the canister so fast that the air/carbon dioxide system has no time to reach equilibrium at first. The carbon dioxide cools adiabatically. The original amount of thermal energy is now spread over a large volume of gas cloud. Almost all of the potential energy of the pressurized system is spent through the work of adiabatic expansion. Far more work would be required to compress that gas back into the canister than was originally done, because the process is now far from equilibrium. It is definitely not reversible, and there is a significant increase in the system's entropy. However, entropy is also a statistical concept. Even in this case there is a chance, an infinitesimally small chance, that all the carbon dioxide molecules could spontaneously re-arrange themselves back into the canister (against their pressure gradient) through purely random molecular movements, reducing the system's entropy and reminding us that the second law itself is statistical in nature. In a macroscopic system of countless billions of atoms this is vanishingly unlikely, but in a system of just a few atoms, the random chance of them all doing something together goes up. For all practical purposes, a triggered extinguisher, like all macroscopic thermodynamic processes, proceeds in one direction only: the direction of increasing entropy. This direction, in turn, implies a forward arrow of time.
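Just how unlikely is easy to sketch (illustrative only: the "canister" is modeled, very generously, as half the available volume, so the chance of all N molecules being back inside it at once is (1/2)^N):

```python
import math

# Probability that random motion puts all N molecules back into a region
# that is a fraction f of the available volume: p = f**N.
f = 0.5  # the "canister" modeled, generously, as half the total volume

for N in (4, 10, 100, 6.022e23):
    log10_p = N * math.log10(f)  # work in logs to avoid underflow
    print(f"N = {N:g}: p = 10^({log10_p:.3g})")
# A handful of molecules: merely improbable. A mole of gas: the exponent
# is about -1.8e23, which is effectively impossible.
```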
Thermodynamic Arrow of Time
The universe follows the second law of thermodynamics, in which all real processes are irreversible, with the consequence that time must flow in one direction only. The increasing entropy of an evolving system gives us an impression of time and means that we can distinguish past events from future ones. The exception is a system in perfect equilibrium. In that case, the entropy remains the same, and it is impossible to distinguish a past state of the system from a present state. The arrow of time would have seemed self-evident to Rudolf Clausius, who coined the term "entropy" in the mid-1800s, defining it in terms of the heat that incrementally dissipates from heat engines. The fact that time moves forward seemed obvious then, but that confidence wasn't about to last. Within decades, the idea that space and time are sewn together into a four-dimensional fabric was becoming accepted science, and quantum field theory was later formulated to allow for charge, parity and time reversal (CPT symmetry). General relativity and special relativity (which preceded general relativity) both treat time as something that is malleable and part of the system rather than outside of it. These theoretical developments brought our assumptions about time, and the second law of thermodynamics itself, into question.
Time, as we experience it, is a broken symmetry; there is no mirror in which time flows backward. Broken eggs can't reassemble. Aging doesn't reverse itself except in The Curious Case of Benjamin Button. Why this is so is actually a deep mystery in physics. Charge, parity and time reversal (CPT symmetry) is a fundamental symmetry of physical laws (except thermodynamics). The implication of this symmetry is that there is no theoretical reason why a "mirror" universe, one with an arbitrary plane of inversion (which you could think of as a vast three-dimensional mirror), with reversed momenta (complete with time running backward) and populated with antimatter (carrying opposite charges), couldn't evolve under our very same physical laws. It might even start from a vast Big Bang and shrink in volume rather than expand as ours does. Entropy in such a universe, we might assume, would tend to decrease rather than increase. However, even this assumption might be too simple. A number of physicists speculate that even in a universe that oscillates - it expands and contracts over and over - entropy might continually increase.
We might think that CPT symmetry, a mathematical theorem particularly useful in quantum physics, violates the laws of thermodynamics. It seems to dissolve the one-way arrow of time and the rule that entropy never decreases in an isolated system. For example, we know that antimatter particles exist in our universe, and we might assume that they travel backward through time. This assumption is wrong, but it is not so easy to understand why. We can take the positron, the electron's antimatter twin, as an example. According to the theory of quantum electrodynamics (QED), an antimatter particle, mathematically speaking, travels backward in time, and Feynman diagrams of particle interactions commonly depict this. In a typical diagram of electron/positron (e-/e+) annihilation, the e+ (positron) and the antiquark (the q with the line over it) are both drawn with arrows pointing backward in time.
Positrons are used every day in large hospitals, and we know that positrons are never emitted before a PET scanner is turned on (thankfully). Why? A positron is exactly the same as an electron, but with positive charge. "Flip" time and you have an electron. A perhaps unsatisfying way to understand this problem is to treat time as a mirror-like parity that is flipped in a dimensional kind of way (think of four-dimensional space-time) rather than as the continuous forward flow we experience.
Even though all of our physical laws (except thermodynamics) display a fundamental CPT symmetry, it doesn't mean that all processes obey it. Three of the four fundamental forces of nature - the strong force, the electromagnetic force and gravitation (which, as general relativity, is not formulated in terms of quantum mechanics) - obey CPT symmetry, but the weak force occasionally violates both parity and charge symmetry at the quantum level. Recently, researchers discovered that this quantum process sometimes violates time symmetry as well. Oscillations between different kinds of B-meson particles during the weak interaction happen in both directions of time, but at slightly different rates - although this disparity has nothing to do with the thermodynamic reason for time's broken symmetry. An article from SLAC National Accelerator Laboratory at Stanford University offers an excellent technical comparison between broken T-symmetry at the macroscopic and quantum scales. For a thorough discussion of what this means philosophically, try this 2017 article posted by the Journal of General Philosophy of Science. I found it a good read.
As we've seen, at the quantum scale, processes are time-reversible (with the weak force exception). Quantum processes, according to the Copenhagen interpretation, are governed by the Schrödinger equation, which has T-symmetry built into it. Wave function collapse, however, does not. Collapse is the mathematical step that links indeterminate ("fuzzy") quantum particle behaviours to the determinate macroscopic behaviours of substances. It therefore finds itself at the epicenter of how time symmetry in the quantum world breaks down at the larger scale we experience.
Mathematically, a quantum system is laid out as a superposition of several equally possible states (technically called eigenstates (https://en.wikipedia.org/wiki/Introduction_to_eigenstates)), which reduces to a single eigenstate when the system is observed or measured. How this happens physically - and even whether it happens - is up for debate. It is simply a mathematical description, which means that the process itself is a black box, but it does provide a link between quantum indeterminacy and determinate macro-processes, thermodynamics being one of them. Somehow, inside the "black box," time switches from reversible to irreversible. How do two well-established and experimentally proven but seemingly incompatible descriptions (irreversible thermodynamics and time-reversible quantum mechanics) co-exist? This problem is encapsulated in Loschmidt's paradox, and no one knows its solution, although theorists have been working on it for decades.
What is known is that the second law of thermodynamics (and its arrow of time) is a statistical principle based (somehow) on the behaviours of countless quantum time-symmetrical particles. If we keep the statistical nature of these descriptions in the front of our minds and go back to the simple example of a stack of books, we can add that, yes, there are countless more ways those books can fall into a messy pile than ways to stack them up neatly, BUT there is no rule against the books spontaneously falling into a nice neat stack either. It's just extremely unlikely. If this happened, it would not mean that the second law of thermodynamics just broke; it would mean that the underlying statistical nature of the arrow of time is revealing itself. A puzzle appears, however, when we think again about the increasing entropy of the universe. We can infer from it that the universe initially had very low entropy. There were no distinct particles or forces at the very beginning. Through a series of symmetry-breaking processes, four distinct fundamental forces and all the myriad particles of matter emerged as the universe expanded and cooled. The question is, if the universe started with very low entropy, wouldn't that starting state have been extremely unlikely as well?
We might wonder if the arrow of time is an aspect of dynamics that emerges at the macroscopic scale. Emergence might not be the best description because at the subatomic particle level, time seems to exist but it runs in either direction. Time's one-way arrow only emerges within a large collection of molecules. Time's arrow might be better described as a symmetry that breaks at the macroscopic scale.
All these questions, I hope, point out that our understanding of time itself is a problem when we think about the second law of thermodynamics and entropy. Time is not a unified concept in physics. Depending on the theoretical framework you choose - quantum mechanics, classical dynamics or even general relativity - time can be reversible. Or it can be a one-way arrow. Or it can be an illusion altogether, because general relativity treats time as one dimension in a four-dimensional stretchy fabric in which all past and future times are equally present and our present moment holds no special status.
Black Hole Entropy: Testing Thermodynamics
A black hole tests all of our scientific laws, and it is especially interesting when viewed as a thermodynamic object, which it surely is. A black hole is a region of space-time bent so severely by gravity that nothing, not even electromagnetic radiation such as light, can escape. Once energy or matter crosses the black hole's event horizon, it is lost from our observations. Black holes can be observed when in-falling matter is heated by internal friction, creating an accretion disk external to the event horizon, which can be extremely bright. A black hole also gradually emits radiation, called Hawking radiation, and it has a temperature, which means that these objects should be subject to the same laws of thermodynamics as any other object.
Entropy, as we've explored already, can be understood in several different ways. Some physicists might argue that entropy is best understood as a statement about information rather than about order or disorder. Generally, the various descriptions of entropy agree with each other, but under specific circumstances the individual weaknesses of each approach become evident. Theorists appear to be best equipped to tackle the question of black hole entropy by interpreting entropy as information. More information encoded in a system means it has higher entropy. Since an isolated system's entropy can never decrease, the information encoded by an isolated system can never decrease. This is a slightly different take on the idea of entropy as a multiplicity of microstates.
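The information reading of entropy is usually made precise with Shannon's formula, H = -sum(p * log2(p)) over the probabilities of the possible states. A small sketch (the two example distributions are purely illustrative):

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)) in bits; zero-probability states contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))    # 1.0 bit: two equally likely states
print(shannon_entropy([0.99, 0.01]))  # ~0.08 bits: nearly certain, low entropy
```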
Black holes are wonderfully mysterious objects. Inside a black hole, matter becomes inaccessible to our observations, but the momentum and charge of that matter are conserved, and its mass remains to bend space-time around it. Most black holes, especially those formed by massive collapsing spinning stars, are expected to have significant angular momentum and possibly charge. According to the second law of thermodynamics, we expect that matter disappearing into a black hole should increase the black hole's entropy. This implies that a black hole has non-zero entropy. A number of theoretical physicists are currently working on how entropy works in black holes and how to measure it. The methodologies they have come up with so far are surprising. We immediately realize how unusual a black hole is when we learn that instead of using volume to calculate the entropy of what we assume is a spinning spherical object, we must use the area bound by its event horizon instead. In 1973, Jacob Bekenstein calculated black hole entropy as proportional to the area of its event horizon divided by the Planck area (the area by which a black hole's surface increases when it swallows one bit of information). In 1974, Stephen Hawking backed up this work and additionally showed that black holes emit thermal radiation, which means they have a specific temperature. So how does the second law of thermodynamics enter? It has actually been rewritten for black holes to say that the total area of the event horizons of any two colliding black holes never decreases. This is part of the closely analogous set of laws of black hole mechanics.
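The Bekenstein-Hawking formula, S = k_B * A / (4 * l_p^2), is simple enough to evaluate directly (a sketch for a non-rotating, uncharged solar-mass black hole, using the standard constants):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
hbar = 1.055e-34  # reduced Planck constant, J*s
k_B = 1.381e-23   # Boltzmann constant, J/K
M_sun = 1.989e30  # solar mass, kg

r_s = 2 * G * M_sun / c**2  # Schwarzschild radius, ~2.95 km
A = 4 * math.pi * r_s**2    # event horizon area, m^2
l_p2 = hbar * G / c**3      # Planck length squared, m^2

S = k_B * A / (4 * l_p2)    # Bekenstein-Hawking entropy
print(f"Horizon radius: {r_s / 1000:.2f} km")
print(f"Entropy: {S:.2e} J/K")  # ~1.4e54 J/K
```

That figure, around 10^54 J/K, dwarfs the entropy of the star that collapsed to form the black hole, which is one reason black holes are suspected of being the highest-entropy objects in the universe.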
How do we get an intuitive picture of how black hole entropy works? We can start by looking at a black hole's entropy statistically. Each of the countless billions of particles that have fallen down the gravity well of a black hole will be in a specific thermodynamic state, and each microstate will contribute to what should be an enormous number of possible microstate arrangements. Microstates in a macroscopic system are all the different ways that the system can achieve its particular macro-state (which is defined by its density, pressure, volume, temperature, etc.).
By treating these microstates statistically, we can come up with an approximation of the black hole's overall entropy. This would be straightforward if black holes didn't present us with a unique and bedeviling twist: the no-hair theorem, which argues that this approximation cannot be done. Aptly named, it tells us that a black hole can be described by only three classical parameters: its mass, its charge and its angular momentum. The particles that fell into a black hole do not contribute any kind of unique character to it. A black hole that swallowed a cold hydrogen gas cloud looks the same as one that swallowed an iron-dense planet. Instead, the no-hair theorem treats a black hole as an enormous, generic, single "homogeneous" particle. A possible analogy might come from another housekeeping chore. A vacuum cleaner sucks up all the stuff off the floor. A CSI investigator could look in the canister afterward and determine, through skin flakes, hairs and other debris, who lived in the room, and perhaps even what they were doing over the week. In the case of black holes, the "canister contents" appear to become generic once they cross the event horizon. You can't tell what particular atoms fell in, when they fell in, or what their velocities were. The no-hair theorem is a mathematical theorem: it is a solution to the Einstein-Maxwell equations of electromagnetism in curved space-time. It turns all different forms of matter/energy into a generic electromagnetic stress-energy tensor, which bends space-time. What this theorem implies is that all of the information encoded by the special quantum characteristics of each matter particle - such as baryon number, lepton number, colour charge, and even whether it is matter or antimatter - is annihilated when it falls into a black hole. More specifically, it implies that the black hole as a system will have much less entropy than the ordinary matter originally had. Matter and energy being lost in a black hole, if treated as an isolated system, appears to represent a process that decreases entropy, and that is a violation of the second law.
We can't just toss out the no-hair theorem as a mathematical curiosity that might well prove invalid in nature. Its validity is backed up by recent observations of black holes by LIGO, a gravitational wave observatory. To add to this entropy problem, we could draw the additional, and no less astounding, conclusion that the no-hair theorem suggests a black hole is actually just a single microstate (just one "particle"), which means it should have not just low entropy but zero entropy, if we interpret it as meaning there is only one way to assemble a single microstate.
According to quantum mechanics, the quantum information encoded in a particle (its spin, angular momentum, energy, etc.) must be conserved during any process - a variation on the first law of thermodynamics, except that the focus here is on information rather than energy. Where does that information go inside a black hole? Hawking radiation, which leaves the black hole system, is a natural place to look, but Hawking's theory suggests that this radiation is "mixed," which means that it is generic; it doesn't contain any of the specific particle information that went into it. To make things even more interesting, there is some serious debate about whether Hawking radiation, as described using quantum theory, actually is a form of thermal radiation. The conclusion that a black hole has a temperature comes not from direct observation (which could be understood using classical statistics); it is based, instead, on quantum mechanics. Thermal radiation, emitted from a black body (a physical object that absorbs all incoming electromagnetic radiation), contains information about the body that emitted it. Hawking radiation contains no such information; it is based on the law of conservation of energy only. In space-time, virtual particles pop in and out of existence all the time, everywhere. They exist because space-time has a non-zero vacuum energy, which is a consequence of the uncertainty principle. Close to the very high-energy space-time environment around a black hole, some theorists suspect that virtual particles, as particle-antiparticle pairs, have enough energy to become real particle pairs, with mass. If one of the pair falls in, it must have negative energy (according to an outside observer) in order to preserve the conservation of energy law. This also means it has negative mass (there is a mass-energy equivalence), which means that the black hole itself loses mass and appears to emit a particle (again, as observed by an outside observer). Hawking radiation has not been observed yet, but by using an analogue acoustic black hole made in a lab, scientists have found strong evidence suggesting that Hawking radiation exists around real black holes.
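Hawking's temperature formula for a non-rotating black hole, T = hbar * c^3 / (8 * pi * G * M * k_B), is easy to evaluate (a sketch; the second mass below is an arbitrary small value chosen only to show the inverse scaling):

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
hbar = 1.055e-34  # J*s
k_B = 1.381e-23   # J/K
M_sun = 1.989e30  # kg

def hawking_temperature(M):
    """T = hbar * c^3 / (8 * pi * G * M * k_B) for a non-rotating black hole."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(f"{hawking_temperature(M_sun):.2e} K")          # ~6e-8 K
print(f"{hawking_temperature(M_sun * 1e-19):.2e} K")  # small black holes run hot
```

A solar-mass black hole sits around 6 × 10^-8 K, far colder than the 2.73 K cosmic microwave background, so for now it absorbs more radiation than it emits.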
Good evidence for the existence of black hole radiation, whether it is thermal or not, might solve the issue of conservation of energy, but it doesn't appear to conserve information. Information can be lost when one of a pair of entangled particles falls into a black hole. Entangled particles are thought to be very common in space-time, and they can be physically very far apart, across the universe in fact; distance is irrelevant to quantum entanglement. Because of quantum mechanics, an entangled pair or group of particles can be described by a single unique quantum state (spin, angular momentum, energy, etc.) just as a single particle is. From a quantum mechanics point of view, the pair or group becomes a single particle (some theorists think that matter inside a black hole might be quantum-entangled). If one of a pair of quantum-entangled particles falls into a black hole and loses its quantum signature, does its entangled partner pop out of existence somewhere else in the universe at the same time? It seems that this process would continuously decrease the entropy of the universe as a whole. The dilemma this presents is called the black hole information paradox. One could argue that because time slows to a stop in the deep gravity well at the event horizon of a black hole, nothing really ever goes in: its quantum information remains somehow encoded, smeared across the event horizon, scrambled up and out of reach. Newer models of quantum gravity suggest that the particle left behind remains entangled with whatever form of matter/energy its partner now takes inside the black hole, thus dissolving the paradox. Another argument emerging among physicists is also exciting. It actually uses quantum entanglement to solve the black hole paradox, using wormholes to link the paradox phenomenon with the Einstein-Rosen bridge (or wormhole) - two ideas described by previously unrelated theories. It is laid out in this Quanta Magazine article. Entangled particles inside and outside a black hole could remain connected through the continuous space-time that would exist inside a wormhole, solving the information paradox. No one's sure yet whether that idea holds up theoretically.
The holographic principle, which is gaining momentum in theoretical physics, is yet another way to explore the information paradox. It suggests that a black hole encodes all of the particle information just outside the event horizon as statistical degrees of freedom. Degrees of freedom are a measure of information, closely related to the idea of multiple microstates. How this information is stored, and what form it is in, is impossible to visualize, however, because a black hole is treated theoretically (in almost all theories) as a four-dimensional (space-time) object. Although a black hole should appear as a sphere to an observer, it is not a three-dimensional sphere we can relate to; it is a singularity of mass. The event horizon, likewise, is not a physical two-dimensional shell around a black hole. It is the last distance from which light can escape the gravitational well, measured as the Schwarzschild radius. For example, Earth has a Schwarzschild radius of about 9 millimetres, which means that if Earth's mass were compressed into a sphere of 9 mm radius, it would be so dense that it would collapse into a black hole singularity.
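That figure is a one-liner to check, using r_s = 2 * G * M / c^2 with the standard constants:

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # Earth's mass, kg

r_s = 2 * G * M_earth / c**2  # Schwarzschild radius
print(f"Earth's Schwarzschild radius: {r_s * 1000:.1f} mm")  # ~8.9 mm
```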
From our perspective, information is last observed at the event horizon of a black hole. This approach helps us understand why entropy is a measure of event horizon area, rather than volume. It also implies that a black hole, rather than being a zero-entropy object, is a maximum entropy object. Depending on how we look at it, generic information can be thought of as equivalent to maximally mixed information, an equilibrium state.
No matter how we look at the information paradox, information inside a black hole seemingly must get back out through black hole evaporation (as Hawking radiation). A black hole that doesn't feed on matter should gradually shrink and eventually disappear, meaning that somehow the quantum information of all that matter must get back out. If the information is irretrievably lost from our universe, then black holes either do not obey thermodynamics or they represent some kind of door into some other entity, and the universe is not an isolated system after all. It could even mean that ordinary matter and energy as we know them are actually an illusion - information encoded on a surface area, making the universe, by extension, a hologram of that data.
Black hole entropy has also recently been calculated for a supersymmetric black hole in string theory. This technical 2007 review by T. Mohaupt describes the reasoning and process. This solution ties in with the holographic principle (which is also based on string theory), and it closely matches that of Jacob Bekenstein (who based black hole entropy on the area of the event horizon). The fact that the two calculations match up closely gives a number of theorists hope that string theory could be a route to ultimately solving the information paradox.
Based on all of these and other developments, physicists Brandon Carter, Stephen Hawking and James Bardeen formulated a series of laws of black hole mechanics, which are analogous to the laws of thermodynamics. While thermodynamics is a classical science, these laws attempt to integrate general relativity, quantum mechanics and thermodynamics. As I hinted at earlier, these mechanical laws offer up some seemingly odd conclusions. Uniform surface gravity across the event horizon is analogous to uniform temperature at thermal equilibrium (the zeroth law of thermodynamics). The never-decreasing surface area of the event horizon is analogous to never-decreasing entropy (the second law of thermodynamics). New theories about black hole entropy offer some serious food for thought because we approach entropy not just through the lens of classical mechanics but through the lenses of general relativity and quantum mechanics as well. Being a universal rule of nature, shouldn't entropy find its expression there too?
One of the most interesting approaches to black hole entropy comes from a specialized mathematical framework. It is laid out by physics theorist Sean Carroll in one of his blog entries from 2009 (I wholeheartedly recommend his blog). As a black hole's rotation and charge increase, its entropy approaches zero. This statement is analogous to the third law of thermodynamics, which will be explored next. According to this law, no system can have exactly zero entropy, so this means there is a limit to the spin and charge of a black hole. The third law of black hole mechanics might represent the deepest puzzle yet for theorists.
A black hole at the spin/charge limit is called an extremal black hole. It is a black hole that has the smallest possible mass at a given charge and angular momentum. It is therefore the smallest possible mass black hole that could theoretically exist while rotating at a constant speed. If these objects existed, they would be microscopic, but they are only theoretical. In theory, an extremal black hole could be created in which all of its energy comes from the charge, or electrical field, and none from matter. Such a black hole is a product of Euclidean quantum gravity, a theory of space-time in which time is treated exactly like another spatial dimension. The entropy of such a black hole can be calculated by using string theory, and it comes out to be exactly zero, which is forbidden by the third law of thermodynamics. A first reaction could be, well, that's the end of that thought-stream, and Dr. Carroll suggests that this is exactly what the authors of the original paper thought. But then the idea was revisited because it seemed to hint at something very interesting.
In an extremal black hole, entropy discontinuously drops as charge is increased, and eventually it hits a limit where the mathematical solution the researchers used splits into two different space-times! Part of this mystery includes the fact that the mathematics of all charged black holes gives them not one but two event horizons. The outer event horizon is the one you expect - a point of no return. The inner event horizon is located between that point of no return and the singularity itself. What's unique is that an object between the two horizons isn't forced to crash into the singularity. Between the horizons, moving forward in time means moving inward toward the inner event horizon; that part is inevitable. However, outside the black hole and inside the inner event horizon, time moving forward looks normal, and an object in either of those regions isn't forced anywhere.
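Those two horizons can be sketched numerically using the standard charged (Reissner-Nordström) solution from general relativity, r± = GM/c² ± sqrt((GM/c²)² - GQ²/(4πε₀c⁴)). Here the mass is fixed at one solar mass (an illustrative choice) while the charge is dialed up toward the extremal value:

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
eps0 = 8.854e-12  # vacuum permittivity, F/m
M = 1.989e30      # one solar mass, kg

r_g = G * M / c**2  # the "gravitational radius" GM/c^2, m

# Extremal charge: the Q at which the charge radius r_Q equals r_g.
Q_max = r_g * c**2 * math.sqrt(4 * math.pi * eps0 / G)

for fraction in (0.5, 0.9, 0.99, 1.0):
    Q = fraction * Q_max
    r_Q = math.sqrt(G * Q**2 / (4 * math.pi * eps0 * c**4))
    disc = math.sqrt(max(r_g**2 - r_Q**2, 0.0))
    r_outer, r_inner = r_g + disc, r_g - disc
    print(f"Q at {fraction:4.0%} of extremal: "
          f"r+ = {r_outer:7.1f} m, r- = {r_inner:7.1f} m")
# At exactly Q = Q_max the two horizons coincide at r = r_g:
# the extremal black hole described above.
```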
As you increase the charge and keep the mass the same, the two horizons come together; you are moving toward an extremal black hole. You would expect the region of space-time between the horizons to shrink away to nothing, but it doesn't. It approaches a finite size and stays there until you reach an exactly extremal black hole, at which point it suddenly and discontinuously disappears. The entropy, when calculated, decreases smoothly alongside the increasing charge until exactly when an extremal black hole is reached, and at that point it suddenly drops to zero. This raises the question: is there a theoretical problem here, or does the entropy suddenly escape into some new and different space-time (as space-time itself appears to split as well)? Does it offer a clue about where matter (and all that missing entropy) goes inside a real black hole? This new space-time is a mathematical solution called two-dimensional anti-de Sitter space-time on a two-dimensional sphere. Dr. Carroll wonderfully refers to this hidden space-time as "Whoville" in his fascinating post on this mysterious theoretical journey. You can download a pdf of the scientific paper he and the original authors wrote on it here.
Although we might not know exactly what kind of space-time black hole particles find themselves in, if that's indeed what they find themselves in, black hole physics seems to be a great tool for searching out the limits of the second law of thermodynamics. Black holes go an additional step by demanding that all of our disparate laws of nature come together to describe them. Dr. Carroll puts it well: black holes are fertile "thought-experiment laboratories" to test our understanding of thermodynamics, especially the second law.
For The Laws of Thermodynamics PART 4 click here
The Laws of Thermodynamics PART 2 click here
The Second Law of Thermodynamics
The concepts of heat, internal energy and thermal energy we just explored can easily be confused. We still casually but incorrectly talk about heat as if it is something that an object contains, or as a property of that object. Internal energy and thermal energy are sometimes incorrectly used interchangeably even in textbooks (they are interchangeable only in a theoretical ideal gas, where there is no potential energy - the particle-particle interactions considered are perfectly elastic collisions between atoms which are treated as small hard spheres). Despite these challenges, the second law is no doubt the most misunderstood law in thermodynamics. Even by experts. We could look at it as something to be feared, but we can also see this law as the source of high quality mental fun. Borrowing from physicist Lidia del Rio once again this is where the village witch lives and it is where she does her best magic.
The second law introduces another term called entropy. This law states that entropy can never decrease over time in an isolated system. Unlike energy, entropy is not conserved in an isolated system. Remember, an isolated system is one in which neither energy nor matter can ever enter or leave. As a possible consequence, the universe as an isolated system might eventually suffer an ultimate fate called heat death, which is based on the second law of thermodynamics. The universe's entropy will continue to increase until it reaches a state of maximum possible entropy, or thermal equilibrium, where heat exchange between molecules and atoms is no longer possible. All thermodynamic processes die as a result. Entropy is an interesting and far-reaching concept. It does not always relate specifically to the internal energy of a system. Sometimes, entropy is (too-broadly) defined as the level of disorder in a system. It can also be defined as the number of possible microstates within a system. Often, entropy is a measure of the amount of information in a system.
Is the universe actually hurtling toward heat death? Is it truly isolated? Considering that highly organized structures evolve in the universe over time, the notion of increasing entropy in the universe presents some controversy. Wikipedia's brief "controversies" entry on heat death offers arguments against even assigning entropy to the universe as a whole, and there are many more arguments for and against to be found online. One question I find intriguing is whether a gravitational field itself has entropy. Gravitational fields have a way of keeping objects out of thermal equilibrium with the space around them. This idea is explored in the controversial theory of entropic gravity. Here, gravity is treated as an emergent and entropic force: at the macroscopic scale it is homogenous but at the quantum scale it is subject to quantum disorder, that is, from quantum entanglement of bits of space-time information. This disorder is expressed as a force we define as gravity. It agrees with both Newtonian gravity and general relativity and it offers an explanation for "dark energy," as a kind of positive vacuum energy within space-time.
A common way of phrasing the second law is to say that a system always tends toward disorder over order, but this statement can be misinterpreted. In physics, entropy is closely related to the concept of multiplicity. I think it is probably the easiest way to think of entropy. Take as an example a system of 20 books. There are many more ways (a multiplicity of ways) to accomplish a jumbled pile (no rules) than to stack them up neatly (specific rules). We can say that the randomly jumbled mess of books has higher entropy than the neat stack does. What happens when we come along and straighten up the books? Does the book "system" go against entropy's natural tendency toward disorder? No, because we were involved in the process we became part of the system. We did work on the system to decrease its entropy. Overall, the entropy of the "us plus the books" system increased.
The second law also states that systems tend toward a state of equilibrium. To explain this, let's take a different example. We have two volumes of gas separated by a movable wall inside a really good Thermos bottle, one so well made we can consider the system isolated. The wall itself is made of a perfect insulating material, so no heat transfer can happen through it, and we will assume its movement is frictionless. We can think of this arrangement as two systems in mechanical contact with each other. To start, one gas is hot and the other is cold. The hot gas exerts greater thermal pressure and pushes the wall into the cold side. The hot gas expands and cools while the cold gas compresses and warms up (another example of an adiabatic process). The system does mechanical work (the expansion and compression of the gases as well as the movement of the wall). The wall stops moving when both gases reach the same pressure. (Because the wall is a perfect insulator, the pressures equalize but the temperatures on the two sides need not; full thermal equilibrium would require the wall to conduct at least a little heat.) The two systems come to rest in a state of equilibrium, and in this state, no part of the two-gas system can do any more work on the other part. The system still has thermal energy, but that energy can't be used to do any work within the system. This state has the highest entropy it can have under these circumstances. It is not self-evident that the system has reached a state of maximum disorder, or even that it has achieved the greatest multiplicity of possible states. We do know, however, that it has evolved toward an end state of equilibrium, which is also an evolution toward maximum achievable entropy.
As we can see, there is more than one way to look at entropy, like looking at a magic trick from different angles. No one angle gives it completely away.
The system's movement toward equilibrium is an irreversible process: it can't return to its original state of disequilibrium unless work is done on it. We could set up a real system of two gases as close as we can get to the one just described, and we would discover why this move toward equilibrium is irreversible. It might seem that we could restart the experiment over and over forever, each time doing exactly the same amount of work to heat and cool our gases to their starting temperatures. But even the best insulating material is imperfect, and only superfluids are truly frictionless. Some heat will be lost to the outside of the Thermos, and some energy will be lost as frictional heat from the wall as it moves. If the experiment were repeated over and over using the same Thermos and the same initial work input, the thermal energy available to the system would gradually decrease, and the entropy of the enlarged system (which now includes the room into which the heat dissipates and the mechanism we use to heat and cool the gases) would increase.
Another way of defining an irreversible process considers the role of chaos. Any system of interacting molecules will include interactions that are chaotic in nature. We often think of chaos as what our kids do to their rooms, but it is also a scientific theory. Chaotic systems are extremely sensitive to initial conditions, so even tiny deviations in conditions at the beginning of a process can result in significant differences in how that system progresses over time. A hurricane is a chaotic system, and that's why its strength and path are impossible to predict with certainty even a few days out. Almost every real process contains one or more chaotic elements. If we try to reverse a process back to its initial conditions, we cannot rely on a series of predictable step-by-step transformations to arrive at exactly the same starting state. The specific process is irreversible, and the outcome is not repeatable.
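A minimal sketch of this sensitivity, using the logistic map (a standard toy model of chaos, chosen here for illustration rather than taken from this article): two trajectories that start one part in a billion apart diverge completely within a few dozen steps:

```python
# Two trajectories of the logistic map x -> r*x*(1 - x),
# starting one part in a billion apart.
r = 3.9                              # parameter value in the chaotic regime
x1, x2 = 0.500000000, 0.500000001

for step in range(1, 41):
    x1 = r * x1 * (1 - x1)
    x2 = r * x2 * (1 - x2)
    if step % 10 == 0:
        print(f"step {step:2d}: x1 = {x1:.6f}, x2 = {x2:.6f}, "
              f"gap = {abs(x1 - x2):.2e}")
```

By step 40 the gap is of order one: the two "experiments" bear no resemblance to each other, even though they started out essentially identical.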
Is there any truly reversible process in thermodynamics? Even an atomic clock, which relies on extremely stable microwave cavity oscillations, shows minute frequency drift over time. In these clocks, laser-cooled atoms "tick" back and forth between an excited state and a ground state. The energy difference between the states is perfectly precise because it is quantum in nature. Even so, the NIST-F1 cesium clock drifts by about a second every 300 million years. Rather than losing heat per se, the system loses energy because the field transitions dissipate energy (they do work at the quantum scale just to keep going). The clock's entropy increases, albeit very slowly.
A process that doesn't generate entropy is a reversible process. The second law of thermodynamics (that entropy tends to increase in systems) is a consequence of the irreversibility of processes. Maxwell's demon was once held up as a theoretical system in which entropy could hold steady during a thermodynamic process. But under close scrutiny, even this famous thought experiment, devised by James Clerk Maxwell, is not a reversible process. The idea itself is intriguing: a microscopic demon guards the gate between two halves of a room, letting only slow molecules into one half and fast molecules into the other. Eventually you expect one half of the room to be warmer than the other half. This reduces the randomness of the molecular arrangement of the gas and therefore reduces the entropy of the room (the system). Looked at another way, it takes a system in equilibrium out of equilibrium. The argument seems rock-solid until you realize that the sorting work the demon does, including measuring molecules and storing and erasing information about them, increases the overall entropy more than the sorting decreases it. In nature, many systems appear to have decreasing entropy. In all living systems, molecules are ordered into, and maintained as, intricate arrangements. Highly ordered galaxies appear to form from disordered clouds of gas. Disordered water vapour molecules flash-freeze into beautiful crystal patterns on a winter window. The key to all of these systems, living and non-living, is that they are open to some extent to their surroundings, like any real system is. In each case, the entropy of the surroundings increases by an even greater amount, leading to a net entropy increase overall.
Scientists can only approximate a perfectly reversible process. An example is a process that is reversed by introducing an extremely tiny change to some property of a system that starts at equilibrium with its surroundings. If the tweak is small enough, the system remains so close to equilibrium that any deviation from it cannot be accurately measured, yet the process itself does reverse.
In an ideal system designed to do work, such as a theoretical engine with perfect efficiency, none of the work performed would be lost to heat transfer. The efficiency of a real engine, however, is always less than 100%, often significantly less. If it is a piston engine or a steam engine, its efficiency can be analyzed quite easily by plotting a pressure-volume curve as it goes through a compression/expansion cycle. The area bound by the curve is the work done. The more efficient the engine is, the more closely that curve will follow an ideal equilibrium curve for that engine; an efficient engine never deviates far from its equilibrium state. A Carnot heat engine, mentioned earlier, is a theoretically ideal thermodynamic cycle in which no energy is lost to friction or other wasteful heat transfer and there is no net increase in entropy. Even this ideal engine is not 100% efficient, however: it must still reject some heat to its cold reservoir, so its efficiency is 1 - Tcold/Thot, the maximum possible between two given temperatures. An irreversible (real) process always strays away from that ideal equilibrium curve.
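To see how far from 100% even the ideal case is, here is a quick sketch of the Carnot limit. The 500 K and 300 K reservoir temperatures are just illustrative numbers:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum efficiency of any heat engine running between a hot
    and a cold reservoir (temperatures in kelvin): 1 - Tc/Th."""
    return 1.0 - t_cold_k / t_hot_k

# A steam engine cycle between ~500 K steam and a ~300 K room:
print(f"{carnot_efficiency(500, 300):.0%}")   # 40%, the best it can do
# Even a cold reservoir at the 2.73 K of deep space falls short of 100%:
print(f"{carnot_efficiency(500, 2.73):.1%}")  # 99.5%
```

Only a cold reservoir at exactly absolute zero would allow 100% efficiency, and as we will see, absolute zero is itself unreachable.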
An example of a process that strays far from equilibrium is the carbon dioxide fire extinguisher once again. When you trigger the extinguisher, the carbon dioxide sprays out of the canister so fast that the air/carbon-dioxide system has no time to reach equilibrium at first. The carbon dioxide cools adiabatically: the original amount of thermal energy is now spread over a much larger volume of gas cloud. Almost all of the potential energy of the pressurized system is spent through the work of adiabatic expansion. Compressing that gas back into the canister would require much more work than was originally done, because the process is now far from equilibrium. It is definitely not reversible, and there is a significant increase in the system's entropy. However, entropy is also a statistical concept. Even in this case there is a chance, an infinitesimally small chance, that all the carbon dioxide molecules could spontaneously re-arrange themselves back into the canister (against their pressure gradient) through purely random molecular movements, reducing the system's entropy and reminding us that the second law itself is statistical in nature. In a macroscopic system of some 10^23 molecules, this is vanishingly unlikely, but in a system of just a few atoms, the random chance of them all doing something together goes up. For all practical purposes, a triggered extinguisher, like all thermodynamic processes, proceeds in one direction only: the direction of increasing entropy. This direction, in turn, implies a forward arrow of time.
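We can estimate just how unlikely that spontaneous return is. In this sketch, the assumption that the canister occupies 1/1000th of the room's volume is purely hypothetical; the point is how the probability collapses as the number of molecules grows:

```python
from math import log10

# Chance that every molecule wanders back into the canister at once.
# Assume (hypothetically) the canister is 1/1000th of the room's volume,
# so each molecule independently has a 1-in-1000 chance of being inside it.
fraction = 1e-3

for n in (3, 100, 6.022e23):        # a few molecules ... up to a mole of gas
    log_p = n * log10(fraction)     # log10 of P = fraction**n, avoids underflow
    print(f"N = {n:g}: P = 10^{log_p:.3g}")
```

For three molecules the odds are one in a billion, long but imaginable; for a mole of gas the exponent itself is on the order of 10^24, a probability so small it will never happen in the lifetime of the universe.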
Thermodynamic Arrow of Time
The universe follows the second law of thermodynamics, where all real processes are irreversible, with the consequence that time must flow in one direction only. The increasing entropy of an evolving system gives us an impression of time and means that we can distinguish past events from future ones. The exception is a system in perfect equilibrium. In this case, the entropy remains the same and it is impossible to distinguish a past state of that system from a present state. The arrow of time would have seemed self-evident to Rudolf Clausius, who coined the term "entropy" in the mid-1800s, defining it in terms of the heat incrementally dissipated by heat engines. The fact that time moves forward seemed obvious then, but that confidence wasn't to last. Within a few decades, the idea that space and time are sewn together into a four-dimensional fabric was becoming accepted science, and quantum field theory was later formulated to allow for charge, parity and time reversal (CPT symmetry). General relativity and special relativity (which preceded general relativity) both treat time as something malleable, part of the system rather than outside of it. These theoretical developments brought our assumptions about time, and the second law of thermodynamics itself, into question.
Time, as we experience it, is a broken symmetry; there is no mirror in which time flows backward. Broken eggs can't reassemble. Aging doesn't reverse itself, except in The Curious Case of Benjamin Button. Why this is so is actually a deep mystery in physics. CPT symmetry is a fundamental symmetry of physical laws (thermodynamics excepted). The implication of this symmetry is that there is no theoretical reason why a "mirror" universe, one with an arbitrary plane of inversion (which you could think of as a vast three-dimensional mirror), with reversed momenta (complete with time running backward) and populated with antimatter (carrying opposite charges), couldn't evolve under our very same physical laws. It might even start from a vast Big Bang and shrink in volume rather than expand as ours does. Entropy in such a universe, we might assume, would tend to decrease rather than increase. Even this assumption might be too simple, however: a number of physicists speculate that even in a universe that oscillates, expanding and contracting over and over, entropy might continually increase.
We might think that CPT symmetry, a mathematical theorem particularly useful in quantum physics, violates the laws of thermodynamics: it seems to dissolve the one-way arrow of time and the rule that entropy never decreases in an isolated system. For example, we know that antimatter particles exist in our universe, and we might assume that they travel backward through time. This assumption is wrong, but it is not so easy to understand why. Take the positron, the electron's antimatter twin, as an example. According to quantum electrodynamics (QED), an antimatter particle, mathematically speaking, travels backward in time, and backward time travel by particles is commonly depicted in Feynman diagrams of particle interactions. Below, we can see that an e+ (positron) and an antiquark (the q with the line over it) both travel backward in time in this depiction of electron/positron (e-/e+) annihilation (see the black arrows angled toward the left).
(Feynman diagram of electron/positron annihilation. Image credit: Joel Holdsworth; Wikipedia)
Even though all of our physical laws (except thermodynamics) display a fundamental CPT symmetry, that doesn't mean all processes obey it. Three of the four fundamental forces of nature - the strong force, the electromagnetic force and gravitation (which, as general relativity, is not formulated in terms of quantum mechanics) - obey CPT symmetry, but the weak fundamental force occasionally violates both parity and charge symmetry at the quantum level. Recently, researchers discovered that this quantum process sometimes violates time symmetry as well. Oscillations between different kinds of B-meson particles during the weak interaction happen in both directions of time, but at slightly different rates; this disparity doesn't have anything to do with the thermodynamic reason for time's broken symmetry, however. An article from SLAC National Accelerator Laboratory at Stanford University offers an excellent technical comparison between broken T-symmetry at the macroscopic and quantum scales. For a thorough discussion of what this means philosophically, try this 2017 article from the Journal for General Philosophy of Science. I found it a good read.
As we've seen, at the quantum scale, processes are time-reversible (with the weak force exception). Quantum processes, according to the Copenhagen interpretation, are governed by the Schrödinger equation, which has T-symmetry built into it. Wave function collapse, however, does not. Collapse is the mathematical step that links indeterminate ("fuzzy") quantum particle behaviours to the determinate macroscopic behaviours of substances. It therefore finds itself at the epicentre of how time symmetry in the quantum world breaks down at the larger scale we experience.
Mathematically, a quantum system is laid out as a superposition of several equally possible states (technically called eigenstates (https://en.wikipedia.org/wiki/Introduction_to_eigenstates)), which reduces to a single eigenstate when the system is observed or measured. How this happens physically, and even whether it happens, is up for debate. It is simply a mathematical description, which means that the process itself is a black box, but it does provide a link between quantum indeterminacy and determinate macro-processes, thermodynamics being one of them. Somehow, inside the "black box," time switches from reversible to irreversible. How do two well-established and experimentally proven but seemingly incompatible descriptions (time-asymmetric thermodynamics versus time-symmetric quantum mechanics) co-exist? This question is encapsulated in Loschmidt's paradox, and no one knows its solution, although theorists have been working on it for decades.
What is known is that the second law of thermodynamics (and its arrow of time) is a statistical principle that somehow emerges from the behaviours of countless quantum time-symmetrical particles. If we keep the statistical nature of these descriptions in the front of our minds and go back to the simple example of a stack of books, we could add that, yes, there are countless more ways those books can fall into a messy pile than ways to stack them up neatly, BUT there is no rule against the books spontaneously falling into a nice neat stack either. It's just extremely unlikely. If this happened, it would not mean that the second law of thermodynamics just broke; it would mean that the underlying statistical nature of the arrow of time is revealing itself. A puzzle appears, however, when we think again about the increasing entropy of the universe. From that, we could assume that the universe initially had very low entropy. There were no distinct particles or forces at the very beginning. Through a series of symmetry-breaking processes, four distinct fundamental forces and all the myriad particles of matter emerged as the universe expanded and cooled. The question is: if the universe started with very low entropy, wouldn't that starting state have been extremely unlikely as well?
We might wonder if the arrow of time is an aspect of dynamics that emerges at the macroscopic scale. Emergence might not be the best description, because at the subatomic particle level time seems to exist but runs in either direction. Time's one-way arrow only appears within a large collection of molecules. It might be better described as a symmetry that breaks at the macroscopic scale.
All these questions, I hope, point out that our understanding of time itself is a problem when we think about the second law of thermodynamics and entropy. Time is not a unified concept in physics. Depending on the theoretical framework you choose - quantum mechanics, classical dynamics or general relativity - time can be reversible, or a one-way arrow, or arguably an illusion altogether, because general relativity treats time as one dimension in a four-dimensional stretchy fabric in which all past and future times are equally present and our present moment holds no special status.
Black Hole Entropy: Testing Thermodynamics
A black hole tests all of our scientific laws, and it is especially interesting when viewed as a thermodynamic object, which it surely is. A black hole is a region of space-time bent so severely by gravity that nothing, not even electromagnetic radiation such as light, can escape. Once energy or matter crosses the black hole's event horizon, it is lost from our observations. Black holes can nevertheless be observed when in-falling matter is heated by internal friction, creating an accretion disk external to the event horizon that can be extremely bright. A black hole also gradually emits radiation, called Hawking radiation, and it has a temperature, which means that these objects should be subject to the same laws of thermodynamics as any other object.
Entropy, as we've explored already, can be understood in several different ways. Some physicists argue that entropy is best understood as a statement about information rather than about order or disorder. Generally, the various descriptions of entropy agree with each other, but under specific circumstances the individual weaknesses of each approach become evident. Theorists appear to be best equipped to tackle the question of black hole entropy by interpreting entropy as information: more information encoded in a system means higher entropy. Since an isolated system's entropy can never decrease, the information encoded by an isolated system can never decrease. This is a slightly different take on the idea of entropy as a multiplicity of microstates.
Black holes are wonderfully mysterious objects. Inside a black hole, matter becomes inaccessible to our observations, but the momentum and charge of that matter are conserved, and its mass remains to bend space-time around it. Most black holes, especially those formed by massive collapsing spinning stars, are expected to have not only charge but also significant angular momentum. According to the second law of thermodynamics, matter disappearing into a black hole should increase the black hole's entropy, which implies that a black hole has non-zero entropy. A number of theoretical physicists are currently working on how entropy works with black holes and how to measure it, and the methodologies they have come up with so far are surprising. We immediately realize how unusual a black hole is when we learn that instead of using volume to calculate the entropy of what we assume is a spinning spherical object, we must use the area bound by its event horizon. In 1973, Jacob Bekenstein calculated black hole entropy as proportional to the area of its event horizon divided by the Planck area (the area by which a black hole's surface increases when it swallows one bit of information). In 1974, Stephen Hawking backed up this work and additionally showed that black holes emit thermal radiation, which means they have a specific temperature. So how does the second law of thermodynamics enter? It has actually been rewritten for black holes to say that the total area of the event horizons of any two colliding black holes never decreases. This is part of a closely analogous set of laws of black hole mechanics.
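The Bekenstein-Hawking formula can be put to work with a few lines of code. This sketch computes the entropy of a simple non-rotating (Schwarzschild) black hole; the formula S = kB c^3 A / (4 G hbar) is standard, and the one-solar-mass example is just illustrative:

```python
import math

# Physical constants (SI units)
G     = 6.67430e-11      # gravitational constant
c     = 2.99792458e8     # speed of light
hbar  = 1.054571817e-34  # reduced Planck constant
k_B   = 1.380649e-23     # Boltzmann constant
M_SUN = 1.989e30         # one solar mass, kg

def bekenstein_hawking_entropy(mass_kg: float) -> float:
    """Entropy of a non-rotating, uncharged black hole:
    S = k_B * c^3 * A / (4 * G * hbar), with A the horizon area."""
    r_s = 2 * G * mass_kg / c**2     # Schwarzschild radius
    area = 4 * math.pi * r_s**2      # event horizon area
    return k_B * c**3 * area / (4 * G * hbar)

print(f"S(one solar mass) = {bekenstein_hawking_entropy(M_SUN):.1e} J/K")
# ~1.5e54 J/K, vastly more than the entropy of the star that collapsed
```

Notice that the mass enters squared (through the area), so a black hole twice as massive has four times the entropy.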
How do we get an intuitive picture of how black hole entropy works? We can start by looking at a black hole's entropy statistically. Each of the countless particles that have fallen down the gravity well of a black hole will be in a specific thermodynamic state, and each contributes to what should be an enormous number of possible microstate arrangements. Microstates in a macroscopic system are all the different ways that the system can achieve its particular macrostate (which is defined by its density, pressure, volume, temperature, etc.).
By treating these microstates statistically, we could come up with an approximation of the black hole's overall entropy. This would be straightforward if black holes didn't present us with a unique and bedeviling twist: the no-hair theorem, which argues that this approximation cannot be done. Aptly named, it tells us that a black hole can be described by only three classical parameters: its mass, its charge and its angular momentum. All the particles that fell into a black hole contribute no unique character to it. A black hole that swallowed a cold hydrogen gas cloud looks the same as one that swallowed an iron-dense planet. The no-hair theorem, in effect, treats a black hole as an enormous, generic, single "homogeneous" particle. A possible analogy might come from another housekeeping chore. A vacuum cleaner sucks up all the stuff off the floor. A CSI investigator could look in the canister afterward and determine, through skin flakes, hairs and other debris, who lived in the room, and perhaps even what they were doing over the week. In the case of black holes, the "canister contents" appear to become generic once they cross the event horizon. You can't tell what particular atoms fell in, when they fell in, or what their velocities were. The no-hair theorem is a mathematical theorem: it is a solution to the Einstein-Maxwell equations of electromagnetism in curved space-time. It turns all different forms of matter/energy into a generic electromagnetic stress-energy tensor, which bends space-time. What this theorem implies is that all of the information encoded by the special quantum characteristics of each matter particle, such as baryon number, lepton number, colour charge, and even whether it is matter or antimatter, is annihilated when it falls into a black hole. More specifically, it implies that the black hole as a system has much less entropy than the ordinary matter originally had. Matter and energy lost into a black hole, if the black hole is treated as an isolated system, appears to represent a process that decreases entropy, and that is a violation of the second law.
We can't just toss out the no-hair theorem as a mathematical curiosity that might well prove invalid in nature. Its validity is backed up by recent observations of black holes by LIGO, a gravitational wave observatory. To add to this entropy problem, we could draw the additional, and no less astounding, conclusion that the no-hair theorem suggests a black hole is actually just a single microstate (just one "particle"), which means it should have not just low entropy but zero entropy: there is only one way to assemble a system with a single microstate.
According to quantum mechanics, the quantum information encoded in a particle (its spin, angular momentum, energy, etc.) must be conserved during any process. This is a variation on the first law of thermodynamics, except that the focus here is on information rather than energy. Where does that information go inside a black hole? Hawking radiation, which leaves the black hole system, is a natural place to look, but Hawking's theory suggests that this radiation is "mixed," meaning it is generic; it doesn't contain any of the specific particle information that went into the hole. To make things even more interesting, there is some serious debate about whether Hawking radiation, as described using quantum theory, actually is a form of thermal radiation. The conclusion that a black hole has a temperature comes not from direct observation (which could be understood using classical statistics) but from quantum mechanics. Thermal radiation, emitted from a black body (a physical object that absorbs all incoming electromagnetic radiation), contains information about the body that emitted it. Hawking radiation contains no such information; it is based on the law of conservation of energy only. In space-time, virtual particles pop in and out of existence all the time, everywhere. They exist because space-time has a non-zero vacuum energy, which is a consequence of the uncertainty principle. Close to the very high-energy space-time environment around a black hole, some theorists suspect that virtual particles, as particle-antiparticle pairs, have enough energy to become real particle pairs, with mass. If one of the pair falls in, it must have negative energy (according to an outside observer) in order to preserve the conservation of energy law. This also means it has negative mass (by mass-energy equivalence), which means that the black hole itself loses mass and appears to emit a particle (again, as observed from outside). Hawking radiation has not been observed yet, but by using an analogue acoustic black hole made in a lab, scientists have found strong evidence suggesting that Hawking radiation exists around real black holes.
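Hawking's result gives a black hole a definite temperature that falls as its mass grows. Here is a quick sketch using the standard formula T = hbar c^3 / (8 pi G M kB); the example masses are illustrative:

```python
import math

G, c = 6.67430e-11, 2.99792458e8          # SI units
hbar, k_B = 1.054571817e-34, 1.380649e-23
M_SUN, M_MOON = 1.989e30, 7.35e22         # kg

def hawking_temperature(mass_kg: float) -> float:
    """Hawking temperature of a non-rotating black hole:
    T = hbar * c^3 / (8 * pi * G * M * k_B)."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(f"T(one solar mass) = {hawking_temperature(M_SUN):.1e} K")   # ~6e-8 K
print(f"T(Moon-mass hole) = {hawking_temperature(M_MOON):.1f} K")  # ~1.7 K
```

A stellar-mass black hole is far colder than the 2.73 K microwave background, so today it absorbs more radiation than it emits; only a much smaller black hole would be warm enough to shrink by net evaporation.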
Good evidence for the existence of black hole radiation, whether it is thermal or not, might solve the issue of conservation of energy, but it doesn't appear to conserve information. Information can be lost when one of a pair of entangled particles falls into a black hole. Entangled particles are thought to be very common in space-time, and they can be physically very far apart, across the universe in fact; distance is irrelevant to quantum entanglement. Because of quantum mechanics, an entangled pair or group of particles can be described by a single shared quantum state (spin, angular momentum, energy, etc.) just as a single particle can. From a quantum mechanics point of view, the pair or group becomes a single particle (some theorists think that matter inside a black hole might be quantum-entangled). If one of a pair of quantum-entangled particles falls into a black hole and loses its quantum signature, does its entangled partner pop out of existence somewhere else in the universe at the same time? It seems that this process would continuously decrease the entropy of the universe as a whole. The dilemma this presents is called the black hole information paradox. One could argue that because time, as seen by an outside observer, slows to a stop at the event horizon, nothing really ever goes in: the in-falling particle's quantum information remains somehow encoded, smeared across the event horizon, scrambled up and out of reach. Newer models of quantum gravity suggest that the particle left behind remains entangled with whatever form of matter/energy its partner now takes inside the black hole, thus dissolving the paradox. Another argument emerging among physicists is also exciting. It actually uses quantum entanglement itself to solve the paradox, linking it to the Einstein-Rosen bridge (or wormhole), so that two previously unrelated theories describe the same physics. It is laid out in this Quanta Magazine article. Entangled particles inside and outside a black hole could remain connected through the continuous space-time that would exist inside a wormhole, solving the information paradox. No one is sure yet whether the idea holds up theoretically.
The holographic principle, which is gaining momentum in theoretical physics, is yet another way to explore the information paradox. It suggests that a black hole encodes all of the in-falling particle information just outside the event horizon, as statistical degrees of freedom. Degrees of freedom are a measure of information, closely related to the idea of multiple microstates. How this information is stored, and what form it is in, is impossible to visualize, however, because a black hole is treated theoretically (in almost all theories) as a four-dimensional (space-time) object. Although a black hole should appear as a sphere to an observer, it is not a three-dimensional sphere we can relate to. It is a singularity of mass. The event horizon, likewise, is not a physical two-dimensional shell around a black hole. It is the last distance from which light can escape the gravitational well, measured as the Schwarzschild radius. For example, Earth has a Schwarzschild radius of about 9 millimetres, which means that if Earth's mass were compressed into a sphere smaller than that, it would be so dense that it would spontaneously collapse into a black hole singularity.
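The Schwarzschild radius itself is a one-line formula, rs = 2GM/c^2. A quick sketch with Earth and the Sun as examples:

```python
G, c = 6.67430e-11, 2.99792458e8   # SI units

def schwarzschild_radius(mass_kg: float) -> float:
    """r_s = 2 * G * M / c^2: the radius a mass must be squeezed
    inside for light to be unable to escape it."""
    return 2 * G * mass_kg / c**2

print(f"Earth: {schwarzschild_radius(5.972e24) * 1000:.1f} mm")  # ~8.9 mm
print(f"Sun:   {schwarzschild_radius(1.989e30) / 1000:.2f} km")  # ~2.95 km
```

The radius scales linearly with mass, which is why the horizon area, and with it the Bekenstein entropy, grows as the mass squared.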
From our perspective, information is last observed at the event horizon of a black hole. This approach helps us understand why black hole entropy is a measure of event horizon area rather than volume. It also implies that a black hole, rather than being a zero-entropy object, is a maximum-entropy object: generic information can be thought of as equivalent to maximally mixed information, an equilibrium state.
No matter how we look at the information paradox, information inside a black hole seemingly must get back out through black hole evaporation (as Hawking radiation). A black hole that doesn't feed on matter should gradually shrink and eventually disappear, meaning that somehow the quantum information of all that matter must get back out. If the information is irretrievably lost from our universe, then black holes either do not obey thermodynamics or they represent some kind of door into some other entity, and the universe is not an isolated system after all. It could even mean that ordinary matter and energy as we know them are actually an illusion: information encoded on a surface area, making the universe, by extension, a hologram of that data.
Black hole entropy has also been calculated based on a supersymmetric black hole in string theory. This technical 2007 review by T. Mohaupt describes the reasoning and process. This solution ties in with the holographic principle (which is also rooted in string theory), and it closely matches that of Jacob Bekenstein (who based black hole entropy on the area of the event horizon). The fact that the two calculations match up closely gives a number of theorists hope that string theory could be a route to ultimately solving the information paradox.
Based on all of these and other developments, physicists Brandon Carter, Stephen Hawking and James Bardeen formulated a series of laws of black hole mechanics, analogous to the laws of thermodynamics. While thermodynamics is a classical science, these laws attempt to integrate general relativity, quantum mechanics and thermodynamics. As I hinted at earlier, they offer up some seemingly odd conclusions. Uniform surface gravity across the event horizon is analogous to the uniform temperature of a system in thermal equilibrium (the zeroth law of thermodynamics). The never-decreasing surface area of the event horizon is analogous to never-decreasing entropy (the second law of thermodynamics). New theories about black hole entropy offer some serious food for thought, because we approach entropy not just through the lens of classical mechanics but through the lenses of general relativity and quantum mechanics as well. Being a universal rule of nature, shouldn't entropy find its expression there too?
One of the most interesting approaches to black hole entropy comes from a specialized mathematical framework laid out by physics theorist Sean Carroll in one of his blog entries from 2009 (I wholeheartedly recommend his blog). As a black hole's rotation and charge increase, its entropy approaches zero. This statement is analogous to the third law of thermodynamics, which will be explored next. According to this law, no system can reach exactly zero entropy, so there must be a limit to the spin and charge of a black hole. The third law of black hole mechanics might represent the deepest puzzle yet for theorists.
A black hole at the spin/charge limit is called an extremal black hole. It has the smallest possible mass for a given charge and angular momentum, and is therefore the smallest-mass black hole that could theoretically exist while carrying that spin and charge. If these objects existed, they would be microscopic, but they remain purely theoretical. In theory, an extremal black hole could be created in which all of its energy comes from the charge, or electrical field, and none from matter. Such a black hole is a product of Euclidean quantum gravity, a theory of space-time in which time is treated exactly like another spatial dimension. The entropy of such a black hole can be calculated using string theory, and it comes out to be exactly zero, which is forbidden by the third law of thermodynamics. A first reaction could be, well, that's the end of that thought-stream, and Dr. Carroll suggests that this is exactly what the authors of the original paper thought. But then the idea was revisited, because it seemed to hint at something very interesting.
As charge is increased toward the extremal limit, the entropy eventually drops discontinuously, and the mathematical solution the researchers used splits into two different space-times! Part of this mystery includes the fact that the mathematics of all charged black holes gives them not one but two event horizons. The outer event horizon is the one you expect: a point of no return. The inner event horizon is located between that point of no return and the singularity itself. What's unique is that an object between the two horizons isn't forced to crash into the singularity. Between the horizons, moving forward in time means moving inward toward the inner event horizon; that part is inevitable. However, outside the black hole, and inside the inner event horizon, time moving forward looks normal, and an object in either of those regions isn't forced anywhere.
As you increase the charge and keep the mass the same, the two horizons come together: you are moving toward an extremal black hole. You would expect the region of space-time between the horizons to eventually disappear, but it doesn't. It reaches a finite size and stays there until you reach an exactly extremal black hole, at which point it suddenly and discontinuously disappears. The entropy, when calculated, decreases smoothly as the charge increases until the exact moment an extremal black hole is reached, at which point it suddenly drops to zero. This raises the question: is there a theoretical problem here, or does the entropy suddenly escape into some new and different space-time (as space-time itself appears to split as well)? Does it offer a clue about where matter (and all that missing entropy) goes inside a real black hole? This new space-time is a mathematical solution called two-dimensional anti-de Sitter space-time on a two-dimensional sphere. Dr. Carroll himself wonderfully refers to this hidden space-time as "Whoville" in his fascinating post on this mysterious theoretical journey. You can download a pdf of the scientific paper he and the original authors wrote on it here.
Although we might not know exactly what kind of space-time black hole particles find themselves in, if that is indeed where they find themselves, black hole physics seems to be a great tool for seeking out the limits of the second law of thermodynamics. Black holes go a step further by demanding that all of our disparate laws of nature come together to describe them. Dr. Carroll puts it well: black holes are fertile "thought-experiment laboratories" in which to test our understanding of thermodynamics, especially the second law.
For The Laws of Thermodynamics PART 4 click here
The Laws of Thermodynamics PART 2
The Laws Of Thermodynamics PART 1 click here.
First Law Of Thermodynamics
This law states that the total energy of an isolated system is constant. This energy is generally the sum of its kinetic, potential and chemical energies. In some systems, nuclear, magnetic potential or electrical energy can be considered as well. While the Zeroth Law defines the temperature of a system, this law defines the energy of the system. It also states that energy cannot be created or destroyed; it can only be transformed from one form to another. The idea is the same as the broader conservation of energy law in physics, except that here we focus on internal energy. Although not a new idea in the mid-1800s, this law took time to hammer out empirically as a mathematical statement, which can be written as ΔU = Q + W. Yet another way to express this law is dU = -PdV. The "d" is used because this formula is written as a differential to describe a changing system. The change in internal energy, in this case, is equal to the negative of the pressure times the change in volume. There is no heat term because this form applies when the internal energy of a system changes without any heat exchange. I mentioned a phase change as an example earlier. This can also happen when work is done at such a slow pace that heat dissipation in the form of friction approaches zero, or when the process is adiabatic in nature (I will describe this kind of system in a bit). However it is written, the first law links internal energy, heat and work. More specifically, a change in internal energy can be achieved by countless different combinations of heat and/or work added to or removed from a system.
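A trivial sketch makes the sign convention in ΔU = Q + W concrete. Here Q is heat added to the system and W is work done on the system (so work done by the system enters as a negative number); the 500 J and 200 J figures are made up for illustration:

```python
def delta_internal_energy(q_joules: float, w_joules: float) -> float:
    """First law of thermodynamics: dU = Q + W, where Q is heat added
    TO the system and W is work done ON the system."""
    return q_joules + w_joules

# 500 J of heat flows into a gas while it expands, doing 200 J of work
# on its surroundings (so W = -200 J from the system's point of view):
print(delta_internal_energy(q_joules=500.0, w_joules=-200.0))  # 300.0 J
```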
Even though caloric was incorrectly thought to be a substance bearing heat, the now-obsolete caloric theory worked seamlessly with the laws of conservation of energy and matter as then understood. It worked because it was assumed that caloric could never be created or destroyed in an isolated system. Thought of as a weightless, self-repelling gas, caloric could pass through the pores in a liquid or solid from hotter areas into cooler areas. The coffee in our example cools, this theory would argue, because caloric suffused from the coffee into the surrounding air, warming it. We now know that the coffee cools because heat is transferred from the coffee to the countertop and the air through the processes of thermal conduction and thermal (infrared, in our case) radiation, respectively.
According to the first law, the internal energy of a system will decrease if it loses heat through thermal conduction, radiation or convection, or if it does work. The energy lost is transferred into the system's surroundings. As mentioned earlier, the total energy of the larger isolated system does not change. The internal energy of the coffee decreases as it cools, but the internal energy of the room in which the coffee sits doesn't change IF the room is perfectly sealed off from any energy or material transfer, which nothing ever is in reality. In contrast, any work or heat that goes into a system increases its energy. For example, if we wind up an old-fashioned mechanical watch, we are applying potential mechanical energy to its spring mechanism. It will begin to tick, and it will eventually wind down to a stop when the potential spring energy is depleted. The energy hasn't disappeared, however. It was transferred into work: the movements of the gears inside the watch and of the arms around the face. Some energy was transformed into sound energy, the ticking you hear. Energy was also lost to internal friction in the spring and between the tiny gears. The watch is a closed system. No matter is transferred, but heat from friction, though an immeasurably small amount, escapes the watch system into the surroundings. If we wind the watch up again, we have restored the system's energy by doing work on it. The source of energy in the watch (our muscles winding it up) is transformed into work and waste heat each time. No real system has an infinite source of energy. Even the orbits of the planets, which seem eternal, are part of a system that loses energy through the solar wind, resistance from the particles that make up interstellar space, and gravitational and thermal radiation from the solar system. There is no such thing as perpetual motion; both the first and second (as we will see) laws of thermodynamics forbid it. Even recently observed time crystals, which spontaneously change from moment to moment, do not violate thermodynamics. We will explore this fascinating new state of matter later on.
Adiabatic Process
Thermodynamics got its start with the study of water vapour under pressure and heat, and the behaviour of fluids (gases, liquids and plasmas) under pressure and heat is still one of the main focuses of this field of science. Think of air and gasoline confined in a piston of an engine cylinder. If the pressure is kept constant in this system (the piston can easily move), the work done when heat is applied is expressed as an increase in volume. After ignition of the gas/air mixture, the piston is explosively pushed out. Heat is applied and the system does work. This is called an isobaric process. If temperature is kept constant instead, it is called an isothermal process. This time, the piston is bathed in a constant-temperature reservoir, such as a large water bath that absorbs and dissipates heat. The piston is pushed down to compress the air. Work is done on the system and its potential energy increases. At constant temperature, pressure and volume are inversely related to one another. A third kind of process, called an adiabatic process, occurs when no heat is added to or removed from the system. It differs from the isothermal process just described because in that case heat was removed from the system into the water bath. Because no heat leaves or enters the system, the change in energy depends only on work that is either done on the system or done by the system. Often, an adiabatic process is one that is too rapid for any heat exchange to take place. An example is a carbon dioxide fire extinguisher. The gas is under pressure, so when the extinguisher is triggered, the gas expands very quickly. The compressed gas is at room temperature in the canister, but when it expands it becomes cold: the gas does work as it expands, and its thermal energy is spread over a much larger volume. The expanded gas stays cold at first because it doesn't have a chance to exchange heat with the room-temperature air around it. No heat is added to or removed from the system; work is done through the expansion of the carbon dioxide. It might seem confusing that the expansion of the gas in this case has nothing to do with heating it. That other phenomenon is thermal expansion, in which a substance is heated and the increasing kinetic energy of its molecules exerts outward pressure.
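For a rough feel for adiabatic cooling, here is a sketch using the textbook relation for a reversible adiabatic expansion of an ideal gas, T1 V1^(γ-1) = T2 V2^(γ-1). The ten-fold expansion and the γ of about 1.3 for carbon dioxide are illustrative numbers, and a real extinguisher discharge is irreversible, so the actual cooling is less extreme:

```python
def adiabatic_temperature(t1_kelvin: float, v1: float, v2: float,
                          gamma: float = 1.3) -> float:
    """Reversible adiabatic expansion of an ideal gas:
    T1 * V1**(gamma - 1) = T2 * V2**(gamma - 1)."""
    return t1_kelvin * (v1 / v2) ** (gamma - 1)

# CO2 (gamma ~ 1.3) starting at room temperature, expanding tenfold:
t2 = adiabatic_temperature(293.0, v1=1.0, v2=10.0)
print(f"{t2:.0f} K ({t2 - 273.15:.0f} degrees C)")   # ~147 K, about -126 C
```

No heat leaves the gas; the temperature drops purely because the gas spends its own internal energy doing the work of expansion.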
Heat of Fusion and Heat of Vapourization
Another example of an adiabatic process is our typical winter weather here in Alberta. Warm Pacific air streaming from the southwestern coast of British Columbia hits the Rocky Mountains and is forced up the mountainsides, a climb from sea level to about 4000 metres. For every 1000 metres the air climbs, its temperature drops almost 10°C. Air that starts out at +10°C in Vancouver (at sea level) can drop to a face-numbing -30°C at the top of Mount Robson. Then the cold air slides down the Alberta side of the mountains and warms at the same rate. By the time it flows past the town of Cochrane in the foothills (about 1000 m altitude), for example, it has warmed up to a balmy (for us!) 0°C. The air cools as it climbs because it has moved into a lower-pressure zone, so it expands (to an average of about 0.60 atmosphere (atm) pressure compared to 1 atm at sea level). No heat actually leaves the air system. The expansion itself (work done by the system) lowers the temperature, because the same amount of thermal energy is spread over a larger volume. The total energy is conserved, obeying the first law of thermodynamics. As the air comes down the leeward side of the mountains, it is compressed once again, to about 0.87 atm at Cochrane's altitude, which warms it to about 0°C.
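That "almost 10°C per 1000 metres" figure is the dry adiabatic lapse rate, and it follows directly from gravity and the heat capacity of air (Γ = g/cp). A quick check:

```python
g = 9.81      # gravitational acceleration, m/s^2
c_p = 1005.0  # specific heat of dry air at constant pressure, J/(kg*K)

# Dry adiabatic lapse rate: how fast rising air cools with altitude
lapse = g / c_p * 1000          # degrees C per 1000 m of climb
print(f"{lapse:.1f} C per 1000 m")    # ~9.8, the "almost 10" quoted above

# Vancouver (+10 C, sea level) to the ~4000 m top of Mount Robson:
print(f"Summit temperature: {10 - lapse * 4:.0f} C")   # about -29 C
```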
This is not quite the famous Chinook, however. Often the air from the Pacific is laden with moisture, especially in winter. As the air reaches the Rocky Mountains and begins to climb, the moisture in it cools along with the air itself, condenses, and eventually freezes and falls as (usually heavy) snow. When a substance freezes into a solid, it releases stored potential energy as a heat transfer. This is called either latent heat or enthalpy of fusion (fusion being the technical word for melting/freezing). The solid phase of a substance has a lower internal energy than the liquid phase because its inter-molecular bonds are stronger, providing more order; this means that the solid phase has lower potential energy. It is called latent heat because the heat lost from the substance as it freezes can't be measured as a change in temperature in the substance. A transfer of heat doesn't always coincide with a change in temperature. Latent heat is often referenced to a unit of mass, in which case it is called specific heat of fusion. The release of potential energy (as thermal energy) into the mountain air means that the air cools at a slower rate as it climbs and expands. Instead of cooling at a rate of 10°C per 1000 m, for example, it cools at a rate of about 8°C/1000 m. At the mountain peak, the temperature will be about -22°C (rather than -30°C) and about 8°C (rather than 0°C) when the air reaches Cochrane: a typical Chinook day.
What exactly is happening at the molecular level? When water freezes (a phase change), the molecules "lock" into place. They lose some freedom of movement compared to liquid water, which can flow. The ice has less potential energy than liquid water, so the water must release that energy into its surroundings as a transfer of heat. Water releases about 6 kJ of energy per mole (called the molar heat of fusion in this case) as it freezes into ice (or snow). For this reason, snowy days tend to be warmer than clear ones. The change from steam into liquid water represents an even larger release of energy. Water releases about 41 kJ/mol as it condenses from steam into liquid, an indication of how much more potential energy the gaseous state has than the liquid state. To vapourize, water needs to absorb 41 kJ/mol from its surroundings, one reason why a warm, dry, breezy day is perfect for putting laundry out on the line. The moist clothes have a constant supply of warm dry air to evaporate their moisture into as the water absorbs energy.
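A quick sketch shows how much heat freezing moisture hands back to the air, using the figures quoted above (about 6 kJ/mol for fusion, with water's molar mass of about 18 g/mol); the kilogram of moisture is an illustrative amount:

```python
MOLAR_MASS_WATER = 18.0   # g/mol
L_FUSION = 6.0e3          # J/mol released on freezing (about 6 kJ/mol)
L_VAPOUR = 41.0e3         # J/mol absorbed on evaporating (about 41 kJ/mol)

def heat_released_on_freezing(mass_kg: float) -> float:
    """Latent heat released when liquid water freezes: Q = n * L_fusion."""
    moles = mass_kg * 1000.0 / MOLAR_MASS_WATER
    return moles * L_FUSION

# A kilogram of cloud moisture freezing into snow releases:
print(f"{heat_released_on_freezing(1.0) / 1000:.0f} kJ")   # ~333 kJ
```

Every kilogram of snow that falls on the windward slopes pays roughly a third of a megajoule of heat into the climbing air, which is exactly why the Chinook air cools more slowly on the way up.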
Each substance has specific values for its heat of fusion and its heat of vapourization, which depend on its unique inter-molecular bonding. In other words, each substance has its own specific latent heat. These values, in turn, contribute to the unique melting and boiling points of each substance. Specific latent heat is the amount of energy required to cause a phase change in a specific amount of a substance. If you would like to explore phase change more deeply, try my previous article "Plasma: The Fourth State of Matter." In it, I concentrate on the plasma state, but in doing so I also explore in depth what a phase change means.
When a substance freezes or condenses, it releases heat into its surroundings. When a process releases heat, it is called an exothermic process. The reverse processes - melting and evaporating - require heat to move forward. They absorb heat from the surroundings, and are called endothermic processes. Water evaporating off clothes is an endothermic process.
This subject reminds us once again of the important but subtle distinction between temperature and internal energy. Temperature, explored earlier in this article, is the average kinetic energy of the molecules in a system. Internal energy, however, is the total energy of that system. It includes the potential energies of the molecules, in addition to their kinetic energies. Water vapour (at just over 100°C), for example, has more internal energy than liquid water (at 20°C) for two reasons. First, its molecules have more kinetic energy, which means it has more thermal energy (this can be measured as temperature). Second, its random molecular arrangement has much more potential energy than the more orderly arrangement of molecules in the liquid state (this can't be measured directly).
A related term is specific heat capacity. Again using water as an example, a certain amount of thermal energy must be added to a given mass of water in order to raise its temperature. Specifically, it takes 4200 J of energy to raise the temperature of 1 kg of water by 1°C. Every substance has its own specific heat capacity (SHC). The SHC of steel, by comparison, is 490 J/kg/°C. What does this mean? Water can hold a lot more heat than an equivalent mass of steel can, and it also requires a lot more energy to warm up. Water, in fact, has an unusually high SHC. That is why a hot water bottle makes such an excellent foot warmer in a cold bed. A steel disk of the same mass warmed to the same temperature would not only stub your toes; it would have only about a tenth of the heat available to warm them. See this Engineering Toolbox table to compare the SHC values of some other everyday substances.
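The foot-warmer comparison is just Q = mcΔT. A minimal sketch, with an assumed 1 kg warmer cooling by 30°C into the bed:

```python
def heat_stored(mass_kg: float, shc: float, delta_t: float) -> float:
    """Q = m * c * dT: heat absorbed on warming, or released on cooling."""
    return mass_kg * shc * delta_t

SHC_WATER = 4200.0   # J/(kg*C)
SHC_STEEL = 490.0    # J/(kg*C)

# A 1 kg foot warmer cooling by 30 C (say, from 60 C to a 30 C bed):
q_water = heat_stored(1.0, SHC_WATER, 30.0)
q_steel = heat_stored(1.0, SHC_STEEL, 30.0)
print(f"water: {q_water / 1000:.0f} kJ, steel: {q_steel / 1000:.1f} kJ")
# water: 126 kJ, steel: 14.7 kJ -- roughly a tenth, as noted above
```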
Next we will explore the often misunderstood second law of thermodynamics and its implications in "The Laws Of Thermodynamics PART 3" click here.
First Law Of Thermodynamics
This law states that the total energy of an isolated system is constant. This energy is generally the sum of its kinetic, potential and chemical energies. In some systems, nuclear, magnetic potential or electrical energy can be considered as well. While the Zeroth Law defines the temperature of a system, this law defines the energy of the system. It also states that energy cannot be created nor destroyed. It can only transformed from one form to another. The idea is the same as the broader conservation of energy law in physics, except that here we focus on internal energy. Although not a new idea in the mid 1800's, this law took time to hammer out empirically as a mathematical statement, which can be written as ΔU = Q + W. Yet another way to express this law is dU = -PdV. "d" is used because this formula for change is written as a differential to describe a changing system. The change in internal energy of a system, in this case, is equal to the inverse of the pressure times the change in volume. There is no heat term. Sometimes, the internal energy of a system can change without any heat exchange. I mentioned a phase change as an example earlier. This can also happen when work is done at such a slow pace that heat dissipation in the form of friction approaches zero or when the system is adiabatic in nature (I will describe this kind of system in a bit). However it is written, the first law links internal energy, heat and work. More specifically, a change in internal energy can be achieved by countless different combinations of heat and/or work added or removed from a system.
Even though caloric was incorrectly thought to be a substance bearing heat the now-obsolete caloric theory worked seamlessly with the current laws of conservation of energy and matter. It worked because it was assumed that caloric could never be created or destroyed in an isolated system. Thought of as a weightless self-repelling gas, caloric could pass through the pores in a liquid or solid from hotter areas into cooler areas. The coffee in our example cools, this theory would argue, because caloric suffused from the coffee into the surrounding air, warming it. We now know that the coffee cools because heat is transferred from the coffee to the countertop and the air through the processes of thermal conduction and thermal (infrared on our case) radiation respectively.
According to the first law, the internal energy of a system will decrease if it loses heat through thermal conduction, radiation or convection, or if it does work. The energy lost is transferred into the system's surroundings. As mentioned earlier, the total energy of the larger isolated system does not change. The internal energy of the coffee decreases as it cools but the internal energy of the room in which the coffee sits doesn?t change IF it is perfectly sealed off from any energy or material transfer, which nothing ever is in reality. In contrast, any work or heat that goes into a system increases its energy. For example, if we wind up an old-fashioned mechanical watch, we are applying potential mechanical energy to its spring mechanism. It will begin to tick and it will eventually wind down to a stop, when the potential spring energy is depleted. The energy hasn't disappeared, however. It was transferred into work ? the movements of the gears inside the watch and of the arms around the face. Some energy was transferred into sound energy as the ticking you hear. Energy was also transformed into internal friction in the spring and of the tiny gears against one another. The watch is a closed system. No matter is transferred, but heat from friction, though an immeasurably small amount, escapes the watch system into the surroundings. If we wind it up again, we have restored the system's energy by doing work on the system. The source of energy in the watch (which comes from our muscles winding it up) is transferred into work and waste heat each time. No real system has an infinite source of energy. Even the orbits of planets, etc., which seem to be eternal, are part of a system that loses energy through solar wind, resistance to the particles that make up interstellar space, and as gravitational and thermal radiation from the solar system. There is no such thing as perpetual motion. Both the first and second (as we will see) laws of thermodynamics forbid it. Even recently observed time crystals do not violate thermodynamics. These crystals spontaneously change from moment to moment. We will explore this fascinating new state of matter later on.
Adiabatic Process
Thermodynamics got its start with the study of a water vapour under pressure and heat. The behaviour of fluids (gases, liquids and plasma) under pressure and heat is still one of the main focuses in this field of science. Think of air and gasoline confined in the piston of an engine cylinder. If the pressure is kept constant in this system (the piston can easily move), the work done when heat is applied to it is equivalent to an increase in volume. After ignition of the gas/air, the piston is explosively pushed up. Heat is applied and the system does work. This is called an isobaric system. If temperature is kept constant instead, it is called an isothermal system. This time, the piston is bathed in constant-temperature reservoir such as a large water bath that absorbs and dissipates heat. The piston is pushed down to compress the air. Work is done to the system and its potential energy increases. Again, pressure and volume are directly and inversely related to one another. A third kind of process, called an adiabatic process, occurs when no heat is added to or removed from the system. It differs from the isothermal process just described because in that case heat was removed from the system into the water bath. Because no heat leaves or enters the system, the change in energy depends only on work that is either done to the system or done by the system. Often, an adiabatic process is one that is too rapid for any heat exchange to take place. An example is a carbon dioxide fire extinguisher. The gas is under pressure so when it is triggered, it expands very quickly. The compressed gas is at room temperature in the canister but when it expands its becomes cold. The same amount of thermal energy is spread over a much larger volume because it is conserved. The expanded gas stays cold at first because it doesn't have a chance to exchange heat from the room-temperature air around it. No heat is added to or removed from the system. Work is done through the expansion of the carbon dioxide. It might seem confusing that the expansion of the gas in this case has nothing to do with heating it. That phenomenon is thermal expansion, in which a substance experiences outward pressure as it is heated, and it is caused by the increasing kinetic energy of its molecules exerting outward pressure.
Heat of Fusion and Heat of Vapourization
Another example of an adiabatic process is our typical winter weather here in Alberta. Warm Pacific air streaming from the southwestern coast of British Columbia hits the Rocky Mountains and is forced up the mountainside, a climb from sea level to about 4000 metres. For every 1000 metres the air climbs, its temperature drops almost 10°C. Air that starts out at +10°C in Vancouver (at sea level) can drop to a face-numbing -30°C at the top of Mount Robson. Then the cold air slides down the Alberta side of the mountains, and this time it warms up at the same rate. By the time it flows past the town of Cochrane in the foothills (about 1000 m altitude), for example, it has warmed up to a balmy (for us!) 0°C. The air cools in the mountain pass because it has moved into a lower-pressure zone, so it expands (to an average of about 0.60 atmosphere (atm) of pressure, compared to 1 atm at sea level). No heat actually leaves the air system. The expansion itself (work done by the system) lowers the temperature, because the same amount of thermal energy is spread over a larger volume. The thermal energy is conserved, obeying the first law of thermodynamics. As the air comes down the leeward side of the mountains, it is compressed once again, to about 0.87 atm at Cochrane's altitude, which warms it to about 0°C.
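Here is the same arithmetic as a toy Python sketch, using the approximate 10°C-per-1000 m dry rate quoted above:

    # Dry adiabatic cooling: roughly 10 C lost per 1000 m of climb.
    DRY_LAPSE_C_PER_KM = 10.0

    def temp_after_climb_C(start_temp_C, altitude_gain_m):
        return start_temp_C - DRY_LAPSE_C_PER_KM * (altitude_gain_m / 1000.0)

    summit = temp_after_climb_C(10.0, 4000)       # Vancouver +10 C -> about -30 C
    cochrane = temp_after_climb_C(summit, -3000)  # descend 3000 m -> about 0 C
    print(summit, cochrane)                       # -30.0  0.0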
This is not quite the famous Chinook, however. Often, the air from the Pacific is laden with moisture, especially in winter. As it reaches the Rocky Mountains and begins to climb, the moisture it carries cools along with the air itself, eventually reaching its freezing point and switching over to (usually heavy) snow. When a substance freezes into a solid, it releases stored potential energy as a transfer of heat. This released energy is called the latent heat, or enthalpy, of fusion (fusion is the technical word for melting/freezing). The solid phase of a substance has a lower internal energy than the liquid phase because its inter-molecular bonds are stronger and hold the molecules in a more ordered arrangement. In other words, the solid phase has lower potential energy. It is called latent heat because the heat lost from the substance as it freezes can't be measured as a change in the substance's temperature. A transfer of heat doesn't always coincide with a change in temperature. Latent heat is often referenced to a unit of mass; in that case it is called the specific heat of fusion. The release of potential energy (as thermal energy) into the mountain air means that the air cools at a slower rate as it expands. Instead of cooling at a rate of 10°C per 1000 m, for example, it cools at a rate of about 8°C/1000 m. At the mountain peak the temperature will be about -22°C (rather than -30°C), and because the air dropped its moisture as snow on the windward side, it warms at the full dry rate on the way down, arriving in Cochrane at about +8°C (rather than 0°C): a typical Chinook day.
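A quick Python sketch of that asymmetry, using the approximate rates above (about 8°C/1000 m while snow is forming on the climb, about 10°C/1000 m for the now-dry descent):

    # Chinook asymmetry: latent heat released by freezing moisture slows the
    # cooling on the climb; the dried-out air warms at the faster dry rate
    # on the descent.
    MOIST_LAPSE = 8.0    # C per 1000 m (approximate, while snow forms)
    DRY_LAPSE = 10.0     # C per 1000 m (dry air)

    summit = 10.0 - MOIST_LAPSE * 4.0      # climb 4000 m: -22.0 C
    cochrane = summit + DRY_LAPSE * 3.0    # descend 3000 m: +8.0 C
    print(summit, cochrane)                # -22.0  8.0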
What exactly is happening at the molecular level? When water freezes (a phase change), the molecules "lock" into place. They lose some freedom of movement compared to liquid water, which can flow. The solidified ice has less potential energy than liquid water, so the difference must be released into the surroundings as a transfer of heat. Water releases about 6 kJ of energy per mole (called the molar heat of fusion in this case) as it freezes into ice (or snow). For this reason, snowy days tend to be warmer than clear ones. The change from steam into liquid water represents an even larger release of energy. Water releases about 41 kJ/mol as it condenses from steam into liquid, an indication of how much more potential energy the gaseous state has over the liquid state. To vapourize, water needs to absorb 41 kJ/mol from its surroundings, one reason why a warm, dry, breezy day is perfect for putting laundry out on the line. The moist clothes have a constant supply of warm dry air to evaporate their moisture into as the water absorbs the energy it needs.
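For a sense of scale, this little Python sketch converts the roughly 6 kJ/mol figure into heat released per kilogram of freezing water; the 0.018 kg/mol molar mass of water is a standard value I've supplied, not a figure from this article:

    # Heat released when water freezes, from the molar heat of fusion.
    MOLAR_HEAT_OF_FUSION_KJ_PER_MOL = 6.01   # approximate
    MOLAR_MASS_OF_WATER_KG_PER_MOL = 0.018

    def freezing_heat_released_kJ(mass_kg):
        moles = mass_kg / MOLAR_MASS_OF_WATER_KG_PER_MOL
        return moles * MOLAR_HEAT_OF_FUSION_KJ_PER_MOL

    print(freezing_heat_released_kJ(1.0))   # about 334 kJ per kg of snow formed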
Each substance has specific values for its heat of fusion and its heat of vapourization, which depend on its unique inter-molecular bonding. In other words, each substance has its own specific latent heat. These values, in turn, contribute to the unique melting and boiling points of each substance. Specific latent heat is the amount of energy required to cause a phase change in a specific amount of a substance. If you would like to explore phase change more deeply, try my previous article "Plasma: The Fourth State of Matter." In it, I concentrate on the plasma state, but in doing so I also explore in depth what a phase change means.
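Expressed per unit of mass, the energy of a phase change is simply Q = m × L. A small sketch, using per-mass equivalents of the molar values above (approximate figures I've assumed: roughly 334 kJ/kg for fusion and 2260 kJ/kg for vapourization):

    # Specific latent heat: Q = m * L
    L_FUSION_KJ_PER_KG = 334.0           # ~6 kJ/mol / 0.018 kg/mol
    L_VAPOURIZATION_KJ_PER_KG = 2260.0   # ~41 kJ/mol / 0.018 kg/mol

    def phase_change_energy_kJ(mass_kg, latent_heat_kJ_per_kg):
        return mass_kg * latent_heat_kJ_per_kg

    # Drying 0.5 kg of water out of a load of laundry:
    print(phase_change_energy_kJ(0.5, L_VAPOURIZATION_KJ_PER_KG))   # 1130 kJ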
When a substance freezes or condenses, it releases heat into its surroundings; a process that releases heat is called an exothermic process. The reverse processes, melting and evaporating, absorb heat from the surroundings in order to proceed; they are called endothermic processes. Water evaporating off clothes on the line is an endothermic process.
This subject reminds us once again of the important but subtle distinction between temperature and internal energy. Temperature, explored earlier in this article, is a measure of the average kinetic energy of the molecules in a system. Internal energy, however, is the total energy of that system: it includes the potential energies of the molecules in addition to their kinetic energies. Water vapour (at just over 100°C), for example, has more internal energy than liquid water (at 20°C) for two reasons. First, its molecules have more kinetic energy, which means it has more thermal energy (this can be measured as temperature). Second, its molecules, having broken free of the inter-molecular bonds that hold a liquid together, have much more potential energy than the more orderly arranged molecules of the liquid state (this can't be measured directly).
A related term is specific heat capacity. Again using water as an example, a certain amount of thermal energy must be added to a mass of water in order to raise its temperature. Specifically, it takes 4200 J of energy to raise the temperature of 1 kg of water by 1°C. Every substance has its own specific heat capacity (SHC). The SHC of steel, by comparison, is 490 J/kg/°C. What does this mean? Water can hold a lot more heat than an equivalent mass of steel can, and it also requires a lot more energy to warm up. Water, in fact, has an unusually high SHC. That is why a hot water bottle makes such an excellent foot warmer in a cold bed. A steel disk of the same mass warmed to the same temperature would not only stub your toes; it would hold only about a tenth of the heat available to warm them up. See this Engineering Toolbox table to compare the SHC values of some other everyday substances.
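That comparison is easy to check with Q = m × c × ΔT and the two SHC values quoted above; the 40°C warm-up here is just an illustrative choice:

    # Heat stored per object: Q = m * c * dT
    SHC_J_PER_KG_C = {"water": 4200.0, "steel": 490.0}

    def heat_stored_J(substance, mass_kg, delta_T_C):
        return SHC_J_PER_KG_C[substance] * mass_kg * delta_T_C

    # 1 kg of each, warmed 40 C above the bed:
    print(heat_stored_J("water", 1.0, 40.0))   # 168000 J
    print(heat_stored_J("steel", 1.0, 40.0))   # 19600 J -- about a tenth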
Next we will explore the often misunderstood second law of thermodynamics and its implications in "The Laws of Thermodynamics PART 3" click here.