The Laws of Thermodynamics PART 2

The Second Law of Thermodynamics

The concepts of heat, internal energy and thermal energy we just explored are easily confused. We still casually, but incorrectly, talk about heat as if it were something that an object contains, or a property of that object. Internal energy and thermal energy are sometimes incorrectly used interchangeably even in textbooks (they are interchangeable only in a theoretical ideal gas, where there is no potential energy; the particle-particle interactions considered are perfectly elastic collisions between atoms treated as small hard spheres). Even so, the second law is, no doubt, the most misunderstood law in thermodynamics, even by experts. We could look at it as something to be feared, but we can also see this law as a source of high-quality mental fun. Borrowing from physicist Lidia del Rio once again, this is where the village witch lives, and it is where she does her best magic.

The second law introduces another term called entropy. This law states that the entropy of an isolated system can never decrease over time. Unlike energy, entropy is not conserved in an isolated system. Remember, an isolated system is one in which neither energy nor matter can ever enter or leave. One possible consequence: if the universe is an isolated system, it might eventually suffer an ultimate fate, predicted from the second law, called heat death. The universe's entropy will continue to increase until it reaches a state of maximum possible entropy, or thermal equilibrium, where heat exchange between molecules and atoms is no longer possible. All thermodynamic processes cease at that point. Entropy is an interesting and far-reaching concept. It does not always relate specifically to the internal energy of a system. Sometimes, entropy is (too broadly) defined as the level of disorder in a system. It can also be defined as the number of possible microstates within a system. Often, entropy is a measure of the amount of information in a system.

Is the universe actually hurtling toward heat death? Is it truly isolated? Considering that highly organized structures evolve in the universe over time, the notion of increasing entropy in the universe presents some controversy. Wikipedia's brief "controversies" entry on heat death offers arguments against even assigning entropy to the universe as a whole, and there are many more arguments for and against to be found online. One question I find intriguing is whether a gravitational field itself has entropy. Gravitational fields have a way of keeping objects out of thermal equilibrium with the space around them. This idea is explored in the controversial theory of entropic gravity. Here, gravity is treated as an emergent and entropic force: at the macroscopic scale it is homogeneous, but at the quantum scale it is subject to quantum disorder arising from the quantum entanglement of bits of space-time information. This disorder is expressed as a force we define as gravity. The theory agrees with both Newtonian gravity and general relativity, and it offers an explanation for "dark energy" as a kind of positive vacuum energy within space-time.

A common way of phrasing the second law is to say that a system always tends toward disorder over order, but this statement can be misinterpreted. In physics, entropy is closely related to the concept of multiplicity, which I think is the easiest way to think about entropy. Take as an example a system of 20 books. There are many more ways (a multiplicity of ways) to accomplish a jumbled pile (no rules) than to stack them up neatly (specific rules). We can say that the randomly jumbled mess of books has higher entropy than the neat stack does. What happens when we come along and straighten up the books? Does the book "system" go against entropy's natural tendency toward disorder? No: because we were involved in the process, we became part of the system. We did work on the system to decrease its entropy. Overall, the entropy of the "us plus the books" system increased.
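The book example can be made quantitative with Boltzmann's relation S = k·ln W, where W is the multiplicity (the number of microstates consistent with a macrostate). A toy sketch, with the assumption (for illustration only) that exactly one arrangement of the 20 books counts as "neat":

```python
import math

# Multiplicity illustration: 20 distinct books.
# "Neat stack" macrostate: assume exactly 1 arrangement obeys the rule
# (say, alphabetical order). "Jumbled pile" macrostate: any ordering
# counts, so its multiplicity is all 20! permutations.
n_books = 20
w_neat = 1                            # one way to satisfy the stacking rule
w_jumbled = math.factorial(n_books)   # ~2.4e18 valid "messes"

# Boltzmann's formula S = k * ln(W), with k_B in J/K.
k_B = 1.380649e-23
s_neat = k_B * math.log(w_neat)       # ln(1) = 0, so S = 0
s_jumbled = k_B * math.log(w_jumbled)

print(f"W(jumbled) = {w_jumbled:.3e}")
print(f"S(jumbled) - S(neat) = {s_jumbled - s_neat:.3e} J/K")
```

The absolute numbers are tiny (k_B is tiny), but the point is the ratio: the jumbled macrostate is realized by over a billion billion microstates, the neat one by a single microstate.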

The second law also states that systems tend toward a state of equilibrium. To explain this, let's take a different example. We have two volumes of gas separated by a movable wall inside a really good Thermos bottle, a bottle so well made we can consider the system isolated. The wall itself is also made of a perfect insulating material so no heat transfer can happen through it. We can think of this arrangement as two systems in mechanical contact with each other. To start with, one gas is hot and the other is cold. The hot gas will exert greater pressure and push the wall into the cold side. We will assume that the wall's movement is frictionless. The hot gas will expand and cool and the cold gas will compress and warm up (this is another example of an adiabatic process). The system does mechanical work (the expansion and compression of the gases as well as the movement of the wall). The wall will stop moving when both gases reach the same pressure. (Because the wall is a perfect insulator, the temperatures need not equalize; mechanical equilibrium only requires equal pressures.) The two systems will come to rest at a state of thermodynamic equilibrium, and in this state, no part of the two-gas system can do any more work on the other part. This system still has thermal energy but that energy can't be used to do any work (within the system). This state has the highest entropy it can have under these circumstances. In this case, it is not self-evident that the system has reached a state of maximum disorder, or even that it has achieved the greatest multiplicity of possible states. We do know, however, that it has evolved toward an end state of thermodynamic equilibrium, which is also an evolution toward the maximum achievable entropy.
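The movable-wall example can be sketched numerically. This is a minimal sketch, assuming a monatomic ideal gas (γ = 5/3), quasi-static adiabatic changes on each side (so P·V^γ stays constant), and illustrative starting pressures and volumes that are not from the text; the wall settles where the two pressures match, which we find by bisection:

```python
# Two ideal-gas volumes behind a frictionless, perfectly insulating
# movable wall. Each side obeys P * V**gamma = const as it expands or
# compresses; the wall stops where the pressures are equal.

gamma = 5.0 / 3.0          # monatomic ideal gas (an assumption)
V_total = 2.0              # arbitrary units; each side starts at 1.0
P_hot0, V_hot0 = 3.0, 1.0  # hot side: higher initial pressure
P_cold0, V_cold0 = 1.0, 1.0

def pressure_diff(V_hot):
    """P_hot - P_cold when the hot side occupies volume V_hot."""
    P_hot = P_hot0 * (V_hot0 / V_hot) ** gamma
    P_cold = P_cold0 * (V_cold0 / (V_total - V_hot)) ** gamma
    return P_hot - P_cold

# Bisection: the hot side expands, so equilibrium lies in (1.0, 2.0).
lo, hi = 1.0, 1.999
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if pressure_diff(mid) > 0:
        lo = mid           # hot gas still pushing: wall moves further
    else:
        hi = mid
V_eq = 0.5 * (lo + hi)
print(f"wall settles at V_hot = {V_eq:.4f}, pressures equalized")
```

With these made-up starting values the hot side ends up occupying about two thirds of the bottle, and you can confirm from the adiabatic relation that its final temperature is still higher than the cold side's, which is the point made above about the insulating wall.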

As we can see, there is more than one way to look at entropy, like looking at a magic trick from different angles. No one angle gives it completely away.

The system's movement toward equilibrium is an irreversible process. It can't return to its original state of disequilibrium unless work is done on it. We could set up a real system of two gases as close as we can to the one just described and we would discover why this move toward equilibrium is irreversible. It might seem that we could restart the experiment over and over forever. Each time we would do exactly the same amount of work to heat and cool our gases to their starting temperatures. But even the best insulating material is imperfect and only superfluids are truly frictionless. Some heat will be lost to the outside of the Thermos, and some energy will be lost as heat through friction as the wall moves. If the experiment was repeated over and over using the same Thermos and the same initial work input, the overall thermal energy available to the system would eventually decrease, and the entropy of the system (which now includes the room into which the heat dissipates and the mechanism we use to heat/cool the gases) would increase.

Another way of defining an irreversible process considers the role of chaos in systems. Any system of interacting molecules will include interactions that are chaotic in nature. We often think of chaos as what our kids do to their rooms, but it is also a scientific theory. Chaotic systems are very sensitive to initial conditions, so even very tiny deviations in conditions at the beginning of a process can result in significant differences in how that system progresses over time. A hurricane is a chaotic system, and that's why its strength and path are impossible to predict with certainty even a few days out. Almost every real process contains one or more chaotic elements. If we try to reverse a process back to its initial conditions, we cannot rely on a series of predictable step-by-step transformations to arrive at exactly the same starting state. The specific process is irreversible and the outcome is not repeatable.

Is there any truly reversible process in thermodynamics? Even an atomic clock, which relies on extremely stable microwave cavity oscillations, shows minute frequency drift over time. In these clocks, laser-cooled atoms "tick" back and forth between an excited state and a ground state. The energy difference between the states is precisely defined because it is quantum in nature. Even so, the NIST-F1 cesium clock would gain or lose a second over roughly 300 million years. Rather than heat loss per se, this system loses energy because the field transitions dissipate energy (they do work at the quantum scale just to keep going). The clock's entropy increases, albeit very slowly.
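A quick back-of-envelope calculation shows what "a second every 300 million years" means as a fractional frequency uncertainty (the 300-million-year figure is the one quoted above):

```python
# Convert "one second of error over 300 million years" into a
# dimensionless fractional uncertainty.
seconds_per_year = 365.25 * 24 * 3600        # ~3.16e7 s
elapsed = 300e6 * seconds_per_year           # ~9.5e15 s total
fractional_uncertainty = 1.0 / elapsed       # error per second of running

print(f"fractional uncertainty ≈ {fractional_uncertainty:.1e}")
```

The answer is on the order of 10⁻¹⁶: an extraordinarily small departure from perfect periodicity, but not zero, which is the point about reversibility.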

A process that doesn't generate entropy is a reversible process. The second law of thermodynamics (that entropy tends to increase in systems) is a consequence of the irreversibility of processes. Maxwell's demon was once held up as a theoretical system in which entropy could hold steady during a thermodynamic process. But under close scrutiny, even this famous thought experiment, devised by James Clerk Maxwell, is not a reversible process. The idea itself, though, is intriguing: a microscopic demon guards the gate between two halves of a room. He lets only slow molecules into one half and fast molecules into the other half. Eventually you expect one half of the room to be warmer than the other half. This reduces the randomness of the molecular arrangement of gases and therefore reduces the entropy of the room (system). Looked at another way, it takes a system in equilibrium out of equilibrium. The argument seems rock-solid until you realize that the entropy generated by the demon itself (all that measuring and sorting work it is doing) increases the system's overall entropy more than the sorting decreases it. In nature, many systems appear to have decreasing entropy. In all living systems, molecules are ordered into, and maintained as, intricate arrangements. Highly ordered galaxies appear to form from disordered clouds of gas. Disordered water vapour molecules flash-freeze into beautiful crystal patterns on a winter window. The key to all of these systems, living and non-living, is that they are open to some extent to their surroundings, like any real system is. In each case, the entropy of the surroundings increases by an even greater amount, leading to a net entropy increase overall.
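The demon's bookkeeping cost can be made quantitative. A common modern framing (Landauer's principle, which the text above does not name explicitly) holds that erasing one bit of the demon's memory must dissipate at least k_B·T·ln 2 of heat into the surroundings. A minimal sketch, at an assumed room temperature:

```python
import math

# Landauer's bound: minimum heat dissipated per erased bit of memory.
# The demon must keep (and eventually erase) a record of every molecule
# it sorts, and that erasure carries an unavoidable entropy cost.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K

min_heat_per_bit = k_B * T * math.log(2)   # joules per bit

print(f"minimum dissipation per erased bit at {T:.0f} K: "
      f"{min_heat_per_bit:.2e} J")
```

About 3 × 10⁻²¹ joules per bit is a tiny amount of heat, but summed over the enormous number of molecules the demon must track, it more than pays back the entropy the sorting removes.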

Scientists can only approximate a perfectly reversible process. An example is a process that is reversed by introducing an extremely tiny change to some property of a system that starts at equilibrium with its surroundings. If the tweak is small enough, the system remains so close to equilibrium that any deviation from it cannot be accurately measured, yet the process itself does reverse.

In an ideal system designed to do work, such as a theoretical engine with 100% efficiency, none of the work performed would be lost to heat transfer. The efficiency of a real engine, however, is always less than 100%, significantly less. If it is a piston engine or a steam engine, its efficiency can be analyzed quite easily by plotting a pressure-volume curve as it goes through a compression/expansion cycle. The area bound by the curve is the work done. The more efficient the engine is, the more closely that curve will follow an ideal equilibrium curve for that engine. An efficient engine never deviates far from its equilibrium state. A Carnot heat engine, mentioned earlier, runs on a theoretically ideal thermodynamic cycle in which no energy is lost as "waste" heat transfer and there is no net increase in entropy. It represents the maximum efficiency any engine operating between two given temperatures can achieve. An irreversible (real) process always strays away from that ideal equilibrium curve.

An example of a process that strays far from equilibrium is the carbon dioxide fire extinguisher once again. When you trigger the fire extinguisher, the carbon dioxide gas sprays out of the canister so fast that the air/carbon dioxide system has no time to reach equilibrium at first. The carbon dioxide cools adiabatically. The original amount of thermal energy is now spread over a large volume of gas cloud. Almost all of the potential energy of the pressurized system is lost through the work of adiabatic expansion. Even more work would be required to compress that gas back into the canister, much more than the work originally done, because this process is now far from equilibrium. It is definitely not reversible and there is a significant increase in the system's entropy. However, entropy is also a statistical concept. Even in this case there is a chance, an infinitesimally small chance, that all the carbon dioxide molecules could spontaneously re-arrange themselves back into the canister (against their pressure gradient) through purely random molecular movements, reducing the system's entropy, and reminding us that the second law itself is statistical in nature. In a macroscopic system full of billions of atoms, it is vanishingly unlikely, but in a system of just a few atoms, the random chance of them all doing something together goes up. A triggered extinguisher, like essentially all macroscopic thermodynamic processes, proceeds in one direction only, and that is the direction of increasing entropy. This direction, in turn, implies a forward arrow of time.
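The statistical point can be made concrete with simple arithmetic. A sketch with an illustrative per-molecule probability (the 1% figure below is assumed for illustration, not taken from the text):

```python
# Chance that N independent gas molecules all happen, by random motion,
# to occupy a small target region at once. Suppose the canister region
# holds 1% of the accessible volume, so each molecule is there with
# probability 0.01 at any instant.
p_single = 0.01

for n in (3, 10, 100):
    print(f"N = {n:>3}: P(all in canister) = {p_single ** n:.1e}")

# For a few atoms the event is rare but conceivable; for a macroscopic
# number (~1e23 molecules) the exponent is so large that the event
# never happens over the lifetime of the universe.
```

This is exactly why the second law is statistical rather than absolute: nothing forbids the reversal, it is just overwhelmingly improbable at macroscopic scales.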

Thermodynamic Arrow of Time

The universe follows the second law of thermodynamics, where all real processes are irreversible, with the consequence that time must flow in one direction only. The increasing entropy of an evolving system gives us an impression of time and means that we can distinguish past events from future ones. The exception is a system that is in perfect equilibrium. In this case, the entropy remains the same and it is impossible to distinguish a past state of that system from a present state. The arrow of time would have seemed self-evident to Rudolf Clausius, who coined the term "entropy" in the mid 1800's, defining it as heat that incrementally dissipated from heat engines. The fact that time moves forward then seemed obvious, but that confidence wasn't to last. Within a few decades, special relativity had appeared, and the idea that space and time are sewn together into a four-dimensional fabric was quickly becoming accepted science. Not long after, quantum field theory was formulated to allow for charge, parity and time reversal (CPT symmetry). General relativity and special relativity (which preceded general relativity) both treat time as something that is malleable and part of the system rather than outside of it. These theoretical developments brought our assumptions about time, and the second law of thermodynamics itself, into question.

Time, as we experience it, is a broken symmetry; there is no mirror in which time flows backward. Broken eggs can't reassemble. Aging doesn't reverse itself except in The Curious Case of Benjamin Button. Why this is so is actually a deep mystery in physics. Charge, parity and time reversal (CPT symmetry) is a fundamental symmetry of physical laws (except thermodynamics). The implication of this symmetry is that there is no theoretical reason why a "mirror" universe, one with an arbitrary plane of inversion (which you could think of as a vast three-dimensional mirror), with reversed momenta (complete with time running backward) and populated with antimatter (carrying opposite charges), couldn't evolve under our very same physical laws. It might even begin from a Big-Bang-like state and shrink in volume rather than expand as ours does. Entropy in such a universe, we might assume, would tend to decrease rather than increase. However, even this assumption might be too simple. A number of physicists speculate that even in a universe that oscillates (expanding and contracting over and over) entropy might continually increase.

We might think that CPT symmetry, a mathematical theorem particularly useful in quantum physics, is a violation of the laws of thermodynamics. It should dissolve the one-way arrow of time and the rule that entropy never decreases in an isolated system. For example, we know that antimatter particles exist in our universe and we might assume that they travel backward through time. This assumption is wrong, but it is not so easy to understand why. We can take the positron, the electron's antimatter twin particle, as an example. According to a theory called quantum electrodynamics (QED), an antimatter particle, mathematically speaking, travels backwards in time. If you look at a Feynman diagram of particle interaction, backward time travel by particles is commonly depicted. Below, we can see that an e⁺ (positron) and an antiquark (the q with the line over it) both travel backward in time in this depiction of electron/positron (e⁻/e⁺) annihilation (see the black arrows angled toward the left).

(Feynman diagram: Joel Holdsworth; Wikipedia)

Even though all of our physical laws (except thermodynamics) display a fundamental CPT symmetry, it doesn't mean that all processes obey it. Three of the four fundamental forces of nature (the strong force, the electromagnetic force and gravitation, which, as general relativity, is not formulated in terms of quantum mechanics) obey CPT symmetry, but the weak fundamental force occasionally violates both parity and charge symmetry at the quantum level. Recently, researchers discovered that this quantum process sometimes violates time symmetry as well. Oscillations between different kinds of B-meson particles during the weak interaction happen in both directions of time, but at slightly different rates; this disparity doesn't have anything to do with the thermodynamic reason for time's broken symmetry, however. An article from SLAC National Accelerator Laboratory at Stanford University offers an excellent technical comparison between broken T-symmetry at the macroscopic and quantum scales. For a thorough discussion of what this means philosophically, try this 2017 article posted by the Journal of General Philosophy of Science. I found it a good read.

As we've seen, at the quantum scale, processes are time-reversible (with the weak force exception). Quantum processes, according to the Copenhagen interpretation, are governed by the Schrödinger equation, which has T-symmetry built into it. Wave function collapse, however, does not. Collapse is a mathematical framework that links indeterminate ("fuzzy") quantum particle behaviours to the determinate macroscopic behaviours of substances. It therefore finds itself at the epicenter of how time symmetry in the quantum world breaks down at the larger scale we experience.

Mathematically, a quantum system is laid out as a superposition of several equally possible states (technically called eigenstates (https://en.wikipedia.org/wiki/Introduction_to_eigenstates)), which reduce to a single eigenstate when the system is observed or measured. How this happens, and even if this happens, physically, is up for debate. It is simply a mathematical description, which means that the process itself is a black box, but it does provide a link between quantum indeterminacy and determinate macro-processes, thermodynamics being one of them. Somehow, inside the "black box," time switches from reversible to irreversible. How do two well-established and experimentally proven but seemingly incompatible descriptions (time-asymmetric thermodynamics versus time-symmetric quantum mechanics) co-exist? This problem is encapsulated in Loschmidt's Paradox: time-symmetric microscopic laws should not, on their own, produce time-asymmetric macroscopic behaviour. No one knows the solution to Loschmidt's Paradox, although theorists have been working on it for decades.

What is known is that the second law of thermodynamics (and its arrow of time) is a statistical principle based (somehow) on the behaviors of countless quantum time-symmetrical particles. If we keep the statistical nature of these descriptions in the front of our minds and go back to the simple example of a stack of books, we could add that, yes, there are countless more ways those books can fall into a messy pile than there are ways to stack them up neatly, BUT there is no rule against the books spontaneously falling into a nice neat stack either. It's just extremely unlikely. If this happened, it would not mean that the second law of thermodynamics just broke. It would mean that the underlying statistical nature of the arrow of time was revealing itself. A puzzle appears, however, when we think again about the increasing entropy of the universe. It implies that the universe initially had very low entropy. There were no distinct particles or forces at the very beginning. Through a series of symmetry-breaking processes, four distinct fundamental forces and all the myriad particles of matter emerged as the universe expanded and cooled. The question is: if the universe started with very low entropy, wasn't that starting state itself extremely unlikely?

We might wonder if the arrow of time is an aspect of dynamics that emerges at the macroscopic scale. Emergence might not be the best description because at the subatomic particle level, time seems to exist but it runs in either direction. Time's one-way arrow only emerges within a large collection of molecules. Time's arrow might be better described as a symmetry that breaks at the macroscopic scale.

All these questions, I hope, point out that our understanding of time itself is a problem when we think about the second law of thermodynamics and entropy. Time is not a unified concept in physics. Depending on the theoretical framework you choose (quantum mechanics, classical dynamics or even general relativity) time can be reversible. Or it can be a one-way arrow. Or it can be an illusion altogether, because general relativity treats time as one dimension in a four-dimensional stretchy fabric where all past and future times are equally present and our present moment is in no way special.

Black Hole Entropy: Testing Thermodynamics

A black hole tests all of our scientific laws and it is especially interesting when viewed as a thermodynamic object, which it surely is. A black hole is a region of space-time bent so severely by gravity that nothing, not even electromagnetic radiation such as light, can escape. Once energy or matter crosses the black hole's event horizon, it is lost from our observations. Black holes can be observed when in-falling matter is heated by internal friction, creating an accretion disk external to the event horizon, which can be extremely bright. A black hole also gradually emits radiation, called Hawking radiation, and it has a temperature, which means that these objects should be subject to the same laws of thermodynamics as any other object.

Entropy, as we've explored already, can be understood in several different ways. Some physicists might argue that entropy is best understood as a statement about information rather than about order or disorder. Generally, the various descriptions of entropy agree with each other, but under specific circumstances, individual weaknesses of each approach become evident. Theorists appear to be best equipped to tackle the question of black hole entropy by interpreting entropy as information. More information encoded in a system means it has higher entropy. As an isolated system's entropy can never decrease, the information encoded by an isolated system can never decrease. This is a slightly different take on the idea of entropy as a multiplicity of microstates.
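The "entropy as information" view can be illustrated with Shannon entropy. This is a toy example, not a black hole calculation: a probability distribution over microstates carries H = -Σ p·log₂(p) bits, with a uniform ("maximally mixed") distribution maximizing H and a single certain state carrying none:

```python
import math

def shannon_entropy_bits(probs):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

certain = [1.0, 0.0, 0.0, 0.0]   # one known microstate: zero bits
uniform = [0.25] * 4             # four equally likely microstates: 2 bits

print(f"certain state:  {shannon_entropy_bits(certain):.3f} bits")
print(f"uniform spread: {shannon_entropy_bits(uniform):.3f} bits")
```

More possible microstates, more evenly weighted, means more information is needed to pin the system down, and that is exactly what higher entropy expresses in this reading.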

Black holes are wonderfully mysterious objects. Inside a black hole, matter becomes inaccessible to our observations. But the momentum and charge of that matter are conserved, and its mass remains to bend space-time around it. Most black holes, especially those formed by massive collapsing spinning stars, are expected to have not only charge but also significant angular momentum. According to the second law of thermodynamics, we expect that matter disappearing into a black hole should increase the black hole's entropy. This implies that a black hole has non-zero entropy. A number of theoretical physicists are currently working on how entropy works with black holes and how to measure it. The methodologies they have come up with so far are surprising. We immediately realize how unusual a black hole is when we learn that instead of using volume to calculate the entropy of what we assume is a spinning spherical object, we must use the area bound by its event horizon instead. In 1973, Jacob Bekenstein calculated black hole entropy as proportional to the area of its event horizon divided by the Planck area (the area by which a black hole's surface increases when it swallows one bit of information). In 1974, Stephen Hawking backed up this work and additionally showed that black holes emit thermal radiation, which means they have a specific temperature. So how does the second law of thermodynamics enter? It has actually been rewritten for black holes to say that the total area of the event horizons of any two colliding black holes never decreases. This is part of a closely analogous set of laws of black hole mechanics.
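The Bekenstein-Hawking relation can be checked numerically. A sketch for a one-solar-mass black hole, using standard SI constants, computing the entropy S = k_B·A/(4·l_p²) in units of k_B along with the Hawking temperature T = ħc³/(8πGMk_B):

```python
import math

# Bekenstein-Hawking entropy and Hawking temperature for a black hole
# of one solar mass. SI constants throughout.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.0546e-34    # reduced Planck constant, J s
k_B = 1.380649e-23   # Boltzmann constant, J/K
M = 1.989e30         # one solar mass, kg

r_s = 2 * G * M / c**2              # Schwarzschild radius, ~3 km
A = 4 * math.pi * r_s**2            # event-horizon area, m^2
l_p2 = hbar * G / c**3              # Planck area, ~2.6e-70 m^2

S_in_kB = A / (4 * l_p2)            # entropy in units of k_B
T = hbar * c**3 / (8 * math.pi * G * M * k_B)

print(f"r_s ≈ {r_s:.0f} m, S ≈ {S_in_kB:.2e} k_B, T ≈ {T:.1e} K")
```

The numbers are striking: the entropy comes out around 10⁷⁷ k_B, vastly more than the ordinary matter that formed the star carried, and the temperature is only tens of nanokelvin, far colder than the cosmic microwave background.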

How do we get an intuitive picture of how black hole entropy works? We can start by looking at a black hole's entropy statistically. Each of the countless billions of particles that have fallen down the gravity well of a black hole will be in a specific thermodynamic state. Each particle's state contributes to what should be an enormous number of possible microstate arrangements. Microstates in a macroscopic system are all the different ways that the system can achieve its particular macrostate (which is defined by its density, pressure, volume, temperature, etc.).

By treating these microstates statistically, we can come up with an approximation of the black hole's overall entropy. This would be straightforward if black holes didn't present us with a unique and bedeviling twist: the no-hair theorem. The no-hair theorem argues that this approximation cannot be done. Aptly named, it tells us that a black hole can be described by only three classical parameters: its mass, its charge and its angular momentum. All the particles that fell into a black hole contribute no unique character to it. A black hole that swallowed up a cold hydrogen gas cloud looks the same as one that swallowed an iron-dense planet. Instead, the no-hair theorem treats a black hole as a single enormous "homogeneous" particle. A possible analogy might come from another housekeeping chore. The vacuum cleaner sucks up all the stuff off the floor. A CSI investigator could look in the canister afterward and determine, through skin flakes, hairs and other debris, who lived in the room, and perhaps even what they were doing over the week. In the case of black holes, the "canister contents" appear to become generic once they cross the event horizon. You can't tell what particular atoms fell in, when they fell in, or what their velocities, etc., were. The no-hair theorem is a mathematical theorem: it is a solution to the Einstein-Maxwell equations of electromagnetism in curved space-time. It turns all different forms of matter/energy into a generic electromagnetic stress-energy tensor, which bends space-time. What this theorem implies is that all of the information encoded by the special quantum characteristics of each matter particle, such as baryon number, lepton number, colour charge, and even whether it is matter or antimatter, is annihilated when it falls into a black hole. More specifically, it implies that the black hole as a system will have much less entropy than the ordinary matter originally had.
Matter and energy being lost in a black hole, if the black hole is treated as an isolated system, appears to represent a process that decreases entropy, and that is a violation of the second law.

We can't just toss out the no-hair theorem as a mathematical curiosity that might well prove invalid in nature. Its validity is backed up by recent observations of black holes by LIGO, a gravitational wave observatory. To add to this entropy problem, we could draw the additional, and no less astounding, conclusion that the no-hair theorem suggests that a black hole is actually just a single microstate (just one "particle"), which means it should have not just low entropy but zero entropy, since there is only one way to assemble a single microstate.

According to quantum mechanics, the quantum information encoded in a particle (its spin, angular momentum, energy, etc.) must be conserved during any process, which is a variation on the first law of thermodynamics, except that the focus here is on information rather than energy. Where does that information go inside a black hole? Hawking radiation, which leaves the black hole system, is a natural place to look, but Hawking's theory suggests that this radiation is "mixed," which means that it is generic; it doesn't contain any of the specific particle information that went into it. To make things even more interesting, there is some serious debate about whether Hawking radiation, as described using quantum theory, actually is a form of thermal radiation. The conclusion that a black hole has a temperature comes not from direct observation (which could be understood using classical statistics). It is based, instead, on quantum mechanics. Thermal radiation, emitted from a black body (a physical object that absorbs all incoming electromagnetic radiation), contains information about the body that emitted it. Hawking radiation contains no such information. It is based on the law of conservation of energy only. In space-time, virtual particles pop in and out of existence all the time everywhere. They exist because space-time has a non-zero vacuum energy, which is a consequence of the uncertainty principle. Close to the very high-energy space-time environment around a black hole, some theorists suspect that virtual particles, as particle-antiparticle pairs, have enough energy to become real particle pairs, with mass. If one of the pair falls in, it must have negative energy (according to an outside observer) in order to preserve the conservation of energy law. This also means it has negative mass (there is a mass-energy equivalence), which means that the black hole itself loses mass and appears to emit a particle (again, as observed by an outside observer). 
Hawking radiation has not been observed yet, but by using an analogue acoustic black hole made in a lab, scientists have found strong evidence that suggests Hawking radiation exists around real black holes.

Good evidence for the existence of black hole radiation, whether it is thermal or not, might solve the issue of conservation of energy, but it doesn't appear to conserve information. Information appears to be lost when one of a pair of entangled particles falls into a black hole. Entangled particles are thought to be very common in space-time and they can also be physically very far apart in space, across the universe in fact. Distance is irrelevant to quantum entanglement. Because of quantum mechanics, an entangled pair or group of particles can be described by a single shared quantum state (spin, angular momentum, energy, etc.) just as a single particle is. From a quantum mechanics point of view, the pair or group behaves as a single particle (some theorists think that matter inside a black hole might be quantum-entangled). If one of a pair of quantum entangled particles falls into a black hole and loses its quantum signature, does its entangled partner pop out of existence somewhere else in the universe at the same time? It seems that this process would continuously decrease the entropy of the universe as a whole. The dilemma this presents is called the black hole information paradox. One could argue that because time slows down to a stop in the extreme gravity well at the event horizon of a black hole, nothing really ever goes in. Its quantum information remains somehow encoded, smeared across the event horizon, scrambled up and out of reach. Newer models of quantum gravity suggest that the particle left behind remains entangled with whatever form of matter/energy its partner is now in inside the black hole, thus dissolving the paradox. Another exciting argument emerging among physicists actually uses quantum entanglement itself to solve the black hole paradox: it uses wormholes (the Einstein-Rosen bridge) to link the paradox phenomenon with two previously unrelated theories.
It is laid out in this Quantum magazine article. Entangled particles inside and outside a black hole could remain connected through the continuous space-time that would exist inside a wormhole, solving the information paradox. No one's sure yet if that idea holds up theoretically.

The holographic principle, which is gaining momentum in theoretical physics, is yet another way to explore the information paradox. It suggests that a black hole encodes all of the particle information just outside the event horizon as statistical degrees of freedom. Degrees of freedom are a measure of information, closely related to the idea of multiple microstates. How this information is stored, and what form it is in, is impossible to visualize, however, because a black hole is treated theoretically (in almost all theories) as a four-dimensional (space-time) object. Although a black hole should appear as a sphere to an observer, it is not a three-dimensional sphere we can relate to. It is a singularity of mass. The event horizon, likewise, is not a physical two-dimensional barrier shell around a black hole. It is the last distance from which light can escape the gravitational well, measured as the Schwarzschild radius. For example, Earth has a Schwarzschild radius of about 9 millimetres, which means that if Earth's mass were compressed into a sphere of that radius, it would be so dense that it would collapse into a black hole singularity.
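The Schwarzschild radius follows directly from a mass and two physical constants, r_s = 2GM/c². A quick sketch (the constants and masses below are rounded approximations):

```python
import math

# Schwarzschild radius: r_s = 2 * G * M / c^2
# Approximate physical constants (SI units)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Radius to which a mass must be compressed to form a black hole (metres)."""
    return 2 * G * mass_kg / c**2

earth_mass = 5.972e24   # kg, approximate
sun_mass = 1.989e30     # kg, approximate

print(f"Earth: {schwarzschild_radius(earth_mass) * 1000:.1f} mm")
print(f"Sun:   {schwarzschild_radius(sun_mass) / 1000:.2f} km")
```

For Earth this works out to roughly 9 millimetres, and for the Sun roughly 3 kilometres.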

From our perspective, information is last observed at the event horizon of a black hole. This approach helps us understand why entropy is a measure of event horizon area, rather than volume. It also implies that a black hole, rather than being a zero-entropy object, is a maximum entropy object. Depending on how we look at it, generic information can be thought of as equivalent to maximally mixed information, an equilibrium state.
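That area scaling is captured by the Bekenstein-Hawking formula, S = k_B c³ A / (4Għ), where A is the area of the event horizon. A rough sketch with approximate constants, estimating the enormous entropy of a one-solar-mass black hole:

```python
import math

# Bekenstein-Hawking entropy: S = k_B * c^3 * A / (4 * G * hbar),
# where A is the event horizon area - entropy scales with area, not volume.
# Approximate physical constants (SI units)
G    = 6.674e-11    # m^3 kg^-1 s^-2
c    = 2.998e8      # m/s
hbar = 1.055e-34    # J s
k_B  = 1.381e-23    # J/K

def bh_entropy(mass_kg):
    r_s = 2 * G * mass_kg / c**2       # Schwarzschild radius
    area = 4 * math.pi * r_s**2        # horizon area
    return k_B * c**3 * area / (4 * G * hbar)

S_sun = bh_entropy(1.989e30)           # one solar mass, approximate
print(f"S ≈ {S_sun:.2e} J/K ≈ {S_sun / k_B:.1e} k_B")
```

The result, on the order of 10⁷⁷ in units of k_B, dwarfs the thermodynamic entropy of the Sun itself, which is why a black hole is considered a maximum entropy object.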

No matter how we look at the information paradox, information inside a black hole seemingly must get back out through black hole evaporation (as Hawking radiation). A black hole that doesn't feed on matter should gradually shrink and eventually disappear, meaning that somehow, the quantum information of all that matter must get back out. If the information is irretrievably lost from our universe, then black holes either do not obey thermodynamics or they represent some kind of door into some other entity, and the universe is not an isolated system after all. It could even mean that ordinary matter and energy as we know them are actually an illusion. They could be information encoded on a surface area, making the universe, by extension, a hologram of that data.
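That evaporation is extraordinarily slow for any stellar-mass black hole. Hawking's temperature formula, T = ħc³/(8πGMk_B), and the standard evaporation timescale, t ≈ 5120πG²M³/(ħc⁴), give a feel for the numbers (approximate constants; this sketch ignores accretion, including the cosmic microwave background, which today is far warmer than any stellar-mass black hole):

```python
import math

# Hawking temperature: T = hbar * c^3 / (8 * pi * G * M * k_B)
# Evaporation timescale: t ≈ 5120 * pi * G^2 * M^3 / (hbar * c^4)
# Approximate physical constants (SI units)
G, c = 6.674e-11, 2.998e8
hbar, k_B = 1.055e-34, 1.381e-23
M_sun = 1.989e30  # kg, approximate

def hawking_temperature(mass_kg):
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

def evaporation_time(mass_kg):
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

T = hawking_temperature(M_sun)
t_years = evaporation_time(M_sun) / 3.156e7  # seconds per year
print(f"T ≈ {T:.1e} K, evaporation ≈ {t_years:.1e} years")
```

A one-solar-mass black hole radiates at a temperature of only about 10⁻⁷ K and would take on the order of 10⁶⁷ years to evaporate, vastly longer than the current age of the universe.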

Black hole entropy has also been calculated based on a supersymmetric black hole in string theory. This technical 2007 review by T. Mohaupt describes the reasoning and process. This solution ties in with the holographic principle (which is also based on string theory), and it closely matches the result of Jacob Bekenstein (who based black hole entropy on the area of the event horizon). The fact that the two calculations match up so closely gives a number of theorists hope that string theory could be a route to ultimately solving the information paradox.

Based on all of these and other developments, physicists Brandon Carter, Stephen Hawking and James Bardeen formulated a series of black hole mechanics laws, which are analogous to the laws of thermodynamics. While thermodynamics is a classical science, these laws attempt to integrate general relativity, quantum mechanics and thermodynamics together. As I hinted at earlier, these mechanical laws offer up some seemingly odd conclusions. Uniform surface gravity across the event horizon is analogous to uniform temperature at thermal equilibrium (the zeroth law of thermodynamics). The ever-increasing surface area of the event horizon is analogous to ever-increasing entropy (the second law of thermodynamics). New theories about black hole entropy offer some serious food for thought because we approach entropy not just through the lens of classical mechanics but through the lenses of general relativity and quantum mechanics as well. Being a universal rule of nature, shouldn't entropy find its expression there too?

One of the most interesting approaches to black hole entropy comes from a specialized mathematical framework. It is laid out by physics theorist Sean Carroll in one of his blog entries from 2009 (I wholeheartedly recommend his blog). As a black hole's rotation and charge increase, its entropy approaches zero. This statement is analogous to the third law of thermodynamics, which will be explored next. According to this law, no system can reach exactly zero entropy, so there should be a limit to the spin and charge of a black hole. The third black hole law of mechanics might represent the deepest puzzle yet for theorists.

A black hole at the spin/charge limit is called an extremal black hole. It is a black hole that has the smallest possible mass for a given charge and angular momentum - the lightest black hole that could theoretically exist with that spin and charge. If these objects existed, they would be microscopic, but so far they are only theoretical. In theory, an extremal black hole could be created in which all of its energy comes from the charge, or electrical field, and none from matter. Such a black hole is a product of Euclidean quantum gravity, a theory of space-time in which time is treated exactly like another spatial dimension. The entropy of such a black hole can be calculated using string theory, and it comes out to be exactly zero, which is forbidden by the third law of thermodynamics. A first reaction could be, well, that's the end of that thought-stream, and Dr. Carroll suggests that this is exactly what the authors of the original paper thought. But then the idea was revisited because it seemed to hint at something very interesting.

In an extremal black hole, entropy discontinuously drops as charge is increased, and eventually it hits a limit where the mathematical solution the researchers used splits into two different space-times! Part of this mystery includes the fact that the mathematics of all charged black holes gives them not one but two event horizons. The outer event horizon is the one you expect - a point of no return. The inner event horizon is located between that point of no return and the singularity itself. What's unique is that an object between the two horizons isn't forced to crash into the singularity. Between the horizons, moving forward in time means moving inward toward the inner event horizon. That part is inevitable. However, outside the black hole and inside the inner event horizon, time moving forward looks normal, and an object in either of those regions isn't forced anywhere.

As you increase the charge and keep the mass the same, the two horizons come together. You are moving toward an extremal black hole. You would expect the region of space-time between the horizons to eventually disappear, but it doesn't. It reaches a finite size and stays there until you reach an exact extremal black hole, at which point it suddenly and discontinuously disappears. The entropy, when calculated, decreases smoothly as the charge increases until exactly when an extremal black hole is reached, and at that point it drops discontinuously to zero. This raises the question: is there a theoretical problem here, or does the entropy suddenly escape into some new and different space-time (as space-time itself appears to split as well)? Does it offer a clue about where matter (and all that missing entropy) goes inside a real black hole? This new space-time is a mathematical solution called two-dimensional anti-de Sitter space-time attached to a two-dimensional sphere. Dr. Carroll wonderfully refers to this hidden space-time as "Whoville" in his fascinating post on this mysterious theoretical journey. You can download a pdf of the scientific paper he and the original authors wrote on it here.
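The two merging horizons can be sketched with the standard solution for a charged, non-rotating (Reissner-Nordström) black hole. In geometrized units (G = c = 1), the horizons sit at r± = M ± √(M² − Q²); as the charge Q approaches the mass M the horizons converge, and at Q = M (the extremal limit) they coincide:

```python
import math

def horizons(M, Q):
    """Outer and inner horizon radii of a charged black hole.

    Geometrized units (G = c = 1): r_± = M ± sqrt(M^2 - Q^2).
    Returns None when Q > M (no horizons: a naked singularity).
    """
    if Q > M:
        return None
    d = math.sqrt(M**2 - Q**2)
    return M + d, M - d   # (outer, inner)

M = 1.0
for Q in (0.0, 0.5, 0.9, 0.99, 1.0):
    r_plus, r_minus = horizons(M, Q)
    print(f"Q = {Q:4.2f}:  r+ = {r_plus:.3f},  r- = {r_minus:.3f}")
```

At Q = 0 this reduces to the ordinary Schwarzschild case (r+ = 2M, with no inner horizon), and at Q = M both horizons sit at r = M, the extremal black hole.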

Although we might not know exactly what kind of space-time black hole particles find themselves in, if that is indeed where they end up, black hole physics seems to be a great tool for probing the limits of the second law of thermodynamics. Black holes go an additional step by demanding that all of our disparate laws of nature come together to describe them. Dr. Carroll puts it well: black holes are fertile "thought-experiment laboratories" to test our understanding of thermodynamics, especially the second law.

For The Laws of Thermodynamics PART 4 click here
