Thermodynamics is the branch of physics that explores how heat and temperature relate to the energy of a system and its ability to do work. Almost every field of science, especially those in chemistry and engineering, relies heavily on an understanding of the principles of thermodynamics. Words such as internal energy, heat, temperature and entropy signal that a thermodynamic process is being described. Every process in the universe is a thermodynamic process.
A Brief History
Even the ancient Egyptians were curious about heat. In their time, free from any scientific encumbrances, they thought of heat as one of four essential or basic elements of matter, fire, along with water, earth and air. Much later, in the 17th and 18th centuries, investigators began to wonder if heat was a kind of physical substance, something that flows such as phlogiston or caloric. They imagined that heat physically flowed from one substance or object into another. In the late 1700s and 1800s, scientists such as Joseph Black, Antoine Lavoisier and James Prescott Joule applied the rigour of the scientific method to study heat quantitatively. The study of thermodynamics as a science took off when engineers used these new concepts about heat transfer to maximize the efficiency of the first steam engines.
The first steam engine, the Thomas Savery steam pump, patented in 1698, was a highly inefficient but inexpensive coal-chugging device designed to pump water out of coal mines. It wasn't elegant, but in short order people realized that they could use the power of steam to drive machinery and transport goods and people. The first steam engine as we know it, with a piston that could generate and transfer power to a machine, was invented in 1712 by Thomas Newcomen. James Watt improved on Newcomen's design, introducing a separate condenser that greatly increased its efficiency, and in 1781 he transformed the steam engine into a rotary-motion engine that could drive factory machinery. Besides leading to the first railway locomotives, the improved stationary piston steam engine revolutionized manufacturing in general. It was a crucial development that ushered in the Industrial Revolution across Europe and North America.
About seventy years after Watt's revolutionary work, in 1854, William Thomson (Lord Kelvin) came up with the first precise definition of this new field of steam-engine thermodynamics, as well as the word itself: "Thermo-dynamics is the subject of the relation of heat to forces acting between contiguous parts of bodies, and the relation of heat to electrical agency."
If we look back on all the frenetic activity of the 18th and 19th centuries that contributed to the numerous branches of thermodynamics we have today, the birth of thermodynamics as a modern science is actually best attributed to a less well-known man, Sadi Carnot. Thirty years prior to Lord Kelvin's work, he published "Reflections on the Motive Power of Fire," an examination of heat, power and engine efficiency. In this treatise, he introduced the concept of work along with its connection to thermal energy (which is thermodynamics in a nutshell). He abstracted the steam engine so that, as an engineer and physicist, he could distill it into an idealized heat engine. By doing so, he developed the first model thermodynamic system, a concept that is still widely used today, especially among engineers who need to understand the often-nuanced differences between a real mechanical engine and the simpler idealized theoretical model. Most physicists today consider Sadi Carnot to be the father of thermodynamics. Yet, sadly, almost no one paid attention to his book during his lifetime, which was cut short at the age of 36 by cholera. Because cholera is so contagious, almost all of his belongings and writings were buried along with him when he died. Miraculously, the book survived, and Lord Kelvin, with much effort, was able to get a copy of it to use as part of the basis for his own famous work.
In A Nutshell
So what exactly is thermodynamics? You could think of it as the puzzle that connects four concepts: heat, temperature, energy and work. The box the puzzle comes in is the thermodynamic system. There are four basic laws that tell us how the puzzle fits together: the zeroth law, the first law, the second law and the third law.
Physicist Lidia del Rio and co-authors recently wrote in the Journal of Physics A that "If physical theories were people, thermodynamics would be the village witch. The other theories find her somewhat odd, somehow different in nature from the rest, yet everyone comes to her for advice, and no one dares to contradict her."
What is a Thermodynamic System?
The concept of a thermodynamic system began with Sadi Carnot, who thought about the "working substance" in a heat engine. In his case, this working substance was a volume of water vapour. His working substance was, in essence, a thermodynamic system in itself. It was aptly named a "working" substance because, as a system, it can do work when heat is transferred into it. It can be put in contact with a piston or a boiler, for example. Below is a modern version of Carnot's engine diagram. It gives us an idea of how his working substance works in a simple heat engine.
Eric Gaba; Wikipedia
Thermal energy is transferred as heat from a hot furnace, TH (left), through the fluid of a working substance such as water vapour (the circle), into a cold heat sink, TC (right). As the heat is transferred, it forces the working substance to do mechanical work (W) on its surroundings. This work could then be transferred to the cycles of expansion and contraction that drive a piston, for example.
This diagram is a simple theoretical thermodynamic system. Carnot thought of it as an isolated system: there would be no interaction whatsoever with the surrounding air, the walls of the containers and so on, none of the interactions that, in real life, make all thermodynamic systems at least somewhat open in terms of energy transfer. In a real steam engine, for example, there are numerous ways in which energy is lost to the surrounding materials and to the air, particularly as heat loss and as friction, and those losses of usable work are substantial. No real mechanical engine can be 100% efficient, and most are far from it. The latest gasoline fuel-injection combustion engines reach up to about 35% efficiency. The latest steam turbine power generators are about 42% efficient, which means that in both cases the majority of the energy generated is lost from the system and unavailable to do useful work.
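Carnot's idealized engine puts a hard ceiling on efficiency that depends only on the temperatures of the two reservoirs. Here is a minimal Python sketch of that limit; the temperatures are illustrative assumptions, not figures from any particular turbine:

```python
# A minimal sketch of the Carnot limit on heat-engine efficiency.
# The reservoir temperatures below are assumptions chosen for illustration.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of input heat convertible to work between two reservoirs."""
    return 1.0 - t_cold_k / t_hot_k

# A steam turbine running between ~550 C steam and ~27 C cooling water:
print(carnot_efficiency(823.15, 300.15))  # ~0.64, the theoretical ceiling

# Real turbines reach ~0.42, well below this ideal, because of the friction
# and heat losses that the idealized model deliberately ignores.
```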
Everything that has a temperature is a thermodynamic system, and in reality all such systems interact with other surrounding systems. The only truly isolated system is perhaps the universe itself (at least according to non-multiverse theories). These interactions mean that real thermodynamic systems are complicated, so when we want to study how a system works, we want to simplify it into a theoretically isolated system where we can limit its interactions with the world around it. We impose theoretical impermeable walls around it. "Isolated" is different from "closed." A closed thermodynamic system is one in which energy, but not matter, can be exchanged with its surroundings. In an open system both energy and matter can be exchanged.
A theoretical example of an isolated system is hot coffee enclosed in an impossibly perfectly insulated Thermos bottle. It is never going to cool off (there is no energy transfer) and you can't smell its aroma (there is no transfer of matter). An example of a closed system would be hot coffee in a very well sealed plastic container. It's going to cool off but you can't smell it. An example of an open system could be the hot coffee in an open mug. You can smell its aroma because its molecules are mixing into the air you are breathing. There is an exchange of matter. It is going to eventually reach room temperature because the cup is transferring heat to the countertop it rests on and to the air around it. As an open system, it is going to change until it reaches a state of equilibrium with the room around it. The hot coffee will warm the room it is in, but the room is likely to be so large compared to the coffee that the change would be undetectable, as the rough calculation below suggests. The room with the coffee in it, taken as an approximately isolated system, will have the same total energy when the coffee is hot as when it has cooled to room temperature, because energy is conserved.
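To put a number on that intuition, here is a back-of-the-envelope energy balance in Python. All the masses, temperatures and room dimensions are assumptions chosen for illustration:

```python
# A rough energy-balance sketch for the coffee-in-a-room example.
# All masses, temperatures and dimensions are illustrative assumptions.

c_water = 4186.0   # J/(kg*K), specific heat of water
c_air = 1005.0     # J/(kg*K), specific heat of air at constant pressure

m_coffee = 0.3     # kg, a large mug of coffee
dT_coffee = 60.0   # K, cooling from ~80 C down to ~20 C room temperature

q_released = m_coffee * c_water * dT_coffee   # ~75 kJ leaves the coffee

m_air = 4 * 5 * 2.5 * 1.2   # kg of air in a 4 m x 5 m x 2.5 m room

dT_air = q_released / (m_air * c_air)
print(f"{dT_air:.2f} K")   # ~1.2 K if the air alone absorbed it all; in a real
# room the walls and furniture soak up most of that heat, so the measured rise
# is far smaller. Either way, the total energy of the (approximately isolated)
# room plus coffee is unchanged.
```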
We have described the "state" of our coffee "system" as hot. The transfer of heat from the coffee to the air around it is an example of a thermodynamic process. The coffee eventually cools from hot to room temperature. A thermodynamic variable or property of a system is also called a state function. Temperature is one state function of this system. Internal energy, mass, pressure, volume, enthalpy and entropy (all concepts we will explore) are other examples of state functions in thermodynamics.
The thermodynamic state of our coffee system is described by its properties such as temperature, volume, and pressure. As the coffee cools, its state changes. It will continue to cool until the coffee reaches a state of thermodynamic equilibrium with its surroundings. Equilibrium means that the system is in balance with its surroundings. The coffee eventually reaches the same temperature as the air and countertop upon which it sits. This is thermal equilibrium. A system will also spontaneously move toward other kinds of equilibrium as well, such as mechanical equilibrium (toward equal pressure for example) and chemical equilibrium (toward minimal Gibbs energy, a form of potential energy that is widely used in chemical thermodynamics). Thermodynamic equilibrium is a state of complete equilibrium where all these kinds of equilibrium are in place. If the coffee was sealed in a pressurized but heat-permeable container, it would eventually reach thermal equilibrium but not complete thermodynamic equilibrium because its pressure cannot equalize with the air around it. When a system is in complete thermodynamic equilibrium, it will not change spontaneously. It will remain in the same state indefinitely, unless work is done to it. It cannot do work because it does not have energy available to do work.
The description of our coffee is a classical thermodynamic description. It doesn't take into account the states of the molecules or atoms or subatomic particles in the coffee and it makes the assumption that the dynamic process (the change that occurs) is continuous or smooth. The coffee cools gradually, not in jumps, for example. If we want to look at the individual dynamics of the atoms in our coffee, we need to deal with quantum thermodynamics. This field of study explores the relationship between two independent theories: (classical) thermodynamics and quantum mechanics.
A Glimpse Into The Future
Quantum thermodynamics attempts to describe thermodynamic changes that occur at the atomic scale. Analogous to how thermodynamics developed from trying to improve steam engine technology, quantum thermodynamics, a very young field, is growing out of our desire to shrink technologies into a variety of quantum machines. A significant challenge here is that classical concepts such as temperature and work need to find some kind of quantum counterpart. One breakthrough is that we can now equate a system's energy with its information, an idea that we will revisit later in this article. (Keep in mind that a system's internal energy is not the same as its thermal energy, a distinction we draw below.)
Changes at the quantum level are not continuous because particle properties are quantized. They come in discrete packets. If you recall the de Broglie matter wave, all matter at the atomic scale is both particle and wave in nature, and changes in energy are limited to specific jumps between energy levels. Subatomic behaviours belong to the realm of quantum mechanics. A quantum description of a system can give us an idea of how atoms behave, but it can't tell us anything about the macroscopic system as a whole. We can't add up de Broglie waves to get a complete thermodynamic picture of the coffee in our earlier example. In practice, we can't study each individual particle in a system. In order to describe the internal energy of the coffee at the quantum level, we would need to know, at the same time, all the trajectories and kinetic energies, all particle masses, magnetic moments and angular momenta of countless billions of particles. Even if we could manage such a feat, we would still lack a complete thermodynamic description of the coffee because it also possesses emergent properties. These are properties, such as temperature, volume and even pressure, that emerge only when a collection of many atoms behaves together in a macroscopic system and that cannot be described at the level of individual atoms. Even if we could know all the quantum information of every particle in the coffee, we still couldn't know what its temperature is and how fast it is cooling.
Complicating our effort further, all quantum systems have uncertainty built into them. You can't know both the position and the momentum (or, likewise, the energy of a state and its precise lifetime) of any particle at the same time with arbitrary precision. This means that an atom, for example, cannot be treated like a tiny sphere of matter. It is more accurately a statistical cloud describing where it might be and how fast it is going.
This seemingly impossible problem does have a satisfactory solution, however: we can use statistics. Statistics is the mathematical bridge between quantum thermodynamics and classical thermodynamics. It deals with individual particles, atoms or molecules by taking averages of their dynamic properties and using those average values to describe the system as a whole.
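To give a flavour of how that bridge works, here is a small Python sketch: we draw a million random molecular velocities from the Maxwell-Boltzmann distribution and recover the macroscopic temperature purely from the average kinetic energy. The molecular mass and temperature are illustrative assumptions:

```python
# A sketch of the statistical bridge: individual molecular motions are random,
# but their average recovers a macroscopic state function (temperature).
import numpy as np

k_B = 1.380649e-23   # J/K, Boltzmann constant
m = 2.99e-26         # kg, mass of one water molecule (assuming pure H2O)
T_true = 350.0       # K, an assumed hot-coffee temperature

rng = np.random.default_rng(0)
# In the Maxwell-Boltzmann distribution, each velocity component is normally
# distributed with variance k_B*T/m.
v = rng.normal(0.0, np.sqrt(k_B * T_true / m), size=(1_000_000, 3))

ke_mean = 0.5 * m * (v**2).sum(axis=1).mean()   # average kinetic energy
T_recovered = 2.0 * ke_mean / (3.0 * k_B)       # from <KE> = (3/2)*k_B*T
print(f"{T_recovered:.1f} K")                   # ~350 K, from averages alone
```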
Thanks to advances in technology, even a system containing just a few molecules or even atoms can now be harnessed to do microscopic work. In 2016, three scientists won the Nobel Prize in Chemistry for developing the world's tiniest machines, with motors smaller than the width of a human hair. Fraser Stoddart developed a molecular computer chip that can store 20 kB of memory. Bernard Feringa built a nanocar with four molecular motors as wheels. Jean-Pierre Sauvage inspired other researchers to develop machines such as microscopic robots that can grasp and collect amino acids. Work is underway to develop a fast-charging quantum battery in which energy can be stored and released on demand from a quantum system.
As the wave of new micro-technologies gains momentum, the field of thermodynamics needed to describe how they work is having trouble keeping up. How do concepts of heat and efficiency translate into this new realm of tiny machines? While statistics can bridge the gap between the quantum realm and the classical realm of a (large) macroscopic system, how can it bridge the gap between quantum-scale processes in a quantum-scale (tiny) system and a macroscopic-scale (large) observer? Statistics, so useful when describing the processes and states of macroscopic systems, finds little use here. In quantum machines, every quantum bit of information has real meaning and cannot be treated as a statistical average. Is irreversible heat loss in a macroscopic system equivalent to quantum dissipation in a quantum system? There is a lot of heated debate about how to link quantum and macroscopic thermodynamic processes, because all of these new technologies must obey the same thermodynamic principles as the original heat engine.
The Four Laws Of Thermodynamics
Zeroth Law
Though this law might not be familiar to a lot of us, its impact is enormous. It can be summed up in a simple statement: if two systems are in thermal equilibrium with a third system, then they must also be in thermal equilibrium with each other. This implies that the sizes of the systems and the kinds of molecules they are composed of don't matter. As James Clerk Maxwell famously put it, "all heat is of the same kind." It is a deceptively simple observation of equivalence, which underpins all the thermodynamic laws and establishes temperature as a universal property of matter, as explained in this 4-minute video by The Royal Institution.
By being able to define temperature, we can measure the thermal energy in any system. We can use the scale of Fahrenheit, Celsius or Kelvin, but the Kelvin scale is the thermodynamic scale used by scientists. It starts at absolute zero, where there is no thermal energy in a system. A thermometer can be used to verify that systems in thermal equilibrium are at the same temperature.
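The three scales are simple linear transformations of one another; a quick Python sketch:

```python
# Simple conversions between the temperature scales mentioned above.

def celsius_to_kelvin(t_c: float) -> float:
    return t_c + 273.15

def fahrenheit_to_celsius(t_f: float) -> float:
    return (t_f - 32.0) * 5.0 / 9.0

print(celsius_to_kelvin(0.0))        # 273.15 K: the freezing point of water
print(fahrenheit_to_celsius(212.0))  # 100.0 C: the boiling point at 1 atm
print(celsius_to_kelvin(-273.15))    # 0.0 K: absolute zero, no thermal energy
```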
If we define the barrier between two systems as a kind of wall permeable only to heat transfer, such as the wall of a coffee cup between hot coffee and our hands, we realize that only energy is transferred between the two systems, not matter. The zeroth law, though simple, shuts the door on earlier theories that treated heat as a physical substance, such as phlogiston or caloric, types of matter thought to flow between objects.
Thermal Energy and Internal Energy
Temperature is the measure of the thermal energy of a system. It is a statistical measurement: an average of all the individual kinetic energies of the atoms and molecules in the system, which cannot be individually measured. Individual molecules do not have thermal energy or a temperature. They have only kinetic energy, from which thermal energy emerges at the macroscopic scale. Temperature offers us a hint of what is happening at the molecular scale. Every molecule in a system, such as our coffee, has three physical degrees of freedom. It can move in three dimensions in space, and each degree of freedom has kinetic energy associated with it, which varies from molecule to molecule within the system. Some kinds of molecules (such as diatomic oxygen, O2) can also rotate, adding further degrees of freedom. Thermal energy is the result of all of these molecular motions within a system. It is measured as temperature, which reflects their average kinetic energy. We might think of kinetic energy as unidirectional, such as an object rolling downhill gaining kinetic energy. Here, the important distinction is that the kinetic energies of the molecules are random and in all directions. There are cases where thermal energy cannot be measured as a temperature change, for example, during a phase change, which we will explore.
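The equipartition theorem makes this precise: each active degree of freedom carries, on average, (1/2)kBT of energy per molecule. A short sketch, with a linear molecule like O2 contributing two rotational degrees of freedom on top of its three translational ones:

```python
# A sketch of equipartition: each active degree of freedom carries an average
# energy of (1/2)*k_B*T per molecule.

k_B = 1.380649e-23  # J/K, Boltzmann constant

def avg_energy_per_molecule(t_kelvin: float, dof: int) -> float:
    return 0.5 * dof * k_B * t_kelvin

T_room = 293.0  # K, roughly room temperature
print(avg_energy_per_molecule(T_room, 3))  # monatomic gas: 3 translational dof
print(avg_energy_per_molecule(T_room, 5))  # diatomic O2: plus 2 rotational dof
```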
Work and Heat
Every thermodynamic system also has internal energy, a more encompassing form of energy that can be used to do work. Work, conversely, can also be done on a system to increase its internal energy. Unlike thermal energy, internal energy cannot be directly measured. It is the sum of the system's thermal energy and its potential energy. The internal energy of a system, the sum of the kinetic and potential energies of its molecules and atoms, does not include energy due to the motion or location of the system as a whole relative to its surroundings. In other words, the system's gravitational potential energy and its motion through space are not included. However, chemical bond energy, magnetic moments, internal electrical fields, nuclear potential energy and internal stress all contribute to the internal energy of a system. The internal energy of a system can be changed by adding or subtracting matter, by transferring heat into or out of the system, or by doing work on the system or having the system itself do work.
Work performed by a system is energy transferred by the system to its surroundings. Unlike internal energy or thermal energy, which are properties of a system, work is energy-in-transit. A system doesn't contain work; work is a process of energy transfer. Likewise, heat (though we commonly say that an object contains heat or that an object is hot) is not a property of a system. It is better understood, like work, as energy in transit. The kinetic motion of molecules in a system can be both the source and the effect of the transfer of heat from another system. The term "latent heat" (which we will explore) is also not a property of a system. It is best understood as internal energy released from, or absorbed into, a system without a change in temperature.
We can't measure internal energy directly, but we can get an idea of the change in internal energy in a system by using the concept of enthalpy. As described nicely in Wikipedia, enthalpy is the internal energy of a system plus the energy "needed to make room for it," which is measured as pressure and volume. Like internal energy, we can't measure enthalpy directly, but we can measure the change in enthalpy in a system. The concept of enthalpy is a convenient way to link internal energy (thermal energy plus potential energy) with work in a system. Enthalpy (H) is internal energy (U) plus "work" energy (pressure times volume): H = U + PV. It is useful for reactions that involve gases, where changes in pressure and/or volume tend to be significant and easy to measure. For reactions that involve only liquids or solids, the change in enthalpy will pretty much equal the change in internal energy, because there will be hardly any change in volume or pressure in the system. Solids and liquids resist compression and expansion.
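A tiny Python sketch of that bookkeeping, with illustrative numbers rather than measured data:

```python
# A sketch of the enthalpy bookkeeping H = U + PV described above.

def delta_h(delta_u: float, pressure_pa: float, delta_v_m3: float) -> float:
    """Change in enthalpy at constant pressure: dH = dU + P*dV."""
    return delta_u + pressure_pa * delta_v_m3

P = 101_325.0  # Pa, 1 atm

# A gas that gains 500 J of internal energy and expands by one litre:
print(delta_h(500.0, P, 0.001))  # ~601 J: the extra ~101 J "makes room"

# A liquid or solid with a negligible volume change:
print(delta_h(500.0, P, 1e-8))   # ~500 J: dH is essentially dU
```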
This brings us to the first law of thermodynamics, which establishes the existence of internal energy in a system, next.
Understanding what plasma is can be confusing, even though plasmas are part of our everyday lives. The Northern Lights, lightning, flames, fluorescent bulbs, neon signs and even the Sun are all examples of matter in this particular physical state. Although most of us are well acquainted with at least three states of matter - solid, liquid and gas - getting to know plasma, and how substances become plasma, can be daunting, but also fun and very useful.
Plasma Is Everywhere
It might surprise us to learn that plasma is by far the most abundant physical state of matter in the universe, much more so than our familiar three states of solid, liquid and gas. Perhaps even more surprising is the fact that plasma is just one of over 30 different states of matter currently listed on Wikipedia. More yet will undoubtedly be discovered in laboratories. Most of these exotic states are unknown beyond research circles because they tend not to be observed except under extraordinary conditions. Plasma, in contrast, is the most common state of ordinary matter under natural conditions in our universe.
Most of our visible universe consists of plasma, which is fortunate, because that is a key reason why we can observe it with our telescopes. Stars are almost entirely made up of it. Most interstellar gas is plasma. Some sources estimate that as much as 99% of visible matter in the universe is in a plasma state. By the end of this article we will understand how plasma can emit light. Even in our own solar system, over 99% of its mass is in a plasma state, thanks to the overwhelming mass contribution of our Sun. In our everyday life we regularly, but temporarily, witness plasma as lightning, fires and the auroras, when energy is applied to matter, usually in its gaseous state. I did not explore plasma in high school and I had few occasions to explore it in my first years as an undergrad. One reason why plasma is not more thoroughly studied at these basic levels is that it can be difficult to jump from the fairly straightforward molecular theory needed to understand solids, liquids and gases to the atomic and subatomic particle theories required to get a handle on the plasma state. Yet it's not only do-able; it's fascinating.
What Is a Phase Change?
A common theoretical thread to solids, liquids and gases is temperature. We can extend this temperature spectrum at the cold end by exploring the exotic-sounding Bose-Einstein condensate (BEC). In this state, mysterious quantum behaviours of matter appear on a scale that we can observe. I wrote an article a few years ago that explores what happens when matter gets VERY cold and transforms into a BEC. Because stars are very hot, it is tempting to add plasma to the upper temperature end of this physical state spectrum. That, however, would be misleading and incorrect. Gases, when extremely hot, can break down into plasma, but we do not need to heat a gas to transform it into plasma. Applying an electric potential alone to a gas can do this. Think of a glowing neon tube light, for example. It is filled with gas in a plasma state, but you can comfortably touch it with your hands.
There is an additional reason why temperature alone does not necessarily dictate the phase of a substance. We often first learn about phase changes by exploring water as a gas, liquid and solid, but when we talk about solids, liquids and gases, we really need to consider pressure in addition to temperature; together they determine what state matter is in. Under constant atmospheric pressure, temperature alone determines which state water will be in. This is how we encounter water in everyday life. We can also keep water at a constant temperature and change the pressure. Under enough pressure, water vapour will condense into liquid and even freeze into exotic forms of ice. In a near-vacuum, water ice can sublimate directly into vapour.
Temperature, pressure and electric potential are the three changing factors behind the phase changes we will focus on here. A phase change of any type relies on energy being added to or withdrawn from matter. These aren't the only kinds of phase change that exist. The application of a magnetic field can change the physical state of a magnetic material, for example. In every phase transition (also called a change in physical state), the physical arrangement or ordering of atoms changes. In this article I'd like to start by exploring transitions between solids, liquids and gases in depth. Then I will compare those changes to transitions into a plasma state and explore what plasma is as a physical phase of matter.
Exploring Solids, Liquids and Gases Using Water as an Example
Matter changes from one physical state to another through a process called a phase transition. Every chemical compound and element undergoes a phase transition under a specific combination of temperature and pressure. The periodic table below lists all the known elements. At 0°C and 1 atmosphere of pressure, all elements in red type are gases, elements in green type are liquids, elements in black type are solids, and the physical state of those in grey type is unknown. These "mystery" elements start at atomic number 100, fermium. They, along with several other elements of lower atomic number, are synthetic, or man-made. Fermium is the highest atomic number element that can be created in a macroscopic amount, although a pure sample of this metal has not yet been made.
We can compare the melting and boiling points of most of the elements above, but we are all most familiar with water. Not only is water familiar, but it also exhibits some unique and fascinating properties unlike most other substances, and this will allow us to explore change of state in greater depth. It is a chemical compound of hydrogen and oxygen (H2O). We drink it in a liquid state. At 0°C, water freezes into ice, a solid state, and at 100°C it evaporates into a gas, water vapour. These transitions assume a pressure of one atmosphere (atm; the average air pressure at sea level). To understand how pressure plays a role in the physical transition of a substance, imagine water vapour in a container fitted with a gas-tight plunger. The temperature is carefully maintained throughout the experiment at 130°C. The starting pressure is 1 atm. As the plunger lowers down into the container, the volume of gas decreases and the gas pressure increases. Soon the water vapour will be compressed into liquid water: 130°C is the boiling point at just under 3 atm. Above that pressure, water at 130°C will be liquid. It will have to be compressed much further, to about 100,000 atm (because liquid water strongly resists compression), to solidify into scalding hot 130°C ice in the container.
Researchers at Sandia National Laboratories performed a similar experiment several years ago. They subjected liquid water (starting at 1 atm) to extremely rapid compression (to 70,000 atm). The water shrank abruptly into a dense phase of solid ice. When the pressure was relieved, it melted (and expanded) back into liquid water. The ice formed under these conditions is not the everyday ice we make in our freezers. Ordinary freezer ice, as most of us know, is less dense than liquid water. At pressures above 100,000 atm, however, water only exists as different kinds of very dense ice, even at temperatures of hundreds of degrees centigrade. Scientists think such hot ice exists in the deep interior of exoplanet GJ 436 b, a Neptune-size ice giant that orbits very close to its parent red dwarf star, GJ 436, which is 33 light-years away from Earth.
A Phase Change is a Molecular Rearrangement
Chemically, water in any physical state remains water. That is, it retains its chemical properties. Any matter undergoing a phase change from solid to liquid to gas, and vice versa, retains its chemical properties. Each water molecule consists of two hydrogen atoms covalently bonded to an oxygen atom. What changes during a phase transition is the arrangement of these molecules (see below). At any particular temperature and pressure, water molecules will adopt the most thermodynamically stable arrangement possible for that environment. In order to transition into a new arrangement, the molecule-molecule interactions must change. Molecule-molecule, or in the case of elements, atom-atom interactions can shift and reorganize a substance, changing its physical properties in the process. The interactions between molecules in a substance tend to be attractive. In solids, these attractive forces are strong enough to be called chemical bonds, but they are still much weaker than the chemical bonds that bind each molecule itself together (such as the two covalent oxygen-hydrogen bonds within each water molecule). The simple images below highlight the general differences in atomic/molecular arrangement between solids, liquids and gases.
Materialscientist;Wikipedia
This is an actual atomic resolution image of the lattice-like arrangement of molecules in solid strontium titanate. Chemical bonds between strontium titanate molecules hold them close together in a tight regular arrangement. Above about 2000°C, strontium titanate crystals will melt, breaking these relatively weak intermolecular chemical bonds.
Kaneiderdaniel;German-language Wikipedia
This highly simplified two-dimensional representation (left) shows how atoms or molecules might be arranged in a typical liquid in a beaker. They have contact with their closest neighbours, where they experience very weak attractive inter-molecular forces, but there is no overall order to their arrangement, and the atoms/molecules can slide freely past one another.
As the diagram at right simplistically shows, the spaces between molecules/atoms in a gas are vast compared to the size of the atoms or molecules themselves, many orders of magnitude greater than this simple diagram suggests. This is true even for a highly pressurized gas. The atoms/molecules can move freely in all directions. They experience no appreciable attractive forces between them, so they move independently of one another. The ideal gas law assumes negligible molecular volume relative to gas volume and the absence of any molecular attraction, yet it quite accurately predicts how volume, temperature and pressure relate to one another in a gas. This tells us that most real gases act much like a theoretical ideal gas.
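Here is the ideal gas law, PV = nRT, in a few lines of Python; the scenario (one mole at 0°C in 22.4 L) is the classic textbook check:

```python
# A minimal sketch of the ideal gas law PV = nRT mentioned above.

R = 8.314  # J/(mol*K), universal gas constant

def pressure_pa(n_mol: float, t_kelvin: float, v_m3: float) -> float:
    return n_mol * R * t_kelvin / v_m3

# One mole of an ideal gas in 22.4 L at 0 C comes out at about 1 atm:
print(pressure_pa(1.0, 273.15, 0.0224))  # ~101,000 Pa, i.e. ~1 atm
```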
Water molecules bond with each other in a solid phase according to the Bernal-Fowler ice rules. This is where water gets quite interesting. Every oxygen atom can bond to 4 hydrogen atoms. Two bonds are strong. These are the covalent chemical bonds that hold each water molecule together, and they don't change during a phase transition. However, oxygen has 6 valence (or outer, chemically available) electrons. In water vapour, four electrons remain unbound as two lone pairs. As water vapour condenses into liquid water, some of the four unbound electrons take part in very weak molecule-molecule bonds with hydrogen atoms in adjacent molecules. These are called hydrogen bonds. Unlike the strong covalent hydrogen-oxygen bonds holding each molecule together, these bonds are weak and in the liquid state, they are transitory. The (proton) positive charges of hydrogen atoms are attracted to the negative charge zones of nearby oxygen lone electron pairs.
The attractive force behind hydrogen bonding is about 90% due to the attraction between opposite charges. This is an electrostatic phenomenon. 10% of the bonding force is due to electron sharing. This is a quantum mechanical phenomenon and it is also responsible for (much stronger) covalent bonding. In liquid water, the weak hydrogen attractions form and break very easily, allowing water molecules to slip past one another. The much stronger covalent intra-molecular bonds holding each water molecule together only break when water undergoes a chemical decomposition reaction called electrolysis, producing hydrogen and oxygen gases as products.
The diagram below models the hydrogen bonds between water molecules. Two lone electron pairs force every water molecule into a bent, triangle-like shape, which makes it polar: each molecule has a positively charged region and a negatively charged region. The positive charge of the hydrogen atoms is attracted to the negative charge zone of the oxygen atoms. Hydrogen bonds are indicated by dotted lines.
User Qwerter at Czech Wikipedia
The length and strength of hydrogen bonds is strongly dependent on temperature. As liquid water freezes, additional hydrogen bonds form between molecules. Every hydrogen atom is, in effect, bonded to two oxygen atoms: one bond is strong and one bond is weak. Under sufficient pressure and/or cold, water molecules come close enough together to bond into regular and stable lattice-like arrangements. As ordinary ice, the molecules form bonds that result in a hexagonal crystalline lattice arrangement. Under more extreme pressure/temperature regimes, water ice can transition into a variety of different lattice structures, such as cubic, rhombohedral and tetragonal lattices, which allow for denser molecular arrangements. Each transition into a denser crystalline lattice is a phase change. There are at least 17 known physical states of water ice alone. Changes in the number, length and strength of hydrogen bonds underlie each of these additional solid phase changes.
Most liquid substances solidify into denser molecular arrangements, but there are a few exceptions, and they are always due to unusual bonding between the molecules. Water is an example. Ordinary water ice is actually less dense than liquid water. That is why it floats. This phase is called hexagonal ice, or ice Ih. It is the only solid water phase encountered on Earth. Due to the unusual chemical bonds of water (responsible for its molecular shape and polarity), the hexagonal lattice arrangement of ice Ih keeps molecules a bit further apart from one another than they are in the liquid state (yet the bonds themselves are stronger and more permanent). Ice Ih has a density of 0.9167 g/cm3, compared to liquid water's density of about 1.00 g/cm3 near 0°C. The diagram below shows what the hexagonal lattice of ice Ih looks like. Gray dashed lines indicate hydrogen bonds.
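That small density gap is exactly why icebergs ride so low in the water. By Archimedes' principle, the submerged fraction of a floating body equals the ratio of the densities; a two-line Python check:

```python
# Why ice Ih floats, and how much of it hides underwater: by Archimedes'
# principle, the submerged fraction equals the density ratio.

rho_ice = 0.9167    # g/cm^3, ice Ih
rho_water = 0.9998  # g/cm^3, liquid water near 0 C

print(f"{rho_ice / rho_water:.1%}")  # ~91.7% of an iceberg sits below the surface
```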
NIMSoffice;Wikipedia
In addition to a number of solid crystalline lattice states, water can also exist as a solid lacking any crystalline structure, called amorphous ice. On Saturn's icy moon Enceladus, water ice strewn onto the surface from its many cryovolcanoes is amorphous. Liquid water flash-freezes in the vacuum of space (zero pressure) before it can organize into any structure. In contrast, along Enceladus's unique "tiger stripes," the ice is crystalline, because there the heat of geothermal activity keeps it warm long enough to arrange itself into a more thermodynamically favourable crystalline structure.
Below is Cassini's view of Enceladus's south pole, showing 4-5 "tiger stripes," which are tectonic fractures.
NASA/JPL/Space Science Institute
Critical Point and Triple Point
The combinations of temperature and pressure at which a phase change occurs trace out a line on a graph, called a phase boundary, rather than a single point: the temperature at which the transition happens depends on the pressure of the system, and vice versa (see the phase graph below for water). The liquid/gas boundary ends at a special temperature and pressure called the critical point, beyond which liquid and gas can no longer be distinguished. There is also a single point, at a specific temperature and pressure, at which the gas, liquid and solid states of water all have an identical free energy (or internal molecular energy). All three phases coexist at this point, which is called the triple point. In the graph below, black lines represent the solid/liquid and liquid/gas phase boundaries of water. In the lower centre of the graph you can see the single triple point for water, found at 0.01°C (273.16 K) and 0.006 atm (611.657 pascals).
mglee;Wikipedia
A note about the pressure units on the above graph: 1 atm of pressure is roughly equal to 1 bar of pressure. 1 bar is equal to 100 kilopascals (kPa). The pascal is the internationally recognized SI unit for pressure. The bar and the atmosphere are older, non-SI units. I am in the old habit of using atm units, where 1 atm is standard air pressure at sea level.
This 6-minute video from Bergen University in Norway demonstrates how water behaves as it approaches its triple point.
At the triple point, all of the water in a system can be changed into vapour, liquid or solid just by tweaking the pressure or temperature a tiny bit. The triple point is also the lowest pressure at which liquid water can exist. At lower pressures, as on the surface of an icy moon (at near vacuum), water ice, perhaps warmed when facing its star, will sublimate directly into vapour, bypassing the liquid state altogether. For most substances, the triple point is also the lowest temperature at which the liquid state can exist but water, due to its unusual hydrogen bonds, is an exception. Notice the odd little horn on the green section in the graph above. This is an anomaly of water. If the temperature is just below the triple point, a steadily increasing pressure will transform ordinary solid water ice (Ih) into denser liquid water (at around 1000 atm) and then back into an even denser solid (now as ice VI) at around 10,000 atm.
A Brief Look at The Thermodynamics of a Phase Change
For the phase changes I've described so far, the transition process is abrupt or, more scientifically put, discontinuous. At a phase transition point, such as the boiling point of water, two phases - liquid water and water vapour - co-exist at one specific temperature. In thermodynamic terms, they have the same average Gibbs free energy. In any system, some molecules will randomly have slightly more energy and some will have slightly less. (At the critical point at the end of the liquid/gas boundary, by contrast, the substance can no longer be distinguished as either gas or liquid.)
All thermodynamic systems tend to assume the lowest possible Gibbs free energy. Systems also tend toward increasing entropy. Gibbs free energy can be looked at as a measure of the potential chemical energy in a system that is available to do work, while entropy can be thought of as a measure of the disorder of a system. Highly ordered crystalline water ice has very low entropy compared to the high entropy of water vapour. Both states, however, are stable within their own temperature/pressure regime, because each state minimizes its Gibbs free energy under those conditions.
Below the boiling point, the liquid phase is more thermodynamically stable (which means it has reached the maximum possible entropy at a lower Gibbs free energy). As water vapour condenses into liquid water, it enters a more highly ordered state (lower entropy). As this phase transition occurs, an energy exchange takes place. As the water enters a state of lower Gibbs free energy, it releases the extra free energy as latent heat. Conversely, if we wanted to melt ice into water, we would have to apply heat to it. In other words, we would have to add latent heat to the system in order to increase the entropy of the water molecules.
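This balance can be written as ΔG = ΔH - TΔS: at the transition temperature the two phases have equal Gibbs free energy, so ΔG = 0 and T = ΔH/ΔS. A quick Python check using the standard enthalpy and entropy of fusion of ice:

```python
# At a phase boundary the two phases have equal Gibbs free energy, so
# dG = dH - T*dS = 0 gives the transition temperature T = dH/dS.

dH_fus = 6010.0  # J/mol, latent heat absorbed when ice melts
dS_fus = 22.0    # J/(mol*K), entropy gained by the less ordered liquid

T_melt = dH_fus / dS_fus
print(f"{T_melt:.1f} K")  # ~273.2 K, i.e. 0 C, as expected
```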
So far we've focused on water as our example, but every chemical substance and element can exist as a solid, liquid and gas, depending on its temperature and pressure. Each substance has its own unique phase boundaries and triple point, and it may have additional solid lattice states as well. These transition points are unique to each substance and depend on its particular molecular bonding.
The Relationship Between Pressure and Phase Change
We've explored how substances and elements transition from solids to liquids to gases and vice versa. Pressure and temperature together determine in which physical state a particular substance is most thermodynamically stable. The relationship between temperature and physical state is fairly straightforward, as we've seen. However, the relationship between pressure and physical state depends on the starting physical state of the substance. A gas responds to pressure by shrinking in volume. A gas resists compression through thermal pressure: even though a container full of gas appears static, the molecules in it are in constant motion. Collisions between those molecules and the sides of the container are detected as a force per unit area, or pressure. If a gas is compressed quickly, it heats up, because the work done on it during compression adds to its thermal energy. If heat is continually removed from the system as it is compressed, the gas will remain at its original temperature even as it eventually condenses into a liquid and then solidifies into a solid.
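The heating that accompanies rapid (adiabatic) compression follows T2 = T1(V1/V2)^(gamma-1). A short Python sketch with illustrative numbers:

```python
# Why rapid compression heats a gas: with no time for heat to escape,
# the work done on the gas raises its temperature as T2 = T1*(V1/V2)**(gamma-1).

GAMMA = 1.4  # heat capacity ratio for a diatomic gas such as air

def adiabatic_t2(t1_k: float, v1: float, v2: float) -> float:
    return t1_k * (v1 / v2) ** (GAMMA - 1.0)

# Halving the volume of room-temperature air:
print(f"{adiabatic_t2(293.0, 2.0, 1.0):.0f} K")  # ~387 K, noticeably hotter

# If heat is removed as fast as it is generated (isothermal compression),
# the gas instead stays at 293 K throughout.
```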
Liquids, unlike gases, tend to be fairly incompressible. This means that liquids don't experience much increase in kinetic energy when they are compressed, though they will compress under extreme pressure. The pressure that resists compression is not thermal pressure, as with gases, but another kind of outward pressure (which gases also exert, although in gases it is masked by thermal pressure). This pressure is the result of the Pauli exclusion principle. This quantum mechanical principle means that when electron orbitals in adjacent atoms are forced into very close proximity, they will powerfully resist overlapping one another into the same atomic orbital. Note that this is not the same thing as orbital sharing, in which an electron will occupy an empty orbital in an adjacent atom and, through such sharing, create a chemical bond. I will explain this more accurately in a moment.
Under increasing pressure, a liquid will likely first become denser and more viscous, and then it will transition into a solid, in which the molecules pack more closely together in a very ordered arrangement that respects the exclusion principle. If you recall my earlier examples of liquid water being subjected to sudden extreme pressure and condensing into a dense form of ice, in both cases it eventually became more thermodynamically favourable for the water molecules to transition into a high-density, highly ordered form of ice than to remain a highly pressurized liquid. That "decision" made by a substance is abrupt, which means it happens all at once.
Water Under Very Intense Pressure
What happens if you continue to increase the pressure on a solid such as water ice? Solids, like liquids, resist compression. If we take a look at the water phase graph above once again, we see that above 100,000 atm (or bars), water exists only as a solid no matter what the temperature is, at least up to 350°C. Cubic ice VII will transform into an even denser cubic lattice called ice X as the pressure is increased. Its crystalline lattice will transition into increasingly denser molecular arrangements, each transition being a phase change. At the highest pressure on the graph (around 6-10 million bars) we see a phase of ice called high-pressure ice XI. This extremely dense hexagonal ice is hypothetical at this time and should not be confused with orthorhombic ice XI, which likely forms at temperatures below -200°C at zero pressure, such as on the surface of Pluto's moon Hydra, for example.
A 2010 article by Burkhard Militzer and Hugh Wilson explains what researchers think happens when the pressure on ice increases even further than our graph goes. In this theoretical case, water bypasses hypothetical high-pressure ice XI. They suggest instead that above about 3 million bar, ice X could transition into a different, more complex lattice that contains 12 hydrogen atoms per unit. If the temperature is also increased, to about 2700°C, they think the lattice might transition into yet another new arrangement, in which the hydrogen atoms become mobile within a stable lattice consisting only of oxygen atoms. This would create a super-ionic phase. If the temperature is increased even further under this intense pressure regime, the oxygen atoms themselves may also become mobile and the lattice itself might melt into a new, extremely dense, unstructured phase of matter that no longer consists of water as we know it. The atoms are no longer chemically bound to each other. This is an exception to the simpler rule we started with - that the chemistry of substances does not change during a phase transition. As one goes deeper into almost any theory, rules based on simpler understandings tend to fall by the wayside.
At temperatures below 2700°C and pressures above 48 million bar, a transition to a metallic ice phase is possible. Such intense pressure is expected to exist deep within icy Uranus or Neptune. In this phase, the structure of the ice would resemble stacked corrugated sheets of oxygen and hydrogen atoms, which would be electrically conductive.
Even more extreme pressures can be forced on matter, for example, inside the core of a rapidly collapsing star. What would happen to water under such conditions? As pressure is increased within a solid, individual atoms resist being pressed closer together. To explain this, we need to revisit the electron orbital, and hydrogen is a simple example to illustrate it. This atom (at ground state) has one electron in its 1s electron orbital. Any orbital can hold a maximum of two electrons, so the 1s orbital could accommodate a second electron if one were available. An electron from another atom can fill that empty slot. If it comes from another hydrogen atom, a covalent electron-sharing chemical bond is created, turning atomic hydrogen into hydrogen gas, H2.
An atom is a system, and like all physical systems, it tends toward the lowest free energy state possible. There are many forces at work in a hydrogen molecule. The two electrons repel each other. The two protons repel each other. Each electron is attracted to both proton nuclei and vice versa. There is an optimum distance between electrons and protons where all of these electrostatic forces add up to the lowest possible energy state. This is the most stable state. When atoms are squeezed closer together (or pulled further apart), they resist, and a force must be applied. Although perhaps easier to visualize, the electrostatic interactions that I've just described are insignificant compared to much more powerful quantum interactions that respect the exclusion principle. Two or more electrons cannot share identical quantum numbers. If two electrons share an orbital, they must be of opposite spin (spin is a quantum number). This exclusion principle translates into a repulsive quantum force that very powerfully resists additional pressure put onto an already extremely dense solid phase.
As pressure increases, the atomic orbital structure itself is forced to shift. Normally, electrons fill only a few energy levels in an atom. Many energy levels are unoccupied. Under extreme pressure, all the electrons are forced into the lowest energy level orbitals, as close to the nucleus as possible. This is ultra-dense electron-degenerate matter, which is expected to exist in the core of a white dwarf stellar remnant. The exclusion principle means that two same-spin electrons won't share an orbital no matter how strongly they are forced together. Electrons, experiencing such intense repulsive quantum forces, respond by moving faster and faster in these lowest orbitals. Under enough pressure, they approach the speed of light, which is the limit of this arrangement. Protons begin to absorb electrons, creating a degenerate phase of matter called neutron matter, which is the densest state of matter possible. It is thought to exist in supernova stellar core fragments called neutron stars. Such transitions into extremely dense matter are technically phase changes as well, as is the final transition of matter into a black hole, in which matter as we know it collapses entirely.
Where Plasma Fits In As A Phase Transition
Where does plasma fit into this progression of phase transitions? The temperature of a substance or element is a measure of the average kinetic energy of its constituent molecules or atoms. Gas molecules generally have a great deal of kinetic energy (and vibrational energy as well). They move around in all directions at great velocities, widely separated from each other. They occasionally bump into one another, and the force of these collisions translates into gas pressure. If a gas is cooled, it will eventually condense into a liquid at its condensation point. Pressurizing a gas shifts that point to a higher temperature. If you take another look at the phase diagram of water, you will notice that water boils into water vapour at just 50°C, rather than at 100°C, if the pressure is one tenth of atmospheric pressure. At the other extreme, the temperature of water streaming from a deep-sea hydrothermal vent can reach over 400°C, but it does not boil, because the pressure is over 300 times atmospheric pressure at depths over 3000 m, where most hydrothermal vents are located.
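The pressure dependence of the boiling point can be estimated with the empirical Antoine equation. Here is a Python sketch using published Antoine constants for water (strictly valid over roughly 1-100°C, so treat the low-pressure value as an estimate):

```python
# A sketch of how boiling point falls with pressure, using the Antoine
# equation for water: log10(P) = A - B/(C + T), with T in C and P in mmHg.
import math

A, B, C = 8.07131, 1730.63, 233.426  # Antoine constants for water

def boiling_point_c(p_mmhg: float) -> float:
    return B / (A - math.log10(p_mmhg)) - C

print(f"{boiling_point_c(760.0):.1f} C")  # ~100 C at 1 atm (760 mmHg)
print(f"{boiling_point_c(76.0):.1f} C")   # ~46 C at 0.1 atm, close to the
                                          # ~50 C read off the phase diagram
```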
One way to make plasma is to heat gas. What happens if we heat a container of gas rather than compress it? We increase the total kinetic energy of the molecules. If we heat a sample of hydrogen gas enough, we will eventually supply enough energy to break the chemical bonds and dissociate the gas into hydrogen atoms. The bond dissociation energy of H2 corresponds to a temperature of about 50,000°C, but this temperature depends on the density of the gas. As density increases, the temperature needed for dissociation increases (two particles take up more space than one). Once it is a monatomic gas, the hydrogen atoms can be further energized into ions. The ionization energy of hydrogen corresponds to about 150,000°C at 1 atm. This, too, is pressure-dependent, but in the opposite direction: as density increases, the ionization energy decreases. As atoms are forced together, the gaps between electron energy levels get smaller, which means less energy is required to ionize an atom.
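Those temperatures come from converting a bond or ionization energy into an equivalent temperature through the Boltzmann constant, T ≈ E/kB. A quick Python sketch:

```python
# Converting a characteristic particle energy (in eV) into an equivalent
# temperature via T = E / k_B, as in the rough figures quoted above.

K_B_EV = 8.617333e-5  # Boltzmann constant in eV per kelvin

def ev_to_kelvin(energy_ev: float) -> float:
    return energy_ev / K_B_EV

print(f"{ev_to_kelvin(4.52):,.0f} K")  # H2 bond energy (~4.5 eV) -> ~52,000 K
print(f"{ev_to_kelvin(13.6):,.0f} K")  # H ionization (13.6 eV) -> ~158,000 K
```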
Ionization means that the atom loses one or more electrons to create a gas that contains positive and negative ions. If we ionize hydrogen gas, we create a cloud of electrons and protons, called plasma. A very dense liquid-like plasma state in which protons are surrounded by a sea of mobile electrons, called metallic hydrogen, might exist deep inside Jupiter and Saturn. Because electrons are not confined to orbitals, this is also called a degenerate state of matter.
The process of ionization from a neutral gas into charged plasma is considered by many researchers to be the fourth physical phase after gases, liquids and solids. This phase transition, however, is different: it is gradual rather than abrupt. Atoms with more than one electron tend to lose electrons gradually as increasing energy is applied to a gas. The other three transitions - solid to liquid to gas - are each marked by an abrupt change in the arrangement of the molecules in a substance. In a transition from a gas into plasma, unlike the other three phase transitions, the substance also chemically changes, because at least some molecular bonds are broken. For example, water vapour can ionize into hydrogen/oxygen plasma within a lightning bolt. The process of ionization into plasma is reversible, a characteristic in common with the three other changes of state. If energy is removed from the plasma, the ions will recombine into neutral atoms and molecules. However, in a complex mixture of gases, some new molecules might form as different ions react with one another.
The process of ionization takes place over a series of steps. We will go back to hydrogen as our example. As energy is pumped into atomic hydrogen gas, the kinetic energy of the atoms increases and some atoms become excited. An atom is excited through two possible processes: collisions with other atoms and the absorption of electromagnetic energy (light).
The electron in a hydrogen atom at ground state is located near the nucleus in the lowest possible energy orbital, which is actually a cloud of possible locations rather than a defined circular orbit. The electron in each hydrogen atom can absorb either kinetic or photon energy and move to a higher energy orbital. Electrons can even move into higher energy orbitals that are not ordinarily occupied by electrons. The atom is now in an excited state. If energy is removed from this system, the electron will return to its lowest energy (ground) state by releasing exactly the energy difference between the orbitals. The energy is quantized: it is released as photons of specific wavelengths. For hydrogen, some of these photons are in the visible range. An excited hydrogen atom commonly emits red or aqua-blue photons, depending on the orbital drop, but it can also emit higher energy ultraviolet photons if it absorbs and releases more energy. As energy is added to the system, the electron continues to move up into higher energy orbitals until it eventually breaks free from the atom altogether. The proton nucleus can no longer hold onto the electron, so the atom is now a completely ionized, bare nucleus. If an atom with more than one electron loses some of its electrons, but not all, it is partially ionized. Our hydrogen atom is now a dissociated electron and proton. A gas of these ions is called completely ionized plasma, shown in the simple diagram below.
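Those specific wavelengths fall out of the Rydberg formula, 1/lambda = R(1/n_low^2 - 1/n_high^2). A short Python sketch reproduces the colours just described:

```python
# The quantized emission lines of hydrogen, from the Rydberg formula:
# 1/lambda = R * (1/n_low^2 - 1/n_high^2).

R = 1.097373e7  # 1/m, Rydberg constant

def wavelength_nm(n_low: int, n_high: int) -> float:
    inv_lambda = R * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1e9 / inv_lambda

print(f"{wavelength_nm(2, 3):.0f} nm")  # ~656 nm: the red Balmer line
print(f"{wavelength_nm(2, 4):.0f} nm")  # ~486 nm: the aqua-blue line
print(f"{wavelength_nm(1, 2):.0f} nm")  # ~122 nm: ultraviolet (Lyman series)
```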
Spirit469; Wikipedia
In this simple diagram (left) of completely ionized plasma, all electrons have been stripped from their nuclei, creating an electrically conductive "electron sea."
Under real conditions, low-energy plasmas often contain a mixture of excited, partially ionized and neutral atoms, which emit a beautiful glow in a gas-discharge lamp, for example. The tube below, filled with diffuse low-energy hydrogen plasma, glows pink - a mixture of red and aqua-blue photon emissions.
www.pse-mendelejew.de; Wikipedia
The plasma in the tube above contains mostly neutral and excited atoms and only a few ionized atoms. The degree of ionization in plasma, sometimes called plasma density or electron density, is the number of free electrons in a volume of plasma. Even a partially ionized gas containing only 1% ionized atoms can be considered plasma if it exhibits plasma behaviours such as responding to magnetic fields and conducting electricity.
The plasma in the tube above was not created by heating hydrogen gas until it ionized; this tube is at room temperature. Extreme heat is one way to create plasma. Hydrogen in the interior of the Sun and other stars is extremely high-energy, completely ionized plasma because it is an incredibly hot environment. Any substance, if hot enough, will transition into plasma. Even the exotic solid states of water, which remain solid even up to 400°C, would eventually break down into plasma as the temperature is increased. I should note here that this process is not the chemical ionization of water, in which water (H2O) self-ionizes into hydroxide (OH-) and hydronium (H3O+) ions. That is a chemical equilibrium reaction, and no change of phase occurs.
Rather than through heat, the hydrogen plasma in the tube above was created by applying an electric potential to hydrogen gas. This is another way that energy is applied to a gas in order to ionize it. Instead of electrons being "shaken off" of an atom with high kinetic energy, electrons (which, remember, are charge-carrying particles) are drawn off the atom by a powerful electrical force. It is perhaps analogous to ducklings being swept away from their mother down a stream with a powerful current. It can take tremendous energy to completely ionize a gas. It all depends on which molecules and elements the gas consists of and what kind of energy is applied. When we focus in on what happens to an ionizing atom, it is more accurate to talk about energy in terms of electron volts (eV) than in temperature, which is an average of particle energies. The most tightly bound (and stable) atom is helium. It has two electrons, and therefore two ionization energies - it takes about 25 eV to remove just one electron (this partially ionizes the atom). It takes much more energy (about 54 eV more) to strip the second electron off the helium ion; the atom is then completely ionized. If collisions deliver roughly 79 eV of energy in total, a helium atom can be completely ionized.
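To get a feel for how eV relates to temperature, divide an energy by the Boltzmann constant, about \(8.6\times10^{-5}\) eV per kelvin. This is only an equivalence of scales, since temperature describes an average over many particles:

\[ T = \frac{E}{k_B} \approx \frac{25\ \text{eV}}{8.6\times10^{-5}\ \text{eV/K}} \approx 2.9\times10^{5}\ \text{K} \]

For collisions alone to routinely strip helium's first electron, a gas would need an average temperature of hundreds of thousands of kelvins, although ionization begins at much lower temperatures because some particles in the thermal distribution are far more energetic than average.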
It takes far less energy to excite atoms than to ionize them. Even a relatively small 120V household circuit can light up a small neon lamp, a sealed tube filled mostly with neon gas and a little argon gas at low pressure.
In gases, ionization is reversible. If the energy is removed from the system, such as by turning off the applied voltage (unplugging the hydrogen tube above), the protons recombine with free electrons, the atoms return to ground state, and the atoms recombine into neutral gas molecules (H2). In such a closed system of only a single atomic gas, the gas can be ionized and recombined, transitioned into liquid and solid states and back again, over and over, demonstrating an entirely reversible process. Under real conditions, however, the extreme energy of some plasmas can trigger various chemical reactions as well, particularly combustion reactions. These reactions are non-reversible and add a non-reversible component to the phase change.
In solids, in particular, the process is not easy to describe in terms of a phase change. An example here might be a fulgurite. Lightning strikes sand and leaves behind a hollow tube of glass buried in the ground. Lightning involves an extremely powerful electric potential; the difference between cloud and ground can reach tens of millions of volts. The melting point of silicon dioxide (pure sand) is 1710°C and the boiling point is 2230°C. The interior of a lightning bolt can reach a temperature of 28,000°C, far higher than the boiling point of sand, enough to chemically break it apart and at least partially ionize its silicon and oxygen atoms. When lightning strikes sand, some sand is explosively vapourized into gas and plasma, leaving a hollow tube, where the bolt struck, surrounded by a layer of molten sand that quickly solidifies into glass. If there were impurities in the sand, such as soil and plant debris, these components would have combusted and reacted with each other, resulting in new chemical compounds in the fulgurite's glass. Air molecules in the vicinity are also broken down into plasma temporarily. In this case, a series of very rapid phase changes has occurred from solid to liquid to gas to plasma, but there was also opportunity for chemical reactions to take place, which are irreversible. Even the air itself, which is a mixture of gases, does not transition to a plasma state and then back into neutral gases without some chemical changes taking place. Some highly reactive oxygen ions and excited oxygen molecules will recombine into ozone molecules, for example, creating the fresh-air smell after a thunderstorm.
Why isn't a neon sign or lamp hot? Neon lights might get warm to the touch but they never get hot. Yet we can think of the ionized atoms in plasma as hot. The atoms have at least enough energy to become excited and lose some outermost electrons. The plasma itself contains some free, fast-moving electrons with significant kinetic energy, but in the case of the low-density plasma in a neon lamp, it also contains mostly neutral atoms with far less kinetic and thermal energy. This means that its average temperature will be simply warm to the touch. A neon light contains very diffuse plasma; most of the neon gas remains in a neutral state that absorbs the excess energy of the electrons that collide with it. It doesn't take much energy to create glowing plasma in which only a few outermost electrons of atoms are excited and fewer still are stripped off, creating a small electrical current in the tube that sustains the excited-atom glow. Ordinary air is another story. It is actually a powerful electrical insulator and would make a lousy plasma light. It will ionize and glow (this is a lightning bolt) only when it is subjected to a very powerful potential gradient. The dielectric strength of air, its ability to withstand a potential gradient before breaking down or ionizing, is 3.0 MV/m (million volts/metre), which is much higher than neon's 0.02 MV/m. Dielectric strength is an intrinsic property of a material.
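Those dielectric strength figures make it easy to estimate a breakdown voltage: multiply the field strength by the gap the discharge must cross. For a hypothetical neon lamp with electrodes about 5 mm apart (an assumed gap, just for illustration):

\[ V \approx E_{ds}\,d = 0.02\ \text{MV/m} \times 0.005\ \text{m} = 100\ \text{V} \]

which is why a 120 V household circuit is enough. The same 5 mm gap in air would need about 15,000 V. These are idealized figures; real breakdown also depends on gas pressure and electrode geometry.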
Understanding Plasma is Essential To Understanding Our Universe
Plasmas, no matter how they are created, have unique physical properties. They act quite differently from neutral gases. Like gases, plasmas do not have a definite shape or volume. They can be compressed fairly easily. Unlike gases, however, which are electrically neutral, plasmas respond to electric and magnetic fields. Even though the charges are usually balanced overall in plasma, they are separate and free to move. This means they can produce electric currents and magnetic fields, and they respond to them as well. Electromagnetic forces exerted on plasmas act on them across very long distances. This gives plasma special coherent behaviours that gases never display. The Sun and other stars are examples of plasma created under extreme heat. Their interiors consist of extremely dense, energetic, and completely ionized plasma. Powerful plasma currents carry heat released from ongoing nuclear fusion in the stellar core. These physical currents are also powerful electrical currents because they are moving charges. The moving charges set up incredibly intense magnetic fields that can interfere with one another and snap. These are the mechanisms behind the violent solar weather that can damage communications, satellites and electrical systems 150 million kilometres away on Earth. While stars are made of dense plasma, most of the visible matter in the universe consists of very diffuse, highly ionized plasma. Both kinds of plasma are the key reason why we can see distant stars and various gas clouds. They glow because they are plasmas that contain atoms that are continually being excited and returning to ground state, emitting photons of light in the process. Intergalactic space, interstellar space and interplanetary space are all (extremely diffuse) plasma. Solar wind is plasma, and Earth's ionosphere in the upper atmosphere consists of diffuse atmospheric gases ionized by solar radiation - plasma, in other words. At one point, prior to recombination, the entire universe consisted of completely ionized plasma. It was too energetic to support intact atoms. Expansion (and therefore cooling) allowed that primordial plasma to settle into the first and most abundant simple atoms, hydrogen and helium. For these reasons, some theorists think that plasma theory should play a more dominant role in understanding the cosmology of the universe, alongside general relativity, high-energy astronomy, mechanics and dynamics.
Plasma behaviour can be extremely complex. Understanding the complicated dynamics of stars, for example, requires computer modeling based on plasma theory, much of it belonging to the field of magnetohydrodynamics, a field of study that only got under way in the last few decades. Understanding the electrodynamics of plasma (the behaviour of an electrical fluid) on the largest scales might also help to explain the evolution of galaxies as well as their birth from the collapse of interstellar clouds into stars, which is not yet completely understood. Plasma theory might also shine some light on the mysterious nature of dark matter, required to explain the rotation curves of galaxies. Black holes, quasars and active galactic nuclei must all incorporate plasma dynamics in order for us to fully understand how they work. Extreme plasma dynamics could even represent missing pieces in our understanding of cosmic inflation and the accelerating expansion of the universe, the latter of which is now attributed to dark energy. An intuitive understanding of plasma could be a useful key to understanding the way the universe works.
We know we are intrepid explorers by
nature. Our pop culture reflects that fact. We believe that someday we will overcome
all technological odds to travel to, and set foot on, distant but promising exoplanets, perhaps somewhere where an alien
ecosystem has taken root. Maybe that life is biochemically similar to us. It's only a matter of time.
There is a problem, however, one that has not
been given much play in our sci-fi movies and books. We are not made for outer
space. We are extremely delicate organisms that have evolved within a pocket
shielded from deadly solar wind and
even more violent cosmic radiation.
We live inside a thick envelope of gas surrounded by a powerful planetary magnetosphere,
which in turn is enveloped in an even more powerful and far-reaching stellar magnetosphere.
These powerful magnetic envelopes deflect most harmful radiation away and our
atmosphere does a thorough job of absorbing what still makes it through. As a
result, our bodies are exposed over our lifetimes to just a minuscule fraction
of the fast atomic fragments that bombard every square meter of deep space.
Space is not the empty, cold, benign
backdrop portrayed in most movies. It is a nuclear blast zone. Stars like our
Sun are ongoing fusion reactions sloughing off electromagnetic radiation, protons,
electrons, neutrinos, and small atomic nuclei in every direction. Almost
infinitely more powerful stellar explosions and collisions, happening all the
time across the universe, blast matter away at almost the speed of light.
Nothing slows these deadly particles down as they fly across the light-years.
An astronaut's suit or spaceship hull stands little chance against this
constant cosmic onslaught. No currently available material is strong or dense enough to absorb or deflect cosmic radiation while also being lightweight enough to be
launchable. And a ship-wide magnetic deflection shield powerful enough to work would require an enormous supply of energy over the decades it would take to reach even a relatively nearby exoplanet; such a shield remains a distant, if not impossible, technological dream.
Despite the inhospitable nature of space, we
have inhabited it, at least close to home where we have some protection from
cosmic radiation. The International Space Station (ISS) is an artificial biosphere that takes care of the physical needs of a few
humans at least on the scale of many months. On its predecessor, Mir,
Valery Polyakov, a Russian cosmonaut, spent 437.7 consecutive days in space during 1994/1995. What we might forget is that this is time spent in space orbiting
close to Earth. Mir, for example, maintained a near-circular orbit of between 352 km and 374 km above the surface of Earth, within the second outermost layer of Earth's atmosphere, the thermosphere, which overlaps the ionosphere, the ionized region that forms the inner edge of Earth's magnetosphere. At this altitude the
density of air is extremely low so there is no atmospheric protection from
solar or cosmic radiation. Air here is so diffuse that a single molecule of
oxygen would have to travel on average one kilometer before it collided with
another molecule. However, Mir was well within Earth's thick protective
magnetosphere. Even at its narrowest region, where it is compressed by incoming
solar wind on the side of Earth facing the Sun, the magnetosphere is about
60,000 km thick.
How does atmospheric cosmic and solar
radiation protection work? Our atmosphere is transparent to low frequency electromagnetic (EM) radiation
emitted from the Sun. Sunlight, for example, travels through air to bathe us on the
surface. High-frequency EM radiation from the Sun, such as X-rays and gamma
rays, is absorbed by the plasma (consisting of electrons and electrically charged atoms and molecules) in Earth's ionosphere. In addition to EM
radiation, the Sun also emits particles in all directions, most of which are
protons, and they are traveling very fast, about 400 km/s. Earth's rotating
metallic core generates a powerful magnetic field that deflects most of these
charged particles away from the surface. Our atmosphere is also opaque to this
radiation. It absorbs what isn't deflected. Consider
Mars for a moment. It possesses neither a deflecting planetary magnetosphere
nor a thick highly absorptive atmosphere. Its thin carbon dioxide-rich atmosphere provides some radiation protection, but not much. It is roughly 2.5 times less protective than Earth's very thin thermosphere through which the ISS orbits. Although its surface at the equator approaches a livable temperature, the continuous bombardment of radiation makes Mars more inhospitable than you might think.
While astronauts on Mir and now the ISS have
very little atmospheric radiation protection, they are protected from solar
wind by Earth's magnetosphere. Thanks to the Sun's magnetosphere they are also mostly protected from far more powerful radiation
- gamma rays and particles with far higher kinetic energy than any solar wind
particle. Yet even here close to Earth, astronauts can only stay on ISS for a
limited time. A very small amount of cosmic radiation makes it through our
solar magnetosphere, meaning that the astronauts do receive a low but cumulative
dose of cosmic radiation. A 2014 study by radiation expert Francis Cucinotta indicates that ISS astronauts exceed their lifetime safe limit due to cosmic
radiation in just 18 months for women and two years for men.
Protons (hydrogen nuclei) and, to a lesser extent, larger atomic nuclei (such as alpha particles, which are helium nuclei), traveling near the speed of light (300,000 km/s), pervade every square centimeter
of deep space. The ISS would never be feasible in this constant blast zone.
However, protected within the magnetospheres of the Sun and the Earth, an
aluminum hull just a few millimetres thick shields about 95% of the radiation
that strikes it. Even thick plastic stops this radiation, which consists mostly
of EM radiation along with relatively low-energy solar protons (averaging 400
km/s) that manage to pass through Earth's magnetosphere. We have to say average velocity here because the Sun
isn't static. Magnetic storms sometimes rage across its surface, accelerating particles up to 3200 km/s, in
the case of exceptionally violent storms called coronal mass ejections.
A coronal mass ejection is a mass of highly magnetized plasma, a chunk of the
Sun, hurled off into space when a magnetic field snaps. Velocities up to 3200
km/s have been recorded by the LASCO instrument onboard the SOHO satellite orbiting between
the Sun and Earth. Even this accelerated plasma, which would devastate Earth's electrical and communications systems,
has nowhere near the particle-to-particle punch of cosmic radiation, with velocities close to 300,000 km/s.
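To see how lopsided that comparison is, here is a minimal back-of-envelope sketch (my own illustration, using the classical kinetic energy formula, which is accurate at solar wind speeds but only a rough label for the relativistic case):

# Rough comparison of proton kinetic energies (typical values assumed).
M_P = 1.67e-27        # proton mass, kg
J_PER_EV = 1.602e-19  # joules per electron volt

def classical_ke_ev(v):
    """Classical kinetic energy (0.5*m*v^2) of a proton in eV; valid well below light speed."""
    return 0.5 * M_P * v**2 / J_PER_EV

print(f"Solar wind proton, 400 km/s: {classical_ke_ev(4.0e5):,.0f} eV")   # ~800 eV
print(f"CME proton, 3200 km/s:       {classical_ke_ev(3.2e6):,.0f} eV")   # ~53,000 eV
print("Cosmic ray proton:           ~1,000,000,000 eV (1 GeV, relativistic)")

Even the fastest CME protons carry only tens of kilo-electron-volts each; a 1 GeV cosmic ray proton carries roughly 20,000 times more energy per particle.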
As mentioned, materials that are both light and effective at shielding protons traveling close to light speed are still in development. Most cosmic radiation comes from supernovae. It consists of stellar particles
violently spewed in every direction when a high-mass star reaches the end of
its life. The shock wave of the explosion is powerful enough to accelerate
material to near light speed. Only in extremely powerful accelerators can
particles on Earth approach such velocities.
Radiation itself can be a confusing
subject but at its core the concept of radiation is simple: it is an emission or
transmission of energy. It comes in four basic kinds: electromagnetic (EM) radiation (radio waves, visible light, gamma rays, etc.), acoustic radiation (sound waves, seismic waves), gravitational radiation (gravitational waves) and particle radiation (on Earth we typically deal with alpha radiation, i.e. alpha particles; beta radiation, i.e. electrons; and neutron radiation, i.e. neutrons). In terms of cosmic radiation, we are especially interested in protons, alpha particles and, to a much lesser extent, larger atomic nuclei.
Radiation is either ionizing or non-ionizing. With
the exception of microwaves, sonic devices and intense or prolonged light exposure that can cause photochemical burns, non-ionizing
radiation tends to present a minimal hazard to human health. This radiation simply doesn't have
enough energy to ionize atoms in living tissue. Generally, a particle or wave must carry
more than 10 eV (electron volts) of energy to ionize atoms and therefore damage
biochemical bonds in molecules, but the line is blurry because some atoms
ionize more easily than others do and some chemical bonds break more easily than others do. Consider the energy of a visible green light photon, about 2 eV. On its own it is harmless (unless many such photons are concentrated into a laser beam, which can burn your retinas and blind you). Radiation of 10 eV or more, however, has enough energy to strip
electrons off of (ionize) atoms and
molecules, breaking chemical bonds between them in the process. This radiation
can be harmful and even lethal to humans.
Short wavelength (very energetic) EM radiation such as X-rays and gamma rays
can break chemical bonds in DNA for example, creating genetic damage that can
eventually lead to cancer. It can also damage biological proteins and enzymes,
impairing cell function. All particle radiation from radioactive materials or
in the form of solar radiation and cosmic radiation is ionizing. It causes biological
damage at the microscopic level in our bodies. That said, the National Cancer Institute in the United States explains why radiation can sometimes be a good thing. Targeted at cancer cells, which are dividing uncontrollably, ionizing radiation damages their cellular DNA so severely that the cells program themselves to die, shrinking the tumour.
Most cosmic radiation consists of
ultra-fast protons, and most of them are blasted out of exploding stars. A
proton, a particle normally confined inside an atomic nucleus, is an
exceptionally tiny object, weighing less than 2 × 10^-27 kg. Despite its minuscule mass, a proton blasted from its atom and accelerated to near light-speed (now we call it a relativistic proton) packs a catastrophic punch, around 1 GeV (G means giga, or billion, electron volts). Compare this to the low end of ionizing radiation at 10 eV: the relativistic proton carries roughly a hundred million times that energy, and it delivers it all within an area about one millionth of a nanometre wide (a proton is about 10^-15 m in diameter).
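For perspective, that energy converts to everyday units as follows (a straightforward unit conversion):

\[ 1\ \text{GeV} = 10^{9} \times 1.602\times10^{-19}\ \text{J} \approx 1.6\times10^{-10}\ \text{J} \]

A tenth of a billionth of a joule sounds negligible, but it is a hundred million times the roughly 10 eV ionization threshold, and it arrives concentrated on a target about \(10^{-15}\) m across.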
You might expect a single proton at such
high velocity to fly right through your body, causing minimal damage along its
sub-microscopic course. After all, a general rule in physics is that the higher
the kinetic energy of a particle, the smaller the fraction of its kinetic
energy tends to get deposited in the material. The problem is that the proton
will, more than likely, glance off atoms in your body along the way. A homemade analogy here might be that while a massless "slender" gamma photon elegantly dances through the atoms, a massive proton acts like a bull in a china shop. Those
atoms in the proton's path will be ionized, and they will ionize other atoms and so on, depositing
energy along an ever-widening path of damage in the body. This process is
technically called linear energy transfer (LET), which I will explain in a moment. An astronaut onboard the ISS
might be hit by only a few cosmic protons during a months-long mission but new
research indicates that even low-dose infrequent cosmic radiation exposures can
cause significant long-lasting damage, particularly to the brain. An astronaut
en route to Mars or on the surface of Mars, with only the Sun's magnetosphere
as protection (Mars doesn't have one), will receive a much higher exposure to
cosmic radiation than an ISS astronaut. Of most concern is the cumulative
damage to our fragile and very slow-to-heal brain.
Primary cosmic radiation originates outside the solar system and some of it comes from outside the Milky Way galaxy. Roughly 90% of cosmic radiation consists of protons (hydrogen nuclei). About 9% consists of heavier alpha particles (helium nuclei) and about 1% consists of still heavier nuclei, ranging from lithium, beryllium and boron up to iron. These
heavier so-called HZE (high atomic number and energy) ions,
though far less abundant, are especially damaging. Though very sparse even in
deep space where there is no magnetic shielding, these infrequent impacts would
contribute significantly to an astronaut's overall radiation dose.
A radiation dose is measured as an absorbed
dose. The SI unit is the gray (Gy). It measures the radiation's energy
deposited in the body. Radiation is also measured by its action upon the matter of the body, by its linear energy transfer (LET): the amount of energy lost per distance traveled (a compact formula appears at the end of this section). Two
equivalent absorbed doses can do vastly different amounts of damage in the body
depending on their LET values. A high LET means that the radiation leaves
behind more energy and therefore causes more atoms to ionize in its wake. A
higher mass particle, such as an alpha particle, will leave a track of higher
ionization density than a proton, if both are going the same velocity when they
strike the body. A gamma photon will have a far lower LET value yet. The chemical make-up of our cells also plays a significant role determining ionization damage. Low LET radiation, such as X-rays or gamma rays, ionizes
water molecules inside cells. It breaks them up into H+ and OH-
ions (also called radicals). It does this damage over a long track through the
tissue so it tends to leave one event per cell. The often single H+ and OH-
ion pair simply recombines to form water once again, releasing some energy in the process.
When ionization occurs over a shorter wider track, many H+ and OH-
ions can form within each cell. A pair of OH fragments near each other can combine into H2O2 (hydrogen peroxide) instead. This molecule causes additional oxidative damage to proteins, lipids and DNA in the cell on top of the ionization damage to various biological molecules. Dense clusters of high-LET ionization damage in the body mean that cosmic particle radiation is extremely dangerous to astronauts,
especially to their brains as we will discuss below in more detail.
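Formally, the LET introduced above is just the energy a particle deposits per unit of path length, usually quoted in keV per micrometre of tissue (a standard radiobiology definition, not specific to this discussion):

\[ \mathrm{LET} = \frac{dE}{dx} \]

By convention, radiation depositing less than roughly 10 keV/µm (X-rays, gamma rays) is called low-LET, while alpha particles and heavier ions can deposit tens to hundreds of keV per micrometre along their short, dense tracks.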
Radiation Units: A Frustrating Labyrinth
Articles about radiation can be very
difficult to grasp because so many unfamiliar units are thrown around seemingly
at random and sometimes interchangeably. A large part of this confusion stems
from the use of both "old" American units and SI (standard or metric
international) units. Here in North America the switch over to SI has been particularly
slow in the field of nuclear science in part because the stakes are so high. Even
a small conversion error can lead to dangerous radiation exposures. In addition, a number of different units must be used to accurately describe radiation as it
travels from its source, through the atmosphere or through space and then into
our bodies. The rate of emission of radiation from its source is measured in
curies (old) or becquerels (SI). Sometimes radiation emission is measured in
terms of emission energy instead of rate. In this case it is measured in
electron volts (eV) or joules (J, an SI-derived unit). I used eV earlier to
compare ionizing to non-ionizing radiation energy. Once the radiation is
emitted and is now ambient, its ambient concentration is measured in roentgens
(old) or coulombs/kg (SI). Once ambient radiation strikes a living body or
other object, the raw amount that object absorbs is measured in radiation absorbed doses (or just rads; old) or grays (SI). 1 gray is equivalent to 100 rads; these and related conversions are summarized at the end of this section. Rems (old) and sieverts (SI) further complicate the picture. These units measure the effective dose, or dose equivalent, in other words the biological harm caused by radiation in living tissue. Rather than describing a radiation disaster in terms of rads, for example, sieverts or rems can offer a better description of how the
radiation exposure will affect human health. This measurement reflects the
different LET values of different kinds of radiation as well as the kind of
tissue receiving the dose. Some body tissues are more sensitive to radiation than others.
This measurement of dose equivalent is still an inexact science, however, as
there are so many not entirely known biological factors to consider. Randomized
experiments that test radiation damage to living human tissues are, of course,
unethical. Animal study results are sometimes difficult to extrapolate to
humans because their bodies, tissues, and physiologies are different. We also
still don't know much about how our human tissues react to various kinds of radiation
exposures, especially from cosmic radiation.
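For reference, the old and SI units mentioned above are related by fixed conversion factors, and the effective dose is computed from the absorbed dose using a radiation weighting factor (the values below follow the standard ICRP conventions):

\[ 1\ \text{Ci} = 3.7\times10^{10}\ \text{Bq}, \qquad 1\ \text{R} = 2.58\times10^{-4}\ \text{C/kg}, \qquad 1\ \text{rad} = 0.01\ \text{Gy}, \qquad 1\ \text{rem} = 0.01\ \text{Sv} \]

\[ H = w_R \times D \]

For example, 0.1 Gy absorbed from gamma rays (\(w_R = 1\)) is an effective dose of 0.1 Sv, while the same 0.1 Gy absorbed from alpha particles (\(w_R = 20\)) is 2 Sv, twenty times the biological harm.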
The Edge of a Mystery: The Brain and Its Response to Radiation
Scientists once assumed that brain tissue
is less sensitive to radiation damage than other tissues because brain cells
tend to multiply at a much slower rate than other cells in the body, such as
gut and skin cells, for example. Cells that divide less often spend less of
their time in the process of division. The DNA in a quiescent cell is tightly
coiled into a dense structure called chromatin. This structure is very stable
and resistant to radiation damage and it offers a very small target as well.
When a cell starts the process of dividing, its DNA must unravel so the
machinery of DNA synthesis can replicate the entire genome. The DNA in this
state is highly susceptible to damage. Luckily, cells have mechanisms that
detect and repair DNA damage when it occurs, but if the radiation dose is high
enough, the dividing cell can't fix all the damage before it completes replication. It will then either program
itself to die off or it will pass on the DNA damage as mutations.
Based on instances of acute radiation
exposure studied after radiation accidents and war, tissues in which cells
multiply most rapidly such as those in the gastrointestinal tract, the spleen
and bone marrow, tend to present symptoms of radiation damage (radiation sickness) first. These observations are based on whole-body one-time exposures to an absorbed dose of between 6 and 30 Gy.
Neurological symptoms (including cognitive defects) typically only manifest
after a dose higher than 30 Gy occurs. This is a catastrophic dose; the victim will die within two days. There is a fascinating, if sobering,
chart of whole-body dose effects available online.
Scientists are discovering that the effects of acute exposure cannot be
extrapolated to the effects of sustained low-dose exposure, especially
long-term effects. The results don't take into account cellular damage that
takes weeks, months or years to manifest. More importantly, different tissues
in the body respond uniquely to long-term radiation exposures with different LET values
in complex ways that are still poorly known. The study we are most interested in
here (explored in detail below) considers both Gy and LET values as indicators of damage, in this case, to
brain cells. The results are based on both behavioural changes and structural
tissue changes in mice exposed to radiation that mimics cosmic radiation.
Until a few years ago, NASA and other space
agencies only suspected that long-term cosmic radiation caused cognitive
impairment in astronauts. This suspicion was largely based on comparing clinical data from cranial radiotherapy and radiation treatment for brain cancer with before/after cognitive test results of astronauts. The
extrapolation from clinical data to cosmic radiation exposure, as they knew,
was problematic, much like comparing apples to oranges. A typical daily dose
during cranial radiotherapy is about 2 Gy.
Compare this to the far lower daily dose, roughly 1/4000th of that, of around 0.48 mGy expected for an astronaut during a round-trip and stay on Mars. Adding that exposure up for a 300-day trip, for example, still comes to just 0.144 Gy (the arithmetic is shown below), far less than the
dose of a single cranial radiotherapy treatment. Travel to Mars shouldn't be a
problem, right? The trouble with comparing the two doses is that they have
vastly different LET values. In the clinic, X-rays and gamma rays are most often used.
These energetic EM photons are not nearly as densely ionizing as cosmic
particle radiation because a massless photon particle has far less momentum
than a particle of mass traveling at nearly the same velocity.
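The arithmetic behind that comparison is simple, using the figures quoted above:

\[ 0.48\ \text{mGy/day} \times 300\ \text{days} = 144\ \text{mGy} = 0.144\ \text{Gy} \]

about one fourteenth of a single 2 Gy radiotherapy fraction. It is the LET of the radiation, not the raw gray count, that makes the cosmic dose so much more worrying.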
To really understand how long-term cosmic
radiation will affect astronauts during trips to Mars or on possible future
deep space missions, you need direct experimentation, using relativistic
particles. How would you design an experiment to do this? Charles Limoli, a
neuroscientist and radiation biologist at the University of California School
of Medicine, has taken some of the first steps in answering the question. His research
findings to date can be found in the February 2017 issue of Scientific
American. It's an excellent read that inspired me to write this article. He explains not only the radiation problems that current astronauts face based on current research findings, but he also outlines the challenges of getting the data
scientists will need to plan future deep space travel. A reference paper for
this article called What Happens to Your Brain on the Way to Mars by Vipan Parihar et al. provides a good background for this research. Here I try to provide a glimpse into his work, and offer some insight into the challenges and excitement of
designing new experiments. The findings are preliminary so far, but they are quite stunning. Undoubtedly a lot of future work will build upon them.
In this case, mouse brains are used as
living models for astronaut brains in deep space. There are three significant
challenges to this approach. First we have to ask if a mouse brain is similar
enough to a human brain in terms of structure and function to make a valid
model. We also have to ask if a mouse brain reacts to radiation the same way a
human brain does. Rats are used as extensively as mice in neurological studies.
There is a mountain of neurological data using either species, suggesting that
they are reliable and useful models. I wondered, though, if rats would make a
better model in this case. That turned out to be an interesting question. A
2016 research paper by Bart Ellenbroek and Jiun Youn explores the differences between rats and mice and how reliable they are as models
for human brain behaviour. The species differ in their behaviour, as anyone who has worked with both species knows. For example, rats
tend to be more comfortable with lots of human physical contact while mice can
be stressed by similar attention. Unexpected stress on an animal could affect
the results of behavioural tests. The researchers also point out that, while both
rodent brains are very similar to human brains, there are fundamental
differences between them that could affect the reliability of test results. Limoli's
research presents a unique physical limitation on what kind of animal model you
can use – it has to fit into the accelerator target area.
An additional thing to keep in mind is that the brain is still not fully understood. Neurology, neuroscience and psychiatry are very active fields of research. Still, at least one basic fact about the brain seemed to be firmly established: the brain contains two basic types of cells, with neurons performing the twin starring roles of function and structure and with glial cells playing a supporting role. This is called the neuron doctrine. Research over the last few years calls these assumptions into question. For example, a 2013 Scientific American blog post by Douglas Fields points out an unexpected but key difference between mouse
brains and human brains that suggests that glial cells are far more involved in function than researchers realized. It was assumed that glial cells can't do any electrical signalling. Until now they've
been thought of simply as physical and physiological support cells for neurons.
The researchers transplanted human glial cells into mouse brains and discovered
that these mice soon significantly surpassed their untreated siblings in both memory and
learning. Somehow human glial cells imparted an improved, perhaps more humanlike, cognitive ability into
a mouse mind. A specific type of cultured human glial cells called astrocytes
are in fact much larger and have a more variable morphology than mouse
astrocytes. It is a clue that these cells might be involved in the evolution of
human intellect. Researchers now know that glial cells not only propagate calcium signals over long distances but they also form electrically coupled synchronized units through gap junctions (similar to how heart cells are synchronized to contract during a heartbeat).
The second question is how do we reliably test
cognitive function in mice before and after radiation exposure? Fortunately
mice have been bred and used extensively for various kinds of cognition and
memory research for decades. There is lots of data to draw from as well as a
large collection of reliable cognitive and memory test protocols available to
use. This 2015 compilation paper by SM Holter et al. provides an overview of those tests. Third, how do we
expose our test animals to specific doses of relativistic particles? A tremendous
amount of energy must be used to accelerate particles to nearly the speed of
light. Few natural mechanisms outside of stellar explosions can do the job. The
only way to do this in a lab is to expose the mice to radiation inside a
particle accelerator. Fortunately, the NASA Space Radiation Laboratory, commissioned in 2003, is designed for exactly these kinds of experiments. There, radiation consisting of a variety of particles with a range of very high energies can be tailored to resemble cosmic radiation.
Because of practical time constraints, the mice received a single dose of radiation equivalent to many months to years of actual cosmic radiation exposure, according to Limoli's article. The article states that dosages of either 5 or 30 cGy (0.05 or 0.3 Gy) were used, which is very low, approximately 50 times lower than an expected round trip to Mars
radiation dose. Would an astronaut's brain, exposed to the same amount of
radiation but spread out over many months, have enough time to repair the
damage between intermittent particle exposures? Researchers are not sure, but the results of this experiment are not promising, as we will see in a moment. This research is an essential first step to
find out just how big a problem long-term cosmic radiation exposure could be for
future astronauts.
A healthy brain neuron consists of a soma (cell body) which contains the nucleus and other organelles, dendrites (branched projections) and an axon (a long
slender electrically conductive projection); see the diagram below. A neuron connects to other neurons through specialized connections called synapses.
(Wikipedia public domain)
One neuron
can contact another neuron's dendrite, soma or, less commonly, its axon via a
synapse. At the synapse an electrical signal is converted into a chemical
signal by the release of neurotransmitter molecules into the gap between two cells. The neurotransmitter initiates an
action potential in the
connecting neuron. See the enlarged box in the diagram above. One neuron can
connect to many other neurons to form complex neural networks in the brain.
To start, a healthy mouse explores toys
in a box. Over a period of hours and days, its brain physically changes as it learns and forms
memories. The neurons in its brain form new dendritic branches and trees and create new synaptic connections. Dendritic branching can be very extensive
and complex. A single neuron can receive as many as 10,000 dendritic inputs
from other neurons.
The toys in the box form part of a task that evaluates a mouse's cognition and memory abilities. In this task, called the novel object recognition task, a mouse is placed in a box containing toys to explore. After
exploration for a fixed amount of time, the mouse is removed from the box and the locations of the toys are changed and some are replaced with new
toys. The mouse is returned minutes, hours or days later to the box to explore
the novel landscape. A healthy mouse is curious - it will quickly notice changes
and spend extra time exploring those changes. The time spent on checking out
new things compared to overall time present in the box is that mouse's
discrimination index, a reliable measure of its memory and learning ability.
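As described here, the discrimination index is essentially a ratio of exploration times (published versions of the novel object recognition task define it in slightly different ways; this form follows the description above):

\[ \mathrm{DI} = \frac{t_{\text{novel}}}{t_{\text{total}}} \]

where \(t_{\text{novel}}\) is the time spent exploring changed or new objects and \(t_{\text{total}}\) is the total time spent exploring. A curious, healthy mouse scores high; a mouse that no longer distinguishes new from old drifts toward chance.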
Memory and learning take place primarily within the brain's prefrontal cortex and hippocampus.
Limoli and his group discovered that a single dose of low-level cosmic-like
radiation exposure greatly reduced the mice's discrimination index values. The
deficit stood out in stark relief when irradiated mice were given the same follow-up tasks as
healthy mice. Their curiosity was greatly diminished. They didn't seem to
recognize changes made to their environment.
Six weeks after a single exposure of either
5 or 30 cGy of radiation, both very low doses representing sparse particle
impacts, the performance of the mice dropped on average by a whopping 90%,
regardless of dose. Furthermore, they also found that these impairments lasted
12, 24 and even 52 weeks after the exposure, suggesting that the damage to the
mice's brains didn't heal, at least within a year after being damaged.
The researchers confirmed the physical
damage to the brain by imaging sections of the medial prefrontal cortex of healthy
non-irradiated mouse brains and of irradiated mouse brains. The imaging data
revealed significant reduction in dendritic branching as well as a significant
loss of dendritic spines. A dendritic spine is a tiny protrusion from the main shaft of the dendrite that contains the
synapse that allows the dendrite to receive signals. If dendrites are the
branches on the brain "tree," then spines are the tree's leaves, as Limoli
describes it in his article. These structures are very plastic.
They undergo constant turnover in a healthy brain. The growth of dendritic
spines reinforces new neural pathways as an anatomical analogue of learning.
They also maintain memories. Environmental enrichment (providing lots of
learning opportunities) leads to increased dendritic branching, increased spine density and an increase in the number of synapses in the brain. Cosmic radiation exposure not only undid the changes associated with learning
and new memory formation. It severely impaired the normal (baseline) cognitive
function of the brain as well.
Only the medial prefrontal cortex, a region
known to be associated with learning and memory, was imaged but it seems
reasonable to assume that the radiation attacks the physical and functional
integrity of synaptic connections across the entire brain.
Conclusion
This is disconcerting news to those of us
who dream of mankind eventually traversing the universe to explore other
planets and moons in person. Imagine the spectre of our best and most brilliant
men and women gradually losing their cognitive abilities, losing their memories
and themselves as they travel through
deep space. Cosmic radiation would impair astronauts in the most critical way.
The skills and mental acuity required to deal with maintenance issues and sudden
problems as they arise during a long-term space flight are what set astronauts
apart from the rest of us. Stasis would
be no solution, unless it is inside some as-yet-undiscovered material that
can block out cosmic radiation. The fact is, we evolved inside a layered
magnetic cocoon where the threat of cosmic radiation doesn't exist. We'll have
to rely on another facet of our evolution, our ingenuity, to get past this hurdle.