Tuesday, September 2, 2014

Interpretations of Quantum Mechanics

Quantum mechanics should make you shiver uncomfortably at least a little bit. It's not easy to understand (if anybody truly does) and it's not easy to interpret. We started our journey with our last article exploring the Dirac equation, which describes electron spin and incorporates special relativity. Studying this equation, even as non-mathematicians, offers us some insight into the nuts and bolts of quantum mechanics. But the scientific field of quantum mechanics goes much further than the Dirac equation: It is the study of all matter and its interactions at the subatomic scale. It encompasses black body radiation, the quantization of light and matter (the Bohr model of the atom), quantum entanglement, quantum field theory and quantum electrodynamics, just to name a few fields of study.

Because the quantum world is far too small to probe directly, we must rely on mathematical constructs and formulations. And this is where things get dicey, because it opens up questions as to how we translate the math into reality. Some experts even wonder if we should. Is mathematics the reality? As scientists, we want to understand nature, and in the quantum world we cannot directly know what that nature is, what "real" is. We are left in a quandary where various continually evolving interpretations are the only scaffolds we have to rest our understanding on, and every one of them may feel shaky in some way or another.

Locality Versus Non-Locality

Does any object in this universe exist independently of all other objects? It seems that they do. Consider a rogue planet hurtling through the empty vacuum of space, for example. Unless another object comes into direct contact with it, we expect that planet to maintain its velocity and its trajectory forever. At first thought, the only exceptions we can think of to this rule are the force of gravity and any electric and magnetic forces acting on the planet. This is the local view of the universe, which means that an object is only directly influenced by its immediate surroundings. Locality is both common sense and intuitive because it reflects how we personally perceive the universe to operate, at least from our classical macroscopic point of view. The problem is that physics seems to have ways of sidestepping this requirement of common sense. Non-locality is one of the most unpalatable, seemingly nonsensical aspects of quantum mechanics. Let's begin by reflecting briefly on physics' past perilous scrapes with non-locality.

Gravity and Electromagnetism Eventually Come Around to Locality (Depending On Your Definition of What Real Is)

Isaac Newton's law of universal gravitation was formulated in such a way that gravity (from a nearby star, for example) acts on a planet without any medium to transmit its force. How can a force act without a medium of some kind to transmit it? This argument is encapsulated in the principle of locality.

Newton himself was certain that some kind of gravitational transmission medium would eventually be discovered, but it wasn't until Albert Einstein reformulated gravity two hundred years later into general relativity that locality was restored to gravity. He did this by introducing the stretchy, bendable four-dimensional fabric of space-time, which pervades the entire universe. Mass bends space-time, so while we cannot directly observe the fabric, we can think of the planet as a mass interwoven into the same fabric as the star, with both objects stretching the fabric around them into four-dimensional pits or gravity wells. Physical locality is restored to the universe IF you consider the space-time fabric of general relativity to be physically real.

As for charge and magnetism, Coulomb's law of charges was originally non-local in nature, bringing with it the same undesirable spectre of "spooky action at a distance," to quote Einstein. The notion of an electric field slowly emerged as a way to at least try to deal with the glaring non-locality of interactions between charges separated by a distance. A field could be visualized around an object, and other objects could be visualized as acting directly on that field. Was the field anything more than fancy physics bookkeeping? No. It was not a real thing, at least not until Maxwell's equations came along and superseded Coulomb's law, restoring locality to both electric and magnetic forces. This was done in an ingenious way by assigning energy and momentum to both electric and magnetic fields, making the field a physically "real" thing in other words. Even more elegantly, I think, Maxwell came up with a totally charge-free solution to his equations. A chain of oscillations of an electromagnetic field was shown to propagate at exactly the speed of light. We could now think of the electromagnetic force as an oscillation, a chain of events in other words, directly connecting one object (light transmitter) to another object (light absorber or reflector), and thus restoring locality to this force. This can be done without resorting to a medium (such as the 19th century luminiferous ether) to transfer light waves. The luminiferous ether was dealt its death blow when special relativity destroyed the single frame of reference required by the ether as a transfer mechanism, and as it did so it also imposed an invariant speed limit on the propagation of light.
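Maxwell's charge-free solution makes a claim we can actually check with simple arithmetic: the propagation speed that falls out of his equations, 1/sqrt(mu0 * epsilon0), is the measured speed of light. Here is that check as a tiny Python sketch (the two constants are the standard CODATA values):

```python
# A quick numerical check of Maxwell's result: electromagnetic waves
# propagate at c = 1 / sqrt(mu_0 * epsilon_0).
import math

mu_0 = 1.25663706212e-6       # vacuum permeability, N/A^2 (CODATA value)
epsilon_0 = 8.8541878128e-12  # vacuum permittivity, F/m (CODATA value)

c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(f"c = {c:.6e} m/s")     # about 2.998e8 m/s, the measured speed of light
```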

At this point physicists had the universe fairly well sewn up. We had physical objects, or particles, and we had physically real fields. We could offer a complete - and local - description of everything going on with our earlier example of a planet hurtling through space, as well as of any phenomenon we could think of. This is the local realist view of our universe. It makes logical sense to us because this is the kind of universe we experience day to day. The classical laws of motion, of momentum, of action and reaction, rule our everyday universe.

Is The Universe Nonlocal?

All objects are composed of matter: atoms and their constituent subatomic particles. What seems to us a perfectly independent object begins to look anything but when we probe it at the subatomic or quantum scale. This is the scale at which classical mechanics gives way to quantum mechanics. It is at this scale that we find clues to the real nature of the universe, and where the independent existence of objects may be revealed to be mere illusion.

The EPR Experiment

The EPR experiment, one of my favourites, smacks us hard in the face with the weirdness of quantum mechanics (QM). It sweeps away our expectations of how matter works in this universe and makes us question the separate reality of objects. It introduces us to the intensely creepy, counterintuitive, and well-documented phenomenon of quantum entanglement.

The EPR experiment is a thought experiment that was designed by Albert Einstein, Boris Podolsky and Nathan Rosen (E-P-R) and published in 1935. It was intended to showcase a major flaw in the then-young theory of quantum mechanics, and that flaw was its unintended consequence of quantum entanglement. Many leading physicists believed that, like gravity and electromagnetism before it, quantum mechanics was simply incomplete and, with additional work, would eventually snap into line with locality. The problem is that it didn't snap into line and it still doesn't, even after countless variations of this experiment have been performed which verify nonlocal quantum behaviour and demonstrate statistically that the local realist view of the universe is incorrect. That being said, several theoretical physicists have made arguments against non-locality. I will explore some of the most prevalent counter-theories later on.

The EPR experiment considered two entangled particles, and to appreciate it, we need to know what quantum entanglement means. Physical qualities of particles tend to come in pairs called conjugate variables (you will also see them called complementary variables). Examples for a particle are position and momentum, or the components of a particle's quantum spin measured around different axes ("up" versus "down" spin). Mathematically these qualities are treated as related pairs of variables, with the Heisenberg uncertainty relation between them: when the momentum of a particle is measured and determined, for example, its position becomes indeterminate, and vice versa.

This is a counterintuitive aspect of quantum mechanics but it is a relatively straightforward consequence of the mathematics of the Schrodinger wave equation. Let's say we want to localize a particle. That particle is described mathematically as a wave function. Any aspect of that particle - its position, its momentum, etc. - is described in that wave function as a probability distribution, thanks to the Born rule. This means that the particle's position is uncertain and it could be located anywhere within that distribution (an example of this distribution is the electron cloud). If we think of the wave function as the sum of many waves, we can start to narrow down the position by adding up more and more plane waves, each describing a possible location. When you do this you end up with a more and more localized distribution, and the position converges toward a single value. But there is a cost to doing this, because position and momentum are conjugate variables. The momentum of the particle is now, as a consequence, a mixture of all these added waves, each with a different momentum. When they are summed up you get a wide probability distribution for momentum, and this momentum uncertainty grows as the position uncertainty shrinks. This is the Heisenberg uncertainty principle in a nutshell, and a hint that the wave aspect of matter lends matter its counterintuitive nature.
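This trade-off is easy to see numerically. Here is a minimal Python sketch (my own toy example with assumed grid parameters, in natural units where hbar = 1) that builds progressively narrower Gaussian wave packets and Fourier-transforms them into momentum space; the product of the two spreads stays pinned near the Heisenberg bound of 1/2:

```python
# A minimal sketch (assumed toy parameters, natural units with hbar = 1):
# squeezing a wave packet in position widens it in momentum, because the
# momentum distribution is the Fourier transform of the position one.
import numpy as np

N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

for sigma in (4.0, 1.0, 0.25):                    # tighter and tighter packets
    psi = np.exp(-x**2 / (4 * sigma**2))          # Gaussian wave function
    psi /= np.sqrt((np.abs(psi)**2).sum() * dx)   # normalize the position density

    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)       # momentum grid (hbar = 1)
    pk = np.abs(np.fft.fft(psi))**2               # momentum-space density
    dk = abs(k[1] - k[0])
    pk /= pk.sum() * dk                           # normalize it too

    sx = np.sqrt((x**2 * np.abs(psi)**2).sum() * dx)   # position spread
    sp = np.sqrt((k**2 * pk).sum() * dk)               # momentum spread
    print(f"sigma_x = {sx:.3f}  sigma_p = {sp:.3f}  product = {sx * sp:.3f}")
```

Each line prints a product of about 0.5: narrow the packet in position and the momentum spread grows to compensate.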

Quantum entanglement involves two or more particles whose quantum states, or wave functions, cannot be described independently of one another. To refresh our memory, a wave function in quantum mechanics describes the quantum state of one or more particles, and it contains all the information about the particle or group of particles, treating it (or them) as a system in isolation.

Here, the wave function must be described for the group as a whole. The important consequence of entanglement is that any action taken on one particle in an entangled pair must be considered as an action taken on both particles as a whole. In other words, two entangled particles can no longer be described as two independently evolving wave functions or probability distributions. Instead they become components of a more complex probability distribution that describes the particles together. This too adds to the counterintuitiveness of quantum mechanics. Does this mean that in reality two entwined particles overlap into one physical entity, one that exists in two locations simultaneously? Consider that macroscopic objects as large as diamonds have been shown to exhibit quantum entanglement. In that experiment, two separate diamonds shared a single vibration called a phonon. Are they two separate diamonds or are they superimposed (much like Schrodinger's cat)?
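The claim that an entangled pair has no independent description can be made concrete with a few lines of linear algebra. In this minimal numpy sketch (my own toy example), a two-particle state is reshaped into a 2x2 matrix of coefficients, and its Schmidt rank - the number of nonzero singular values - reveals whether it factors into two separate one-particle states:

```python
# A minimal numpy sketch: a product state factors into two independent
# one-particle states (Schmidt rank 1), while an entangled Bell state
# does not (Schmidt rank 2).
import numpy as np

# Product state |0>|+> : the two particles have independent descriptions
product = np.kron(np.array([1.0, 0.0]),
                  np.array([1.0, 1.0]) / np.sqrt(2))

# Bell (singlet-like) state (|01> - |10>)/sqrt(2) : entangled
bell = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

for name, state in [("product", product), ("bell", bell)]:
    coeffs = state.reshape(2, 2)                        # psi_ij |i>|j>
    singular = np.linalg.svd(coeffs, compute_uv=False)  # Schmidt coefficients
    print(name, "Schmidt rank =", int((singular > 1e-12).sum()))
```

No amount of algebra will factor the Bell state into "particle 1's state" times "particle 2's state"; that is what entanglement means mathematically.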

In the original version of the 1935 EPR thought experiment, two particles, A and B, are entangled, and they might be separated by any distance, even across the universe. Measuring a quality of particle A, without disturbing the system, will cause the conjugate quality of particle B to become indeterminate, even though there is no contact between the particles. This result is considered evidence for non-locality.

In the EPR paper (arguing for locality), there were two possible explanations for this seemingly non-local behaviour. Either there was some kind of interaction between the two particles, or information about the outcome was already present in both particles. The authors of the paper preferred the second possibility, in which the information must be present in one or more hidden parameters. The first explanation (interaction between the two particles) clashed with Einstein's fairly recently formulated theory of special relativity, and it is no surprise that he would be uncomfortable with that: because the effect would be instantaneous, the communication between the two particles would have to travel faster than the speed of light. Special relativity is built on the postulate that light travels at one invariant speed and that nothing in the universe - not even information - can travel faster.

The authors knew that the formulation of quantum mechanics (QM) had no room to introduce hidden parameters, so they concluded that QM must therefore be an incomplete theory.

Measuring Wrecks Everything

The EPR paper, rather than attaining its goal of debunking QM, now serves as a good discussion of its implications. Furthermore, although Einstein brought to light some of the eeriest implications of quantum mechanics, he was later proved to be simply wrong, on two levels. First, when one part of the entangled system is measured, the system is disturbed. In classical mechanics, you can make various measurements that do not disturb or alter your system (at least in any significant way). In QM, however, the act of measuring any aspect of the system collapses the wave function of that system and therefore changes its physics. This error was pointed out by Niels Bohr after he read the EPR paper (and Einstein argued against it; Bohr and Einstein were longtime friends but their often-public debates are legendary).

Niels Bohr (left) and Albert Einstein (right) in 1925
Quantum Mechanics Was Never Set Up to Be Local

The second error is that they assumed the system was a local one, but it was not even formulated to be local, as defined by its state vector in Hilbert space. In the late 1920s, John von Neumann developed a mathematically rigorous formulation of quantum mechanics. He realized that a quantum system could be represented as a point in a mathematical construct called complex Hilbert space. In this formulation, all possible states of the system are represented by unit vectors, which reside in complex Hilbert space. Physical quantities such as momentum and position are treated as linear operators acting in this space (a linear operator acts like a function, if you remember calculus). I've mentioned that the position of a particle, such as an electron in an atom, can only be described as a probability cloud. Hilbert space constructs that cloud mathematically. The inner product between two state vectors is a complex number called the probability amplitude. Von Neumann's operator theory essentially replaces a badly defined expression in Dirac's formulation, called the delta function, with something more mathematically rigorous. As it does so, it provides the mathematical mechanism of non-locality: the theoretically infinite range of a particle's location makes interactions with other distant particles possible.
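To make the vocabulary of state vectors, operators, and probability amplitudes a little less abstract, here is a minimal numpy sketch (my own toy two-state spin example) of the Hilbert-space machinery at work:

```python
# A minimal numpy sketch of the Hilbert-space picture (assumed toy system):
# a state is a unit vector, an observable is a Hermitian operator, and
# inner products give complex probability amplitudes via the Born rule.
import numpy as np

# Spin-z basis states and the Pauli-z observable
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# An arbitrary superposition, normalized to a unit vector
psi = 3 * up + 4j * down
psi = psi / np.linalg.norm(psi)

# Inner products <up|psi>, <down|psi> are probability amplitudes
amp_up = np.vdot(up, psi)
amp_down = np.vdot(down, psi)
print("P(up)   =", abs(amp_up) ** 2)    # 9/25 = 0.36
print("P(down) =", abs(amp_down) ** 2)  # 16/25 = 0.64

# Expectation value of the observable: <psi| sigma_z |psi>
print("<sigma_z> =", np.vdot(psi, sigma_z @ psi).real)  # 0.36 - 0.64 = -0.28
```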

The von Neumann mathematical construction of quantum mechanics continues to be considered by many physicists to be the structural backbone at the heart of quantum theory. There are many interpretations of quantum mechanics, and they rely on assumptions made about what that von Neumann backbone tells us. If you remember from the last article, Paul Dirac is considered by many to be the father of quantum mechanics. It might be fair to say that Dirac and von Neumann are co-parents of quantum mechanics, with Dirac being the pragmatic parent and von Neumann being the mathematically rigorous parent. This article is meant for non-mathematicians, so a detailed discussion of either the Dirac equation or von Neumann's formulation is far beyond our scope here. However, I think it might be prudent to at least consider the general gist of the mathematics behind the theory before we explore the various, and evolving, interpretations out there. It's easy to fall in love with one interpretation over another on the basis of whether the idea feels good, rather than whether it makes the most sense. I learned that lesson the hard way while researching The Holographic Universe. You will see evidence of my personal feelings getting in the way of good scientific investigation in the comment section that follows that article.

Prove To Me the Universe Is Non-local!

The conclusion of the von Neumann discussion is fairly straightforward: the quantum state, or wave function, of a system of two or more entangled particles includes the value of interest at EVERY site. This formally defines quantum mechanics as non-local in nature. Does this mathematical conclusion translate into physical reality? Does this mean that the universe is therefore non-local in nature? It's a very large leap from a mathematical conclusion based in the quantum world to the macroscopic universe as a whole, and there are experts who question whether any quantum formula finds its analogue in our everyday world.

To really buy that the universe is eerily non-local, one needs experimental proof, and proving that quantum entanglement is non-local took quite a bit of work as well as an upgrade in available technology. In 1964, John Bell proved a theorem that he hoped would settle the matter. If local hidden parameters or variables exist, then measurement outcomes must satisfy what is now called Bell's inequality; if the inequality is violated, then the system cannot be described by local hidden variables. To understand what Bell's inequality is, the logic behind it and how it's derived, try this excellent online teaching article by David Harrison at the University of Toronto. It describes in an easy-to-follow non-technical way how Bell's inequality is tested experimentally using an entangled system of two electrons or two photons with opposite polarizations. I'll leave the full description for you to read in the link; it takes just a few minutes to read (and many more to think about if you are like me).
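To give a flavour of what "violating the inequality" means, here is a minimal sketch (my own example, using the common CHSH form of Bell's inequality rather than Harrison's exact setup). Any local hidden-variable theory keeps the quantity S between -2 and +2, while the quantum prediction for singlet spin correlations, E(a, b) = -cos(a - b), pushes it out to 2*sqrt(2), about 2.83:

```python
# A minimal sketch of the CHSH form of Bell's inequality (assumed standard
# angles): local hidden-variable theories obey |S| <= 2, while the quantum
# singlet correlation E(a, b) = -cos(a - b) gives |S| = 2*sqrt(2).
import numpy as np

def E(a, b):
    """Quantum correlation between spin measurements at angles a and b."""
    return -np.cos(a - b)

# Standard CHSH measurement angles (in radians)
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print("S =", S)                        # about -2.828
print("violates |S| <= 2:", abs(S) > 2)  # True
```

The experiments described below measure exactly these kinds of correlation combinations and consistently find the quantum value, not the local bound.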

If you are interested in reading original quantum mechanics articles published by everyone from the EPR trio of Einstein, Podolsky and Rosen to Bohr, Bohm and Bell, as well as many experimental tests of the EPR experiment, there is an excellent list with links here.

Several experiments were performed, beginning in the late 1960's, some with flaws in design that were later addressed. Beginning in the 1980's, physicists such as Alain Aspect and, later, Paul Kwiat performed such experiments and found violations of Bell's inequalities by up to 242 standard deviations, making their results statistically pretty much airtight. In these experiments, they used entangled photon pairs (generated by an atomic cascade in Aspect's experiments, and by parametric down-conversion in later ones) and made use of double-channel polarizers, while removing all no-detection outcomes. An example design of these experiments is laid out in Aspect et al.'s 1982 paper.

Despite such rigorous testing, two possible loopholes remained open for localists, for a time. First, the detection rate was very low, and it is conceivable that the detected events represented a non-random set of results - a loophole that can only be closed by assuming the detected sample is representative (the fair-sampling assumption). Second, the two detectors were close enough together that signals travelling at or below the speed of light could conceivably have passed between them. Separations of up to 6.5 m were originally used, but that distance later grew to ten kilometres, essentially ruling out any such signaling. Quite recent experiments such as those by Weihs et al in 1998 and M.A. Rowe et al in 2001 closed these loopholes in their designs, but not both simultaneously, thus leaving just a wee bit of wiggle room for localists - but not much, and continuing improvement in experimental design should soon close the debate.

Non-locality: A Real Pain in the Side OR an Open Door?

How difficult a simple, almost-airtight conclusion is to swallow! Despite all evidence to the contrary, non-locality certainly has a significant number of detractors. Some objectors claim it is possible that quantum mechanics - the eerie entanglement observed in the Young slit experiment and the EPR experiment, for example - is nothing more than a filmy gauze obscuring a deeper reality.

Superdeterminism Theory

Recently (2008, 2009), Nobel Prize laureate Gerard 't Hooft questioned the validity of Bell's theorem by suggesting the possibility of a superdeterminism loophole in it. Superdeterminism means that the entire universe is predetermined. All of nature runs like a kind of clockwork, which eliminates uncertainty in all systems - and all free will as well (lugging with it its own set of non-trivial philosophical issues). He claims that the uncertainty of quantum mechanics is a front and that, behind it, the universe works in a perfectly straightforward a-to-b-to-c manner, even at the quantum scale.

This eliminates the need for faster-than-light communication because the universe already "knows" what will happen next. Some may think this argument, being deterministic, has a problem with chaos, since chaos is by now a well-understood part of almost every physical system in the universe, and a well-established theory in its own right. However, chaos develops in deterministic systems. A chaotic dynamical system often appears to be completely disordered because it is extremely sensitive to initial conditions: even when initial conditions are measured as precisely as possible, tiny deviations in those measurements result in vastly different, random-looking outcomes.
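The textbook demonstration of this is the logistic map. The sketch below (my own toy example, with an assumed parameter value in the chaotic regime) iterates the same fully deterministic rule from two starting points that differ by one part in a million; within a few dozen steps the two trajectories have nothing to do with each other:

```python
# A minimal sketch of deterministic chaos (logistic map, assumed parameters):
# identical deterministic rules, almost identical starting points, and yet
# completely different outcomes after a few dozen iterations.
r = 3.9                       # parameter value in the chaotic regime
x, y = 0.500000, 0.500001     # initial conditions one part in a million apart

for step in range(1, 41):
    x = r * x * (1 - x)       # the same deterministic rule for both
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, gap = {abs(x - y):.6f}")
```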

Superdeterminism therefore comes with its own built-in limitation. For all practical purposes, there are always going to be one or more absolutely tiny initial effects in any system that will make outcomes very nearly approach absolute randomness, thus making the superdeterminism loophole seem unlikely at least from a practical point of view. Another problem with this concept is that entanglement experiments seem to show that particles "communicate" with each other somehow, that one's decision determines the other's. But how can any decision-making occur if the universe is predetermined? An interesting possible way out of this is to consider that the particles may not have to communicate at all because they already did. In this case, because the universe itself comes from a single common source, no particle in it is completely independent of any other particle.

With superdeterminism, there is no longer any need to try to see how the superposition of particles, or Heisenberg's uncertainty principle, translates into reality, and locality would be preserved in the sense that everything connects to a single origin in the past. Wave functions would be considered to be purely mathematical constructs or tools that have no basis in reality.

All of this might explain "spooky action at a distance," but it brings with it the impossibility of ever testing the theory, because there is no way to observe the system (the universe) from outside the system.

Guiding Wave Theory

In 1952, physicist and philosopher David Bohm came up with a very interesting hidden variable possibility in quantum mechanics, suggesting that there may be a hidden order that organizes a particle, which could in turn be part of even deeper levels of order. He essentially rediscovered an earlier idea that Louis de Broglie had proposed in 1927 and later abandoned. The idea is that there is a quantum particle, such as an electron, as well as an associated hidden guiding wave that controls its motion. In the double slit experiment that we explored in the previous article, the electron would clearly be considered a particle, and it goes through just one slit or the other, guided by its hidden guiding wave. The electron would therefore not be thought of as an indeterminate non-physical state, and its motion would not be considered random. The guiding wave would be responsible for building up the interference pattern that is observed. It may be argued that this de Broglie-Bohm theory, as it is also called, retains locality in the sense that there is a connection between entangled particles at some level through a hidden type of order. However, other accounts consider it to be nonlocal: the guiding wave itself must be nonlocal, and it is this wave that imparts the non-locality observed in experiments. In this case, the argument is that the velocity of a particle, for example, depends on the positions of all other particles in the universe at that instant (the guiding wave encapsulates all of that). It seems that by introducing a hidden variable - the guiding wave - you lose locality. However, you gain realism, in that the particle is now a "real" or physical particle.
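For the mathematically curious, the guiding wave acts through a simple rule: in the standard one-dimensional form of the theory, the particle's velocity field is v(x) = (hbar/m) Im(psi'(x)/psi(x)). Here is a minimal numpy sketch (my own assumed wave packet, with hbar = m = 1):

```python
# A minimal numpy sketch of the de Broglie-Bohm guidance equation in one
# dimension (assumed example wave function, hbar = m = 1): the particle's
# velocity field is v(x) = Im( psi'(x) / psi(x) ).
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# An assumed example: a Gaussian wave packet carrying momentum k0 = 2
k0 = 2.0
psi = np.exp(-x**2 / 4) * np.exp(1j * k0 * x)

dpsi = np.gradient(psi, dx)     # numerical derivative psi'(x)
v = np.imag(dpsi / psi)         # Bohmian velocity field

# For this packet the guiding wave pushes the particle at velocity k0
# everywhere (up to numerical error at the grid edges):
print("velocity at x = 0:", v[len(x) // 2])   # about 2.0
```

The wave function does all the steering; the particle itself just follows the velocity field, which is why the theory is deterministic and realist at the same time.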

Some problems stubbornly remain with this theory. Consider special relativity: the guiding wave must act instantaneously across space, but what is instantaneous in one frame of reference is not instantaneous in another. How does the guiding wave work in a space-time that undergoes Lorentz transformations, in other words? Another problem has to do with momentum. If a guiding wave can guide (change the trajectory of) an electron, then it must have some energy associated with it so it can change the momentum of the electron. As far as I know, there is no evidence for changes in electron momentum associated with any version of the Young slit experiment, for example.

Perhaps most intriguing about this theory is that it implies a mutually interconnected, holistic universe, a bit like 't Hooft's universe. While 't Hooft's superdeterminism theory seems to have few fans (based purely on my online sleuthing), Bohm's interpretation is widely popular among both physicists and curious laypeople, even though it can be argued that this theory is encumbered by a set-up that may prove as superfluous as the luminiferous ether did. It is not considered to be an extension of quantum mechanics (QM) because it does not provide more accurate predictions of experimental outcomes, but it does suggest that hidden variables are in fact possible, even though they are not formally part of the QM formulation.

Some Thoughts on These Theories in Space-Time

I leave it to the reader to debate whether Bohm's interpretation is merely a contrived construction to make QM a bit more palatable or a stroke of genius (or neither). Something I find very interesting after researching the possibility of a Fractal Universe is that Bohm visualized his guiding wave as a phenomenon that might exist in some kind of abstract multi-dimensional configuration space. A growing list of fractal and other theories (such as string theory) hint that space-time might be composed of either some kind of inter-dimensional complex vector space or a version of higher dimensional space, where Einstein's four-dimensional space-time is a very close but not quite complete description. Think of dark matter and dark energy, or of the mismatch between quantum mechanics and general relativity, as examples of where our current understanding of space-time does not quite fit. This complex space is also reminiscent of the kind of space that a particle itself "lives" in, or perhaps is constructed of, as explored in the previous article, What is An Electron REALLY? The Dirac spinor, for example, works perfectly to describe electron behaviour but does not make sense in our familiar three-dimensional Euclidean physical space.

Perhaps quantum space is similar to the kind of space outside of time where we find Richard Feynman's infinite probability trajectories for a particle, and where the particle itself is nothing more than a summed probability arrow. Not incidentally, Feynman's description of particle motion does not require any kind of guiding principle. For him, quantum mechanics works purely and only on the statistical summation of random activities, and it fully preserves the unpredictability inherent in the quantum formalism. In the Bohm interpretation, in contrast, the uncertainty principle that relates conjugate variables to each other is not fundamental but rather due to a lack of knowledge about the system, meaning that all aspects of the wave function are at least theoretically knowable. 't Hooft does away with uncertainty by invoking a completely deterministic universe.

Perhaps most intriguing of all is the question of how space-time came to exist in the universe. Both Bohm and 't Hooft hint at a universe that is interconnected at its deepest level. Is it crazy, then, to consider the possibility that the universe is one giant wave function? It, including us in it, is an explosion of sorts that is still expanding from a single point-like source. How do we talk, then, about collapsing wave functions of particles when we are within a giant wave function (one that I assume cannot be collapsed from within by our observation)? I am not sure that 't Hooft or Bohm ever went as far as to consider the universe to be a single wave function, and this idea is a very far stretch. Even the combined wave function of two entwined particles is extremely complex and difficult to formulate. No one has been able to solve the Schrodinger wave equation exactly for even the simple helium atom (only numerically or by approximation), let alone for a molecule, an object, or a planet. Still, this doesn't mean that a universal wave function is impossible, only complex, and as we will see later on in this article, this idea is gaining traction.

Copenhagen Interpretation: The Wave Function is Not Real

Some physicists argue that the search for what is "real" within quantum mechanics is a search in vain - that quantum mechanics (QM) is not intended to offer us a description of objective reality. It deals only with the probabilities involved in measuring and observation. Measurements done within a quantum system do not collapse reality but merely collapse a set of possibilities down to one out of many possible values. The electron, they claim, is NOT the wave function but something else which is, at present, directly unknowable. Incidentally, it is also neither wave nor particle, as both descriptions are meaningless beyond their mathematical structures. This position is called the Copenhagen interpretation of QM, and it is the one most widely taught in schools and universities. The basic concepts behind this interpretation were in place back in the 1920's, thanks to the work of Niels Bohr and Werner Heisenberg, among others, as they tried to figure out just what the disturbing new quantum theory meant.

We should keep in mind that these concepts have never been formalized into a single definitive statement and some treatments by different authors contradict each other. This makes the Copenhagen Interpretation one of the more difficult concepts to teach. Nevertheless, there are six basic principles of the Copenhagen interpretation, at least according to Wikipedia, all of which will seem very familiar to you if you are familiar with QM. They are listed on the Wikipedia site but I will summarize them here with quotes around parts that I've taken directly:

1) A wave function completely describes a quantum system. At first glance, this seems straightforward, until we try to measure that system, be it a particle or a collection of particles. When any measurement is made of the system, the wave function "collapses into an eigenstate of the observable that is measured." Let's examine this latter statement. Mathematically, this means that, in QM, the state of the system is given by a vector in Hilbert space (Hilbert space generalizes familiar Euclidean space to any number of dimensions - even infinitely many - and comes equipped with an inner product). If a measurement is made on that system, that state is affected in "a non-deterministic but statistically predictable way." This, in turn, means that when a measurement is made, the state described by that single vector is destroyed and is replaced by a statistical ensemble. Unlike in a classical system, where we can measure any value without affecting the system itself (at least not usually in any significant way), a QM system is, by definition, altered by the very act of measuring it, and what we measure is a single outcome drawn from a statistical ensemble or, put another way, from a probability distribution of possible outcomes. We have our measurement, but it is the measurement of a system that no longer exists in the state it was in at the instant before it was measured. Here I reiterate: we are talking about a mathematical system, not one with physical reality, according to this interpretation.

2) Even more unnerving, there is an incompatibility of observables in QM. You can simultaneously measure any pair of values in classical mechanics, but not in QM. Mathematically, this no-go rule is expressed by the non-commutativity of the corresponding operators (which act on conjugate variables such as momentum and position); a short sketch of this non-commutativity follows this list. This rule is expressed as the Heisenberg uncertainty principle.

3) The description of nature is probabilistic. What we observe and experience on the classical level is described in QM by the "square of the modulus of the amplitude of the wave function" - the probability density of outcomes, in other words, not the wave function itself.

4) Matter, as a wave function, is both wave and particle in nature, which can be shown experimentally (Young's double slit experiment). However, Niels Bohr's complementarity principle, which describes these complementary features of particles, is considered to be strictly mathematical, with no physical reality. The superimposed wave-particle of electrons or photons, for example, is therefore only a mathematical construct and not a physical description.

5) Any measuring device is a classical device. It measures only what is translated from the QM system into the classical system. When we measure momentum or position of an electron, for example, we are directly measuring only its classical position or momentum and nothing about its quantum nature.

6) The QM description of a large enough system very closely resembles its classical description. A desk, for example, can be almost entirely described using classical mechanics even though it is composed of countless atoms and their subatomic particles operating in a quantum mechanical system (which in theory would be represented by an astoundingly complex wave function). This is the correspondence principle of Bohr and Heisenberg. In general terms it means that quantum mechanics, as a valid theory, should reproduce the results of older established theories, in this case those of classical mechanics. At some point (called the correspondence limit), as we zoom in on the desk, going from the classical macroscopic scale down to the quantum (subatomic) scale (far too small to be observed by any microscope), the rules of classical mechanics should give way to the rules of quantum mechanics. For example, once quantum mechanics was established as a theory (around 1925), two formulations - Schrodinger's wave equation and a formulation called matrix mechanics - offered equivalent mathematical descriptions of the theory. Schrodinger's wave equation describes how a quantum system changes over time, and matrix mechanics extends the Bohr model of the atom by explaining how the electron's quantum jumps occur. When Schrodinger's equation is given a probabilistic interpretation, the (spreading out) wave nature of particles gives way at larger scales to Newton's laws. In matrix mechanics, the first stand-alone consistent formulation of quantum mechanics, the correspondence principle is built right into the formulation. (Both matrix mechanics and the Schrodinger equation were incorporated into the von Neumann mathematical formulation of quantum mechanics.)
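Here is the sketch promised in point 2 - a minimal numpy example (my own toy spin case) of two observables whose operators do not commute, which is exactly the mathematical situation the uncertainty principle describes:

```python
# A minimal numpy sketch of incompatible observables: spin measured along x
# and spin measured along z correspond to Pauli operators that do not
# commute, so no state can have definite values of both at once.
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

commutator = sigma_x @ sigma_z - sigma_z @ sigma_x
print(commutator)                                # nonzero: [[0, -2], [2, 0]]
print("commute?", np.allclose(commutator, 0))    # False
```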

Most importantly, the Copenhagen interpretation makes no promise that the wave function is anything more than a theoretical concept. It is not, in other words, the particle. The question of what the particle is was left, according to Bohr, as a purely metaphysical question. This means that the question I was asking in the prior article about the reality of the electron would be considered meaningless (not empirically verifiable) in a scientific sense. In the decades since, the general claim that what cannot be directly measured cannot exist seems to have softened considerably, as scientists continue to grapple with the simple question: what is a particle? We can, if we choose, consider the Copenhagen Interpretation as our base model interpretation, and then explore for ourselves various theories about how the core concepts of QM might translate into our macroscopic world, including even those that reverse Bohr's claim that the wave function is not real. We can also reserve the option of sticking to the Copenhagen base model.

What Is Wave Function Collapse?

All versions of the Copenhagen interpretation include some version of wave function collapse, but what is it exactly? As mentioned, when any measurement is made on a quantum mechanical system, the wave function "collapses into an eigenstate of the observable that is measured." First, what does eigenstate mean? When the position, for example, of an electron is pinned down to some exact value, the electron's state becomes an eigenstate of position, and its position has a known value, called an eigenvalue. When a wave function collapses (through a measurement or a collision), we observe one eigenvalue and the system is left in the corresponding eigenstate, while all the countless other possibilities are removed from consideration. Any superposition of states in the system, in other words, disappears. A closely related process, quantum decoherence, provides a way to explain how a classical measurement emerges from a quantum system. Once in a collapsed state, the system can then also move forward in time in a thermodynamically irreversible manner. In this sense, thermodynamic irreversibility emerges not at the quantum level (where, according to Feynman, time reversibility is part of the picture) but at the classical level, where its effects are observed.
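Here is a minimal numpy sketch of collapse in this Born-rule sense (my own toy qubit example, measuring spin along one axis): each measurement returns one eigenvalue at random, with probabilities fixed by the wave function, and leaves the system in the matching eigenstate:

```python
# A minimal numpy sketch of measurement as collapse (assumed toy qubit,
# measuring the Pauli-z observable): the Born rule picks an eigenvalue at
# random, and the state is replaced by the matching eigenstate.
import numpy as np

rng = np.random.default_rng(0)

# Superposition state: 0.6|up> + 0.8|down>
psi = np.array([0.6, 0.8], dtype=complex)
eigenstates = [np.array([1, 0], dtype=complex),   # eigenvalue +1 ("up")
               np.array([0, 1], dtype=complex)]   # eigenvalue -1 ("down")
eigenvalues = [+1, -1]

def measure(psi):
    probs = [abs(np.vdot(e, psi))**2 for e in eigenstates]  # Born rule
    i = rng.choice(len(probs), p=probs)
    return eigenvalues[i], eigenstates[i]  # outcome and collapsed state

outcomes = [measure(psi)[0] for _ in range(10000)]
print("fraction spin-up:", outcomes.count(+1) / len(outcomes))  # about 0.36
```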

According to the Copenhagen interpretation, the EPR experiment presents no problem, because the wave function (the superimposed state) of the entangled particles is not real in any sense. As soon as the state of one particle is known, the other can be known as well - end of discussion. The two known particle states therefore are the only "real" components of the system. As to the argument that information (about the collapse) must be passed to the second particle instantaneously, and thus faster than light, some people use an interesting and under-utilized out: Einstein's special relativity is a classical theory, dealing with the speed limit of classical objects and light signals. It makes no prediction about speed limits for systems in a coherent (superimposed) quantum state. If you detect a contradiction here, you are not alone. That being said, many researchers have grappled with the speed of light in the context of a quantum system and conclude that this universal speed limit is observed classically, even if individual particles may momentarily exceed light speed at the quantum scale due to Heisenberg uncertainty. Variations in the velocity of individual photons at the quantum scale average out to light speed at the classical scale.

I confess that I personally have trouble understanding the utility of the Copenhagen Interpretation if I must agree that it only works in the theoretical world without connection to the physical world. To me it seems like being given a new sports car but without the engine. I suspect I am not alone in concluding that the Copenhagen Interpretation needs something more, and that is likely why we have so many additional interpretations of quantum mechanics, all wandering into territory Bohr himself refused to enter.

So Is the Wave Function Real?

There is probably no interpretation of QM that asserts a more real wave function than the many-worlds interpretation (MWI), developed by Hugh Everett in 1957. It asserts that the wave function is physically real, and universal, and that it does not collapse at all. As a consequence, there is an almost infinite number of universes, each one a unique combination of every possible past event. Schrodinger's cat, in this interpretation, is alive in one universe and dead in another one - a splitting of universes occurs at every quantum choice. This reconciles non-deterministic events, such as random radioactive decay (the premise behind Schrodinger's cat), with a fully deterministic underlying evolution. Quantum decoherence explains apparent wave function collapse, with every possible eigenvalue existing as a real eigenstate in its own universe. This interpretation is realist, deterministic, and local, and the role of the observer is taken out. As well, it offers the advantage of being more streamlined and consistent than its largest current contender, the Copenhagen interpretation. Its huge disadvantage is that a plethora of new universes must be continuously brought into play. Conservation of energy is a concern, as all these new universes must come from something (unless, for example, you consider the universe to be infinite). There is also an issue with special relativity. Consider the "simultaneous" collapse of separated entangled particles, each one in a different reference frame, where one is moving close to light speed relative to the other. How do we define simultaneous when, depending on where you are observing from, one or the other of the particles could collapse well before the other? In this case the observer is not taken out but is instead required to define the order of the activities taking place. We can bring the observer into the theory by stating that the split occurs only when it is observed. How does bringing this seemingly eerie omnipotent flavour into the theory sit with you? It implies that every time you make an observation, the universe splits in two.

Most believers in MWI (including the well-known physicist Max Tegmark) assume that all created universes are non-communicating, but some believe that there may be some overlap. For example, David Deutsch proposes that the single-photon interference pattern observed in the Young slit experiment is due to the interference of photons in multiple universes.

In summary, you have many choices in terms of how you interpret quantum mechanics. None that I've described have been disproven by experimental means. I offer some questions to consider as you develop and clarify your own position:

1) Is the wave function, the superimposed quantum state of a particle, real in a physical sense or real only in a mathematical sense? Or neither?
2) Is the universe truly nonlocal in nature? Is there no intervening "structure" that conveys or organizes the quantum information into wave functions, which would imply locality of some kind? How do two quantum-entangled particles "communicate," if that is what they are doing? Consider Bohm's interpretation, for example, which includes a guiding wave that is nonlocal or local depending on how you define it.
3) Is quantum mechanics really random in nature? Is Heisenberg's uncertainty truly a description of reality or is it a reflection of our current ignorance of the real underlying mechanism of quantum mechanics?
4) Is the unique utility of the many worlds interpretation (which clears up the problem of uncertainty, sheds the unreal aspects, and removes non-locality) worth the new problem it brings with it: an ever-expanding entourage of universes? How about other interpretations? Are they worth the baggage they bring?

Is there a fifth choice in interpreting quantum mechanics? Is there a possibility that quantum mechanics illuminates a universe that is both real and not real at the same time? This would be a version of the Copenhagen Interpretation in which the wave function is both real and not real. An electron, a desk, and a galaxy by extension are also both real and not real. This non-dualist view of quantum mechanics offers us a way to take in the mathematical structure of the theory as an accurate blueprint for reality. I will explore this admittedly very metaphysical possibility in the next article.
