Most of us who follow physics know
Einstein's famous response to quantum mechanics (QM): "God does not play
dice!" Many writers consider this quote as an indication of his refusal to
accept the randomness of QM phenomena, and they consider it to be one of
Einstein's biggest mistakes. The author of "Is The Cosmos Random?" in
Scientific American's September 2015 issue, having reviewed much of Einstein's
written work and correspondence, argues otherwise (a subscription is needed to read it online). A fresh look at what
Einstein thought about QM could lead to new approaches to how we think about quantum
phenomena, especially how we interpret the experiments that reveal its
trademark quirky behaviours.
Einstein's "hidden variable theory"
– that there must be one or more underlying variables to explain the seemingly
random (and spooky) nature of certain particle processes - was debunked by John
Bell in the 1960's. Based on his theorem, a series of elegantly designed experiments were carried out in the 70's and 80's to test the hidden variables theory. Those
results argue very convincingly against hidden variables (at least local variables). The principle of locality means that an object can only be directly influenced by its
immediate surroundings, whether it is an object pushing it, for example, or energy
or a force field acting upon it. In this case there is no evidence that some
hidden force field, or as yet unknown particle, acts on the subatomic particle
in question, influencing its behaviour. It's just, according to any
observations we can make of it, autonomously random.
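To see why those experiments are so convincing, consider the CHSH version of Bell's test. Any local hidden variable theory must keep the combined correlation S within the bound |S| ≤ 2, while quantum mechanics predicts values up to 2√2. Here is a minimal sketch (my own illustration, not from the article, using the textbook singlet-state correlation and the standard detector settings):

```python
import math

# Textbook QM prediction for a spin-singlet pair measured along
# detector angles a and b: the correlation is E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH detector settings. Any local hidden variable theory
# must satisfy |S| <= 2 (Bell's bound).
a1, a2 = 0.0, math.pi / 2              # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # ~2.828 (= 2*sqrt(2)): quantum mechanics breaks the bound
```

The violations actually measured in those experiments matched the quantum prediction, which is what rules out the local hidden variables.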
Subatomic particles do some very weird
things. Their behaviours point to a built-in randomness at the quantum level of
reality. Particles also don't seem to play by the same rules of space and time
that we do at our everyday scale of physics.
Excited atoms, for example, emit one or
more photons when they return to ground state, but exactly when and in what
direction those photons are emitted is entirely random. Similarly, exactly when a particular radioactive atom emits an alpha or beta particle, or a gamma photon, is purely random. Despite this, both radioactivity and the emission of
light follow predictable rules of physics at the macro or everyday level, so
that such phenomena can be drawn as predictable curves on graphs even though
the individual particles themselves act entirely randomly.
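Here is a toy simulation of that layering (my own sketch, not from the article; the half-life is an arbitrary choice). Each atom's decay moment is drawn completely at random, yet together the atoms trace out the smooth, predictable exponential decay curve:

```python
import math
import random

# Toy model: the decay constant (probability per unit time) is fixed,
# but each atom's actual decay moment is completely random.
half_life = 10.0                    # arbitrary time units (an assumption)
lam = math.log(2) / half_life       # decay constant

n_atoms = 100_000
decay_times = [random.expovariate(lam) for _ in range(n_atoms)]

# Individually random decay times average into the smooth, predictable
# macro-level curve N(t) = N0 * exp(-lam * t).
for t in (0, 10, 20, 30):
    survivors = sum(1 for d in decay_times if d > t)
    predicted = n_atoms * math.exp(-lam * t)
    print(f"t = {t:2d}: simulated {survivors:6d}, predicted {predicted:8.0f}")
```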
In one version of the famous double slit experiment, electrons
are shot one at a time through a barrier, which contains two thin slits, toward
a detector screen. Electrons are used in this example but in theory this
experiment can be carried out with any subatomic particle because they all
follow the same quantum rules. Individual impacts are recorded as discrete
points on the detector, as we might expect. However, the impacts are randomly placed on the detector, even
though each electron is shot in an identical manner. As the previous examples show, the electrons have a probabilistic (random) nature, revealed here by
where they hit the detector.
This built-in randomness at the particle
level cannot be explained by classical mechanics. In classical mechanics, one
action always leads directly, and reliably, to another action. In other words, classical mechanics describes a clockwork universe, where every outcome in
nature is ultimately predictable. The double slit experiment reveals that at
the subatomic (quantum) level, nature is not predictable but entirely random
EVEN THOUGH those same electrons, observed at the macro scale, follow Michael Faraday and James Clerk Maxwell's predictable classical rules of electromagnetism. Here you have two layers of reality, where predictable physics is built upon an
unpredictable probabilistic base.
The double slit experiment reveals an
additional and even more perplexing subatomic reality. When electrons continue
to be shot through the slits, another phenomenon emerges. An interference pattern builds up,
like waves interfering with one another in a wave tank. This is not only direct
evidence for the dual particle/wave nature of subatomic particles. It also
reveals that individual particles, each one hitting the detector in a purely
random location, somehow manage to build a distinct pattern AS IF they know how
future electrons will contribute to the interference pattern. The experiment
can be repeated over and over. Electrons will hit the detector in a different random order each time, but every time the same interference pattern builds up. This
implies that the particles are acting outside the boundaries of time, as we
understand it. A particle somehow "knows" the end result as it leaves
the electron gun. Yet according to special relativity, no particle can travel faster than the speed of light (backward in time, in other words) to plot out its contribution.
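To make that pattern-building concrete, here is a toy simulation (my own sketch, using an idealized far-field intensity formula in arbitrary units). Each electron lands at a random position, yet the histogram of many random landings reproduces the interference fringes:

```python
import math
import random

# Idealized far-field two-slit intensity (arbitrary units): cos^2 fringes
# from the slit separation, inside a single-slit sinc^2 envelope.
def intensity(x, sep=5.0, width=1.0):
    envelope = (math.sin(width * x) / (width * x)) ** 2 if x != 0 else 1.0
    return envelope * math.cos(sep * x / 2) ** 2   # always between 0 and 1

# Fire electrons one at a time: each impact position is random,
# drawn from the intensity profile by rejection sampling.
def fire_electron():
    while True:
        x = random.uniform(-3.0, 3.0)
        if random.random() < intensity(x):
            return x

hits = [fire_electron() for _ in range(20_000)]

# Crude text histogram: interference fringes emerge from random impacts.
for i in range(20):
    left = -3.0 + i * 0.3
    count = sum(1 for h in hits if left <= h < left + 0.3)
    print(f"{left:+4.1f} {'#' * (count // 100)}")
```

No single impact is predictable, but run the loop again and the same fringes build up, just as in the experiment.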
Space somehow also seems to have a different
meaning on the subatomic scale. This example involves quantum entangled
particles. To make an entangled pair, for example, you can allow an unstable
spin zero particle to decay into two spin ½ particles. One will be spin up and
one will be spin down. Other than their opposite spins, these particles will have identical quantum numbers; in other words, they will be identical twins.
When the entangled particles are shot off in two different directions, they seem to communicate
information to one another, instantly, even though one particle may, by the
time it's measured, be across the universe from its entangled partner.
At some point, Particle A's spin is measured (it has a 50% chance of being either spin up or spin down). When A is measured and found to be spin up, at that instant Particle B's spin is constrained to spin down. Before measurement, both particle spins are said to be in a superposed up/down state. Collapse of one into spin up instantly forces
the other, wherever it might be located, to collapse into spin down state. This
experiment reveals the spookiness of the EPR (Einstein/Podolsky/Rosen) paradox,
and it can be reviewed in detail on Wikipedia. The question for us is how does one particle
"communicate" wavefunction collapse to its partner instantly across
any distance? This goes further than breaking the light speed barrier because
it is instant. It is as if physical space does not exist for the entangled pair.
They are instead acting like one single particle.
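A toy simulation of the measurement statistics just described (my own sketch; it mimics only the perfect anti-correlation along a single shared axis, not the full angle-dependent statistics that the Bell test above probes):

```python
import random

# Toy model of measuring an entangled pair along the same axis:
# neither outcome is decided in advance.
def measure_entangled_pair():
    # Particle A: 50/50 chance of spin up or spin down on measurement.
    a = random.choice(["up", "down"])
    # Particle B is instantly constrained to the opposite result,
    # however far away it happens to be.
    b = "down" if a == "up" else "up"
    return a, b

trials = [measure_entangled_pair() for _ in range(10_000)]
ups_for_A = sum(1 for a, _ in trials if a == "up")
anticorrelated = sum(1 for a, b in trials if a != b)
print(f"A measured 'up' in {ups_for_A / 100:.1f}% of trials")   # ~50%
print(f"A and B opposite in {anticorrelated} of 10000 trials")  # always
```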
All these phenomena have been exhaustively
experimentally verified. As we try to swallow those facts, we seem to be left
with two unsavoury choices: either we accept at face value the fact that phenomena occur randomly and in ways that don't make sense in terms of how we understand space and time, or we cling to the hope that there is some predictable and sensible underlying reality that we just haven't found yet. If we choose the latter option, we are treading toward Einstein's ruled-out hidden variables.
George Musser, the author of the Scientific
American article, offers us possible outs for both of these choices. First,
there is good evidence that reality is actually like a layer cake, where
probabilistic and predictable phenomena are layered on top of one another.
Which type of behaviour you observe depends on which scale you are observing.
If you are focused on behaviours at the quantum scale, you will find probabilistic
behaviour. Zoom out and look at the same physical system at the everyday scale
and you will likely find predictable classical behaviour. What looks purely
random at one scale averages out to be
predictable behaviour on a grander scale. For example, consider a single isolated
atom in the vacuum of space. It could have any random kinetic energy, but it
has no temperature.* If you place that atom together with a few million of its friends, you can now measure a specific temperature, reliably determined by the average kinetic energy of the atoms, EVEN THOUGH that mixture consists of atoms that have all
kinds of random kinetic energies as they mill about and collide with one
another. Temperature is a predictable phenomenon that follows classical rules.
It is also an emergent phenomenon that does not exist at the quantum scale.
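A quick numerical illustration of that averaging (my own sketch; a real gas follows the Maxwell-Boltzmann distribution, and the exponential spread here is a stand-in with the same mean):

```python
import random

k_B = 1.380649e-23   # Boltzmann constant, J/K

# Hand each atom a random kinetic energy (simplified distribution,
# chosen so the mean corresponds to roughly 300 K).
n_atoms = 1_000_000
mean_KE = 1.5 * k_B * 300.0          # <KE> = (3/2) k_B T, monatomic ideal gas
energies = [random.expovariate(1.0 / mean_KE) for _ in range(n_atoms)]

# One atom has a random energy and no temperature; a million of them
# have a sharp, reproducible average, i.e. an emergent temperature.
avg_KE = sum(energies) / n_atoms
print(f"Emergent temperature: {avg_KE / (1.5 * k_B):.1f} K")   # ~300 K
```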
Musser offers even more layers of phenomena in the example of weather. At the quantum level, the gaseous particles in air behave randomly. Get them together in a measurable volume and you find they perfectly follow the predictable gas laws (again, thanks to the averaging out of billions of atoms). Now put two or more different large-scale
air masses together and you've got the unpredictability that accompanies any
weather forecast. The more days out you try to forecast, the more unpredictable
it gets because now you are dealing with the physics of chaos theory.
Chaos emerges from a non-chaotic initial state. Take a long view of weather
over several seasons and once again the numbers come back into predictable line
as climate data. Perhaps, considering this, it isn't too much to accept that
our predictable world is built upon the zany behaviours of quantum particles.
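That sensitivity behind a diverging forecast is the hallmark of chaos, and the textbook toy model of it is the logistic map rather than weather itself. A minimal sketch (my own, not from the article) of how two almost identical starting states come apart:

```python
# Logistic map, the textbook toy model of chaos (standing in for weather):
# x_next = r * x * (1 - x) is perfectly deterministic, yet at r = 4 two
# "forecasts" starting a hair apart soon disagree completely.
r = 4.0
x, y = 0.4000000, 0.4000001   # two nearly identical initial conditions

for step in range(1, 31):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 5 == 0:
        print(f"step {step:2d}: {x:.6f} vs {y:.6f}")
```

The rule itself is non-chaotic and fully deterministic; the unpredictability emerges from iterating it, just as an unpredictable forecast emerges from air masses that individually obey the gas laws.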
Second, we can wonder if there is any
possibility of some kind of reality underlying the quantum scale of physics,
implying that QM is actually only part of an as yet unfinished theory. This,
according to Musser, is really what Einstein was getting at: He wasn't arguing
against randomness so much as he was arguing against taking the observed random behaviour at face value. There's a subtle difference between taking
that stand and resorting to a hidden force or particle. The layer underpinning QM
could once again be deterministic in nature. Consider this possibility: All the
countless random directions in which a photon can be emitted from an atom could
represent countless possible realities at our scale (the multiverse people
thoroughly explore this possibility). Here is where I veer off: We observe just
one of these possibilities but on its
scale, its reality could consist of
ALL the possible directions, simultaneously. We observe the photon being emitted in just one specific direction, and it looks totally random to us. But add all the
countless possible angles of emission and imagine all these realities simultaneously
coexisting, from the photon's
perspective. From its perspective, it actually achieves all possible emissions.
This, then, is what the quantum world looks like to the quantum particle. We,
on the other hand, see only one emission and it is random. If we follow the SA
article's logic, we can call this difference an abrupt transition from one
scale to the next (while maintaining that both realities are valid WITHIN their
own scale).
Richard Feynman came very close to
describing quantum phenomena the same way. To describe electron and photon interactions,
he started from the standpoint that the particles are waves and they move from
point to point as a wavefront. A wavefront, unlike a point, takes numerous
paths to get from A to B, rather than just one path. To translate that into
mathematical quantum jargon, you call the particle a probability wave, and it
doesn't take numerous paths. It takes ALL paths. This approach assumes that a
particle, just like larger objects, follows the principle of least action. By
assigning arrows that follow each possible path (in theory there are countless paths, remember) and rotating them as you go, you can get a measure of how
difficult, or how long and convoluted, each path is. By adding up all the
arrows as vectors, you get a final vector called the amplitude of the
wavefunction. This is the path integral for the particle going from A to B, and its dominant contribution comes from the path of least action, which for a free particle is the straight line that we observe. This might seem
like a pointless exercise, all this fanciness just to get back to the
particle's observed trajectory. However, there is an important point to it, that
ALL possible trajectories DO contribute to the path integral, even routes that
take the photon all around the universe between A and B (those paths don't
contribute very much). Conceptually, this process introduces a whole new way to
think about a particle. The path integral forms the basis of the famous Feynman diagrams, which I'll mention later. I don't know if Feynman ever thought of those infinite paths as a physical reality or strictly as a mathematical method. I don't think he ever couched it in the kinds of terms used here, where the process is a kind of scale transition from quantum to our macro scale, in which one path, as an observable phenomenon, emerges from a state of "all possible paths taken."
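To make the rotating-arrows picture concrete, here is a toy numerical sketch (my own, in arbitrary units, summing over a one-parameter family of bent paths rather than literally all paths):

```python
import cmath

# Feynman's "rotating arrows" in toy units (hbar = m = 1): give every
# candidate path a unit arrow rotated by (action / hbar), then add the
# arrows as vectors. The paths here run straight from A to a midpoint
# shifted sideways by d, then straight on to B.
hbar, m, T, L = 1.0, 1.0, 1.0, 1.0   # units, mass, travel time, A-B distance

def action(d):
    seg = ((L / 2) ** 2 + d ** 2) ** 0.5     # length of each half-path
    v = seg / (T / 2)                        # constant speed on each half
    return 2 * 0.5 * m * v ** 2 * (T / 2)    # S = kinetic energy * time

# Add up the arrows over sideways offsets d from -4 to +4...
total = sum(cmath.exp(1j * action(i * 0.01) / hbar) for i in range(-400, 401))
# ...and separately over only the nearly straight paths (|d| < 1).
near = sum(cmath.exp(1j * action(i * 0.01) / hbar) for i in range(-100, 101))
print(abs(total), abs(near))   # comparable magnitudes: far paths cancel out
```

Arrows from strongly bent paths spin so fast they cancel one another, while arrows near the straight, least-action path all point roughly the same way; the nearly straight paths supply essentially the whole amplitude.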
When you think about this, you might see
how it mingles with Max Tegmark's multiverse theory,
in particular his level III many-worlds interpretation.
Feynman himself suggested a closely related multiple histories interpretation
of QM.
This underpinning (and unimaginable)
possible quantum reality (all paths taken) could be thought of as a kind of
nonlocal, or global, hidden variable. It acts not directly on particles but
redefines them within their scale instead. It would result in a superposed multiverse (existing strictly at the quantum scale, with the possible very rare exception of quantum tunneling). It would contain all quantum possibilities of all quantum processes that ever have occurred or ever will occur in the universe. In such a
quantum reality, each electron in the double slit experiment does, in fact,
take every possible trajectory to the detector. In an instant, each particle
has already built the interference pattern. From inside our macro-scale perspective,
we observe only an artifact of that reality or, better put, we observe a
different (emergent) reality where a single random path is observed and an
interference pattern mysteriously builds up. The double slit experiment,
therefore, becomes an opportunity to glimpse a direct translation of quantum
reality into our "macro language." We don't see the ultimate reality
of all those trajectories taking place at once (the source of the random
strikes on the detector) and that's why our observations don't make sense to
us. They do make sense, however, from the all-paths-taken quantum path integral
perspective.
We can see that the quantum entanglement
phenomenon can also be a translation of quantum reality that we are reading in
our macro reality terms. In such a quantum world, each of the two electrons shoots off in every possible direction simultaneously. In that quantum reality they are everywhere at the same time, and they are indeed part of a single entity, because their quantum states are superposed (they share the same total momentum, angular momentum, and energy).
It seems confusing because most of the time
we don't need to look into QM weirdness. In many cases, we can accurately
describe a particle's behaviour as if it is a point-like particle that travels
in a straight line. Think of the Rutherford gold foil experiment, in which an
alpha particle** is shot at a relatively big gold atom. The occasional
collisions between that particle and the nucleus can be described using
classical dynamics. The alpha particle is deflected as if it were a small hard ball.
Many other experiments also reveal the point-like nature of particles. Only
experiments cleverly designed to single out quantum behaviour reveal it. Entanglement
experiments tell us that it is useless to visualize electrons or photons or any
subatomic particles as point-like particles. They never are point-like, except when
we translate them into our scale (the alpha particle/nucleus collision, though tiny, is observed at our scale). Sometimes the translation seems seamless: a particle is observed as clearly a point-like particle or a wave. Sometimes it's almost lost (as in the entangled-electrons experiment), and what we observe is muddled.
Wave function collapse, from this
perspective, is not a process (there is actually no mathematical framework
describing this process by the way). Instead it is the transition from the quantum
scale to the macro scale. We don't see a quantum particle/wave at all. We don't
see any collapse. When we do see a "particle," it is the artifact-like
trace of what that path integral represents in our reality, at our scale.
From a statistical standpoint, it looks as
if we are measuring only one (random) degree of statistical freedom from within
a quantum reality that contains countless degrees of freedom. In most
experiments, what we observe is actually the path integral of all the possible
degrees of freedom in that quantum system. Because they are path integrals, all
the quantum randomness fits seamlessly into our perception of reality, in the
same way that temperature makes sense – from a distance. The clever electron
version of the double slit experiment is one of the few exceptions where one
single (random to us) degree of freedom is plucked out at a time.
This statistical treatment brings to mind
recent work done on a mathematical object called the amplituhedron.
I wrote an article on it here.
Like a multifaceted higher-dimensional jewel, its volume can be used to calculate the probabilistic outcomes of particle collisions inside colliders, a very tedious job that is usually done by large computers or by drawing many hundreds of Feynman diagrams. The amplituhedron is like a
shortcut that circumvents those calculations and goes directly to a geometric assessment
that can be quickly processed. The researchers also calculated a master
amplituhedron that contains an infinite number of facets, analogous to a full
circle (where every direction is represented) in two dimensions. Its volume, in
theory, represents the total amplitude of all physical interactions in the
universe. Lower dimensional amplituhedra live on the faces of this structure
and represent our observations when a finite number of particles collide.
Both Feynman diagrams and the amplituhedron seem to do the same thing. The Feynman diagrams take the scenic route (they require so many calculations) and the amplituhedron takes the direct route.
Both can predict the probabilities of creating various kinds of particles when two
massive particles collide with each other in a collider. Both serve as a kind
of translator taking information from the quantum scale that we can't directly access
and turning it into a macro-scale form we can observe.
Using a scale approach eliminates the need
to make an unsavoury choice between "quantum-scale phenomena are random
and don't make sense" and "there must be some hidden variable
somewhere." Instead we come to a single consistent conceptual framework
and we could say that Einstein was right after all. There is a hidden nonlocal
variable in the sense that quantum reality is all possibilities at once. It can
differ from reality at the macro scale because phenomena unique to the macro
scale are emergent. The shift from one reality to the other is a shift in
scale, where emergence takes place.
The question of whether an electron is
physically real or not takes a back seat to the question of what scale we're
talking about. What does this mean for the reality of a subatomic particle? For
those readers who hope for a physically real particle, the picture here once
again seems to strongly suggest that reality at its most basic level is strictly
built of potentiality. Reality itself is redefined as a scale-dependent
concept. We could argue that what we measure and observe in our quantum
experiments (an electron doing something funky) is as real as all-paths-taken (the
electron being in all places at once) and vice versa. In the same way that temperature
is a real phenomenon to us but not to a subatomic particle, a path integral is
real at the quantum scale but not to us. To us it is just a particle moving in
a straight line from A to B. It might not satisfy some readers to say that a
real object such as a chair, for example, consists of the statistical average of all possible quantum potentialities. How do we even
experience the separateness of objects then? I would answer that
"chair-ness" is an emergent property that is physically real to us at
our scale.
There is an unexpected upside to this
approach. The Holographic Universe principle provides a consistent (different)
explanation for quantum entanglement but it takes the randomness of free will
away in the process, something most people find abhorrent since we sense we can
make random choices and change our futures. I tackled that principle a few
years ago in this article.
Thanks to the layer-cake nature of scale, we can retain our unpredictable free
will even though the cells in our brains behave according to established predictable
physical and chemical laws (two different scales). What this approach forbids is
applying rules that work for one scale to another scale. The branch of
psychology that tackles our conception of free will (ego, moral directive, our
subconscious dreams, etc.) doesn't use the same language as neuroscience
(axons, glial cells, receptor flooding, neurochemical reactions, etc.) for good
reason. In physics, it can be all too easy to forget that caution, especially
when many of us carry in the back of our heads the idea that there is one
ultimate reality that should work in all cases, no matter what our perspective
is. When it comes to quantum phenomena, that assumption leaves us lost.
When I say "rules" I don't mean
that physical laws change from one scale to the next, nor am I suggesting that
spacetime is something different at the quantum scale (no one knows what
spacetime is at the quantum scale). I am also not suggesting that quantum
phenomena could actually ever be observed (what do you bounce off an electron to
"see" it without affecting it?) or verified directly. I mean the
rules of observation and interpretation have to be scale-dependent. Just
because a particle acts like a tiny hard ball in one experiment doesn't mean
that the particle really is a tiny hard ball. I only argue that we can use a
scale-dependent approach that borrows from the science of emergent phenomena to
interpret what is going on in the double slit and entanglement experiments at
the quantum scale.
*Here I mean only classical temperature –
thermal motion or the degree of "hotness." The particle will have
entropy as well and it can be precisely measured. Those entropies can in theory
be added up and averaged to get temperature as well. That's the thermodynamic
approach. In fact, temperature theory is quite complex (simply google
"temperature"). I intend only the most basic classical kinetic
approach in my example.