This is a question on many students’ minds: what are the electrons actually doing inside a current-carrying wire?
The answer might seem straightforward at first glance. An electrical current is a flow of charge moving through an electrical conductor or through space. From electronics-notes.com, we have a practical definition of electrical current: it is the rate at which charge flows past a given point in an electric circuit. From physicsclassroom.com (an excellent teaching site), the definition alludes to a bit of scientific history: the direction of an electric current is “by convention the direction in which a positive charge would move.” Electrons “move through the wires in the opposite direction.” Confusingly for many students, conventional current in a wire moves from the positive terminal of a battery to the negative terminal. Electrons, being negative charge carriers, actually move in the opposite direction. This might be where many of us stop in our quest to understand current, which is unfortunate because this is a fascinating exploration.
The silver lining of this conventional terminology is that it nudges us toward the history, which shows an incredible advancement in solid-state physics. How was electrical current discovered? We probably all know about Benjamin Franklin’s famous kite experiment in 1752. We might not know that he didn’t actually discover electricity with this experiment, nor was he the first to discover that lightning was actually a type of electricity. Electrical forces had been known for hundreds of years by his time. I think, however, we can fairly credit him for contributing to our confusion about current. He studied static electricity by producing a static charge on the surface of glass, amber and other materials by rubbing them with fur or a dry cloth. This resulted in an exchange of electrons from one material to another. At that time, electrical current was called “electrical fluid,” and as such, Franklin guessed that some materials (such as glass that’s rubbed) contained more of this fluid than others. To his thinking, these charged objects contained excess, or positive, electricity, while others contained a deficiency of the fluid, or negative electricity. Electric batteries were developed soon afterward and it seemed natural to assign the direction of electrical flow from positive to negative (excess to deficient). It was only when electrons, the subatomic particles responsible for static charge, were discovered about one hundred years later, that scientists realized these particles move in the opposite direction. It is an excess of electrons that produces a negative charge, so the flow from excess to deficient must actually be from a negative terminal to a positive terminal.
This “conventional current” (positive to negative) had staying power. The conventional terminal designation is still used worldwide. It’s not a problem to work with as long as it is consistent, but it can present a problem when we try to understand what is actually happening inside the conductor.
What then is going on inside an electrical current-carrying wire, for example? Consider an everyday power cord on a vacuum cleaner. If we could zoom into a cross-section of that cord, we would see wires made of copper through which electrons travel easily and with little resistance, surrounded by material that resists current flow and provides good electrical insulation. We can imagine electrons flowing from the electrical outlet in the wall, through the cord, and into the appliance. But where do these flowing electrons end up? Do they get used up in the process of doing work somehow? When the power is shut off, are we left with an alarming reservoir of electrons somewhere inside the vacuum motor? Where do the electrons in the wall socket originate from? These great questions start us on our journey.
When we think of electrical current as a physical flow of negatively charged particles through a material or through space, we might think of something analogous to water molecules flowing in a stream, and when we do, a number of pressing questions come to mind. Like the outdated convention of positive-to-negative terminal current flow, the terms “current” and “flow” themselves lead us away from clear modern evidence that electrical current is not a physical flow of particles at all. Many physics classrooms begin their discussion of electrical current with a water analogy and it is a good place to go to get a feel for how simple electrical circuits work. But this analogy, while a good start, proves misleading as we deepen our understanding. And there is much more to this fascinating story. To learn it we must upgrade our understanding of what electrons are doing at the subatomic level inside a conductor.
What Is an Electron?
All materials are made of atoms. All atoms consist of a nucleus surrounded by electrons. The nucleus is composed of electrically neutral particles straightforwardly called neutrons and positively charged protons. It therefore has a positive charge. These particles are bound tightly together by a fundamental force called the strong force. This force is indeed strong. It easily overcomes the repulsive forces between the positively charged protons, but it only acts over an extremely short distance, at the scale of the nucleus itself, and from there, its influence drops off dramatically to zero. Negatively charged particles called electrons surround the nucleus. They are attracted to the nucleus through the attraction of opposite electrical charges. An electrically neutral atom contains equal numbers of electrons and protons.
We might imagine electrons moving around the nucleus in planet-like circular orbits, except that here, the atomic force is electrostatic rather than gravitational. This is the familiar Bohr model (below) introduced by Niels Bohr and Ernest Rutherford in 1913. This model of the hydrogen atom shows three possible energy shells for the electron. At n=1, the electron is at its lowest energy (ground) state. If the atom is in an excited state, the electron will be in a higher energy shell. It will emit a photon of light as it returns to a lower energy state.
In an excited pure hydrogen gas, you will not see the whole spectrum; you will see only a handful of distinct coloured lines, and hydrogen will never emit green or yellow, for example. These scientists figured out that electrons in atoms must occupy discrete energy levels. They orbit at specific stable distances from the nucleus. The farther an electron is from the nucleus, the higher its energy level. An electron can move up or down only by jumping to/from specific energy levels. Energies are therefore quantized. They come in quanta or packets. This model is useful for predicting the spectral phenomena of simple atoms with few electrons, like hydrogen, but it cannot explain the spectra of large complex atoms. Nor can it explain the different intensities of spectral lines for any given atom. It is also still used as a simplified model for chemical bonding between atoms, in which atoms share one or more electrons located at their outermost energy levels.
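To make the quantization idea concrete, here is a minimal sketch (in Python) of the Bohr-model arithmetic for hydrogen: the energy of level n is the textbook value of −13.6 eV / n², and the wavelength of the photon emitted when an electron drops down to n = 2 follows from the energy difference. The numbers reproduce the familiar red, blue-green and violet visible (Balmer) lines, and nothing in between.

```python
# Bohr-model estimate of hydrogen's visible (Balmer) emission lines.
# E_n = -13.6 eV / n^2 ; a photon emitted on a drop from n_hi to n=2
# carries the energy difference, and wavelength = hc / E_photon.

HC_EV_NM = 1239.84  # Planck's constant x speed of light, in eV·nm

def level_energy(n):
    """Energy of the nth Bohr level of hydrogen, in electron volts."""
    return -13.6 / n**2

for n_hi in (3, 4, 5):
    photon_ev = level_energy(n_hi) - level_energy(2)   # energy released in the jump
    wavelength_nm = HC_EV_NM / photon_ev
    print(f"n={n_hi} -> n=2 : {wavelength_nm:.0f} nm")

# Output: 656 nm (red), 486 nm (blue-green), 434 nm (violet) --
# discrete lines only, never green or yellow.
```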
The modern model of the atom, the quantum physical model, describes the positions of atomic electrons not as being precisely located within energy shells but as clouds of probability. The hydrogen atom is shown below right. The shapes of the electron orbitals are shown in yellow and blue. Denser regions of colour indicate a higher probability of the electron's location. The energy shells (1, 2 and 3) are arranged in increasing energy from top to bottom. The orbital shapes are s, p and d, shown from left to right.
This model incorporates the fact that we no longer understand electrons to be just point charges, or tiny charge-carrying billiard balls. Under certain circumstances they do act as such. For example, an electron has a measurable momentum and it can take part in elastic collisions. But electrons also act as waves and that wave nature also shows up under certain circumstances, such as the creation of wavelike interference patterns. We now think of the electron as both particle and wave, which is not easy to grasp. These qualities seem to be mutually exclusive. Electrons in conductors, however, do display both wave and particle behaviours. To predict such behaviours, we must go quantum and understand electrons as wave functions that exhibit both particle and wave behaviour. An electron’s position and momentum (or velocity) are now defined as probabilities. Where and how fast an electron is going are assigned probability amplitudes, rather than specific values. Furthermore, thanks to Werner Heisenberg’s uncertainty principle of quantum mechanics, these are complementary variables, which means we can only know one value at the expense of the other (this part was actually formulated by Bohr). If we want to know precisely where an electron is, we cannot know its momentum at that same moment, and vice versa. Likewise, the electron’s wave and particle properties are also complementary. A single electron cannot simultaneously exhibit both its full wave-like and particle-like nature, with the exception of the famously fascinating double-slit experiment, in which electrons show some of both behaviours at the same time, which I encourage you to look up.
This is a difficult conceptual step to make but we must because the easy-to-visualize Bohr model, as useful as it is, leaves something missing. By using Schrodinger’s equation to mathematically describe atomic electron behaviour as a wave function, physicists can predict many of the spectral phenomena that the Bohr model cannot. It’s not easy to do in practice either; the calculations are very complex.
What is an Electrical Conductor?
The atoms that make up a good electrical conductor have an atomic structure that allows the outermost electrons, called valence electrons, to be loosely bound to the nucleus. Valence electrons are outermost energy shell electrons. These are the electrons that take part in chemical bonding. Now we can describe chemical bonding more precisely, in quantum mechanical terms, as two or more atomic orbitals combining to form a molecular orbital. An atomic orbital is a region of space around the nucleus where an electron is most likely to be. It can be a simple spherical shape or a more complex shape depending on its energy level. The orbital shapes come from solving the Schrodinger equation for electrons bound to their atom through the electric field created by the nucleus. The orbital is part of the electron wavefunction that describes the electron’s location boundary and its wavelike behaviour.
Molecular orbitals, on the other hand, come in three different kinds: 1) bonding orbital, which has a lower potential energy than the atomic orbitals it is formed from, 2) antibonding orbital, which has a higher energy than the atomic orbitals, so it opposes chemical bonding and 3) nonbonding orbital, which has the same energy so it has no effect on chemical bonding either way. A chemical bond is a constructive, in-phase, interaction between two valence electrons of two atoms. An antibonding orbital is an interaction between atomic orbitals that is destructive and out of phase. The wavefunction of an antibonding orbital has a node, a value of zero, between the two atoms. This means there is essentially no probability of finding an electron between the two nuclei, and therefore no electron is available to bond them together.
Metals tend to be good conductors because their valence electrons are loosely bound to their nuclei. They are delocalized electrons, which means that they are not associated with a particular atom or chemical bond. These molecular orbital electrons extend outward over many atoms. Although the electrons are delocalized, the metal atoms themselves are bound tightly together through metallic bonding, by electrostatic attractions between the positive nuclei and the “sea” of electrons in which they are embedded. The atoms are held tightly together, which means metals tend to have high melting and boiling points. The nuclei in metals act like positive ions in this arrangement, which means that metallic bonding is similar in this sense to ionic bonding. The resulting metal structure is a tight three-dimensional lattice arrangement, similar to the atomic structure of ionic crystals such as sodium chloride (table salt). In contrast to ionic compounds, valence electrons in the metallic lattice form molecular orbitals that extend across the entire metal. The bonding electrons themselves do not orbit the entire metal but their influence extends across the metal. Valence electrons in metals act more like a collective than they do in conventional chemical bonds.
All of the valence electrons in the metal participate in molecular bonding. As vast as this collection of delocalized electrons is, the number of possible delocalized electron energy states is far greater. All metal atoms contain relatively few electrons in their valence energy orbitals. Transition metal atoms, for example, can hold up to 18 electrons in the outermost energy shell (which consists of five d orbitals, one s orbital and three p orbitals), but those orbitals are barely filled. If this is confusing, we will be exploring this in more detail later. The point here is there are many more possible energy states than there are electrons available to fill them. These empty available states, which are all similar in terms of energy, will become important when we look at band theory later on in this article.
What Happens When an Electric Potential is Applied?
When a metal with high conductivity is placed in an electric field, all the valence electrons tend to move against the direction of the field. An electric field is a vector force field that surrounds an electrical charge and exerts a force on other charges nearby. The electrons themselves each generate an electric field, as do the positively charged nuclei in the metal. When electrons within the metal move, they also generate a magnetic field. We can begin to see electric current as a complex interplay between multiple local electric and magnetic fields.
A waterfall analogy is often used to describe the movement of charged particles, such as electrons, as moving along or down an electric potential. An electric potential might at first sound the same as an electric field but the electric potential expresses the effect of an electric field at a particular location in the field. We could place a test charge within an electric field and measure its potential energy as a result of being in that field. (A positive test charge is usually used and this is why electrons move against the direction of an electric field.) The potential energy will differ, depending on the location within that field. A negative test charge, for example, will have high potential energy close to a negative source charge and lower energy further away from it. In other words, the electron tends to move from where it would have high potential energy toward lower potential energy. Inside a copper wire attached to a battery, for example, an electron is pushed away from the negative terminal, where its potential energy is highest, and toward the region where its potential energy is lowest, the positive terminal. This difference in electric potential energy is called voltage. Voltage is defined as the amount of work required per unit of charge to move a test charge between two points. 1 volt = 1 joule of work done per 1 coulomb of charge. For example, if a 12-volt battery is used in a wire circuit to power a light bulb, every coulomb of charge in the circuit gains 12 joules of potential energy as it moves through the battery. Every coulomb in turn loses 12 joules of energy into the environment as light (and some heat) as it powers the light bulb.
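As a quick check of that arithmetic, here is a minimal sketch in Python of the voltage definition (V = W/Q, so W = V × Q), using the 12-volt battery example above.

```python
# Voltage is work (energy) per unit charge: 1 volt = 1 joule per coulomb.
# Worked example from the text: a 12 V battery driving charge around a circuit.

voltage = 12.0          # volts (joules per coulomb)
charge_moved = 1.0      # coulombs

energy_gained = voltage * charge_moved   # joules gained passing through the battery
print(f"Each coulomb gains {energy_gained:.0f} J in the battery "
      f"and gives up the same {energy_gained:.0f} J in the bulb.")
```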
The waterfall analogy, as mentioned earlier, can be misleading. Within the wire, the electrical current does not depend on electrons wriggling free from copper atoms and flowing down the wire, like water molecules would flow down a waterfall. The description of a “delocalized sea of electrons” can naturally lead to such an assumption about current. Instead, each valence electron is held loosely enough by its nucleus to “nudge” a neighbouring electron in a neighbouring atom and so on. A better analogy for electrical current might be that of fans in a row of a football stadium standing up one after another to do the wave. The people stay in place but the wave moves down the row. If we want to stick with water, we could say that the current is analogous to a wave travelling across a sea. Electric current can be defined as the rate at which charge (not electrons) flows past a point in a circuit. 1 ampere of current = 1 coulomb of charge/second.
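To get a feel for what “one coulomb per second” means in terms of charge carriers, here is a small sketch; the only number it assumes is the standard elementary charge, about 1.602 × 10⁻¹⁹ coulombs.

```python
# 1 ampere = 1 coulomb of charge passing a point each second.
# How many electron charges is that?

ELEMENTARY_CHARGE = 1.602e-19   # coulombs carried by one electron

current = 1.0                   # amperes (coulombs per second)
electrons_per_second = current / ELEMENTARY_CHARGE
print(f"{electrons_per_second:.2e} electron charges pass per second")
# ~6.2e18 -- an enormous number, even though, as we will see, each carrier barely moves.
```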
It’s easy to imagine a flow of electrons under pressure (an analogy for voltage) gliding along inside a wire like water flowing through a pipe, but current is not the physical flow of electrons; rather, it is the flow of the energy of their movement. We can put this idea more scientifically as a momentum transfer. Put yet another way, it is a transfer of kinetic energy from one electron to another and another and so on. We can think of nudged electrons transferring energy from one to the next like a billiard ball hitting an adjacent billiard ball along a line of billiard balls. This analogy alludes to the particle nature of the electrons.
Electrical Conduction: The Drude Model
The “billiard ball” description of electrical conduction is called the Drude model, proposed in 1900. This model was a bit before its time because it predated even J.J. Thomson’s “plum pudding” (or raisin bun) atomic model by a few years, and Rutherford’s nuclear model by a decade. The “sea of electrons” embedded within a positive matrix that Thomson envisioned just happens to align pretty well with the Bohr valence electron model for metals, where a sea of valence electrons surrounds each atom. The Drude model is essentially a description of how kinetic energy is transferred within a conductor by treating electrons like tiny billiard balls. For science history buffs like me, we can follow how the Drude model evolved into our modern model. We can read into it how our understanding of electron behaviour evolved. The Drude model was not a modern quantum mechanical model of what is happening, but it could predict some electron conduction behaviour by understanding conduction as a momentum transfer between electrons exhibiting particle-like behaviour. In fact, Paul Drude used Maxwell-Boltzmann statistics to derive his model. These statistics describe an average distribution of non-interacting particles at thermal equilibrium. This is a classical description of how an ideal gas behaves, and it was the model available to him at the time. In this view, each atom in a conducting metal such as copper contributes valence electrons to a sea of non-localized, non-interacting electrons. It turns out he was accidentally right. In most cases, at least in the case of copper, we can effectively neglect the interactive forces between the electrons because they are shielded from those forces by the relatively stationary (much more massive and more highly charged) atomic nuclei in the copper. It is this shielding effect that allows us to model metal electrons fairly accurately as an ideal gas, a cloud of particles that do not interact with each other. However, the particles, in this case electrons, are far more concentrated together than the atoms in any gas would be. This means that this model doesn’t predict all conduction behaviour. Electrical conduction exhibits behaviours that are too complex to be described through classical ideal gas theory.
Maxwell-Boltzmann statistics were replaced around 1926 by Fermi-Dirac statistics. Rather than focusing on the non-interaction between particles, these statistics describe a distribution of particles over a range of energy states in a group of identical particles that all obey the Pauli exclusion principle. This effectively upgraded the rule for interaction. Rather than saying the electrons don’t interact, we now say they cannot occupy the same quantum state. We now have to treat electrons as quantum particles. Everything about a quantum particle is described by four quantum numbers (spin, magnetic, azimuthal and principal). Electrons belong to a special division of quantum particles called fermions. No two fermions can share all four quantum numbers; they cannot occupy the same quantum state at the same time. A boson such as a photon can, and it is governed by a different set of statistics. We are introducing the Pauli exclusion principle to the gas-like behaviour of valence electrons inside a metal. The Pauli exclusion principle is a critically important rule that electrons in atoms (and all fermions) must obey. It is a quantum mechanical principle. The implications are quite profound. It means that the potential energy of the gas-like electrons must now be taken into account, and by doing so, it limits the numbers of electrons that occupy each orbital in an atom. This rule forbids valence electrons from moving down to occupy already filled lower energy states in the atom. This means there is a lower limit on the potential energy of the atom’s (lowest energy) ground state. By incorporating quantum mechanical effects, this model significantly improved the predictions we can make about conducting electron behaviour.
Band Theory of Metals
The Drude model, as good as it is, is still missing a component we need to take into account with conduction in metals. We need to consider the effects of how molecular bonding relates to the conductivity of the metal. We’ve been modelling the conduction electrons as particles that have limited occupied energy states but otherwise act like strangers to each other.
Molecular orbitals in metals are much larger than atomic orbitals, which are confined within the atom. In fact, molecular orbitals extend and overlap each other throughout the whole material. Bearing this in mind, the word “orbital” here can seem misleading. There is no orbiting implied here. Better said, the set of molecular orbitals that are created in the metal is a set of interactions that are generated by the valence orbitals of the interacting atoms. This molecular interaction, rather than the electrons themselves, extends throughout the material.
Metallic bonding means that a very high number of atomic valence orbitals interact simultaneously within a metal. This is a critical key to explaining why metals can be such good electrical conductors. Within a free atom, an atom that is not chemically bonded to any other atom, the energy that an electron can possess must fall into one of several possible discrete energy levels. Within a conductive metal, on the other hand, due to the overlapping of a huge number of molecular orbitals, an enormous number of new possible energy states opens up. The available energies of the valence electrons are no longer confined to discrete values but are instead spread into a band of available energies that can vary in width. This molecular orbital approach is called band theory. It’s a very useful way to visualize how conduction works, and to understand why some materials are better conductors than others.
Why are the valence energies spread out into a band? First, we must distinguish between an orbital and a shell. Inside any atom, valence electrons are confined to the outermost (highest energy or bonding) valence energy shell. An electron shell is all about the energy. An atomic orbital is about where an electron is most likely to be found. To distinguish these two concepts, we can imagine that we are adding electrons to an atom. First we fill up a sphere-shaped 1s orbital, which can hold two (opposite-spin) electrons. Then we fill up the next highest energy 2s spherical orbital with two electrons. Then we start to fill three 2p orbitals (three dumbbell shapes in three dimensions) with 2 electrons each, six electrons total, and so on. Every atom of every element possesses the same set of potential orbitals and shells, but they differ in the number of electrons inhabiting them and the orbitals themselves can differ in size. The 1s electrons belong to the lowest energy K shell. The 2s and 2p electrons belong to the next higher energy L shell. The L energy shell, which is just a circle in a Bohr diagram, can now be described in three-dimensional detail, as one spherical orbital plus three double-lobed orbitals. The three 2p orbitals (which extend along the x, y and z axes in three-dimensional space) and the 2s orbital hold a total of eight electrons in the L energy shell. A simplified diagram is shown below left.
Let’s look at the electron configuration of copper as an example. A copper atom, with 29 electrons in total, has an unexpected electron configuration that hints at why it is so conductive. If we simply fill up orbitals in order (1s, 2s, 2p, 3s, 3p, 4s, 3d; the five d orbitals, most of them 4-lobed, can together hold up to 10 electrons) we will have 1s²2s²2p⁶3s²3p⁶4s²3d⁹. This is actually wrong, based on experimental evidence. For copper and other transition metals, we must write the 3d orbital before the 4s orbital even though 3d is considered to be higher energy than 4s. This is an oddity of the transition elements. In these elements, the 4s orbital behaves like an outermost highest energy orbital.
Following the transition element rule, we should have 1s²2s²2p⁶3s²3p⁶3d⁹4s², thinking (correctly) that 4s should still fill up before 3d starts to fill. This is still wrong. According to experimental evidence the correct configuration is 1s²2s²2p⁶3s²3p⁶3d¹⁰4s¹. In these metals, the 3d orbital is just slightly larger than the 4s orbital, and it is still higher in energy than 4s, just as it is in other atoms. However, it needs only one more electron to be filled (with 10 electrons total). This filled d orbital state is a more stable, lower energy configuration, so it pulls up an electron from the slightly lower energy 4s orbital to achieve it. That electron, moving up from 4s to 3d, increases in energy, but the increase is very small. That small cost pays off by significantly lowering the atom’s total potential energy.
At first glance you might conclude that copper has one valence electron, just one electron available to delocalize and form extensive molecular orbitals that take part in electrical conduction. You cannot read it from the configuration, but the 3d and 4s orbitals are very similar in energy. That’s why the electron swap is possible. This means that in practice, copper has eleven electrons available to contribute to the valence energy shell, and therefore to molecular orbitals and to electrical conduction.
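A small sketch to keep the bookkeeping straight: summing copper’s experimentally observed configuration gives 29 electrons in total, and the 3d and 4s electrons together supply the eleven that end up contributing to the valence band.

```python
# Copper's observed ground-state electron configuration, as (subshell, electron count).
copper_config = [("1s", 2), ("2s", 2), ("2p", 6),
                 ("3s", 2), ("3p", 6), ("3d", 10), ("4s", 1)]

total = sum(count for _, count in copper_config)
outer = sum(count for subshell, count in copper_config if subshell in ("3d", "4s"))

print(f"total electrons: {total}")      # 29, copper's atomic number
print(f"3d + 4s electrons: {outer}")    # the 11 electrons contributing to the valence band
```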
If we go back to our copper wire example, these valence electrons will fall into a range of many possible energy states due to the overlapping of a huge number of molecular orbitals. This range is called the valence band. In conductive metals, there are many empty vacancies with energies just above the filled orbitals that electrons can move into and out of. This range of energies just above the valence band is called the conduction band. In fact, in metals, these two bands of energies overlap, so that some electrons are always present in the conduction band.
Another energy level that is very useful to understand is the Fermi level. This can be defined as the highest energy level occupied by an electron in a material that is at absolute zero. Absolute zero is the coldest possible temperature of a material. It is the lowest possible energy state. In a conductor at absolute zero, the valence electrons are all packed into the lowest available valence orbitals. Thanks to the Pauli exclusion principle, the electrons must retain a range of base level energies. We can call this range a “Fermi sea” of electron energy states. The Fermi level is like the surface of this sea, where no electron has enough energy to rise above the surface, an analogy I lifted from the Hyperphysics link above. I think this analogy can lead to some confusion. We might assume that the Fermi level is simply the top of the valence band, which is incorrect. From Wikipedia, we can understand the Fermi level more deeply as “a hypothetical energy level of an electron, such that at thermodynamic equilibrium this energy level would have a 50% probability of being occupied at any given time.” It’s critical to note that the Fermi level does not need to correspond to any actual electron energy level. Instead it is an energy that lies between the valence and conduction band energies, or where they overlap in the case of metals. In metals, there are equal numbers of occupied and unoccupied energy states at the Fermi level. There are many electrons and there are many free states, and therefore, conductivity is maximized. In contrast, in a material where all the energy states are occupied at the Fermi level, there is nowhere for electrons to move to, so it cannot conduct electricity.
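The 50% statement can be seen directly in the Fermi-Dirac distribution, f(E) = 1 / (exp((E − E_F)/kT) + 1). Here is a minimal sketch, assuming room temperature and a Fermi level of 7 eV (roughly the value usually quoted for copper); both numbers are just illustrative.

```python
import math

def fermi_dirac(energy_ev, fermi_level_ev, temperature_k):
    """Probability that a state at the given energy is occupied."""
    kT = 8.617e-5 * temperature_k            # Boltzmann constant in eV/K, times T
    return 1.0 / (math.exp((energy_ev - fermi_level_ev) / kT) + 1.0)

E_F = 7.0      # assumed Fermi level in eV (copper's is roughly this value)
T = 300        # room temperature in kelvin

for E in (6.8, 7.0, 7.2):
    print(f"E = {E:.1f} eV : occupation probability = {fermi_dirac(E, E_F, T):.3f}")

# At E = E_F the probability is exactly 0.5; a little below it is nearly 1,
# a little above it is nearly 0.
```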
At room temperature some copper electrons are already in the conduction band. We might guess that the conductivity of a copper wire would simply continue to increase as the temperature increases above room temperature, but the relationship is more complex. As the temperature goes up and the electrons in the copper gain thermal energy, they gain non-directional kinetic energy, and the vibrations of the copper lattice grow. This kind of electron motion is random and “jittery.” Electrical conduction depends on the motion of electrons in one direction. As the temperature increases, electrical conduction becomes increasingly hindered by internal collisions, mainly between the delocalized electrons and the vibrating lattice, and conductivity therefore decreases. If we cool copper down toward absolute zero, we would expect fewer and fewer electrons with enough energy to conduct electricity as they fall below the Fermi level. At the same time, thermal vibrations within the metal lattice calm down, so conducting electrons are less likely to be scattered. If it were possible to reach absolute zero, we would expect no electrons at all above the Fermi level and there would be no conduction possible. It seems to me we could never apply an electric potential to test for current without adding the energy of an electric field that would excite some electrons into conduction.
Electric Insulators
Every element and material has a valence band and a conduction band. In terms of electrical conductivity, materials fall into one of three categories: conductor, insulator and semiconductor. Even the best insulator is not a perfect insulator. It too has a conduction band that, under the right conditions, can be populated by a few mobile electrons. There is a significant difference between insulators, semiconductors and conductors in terms of how far apart the energies of the valence and conduction bands are. In the diagram below the energy is increasing from bottom to top.
We can describe the valence electrons in every material as having two possible ranges of energy levels, in other words, two energy bands: a valence band and a conduction band. Between these bands is a gap, a range of energies that no valence electrons can occupy. Its width varies greatly between materials. Each element or material therefore has a unique band structure. In all insulators, there is a large range of energy above the valence band where no electron energy orbitals are available. They have large band gaps. In an insulator, the organization of the atoms does not allow for free electrons. All of the electrons are tightly bound to their nuclei (there is no shielding effect) and they form tight localized bonds between atoms. A great deal of energy must be applied to an insulator before its valence electrons have enough energy to jump the band gap into the conduction band. Once enough energy is applied, however, even an insulator will start to conduct electricity. At ground state, valence electrons are tightly bound to the atoms in the insulator material. In a sufficiently high energy environment, they become excited and delocalized into conduction electrons.
I am being careful to write “energy” rather than “temperature” here because a strong electric field is much better than heat for tearing valence electrons away from atoms and turning them into conduction electrons. Heat is a randomly directed influence that generates random jittery electron movements in the material.
To see what happens when an insulator becomes a conductor, consider a resistor hooked up to a circuit. Let’s apply a voltage much higher than it is rated for. The electric field created by the applied voltage will eventually overcome the material’s resistance. We call this resisting property its dielectric strength. When its dielectric strength is overcome, the material breaks down into a conductor. In a solid, the breakdown is physical and permanent. Our little resistor is burned and ruined. We can also observe this breakdown as an electrostatic discharge, a familiar example being lightning. A sudden giant spark is emitted as air, normally a strong insulator, breaks down through ionization. Electrons stripped from the gas atoms become conduction electrons when a sufficient electric potential difference between one area and another builds up during a storm.
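For a rough sense of scale, here is a small sketch assuming a commonly quoted dielectric strength for dry air of about 3 million volts per metre (roughly 3 kV per millimetre); the real value depends on humidity, pressure and the shape of the electrodes, so treat the numbers as order-of-magnitude estimates only.

```python
# Rough estimate: the voltage needed to break down an air gap and produce a spark,
# assuming a dielectric strength of ~3e6 V/m for dry air (an approximate textbook value).

AIR_DIELECTRIC_STRENGTH = 3e6   # volts per metre (approximate)

for gap_mm in (1, 10, 1000):
    gap_m = gap_mm / 1000
    breakdown_voltage = AIR_DIELECTRIC_STRENGTH * gap_m
    print(f"{gap_mm:>5} mm gap -> roughly {breakdown_voltage:,.0f} V to spark")

# A millimetre gap needs about 3,000 V; a metre-long spark needs millions of volts,
# which hints at the enormous potential differences that build up in a storm cloud.
```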
Semiconductors
Semiconductors are widely used in electronic circuits. A semiconductor is any material that conducts current, but only partially: its conductivity falls between that of an insulator and a conductor. Most are made of crystals, so they have lattice structures similar to metals. In semiconductors, the band gap is smaller than in insulators, so a smaller amount of energy (a small amount of heat in many cases) is required to bridge the band gap into the conduction band. Some semiconductors can be chemically altered to further enhance their conductivity, through a process called doping. In semiconductors and insulators, the Fermi energy level lies somewhere in the middle of the band gap; if there is no band gap, as in metals, it lies within the band, near the top of the filled states.
In semiconductors, doping can shift the Fermi level either much closer to the conduction band (N type) or much closer to the valence band (P-type). Doping is done by adding a few foreign atoms into the lattice structure of the material. These impurities add extra available energy levels. In N-type semiconductors (shown below left), energy levels are added near the top of the band gap (along with additional free valence electrons that contribute to conduction).
In P-type semiconductors, energy levels are added near the bottom of the band gap, just above the valence band. These added levels create holes that electrons would normally occupy in the top of the valence band of the material. Electrons can jump in and out of them. N-type semiconductors are much more conductive than P-type semiconductors because there are many more electrons available in the conduction band in the N-type than there are holes in the valence band in the P-type.
Conductors
In conductors, there is no energy gap at all. The valence band overlaps the conduction band. At room temperature, some electrons are energetic enough to be conduction electrons. When an electric potential is applied, current is generated in the material. When we want to know how good an electrical conductor is, we can ask how many electrons are available in the conduction band. Copper, as we discovered, has eleven electrons per atom that populate the valence band. These electrons are delocalized in the lattice as molecular orbitals. That’s a large number of electrons available to generate a current when an electric potential is applied.
If all metals have overlapping valence and conduction bands, why are some better conductors than others? The answer can be quite complicated but we can get some idea by comparing copper with iron, which is much less conductive. Copper has a conductivity of 5.96 × 10⁷ S/m (siemens per metre) at 20°C. Iron’s is 1.00 × 10⁷ S/m at 20°C, about six times lower. Copper displays an almost perfect Fermi surface, which means it acts much like a hypothetical sphere of free electrons would act. Its valence electrons are all lined up close to the Fermi level, the boundary between occupied and unoccupied energy states. They move easily in any direction. Iron has two main differences. Iron atoms have a strong magnetic moment; they act like tiny magnets. This splits the band structure into two parts, based on the two magnetic spin states. This splitting separates the electrons, reducing the ways they can move. The second difference is that the Fermi surface isn’t smooth for iron. A Fermi surface is an abstract geometric representation of all occupied versus unoccupied electron energy states in a metal at absolute zero. The shape is derived from the symmetry and periodicity of the metal lattice. In iron, it is broken up into many disconnected pockets, rather than a smooth electron sphere. Electrons have to jump from pocket to pocket in order to move. Even though iron has free (delocalized) electrons in molecular orbitals just as copper does, these two factors make it much harder for the electrons to move and generate current.
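To translate those conductivity figures into something tangible, here is a small sketch computing the resistance of a hypothetical one-metre length of 2-mm-diameter wire from R = L / (σ·A); the length and diameter are just illustrative choices, while the conductivities are the ones quoted above.

```python
import math

# Resistance of a wire: R = length / (conductivity * cross-sectional area).
CONDUCTIVITY = {"copper": 5.96e7, "iron": 1.00e7}   # siemens per metre, at 20 °C

length = 1.0                       # metres (illustrative)
radius = 1e-3                      # metres (a 2-mm diameter wire)
area = math.pi * radius**2         # cross-sectional area in m^2

for metal, sigma in CONDUCTIVITY.items():
    resistance = length / (sigma * area)
    print(f"{metal}: {resistance * 1000:.1f} milliohms per metre")

# Copper: ~5.3 milliohms ; iron: ~31.8 milliohms -- about six times more resistive.
```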
To sum up, when wondering about the conductivity of a material, a crucial question to answer is how close the conduction band is to the valence band. In a conductive material the two bands are very close and in metals they overlap. In a semiconductor at room temperature, there is a small gap in energy between the valence band and the conduction band, with the Fermi level within that gap. Doping adjusts the Fermi level and may provide additional conducting electrons. In an insulator at room temperature, the energy difference between the valence band and conduction band is so large that no electrons in the valence band can absorb enough energy to populate the conduction band. If the energy of an insulator is increased (usually by applying a very high voltage), some of the insulator’s valence electrons may absorb enough energy to cross the Fermi level and jump the gap into the conduction energy band. The dielectric strength of the material is overcome and the material breaks down.
There is a finer difference between conductors and semiconductors, in terms of the density of energy states across the band gap. In a semiconductor, the conduction band is above the Fermi level, so as its energy goes up, the conduction band begins to get populated by electrons as they cross the gap, starting from zero to one electron to two and so on. Conductivity gradually increases starting from zero. In a good conductor, the Fermi level is in the conduction band because it overlaps with the valence band at room temperature. As the energy rises, the number of conduction electrons starts with an already populated conduction band and increases from there. This is an additional reason why metal conductors conduct current so readily.
In addition to explaining conductivity, band theory also explains many other physical properties of metals.
Metals conduct heat better than other materials because their delocalized electrons can easily transfer thermal energy from one region to another. Generally, the better a metal is at conducting electricity, the better it is at conducting heat. Metals at room temperature feel cool to the touch because the electrons in the metal readily absorb the thermal energy in your warmer fingertips. This energy absorption represents a very small jump to slightly higher available energy levels. For the same reason, metals under a hot sun tend to feel hotter than other objects nearby. The electrons easily transfer thermal energy to relatively cooler objects such as your fingertips. Metals tend to have a small specific heat capacity, which is a measure of how much heat must be added to a material to raise its temperature. In a similar way, metals are lustrous because the numerous similar energy levels available to the valence electrons mean they can absorb the energy of various wavelengths of visible light (they absorb various colours simultaneously). When electrons inevitably decay back to lower energy levels, they emit light across the same wide range of wavelengths. As light is continuously absorbed and re-emitted from the surface of the metal, we see it as lustrous and shiny.
Three Different Velocities
Now that we know how electrical conduction starts, we can focus on what’s happening when the current flows. Here we can further our quantum mechanical understanding of how the electrons in a conductor behave. A metal, as we know, is organized into a three-dimensional lattice of atoms. Each atom has an outer energy shell of delocalized valence electrons. These electrons are somewhat free of the attractive influence of their nuclei and they interact with each other, forming molecular bonds between atoms within the lattice. This “sea of electrons” is free to move and conduct an electrical current through the metal. For example, when a voltage is applied to a circuit, electrons in a copper wire drift from the negatively charged terminal toward the positive terminal. The electrons themselves drift very slowly through the metal, on the order of just a few metres per hour. If we looked inside the copper wire with the circuit switch turned off, we would see electrons continuously making microscopically short trips in random directions, changing direction as they strike other electrons and bounce off them. The net velocity of these electrons is zero when no voltage is applied. When a voltage is applied, a net flow, a drift, of electrons in the opposite direction (negative to positive) to the electrical field (positive to negative) is superimposed over this random motion.
Wikipedia supplies an interesting mathematical example of electron drift velocity, worked out for a 2-mm diameter copper wire. The drift velocity (which is proportional to the current; here 1 ampere) works out to be 23 μm/s. That’s micrometres. For a typical household 60 Hz alternating current, the electrons in the wire drift less than 0.2 μm per half cycle. This means that the electrons flowing across the contact point in a switch, for example, never even leave the switch. This doesn’t mean that the electrons are individually lumbering along within the wire. Individual electrons in any material at room temperature have a tremendous amount of kinetic energy, called Fermi energy. The Fermi velocity of electrons in copper is about 1570 km/s! The drift, the net movement of electrons in one direction under an electric potential, however, is extremely slow. The speed of electricity (the speed at which an electrical signal or energy travels) through a copper wire is yet again different. The energy that runs the motor in a vacuum cleaner travels as an electromagnetic wave from the socket, through the cord, and to the motor. The wave’s propagation speed is close to but not quite the speed of light. This is why the vacuum starts up immediately once you flip the switch. And this is where our simpler billiard ball explanation of current propagation falls down. If electrons were like tiny balls striking one another down a line, like football fans doing the wave in a stand, delays caused by the inertia of each electron as it is put into directed motion would build up quickly and significantly and slow the current to a stop. The electric signal instead travels as propagating synchronized oscillations of electric and magnetic fields generated by the oscillations of electrons in conducting atoms.
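Here is a minimal sketch of that drift-velocity arithmetic, v = I / (n·q·A), assuming copper’s free-electron density of roughly 8.5 × 10²⁸ electrons per cubic metre (a standard textbook figure); the wire diameter and current match the example above.

```python
import math

# Drift velocity of conduction electrons: v = I / (n * q * A)
current = 1.0                 # amperes
n = 8.5e28                    # free electrons per cubic metre in copper (approximate)
q = 1.602e-19                 # charge of one electron, in coulombs
radius = 1e-3                 # metres (2-mm diameter wire)
area = math.pi * radius**2    # cross-sectional area, m^2

drift_velocity = current / (n * q * area)
print(f"drift velocity = {drift_velocity * 1e6:.0f} micrometres per second")

# ~23 micrometres per second -- astonishingly slow compared with the ~1570 km/s Fermi
# speed of the individual electrons, or the near-light speed of the signal itself.
```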
The wire guides, rather than physically carries, the wave of electromagnetic energy. This wave of energy, in turn, generates an electromagnetic field that propagates through space, responding to the (just preceding) energy flow. The time required for the field to propagate along the metal means that the electric field can lag slightly behind the electromagnetic wave, an effect that grows with the length of the wire, but the lag is immeasurably small at the scale of a vacuum cleaner cord. The vacuum is switched on and the wave of energy, which we can think of as an electromagnetic field signal, travels at near light speed down the cord. All the mobile electrons in the copper wires immediately get the signal to start oscillating along the electrical circuit. The electric signal, as an electromagnetic (EM) wave, is composed of oscillating electric and magnetic fields. The electromagnetic energy flows as oscillating electric and magnetic fields that are generated just ahead of it. The electrons, acting as moving charge carriers and magnetic dipoles, generate the oscillations of the EM signal.
The EM wave loses energy along the cord, as energy is transferred into work done by the charge carriers in the wire. Energy is always lost during the transmission of electricity. Using Ohm’s law, we know that voltage, current and resistance are related to each other. Losses scale with the square of the current, which means that a small jump in current through a wire leads to a big jump in loss for the same resistance. This is why, for example, long-distance transmission lines run at such high voltage: for the same power delivered, a higher voltage means a lower current, and therefore less energy lost to resistance. A lot of energy is lost as microscopic friction (electrons scattering off the jostling lattice and off one another) inside the wire, and it is released as heat. A current-carrying copper wire warms up. Because the atoms are bound tightly together and there are many electrons in close proximity to one another, electrons will experience some resistance even in a good conductor such as copper.
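A small sketch of why transmission voltage matters, assuming a hypothetical line resistance of 1 ohm and a fixed 10 kW of power to deliver (both numbers invented for illustration); the current follows from I = P/V and the loss from P_loss = I²R.

```python
# Resistive loss in a transmission line: P_loss = I^2 * R, with I = P_delivered / V.
line_resistance = 1.0        # ohms (hypothetical line)
power_delivered = 10_000.0   # watts we want to deliver

for voltage in (200.0, 2_000.0, 20_000.0):
    current = power_delivered / voltage
    loss = current**2 * line_resistance
    print(f"{voltage:>8.0f} V -> current {current:6.1f} A, loss {loss:10.2f} W")

# Raising the voltage tenfold cuts the current tenfold and the I^2*R loss a hundredfold.
```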
Bloch Theorem
We haven’t explored yet how the three-dimensional lattice arrangement of atoms in a metal impacts its conductivity. In 1928, Felix Bloch formulated his Bloch theorem to deal with these effects.
To use this tool to understand how current propagates within a metal lattice, we need to deepen our understanding of quantum mechanics once again. Electrons, like any matter particles, have a dual wave and particle nature. Resistivity can be explained by collisions, a particle-like phenomenon. Conductivity, however, is better understood as the transmission of energy waves between electrons. The Bloch theorem incorporates both natures simultaneously by treating electrons mathematically as wavefunctions. Put more precisely, this theorem is set up in quantum mechanics as special limitations put on Schrodinger’s famous equation. This equation describes the wave function of a quantum mechanical system. A wave function, in turn, is a mathematical description of an isolated quantum state. The wavefunction is useful because it gives us measurable information about that quantum state. I think it’s fair to say that the wavefunction, a complex function of space and time, pins down the un-pin-able physical properties of the electron: its position, momentum, energy and angular momentum. The “pin” is mathematical. For an electron bound in an atom, the quantum state is labelled by the four quantum numbers mentioned previously, and if you know those numbers, you know everything there is to know about that particular electron’s state. In a quantum mechanical system the set of possible values for these physical properties cannot be specific values as they would be in a classical system. Here, they are expressed instead as eigenvalues, or as ranges of possible values.
You will notice I said “isolated state” when we really want to describe what is happening in a complex system containing many electrons. None of these electrons are acting as perfectly isolated particles, but again we start by simplifying matters. We make the assumption that the electrons are acting like free particles and we ignore their interactions with each other. Copper, as mentioned earlier, comes pretty close to this hypothetical ideal state. An electron acting like a free electron in a metal such as copper can be treated mathematically as a wavefunction. In this case, we put limits on the wavefunction, so that our solutions to Schrodinger’s wave equation take into account the periodic nature of a three-dimensional lattice. Put more precisely, these solutions form a plane wave that is modulated by a periodic function. A plane wave is a set of flat two-dimensional wavefronts travelling through three-dimensional space in the direction perpendicular to those wavefronts. A periodic function is a mathematical way to describe a system that repeats its values at regular intervals, like the regular intervals of a crystalline arrangement in a metal. These functions are called Bloch functions and we can use them to describe the special wavefunctions, the special quantum states, of electrons in a metallic crystalline solid. The Bloch function itself is not periodic but its probability density has the periodicity of the lattice, which tells us that the probability of finding an electron is the same at any equivalent position in the lattice.
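A minimal numerical sketch of the idea, in one dimension: a Bloch function ψ(x) = e^(ikx)·u(x), where u(x) repeats with the lattice constant a. The function itself is not periodic (its phase keeps advancing), but its probability density |ψ(x)|² is identical from one lattice site to the next. The lattice constant, the value of k and the shape of u(x) are all arbitrary choices, for illustration only.

```python
import numpy as np

# One-dimensional Bloch function: psi(x) = exp(i*k*x) * u(x), where u has the
# periodicity of the lattice. Parameters are arbitrary, chosen only for illustration.
a = 1.0                         # lattice constant
k = 0.7                         # crystal momentum (an arbitrary value inside the band)

def u(x):
    """A periodic modulation with the lattice period a."""
    return 1.0 + 0.5 * np.cos(2 * np.pi * x / a)

def psi(x):
    return np.exp(1j * k * x) * u(x)

x = np.linspace(0, a, 5, endpoint=False)
print("psi(x)       :", np.round(psi(x), 3))
print("psi(x+a)     :", np.round(psi(x + a), 3))            # differs by a phase factor
print("|psi(x)|^2   :", np.round(np.abs(psi(x))**2, 3))
print("|psi(x+a)|^2 :", np.round(np.abs(psi(x + a))**2, 3)) # identical to the row above
```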
This mathematical description of electrons underlies the band theory of electron conduction that we discussed. Band gaps, energy states where electrons are forbidden, can now be described mathematically as values for energy, E, where there are no eigenfunctions in the Schrodinger equation. An eigenfunction, also known as a Bloch function here, is the mathematical description of any particular electron, as a wavefunction within a crystalline (metal) solid. We can predict the energy range where electrons are forbidden (the band gap) by working out where there are no eigenfunctions (no wavefunction solutions). The actual calculations are exceedingly complex and I don’t pretend to understand them. I do think, though, that it’s quite astounding that we have a way to precisely predict and describe a very complex behaviour that is critical to understanding how electric conduction works.
Emergent Behaviours and Properties
It might strike us as surprising that the Bloch theorem actually works so well, considering that we are ignoring many complications that could arise from electron-electron interactions. Complex interactions between subatomic particles in a metal can lead to the emergence of unexpected properties at the larger macro- or everyday scale.
The electron, as an elementary particle, has a charge and a mass. But, because it is a quantum particle, its charge and mass are qualities that can act in independent and unexpected ways. This is where our notion of the electron as a tiny physical ball of charge is really challenged. The Bloch theorem works because the electron’s charge moves within a periodic electrical potential as if it were a free electron in a vacuum. Its mass, in contrast, becomes an effective mass. The mass of an electron at rest is always about 9 × 10⁻³¹ kg. Inside a metal, however, an electron can seem to have a different mass based on how it reacts to various forces acting upon it. Effective mass can range considerably, from zero to around 1000 times the electron rest mass, and this can have a big influence on how the metal behaves. For example, “heavy fermion” metals, those with an effective electron mass in the 1000 range, can exhibit superconducting properties, such as zero electrical resistance below a low critical temperature. Effective mass can even be negative, as in the case of the P-type semiconductor electron holes mentioned earlier. An electron hole is the absence of an electron where one could exist in the lattice, leaving a local net positive charge. Each hole acts like a particle and is referred to as a quasiparticle, in this case a positively charged one. It is a phenomenon that arises from a complex system. This behaviour plays an important role in current conduction through semiconductors. Excited electrons leave behind holes in their old ground state energy level, and these holes can move just as electrons do, resulting in a contribution to the current that moves in the opposite direction.
Conclusion
Electric current is perhaps the most basic concept at the heart of the science of electricity. It’s a rare day that goes by when we don’t make use of at least one light bulb or our mobile phone. It’s so familiar to us that we might not give it much thought. Yet, electrical current is mysterious. It cannot be directly seen, heard or felt. It’s not easy to gain an idea of what it actually is. In order to do this we had to dive deeper and deeper into theory, from classical to quantum mechanical, while we updated our mental snapshot of the electron along the way, from Thomson’s hazy plum pudding to the tiny charged billiard ball to a modern quantum cloud of mathematical probabilities.