My Faster-Than-Light Murder Paradox

The strange relativistic result that the order of events can flip for different reference frames leads us into a new aspect of reality: the deep issues of causality and free will. These issues can be dramatized by my story of the faster than light murder.

A tachyon is a hypothetical particle that travels faster than the speed of light. Remarkably, relativity does not prohibit particles from traveling that fast. It says only that massless particles must travel at light speed, and that particles that have a real rest mass cannot travel at that speed (since the gamma factor would be infinite, and they would have infinite energy). The equations don’t prohibit faster-than-light travel per se.

Faster-than-light (also superluminal or FTL) communication and travel refer to the propagation of information or matter faster than the speed of light. Under the special theory of relativity, a particle (that has rest mass) with subluminal velocity needs infinite energy to accelerate to the speed of light, although special relativity does not prohibit the existence of particles that travel faster than light at all times (tachyons).

Yet despite the upside of discovering a tachyon, I decided many years ago not to bother searching for one. My reason borders on the religious. I believe that I have free will, and the existence of tachyons would violate that belief. Let me explain.

Imagine that Clary is standing 40 feet away from Jack. She has a tachyon gun that fires tachyon bullets that move at 4c, four times the speed of light. She fires. Light moves at a speed of 1 foot per nanosecond (billionth of a second), so her tachyons move at 4 feet per nanosecond. In just 10 nanoseconds, the tachyon bullet enters Jack’s heart and kills him. Let’s assume he dies instantly.

Clary is brought to trial. She doesn’t deny any of the facts I just described, but she insists on an unusual change of venue. She says she has a right to argue the case in whatever reference frame she chooses. They are all valid, the judge knows, so he allows her to proceed. She chooses a frame moving at half lightspeed, ½ c. Since that frame is moving slower than the speed of light, according to relativity it is a valid reference frame.

In the Earth frame, the two events (fire gun, hit heart) are separated by +10 nanoseconds. The same two events described in a reference frame moving at ½c have a time separation of about –11.5 nanoseconds. The negative sign means that the two events occur in the opposite order. The bullet enters the victim’s heart before Clary fires the gun! Clary has the perfect alibi. Jack was already dead when she pulled the trigger. You can’t murder a dead person.
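
Here is a minimal numerical sketch of where that number comes from, assuming the standard Lorentz transformation of a time interval, Δt′ = γ(Δt − vΔx/c²), and working in feet and nanoseconds so that c = 1 foot per nanosecond. The variable names are only illustrative.

    import math

    # Work in feet and nanoseconds so that c = 1 ft/ns.
    c = 1.0          # speed of light, ft/ns
    dx = 40.0        # spatial separation of the two events, ft
    dt = 10.0        # time separation in the Earth frame, ns (a 4c bullet covers 40 ft)
    v = 0.5 * c      # speed of the chosen reference frame

    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    # Lorentz transformation of the time interval between the two events
    dt_moving = gamma * (dt - v * dx / c**2)

    print(f"gamma = {gamma:.4f}")                          # ~1.1547
    print(f"dt in the moving frame = {dt_moving:.1f} ns")  # ~ -11.5 ns: the hit precedes the shot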

This murder example is based on the same relativity principle that caused confusion in the twin and pole-in-the-barn paradoxes. If two events are sufficiently separated in space, and not too different in time, then there will be frames in which the order of events will reverse. Such distant events are called “space-like.” Two events that occur near each other but separated in time are called “time-like.” The order of space-like events depends on the frame of reference; the order of time-like events does not.
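
A quick way to decide which case applies is the invariant interval (cΔt)² − (Δx)², whose sign is the same in every frame: negative means space-like (ordering can flip), positive means time-like (ordering is absolute). A tiny sketch, in the same foot-and-nanosecond units as above, with the second pair of numbers purely illustrative:

    def interval_squared(dt_ns, dx_ft, c=1.0):
        """Invariant interval (c*dt)^2 - dx^2; its sign is frame-independent."""
        return (c * dt_ns) ** 2 - dx_ft ** 2

    print(interval_squared(10.0, 40.0))   # -1500 < 0 -> space-like: the tachyon-bullet events
    print(interval_squared(10.0, 2.0))    #   +96 > 0 -> time-like: ordering fixed in all frames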

Is the tachyon gun murder scenario possible? How could analysis in the ½ c frame be valid, if it has such an absurd implication? Does this mean that tachyons don’t exist, or does it mean that relativity is nonsense? What if tachyons are really found? …

One possible resolution for the tachyon murder paradox is that, in this world that has tachyon guns, Clary does not have free will. Even though she pulled the trigger after Jack died, she had no choice but to do so, since without free will, choice is illusory. All of her actions arise from influences and forces outside of herself. Jack died because it was inevitable that Clary would pull the trigger; the inevitability of physics created the combined scenario of shooting and death, and the order in which they occurred is irrelevant. There is no paradox if the world is governed by causal physics equations. The scenario presents a problem only if you think people have free will, if you believe that Clary could have decided not to fire the gun. If physics rules, then she does only what the various forces and influences on her cause her to do.

Wave Function And Holism: A Million Dice

  • the origin of new, original concepts from nothing is a property of emergence
  • classical physics has only a few emergent properties, but quantum physics is dominated by emergent scenarios
Does anything really new ever come about in physics? Are properties or characteristics of things ever created or evoked “from nothing”? Emergence is a view that holds the answer to these questions to be, in some very important sense, yes. Emergence is the rise of a system that cannot be predicted or explained from antecedent conditions. George Henry Lewes, the 19th-century English philosopher of science, distinguished between resultants and emergents – phenomena that are predictable from their constituent parts and those that are not (e.g., a physical mixture of sand and talcum powder as contrasted with a chemical compound such as salt, which looks nothing like sodium or chlorine). The evolutionary account of life is a continuous history marked by stages at which fundamentally new forms have appeared: (1) the origin of life; (2) the origin of nucleus-bearing protozoa; (3) the origin of sexually reproducing forms, with an individual destiny lacking in cells that reproduce by fission; (4) the rise of sentient animals, with nervous systems and protobrains; and (5) the appearance of cogitative animals, namely humans. Each of these new modes of life, though grounded in the physicochemical and biochemical conditions of the previous and simpler stage, is intelligible only in terms of its own ordering principle. These are thus cases of emergence.

A property is said to be emergent if it cannot be defined or explained in terms of the properties of its parts, or if it is not reducible to these properties and relations. Classical physics is reductionist. But quantum entities have emergent properties. Things like position and energy simply do not exist until they are measured or observed (i.e. until the wave function collapses). They exist as potentialities, but the potentialities cannot explain the properties of the actualities.

  • the wave function contains a whole that is greater than the sum of the parts
In this sense, quantum entities have a whole that is greater than the sum of its parts. And the reverse is true, that nothing can ever be wholly reduced to the sum of its constituent parts.


  • holism is a philosophy that the whole is primary and often greater than the sum of the parts
  • a holist is concerned with relationships not the pieces
  • quantum physics is difficult to reconcile with reductionism, requires a holistic view of Nature
  • the particle or wave aspect of a quantum entity requires a dialogue with the environment
This is the holistic nature of the quantum world, with the behavior of individual particles being shaped into a pattern by something that cannot be explained in terms of the Newtonian reductionist paradigm. Newtonian physics is reductionist, quantum physics is holistic. Holism as an idea or philosophical concept is diametrically opposed to atomism. Where the atomist believes that any whole can be broken down or analyzed into its separate parts and the relationships between them, the holist maintains that the whole is primary and often greater than the sum of its parts. The atomist divides things up in order to know them better; the holist looks at things or systems in aggregate and argues that we can know more about them viewed as such, and better understand their nature and their purpose.

The early Greek atomism of Leucippus and Democritus (fifth century B.C.) was a forerunner of classical physics. According to their view, everything in the universe consists of indivisible, indestructible atoms of various kinds. Change is a rearrangement of these atoms. This kind of thinking was a reaction to the still earlier holism of Parmenides, who argued that at some primary level the world is a changeless unity. According to him, “All is One. Nor is it divisible, wherefore it is wholly continuous…. It is complete on every side like the mass of a rounded sphere.”

In the seventeenth century, at the same time that classical physics gave renewed emphasis to atomism and reductionism, Spinoza developed a holistic philosophy reminiscent of Parmenides. According to Spinoza, all the differences and apparent divisions we see in the world are really only aspects of an underlying single substance, which he called God or nature. Based on pantheistic religious experience, this emphasis on an underlying unity is reflected in the mystical thinking of most major spiritual traditions. It also reflects developments in modern quantum field theory, which describes all existence as an excitation of the underlying quantum vacuum, as though all existing things were like ripples on a universal pond.

Where atomism was apparently legitimized by the sweeping successes of classical physics, holism found no such foundation in the hard sciences. It remained a change of emphasis rather than a new philosophical position. There were attempts to found it on the idea of organism in biology – the emergence of biological form and the cooperative relation between biological and ecological systems – but these, too, were ultimately reducible to simpler parts, their properties, and the relation between them. Even systems theory, although it emphasizes the complexity of aggregates, does so in terms of causal feedback loops between various constituent parts. It is only with quantum theory and the dependence of the very being or identity of quantum entities upon their contexts and relationships that a genuinely new, “deep” holism emerges.

Every quantum entity has both a wavelike and a particlelike aspect. The wavelike aspect is indeterminate, spread out all over space and time and the realm of possibility. The particlelike aspect is determinate, located at one place in space and time and limited to the domain of actuality. The particlelike aspect is fixed, but the wavelike aspect becomes fixed only in dialogue with its surroundings – in dialogue with an experimental context or in relationship to another entity in measurement or observation. It is the indeterminate, wavelike aspect – the set of potentialities associated with the entity – that unites quantum things or systems in a truly emergent, relational holism that cannot be reduced to any previously existing parts or their properties.

  • numerous experiments have shown that quantum interactions produce results that are not predictable by analysis of components
If two or more quantum entities are “introduced” – that is, issue from the same source – their potentialities are entangled. Their indeterminate wave aspects are literally interwoven, to the extent that a change in potentiality in one brings about a correlated change in the same potentiality of the other. In the nonlocality experiments, measuring the previously indeterminate polarization of a photon on one side of a room effects an instantaneous fixing of the polarization of a paired photon shot off to the other side of the room. The polarizations are said to be correlated; they are always determined simultaneously and always found to be opposite. This paired-though-opposite polarization is described as an emergent property of the photons’ “relational holism” – a property that comes into being only through the entanglement of their potentialities. It is not based on individual polarizations, which are not present until the photons are observed. They literally do not previously exist, although their oppositeness was a fixed characteristic of their combined system when it was formed.

In the coming together or simultaneous measurement of any two entangled quantum entities, their relationship brings about a “further fact.” Quantum relationship evokes a new reality that could not have been predicted by breaking down the two relational entities into their individual properties.

The emergence of a quantum entity’s previously indeterminate properties in the context of a given experimental situation is another example of relational holism. We cannot say that a photon is a wave or a particle until it is measured, and how we measure it determines what we will see. The quantum entity acquires a certain new property – position, momentum, polarization – only in relation to its measuring apparatus. The property did not exist prior to this relationship. It was indeterminate.

Quantum relational holism, resting on the nonlocal entanglement of potentialities, is a kind of holism not previously defined. Because each related entity has some characteristics – mass, charge, spin – before its emergent properties are evoked, each can be reduced to some extent to atomistic parts, as in classical physics. The holism is not the extreme holism of Parmenides or Spinoza, where everything is an aspect of the One. Yet because some of their properties emerge only through relationship, quantum entities are not wholly subject to reduction either. The truth is somewhere between Newton and Spinoza. A quantum system may also vary between being more atomistic at some times and more holistic at others; the degree of entanglement varies. Where a reductionist believes that any whole can be broken down or analyzed into its separate parts and the relationships between them, the holist maintains that the whole is primary and often greater than the sum of its parts. Nothing can be wholly reduced to the sum of its parts.

  • the rules of the quantum world follow logic, but a logic of both/and rather than the logic of either/or of the macroscopic world
The highest development of quantum theory returns to the philosophy of Parmenides by describing all of existence as an excitation of the underlying quantum vacuum, like ripples on a universal pond. The substratum of all is the quantum vacuum, similar to the Buddhist idea of permanent identity. Quantum reality is a bizarre world of both/and, whereas the macroscopic world is ruled by either/or. The most outstanding problem in modern physics is to explain how the both/and is converted to either/or during the act of observation.

Note that since there are most probable positions and energies associated with the wave function, some reductionism is still available to the observer. The truth is somewhere between Newton and Parmenides.

The Cosmological Constant And Einstein’s Real Blunder

If Las Vegas were taking bets on dark energy, the odds would favor a concept known as vacuum energy or the cosmological constant. In essence, it suggests that space itself produces energy, which is “pushing” the universe outward.

Albert Einstein invented the cosmological constant as part of his theory of gravity, known as General Relativity.

Particle Collisions

 1. Empty space. 2. Two particles suddenly appear. 3. Particles ram together and annihilate each other. 4. They leave ripples of energy through space.

Einstein’s equations showed that the gravity of all the matter in the universe would exert a strong pull, pulling all the stars and galaxies toward each other and eventually causing the universe to collapse. At the time, though, astronomers believed that the universe was static – that it was neither expanding nor contracting. To counteract this problem, Einstein added another term to his equations, called the cosmological constant, to balance the inward pull of gravity.

Within about a decade, though, astronomer Edwin Hubble discovered that the universe is expanding. Einstein discarded the cosmological constant, calling it his greatest scientific blunder.

When dark energy was discovered, though, many physicists began to think that Einstein’s only blunder was in removing the constant. This “repulsive” force could begin to explain the acceleration of the universe. In other words, it might be the dark energy.

Today, physicists explain the cosmological constant as the vacuum energy of space.

In essence, this says that pairs of particles are constantly popping into existence throughout the universe. These “virtual pairs” consist of one particle with a negative charge and one with a positive charge. They exist for only a tiny fraction of a second before they collide and annihilate each other in a tiny burst of energy. This energy may be pushing outward on space itself, causing the universe to accelerate faster.

One of the appealing elements of vacuum energy is that it could explain why the acceleration has only started fairly recently on the cosmic timescale.

In the early universe, all the matter was packed much more densely than it is today. In other words, there was less space between galaxies. With everything so close together, gravity was the dominant force, slowing down the expansion of the universe that was imparted by the Big Bang. In addition, since there was less space in the universe, and the vacuum energy comes from space itself, it played a much smaller role in the early universe.

Today – 13.7 billion years after the Big Bang – the universe has grown much larger, so the galaxies are not packed so close together. Their gravitational pull on each other is weakened, allowing the vacuum energy to play a more dominant role.

Vacuum energy has its own set of problems, though. Straightforward calculations of its strength come out far too large to match the gentle acceleration seen in the present-day universe, overshooting the observed value by a factor of at least 10^57 (a one followed by 57 zeroes), and perhaps as much as 10^120 (a one followed by 120 zeroes). Yet it is the most complete scenario to date, so it leads the pack of dark-energy contenders.


Planetary Configurations:

The planets outside of the Earth’s orbit (Mars, Jupiter, Saturn, Uranus, Neptune) are called superior planets

Likewise, the planets inside of the Earth’s orbit (Mercury, Venus) are called inferior planets.

Other configurations are:

  • Object at greatest western elongation = “morning star”
  • Object at greatest eastern elongation = “evening star”
  • only inferior planets have phases
  • transit = passage of an inferior planet across the Sun

Galileo’s laws of Motion:

Aside from his numerous inventions, Galileo also laid down the first accurate laws of motion for masses. Galileo realized that all bodies accelerate at the same rate regardless of their size or mass. Everyday experience tells you differently because a feather falls more slowly than a cannonball. Galileo’s genius lay in spotting that the differences that occur in the everyday world are an incidental complication (in this case, air friction) and are irrelevant to the real underlying properties (that is, gravity). He was able to abstract from the complexity of real-life situations the simplicity of an idealized law of gravity.

Key among his investigations are:

  • developed the concept of motion in terms of velocity (speed and direction) through the use of inclined planes.
  • developed the idea of force, as a cause for motion.
  • determined that the natural state of an object is rest or uniform motion, i.e. objects always have a velocity, sometimes that velocity has a magnitude of zero = rest.
  • objects resist change in motion, which is called inertia.

Galileo also showed that objects fall with the same speed regardless of their mass. The fact that a feather falls more slowly than a steel ball is due to the amount of air resistance that a feather experiences (a lot) versus the steel ball (very little).
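
The sketch below is only a rough illustration of that point, not an aerodynamic model: a crude Euler integration of dv/dt = g − (k/m)v² with made-up drag coefficients, showing that the drop time depends on mass only when air resistance is included.

    def fall_time(mass_kg, drag_coeff, height_m=10.0, g=9.8, dt=1e-4):
        """Crude Euler integration of dv/dt = g - (k/m)*v**2 until the drop reaches height_m."""
        v, y, t = 0.0, 0.0, 0.0
        while y < height_m:
            a = g - (drag_coeff / mass_kg) * v * v
            v += a * dt
            y += v * dt
            t += dt
        return t

    # Illustrative, made-up numbers: a heavy ball and a feather-light object dropped 10 m in air.
    print(f"heavy ball: {fall_time(1.0,   0.0005):.2f} s")   # close to the vacuum value
    print(f"feather:    {fall_time(0.001, 0.0005):.2f} s")   # much longer, because of drag
    print(f"vacuum:     {fall_time(1.0,   0.0):.2f} s")      # sqrt(2h/g) ~ 1.43 s for any mass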

Hammer and Feather on Moon

Kepler’s laws of Planetary Motion:

Kepler developed, using Tycho Brahe’s observations, the first kinematic description of orbits; Newton would later develop a dynamic description that involves the underlying influence (gravity).

  • 1st law (law of elliptic orbits): Each planet moves in an elliptical orbit with the Sun at one focus. Ellipses that are highly flattened have high eccentricity; ellipses that are close to a circle have low eccentricity.
  • 2nd law (law of equal areas): A line connecting the Sun and a planet (called the radius vector) sweeps out equal areas in equal times. Objects travel fastest at the low point of their orbit, and travel slowest at the high point of their orbit.
  • 3rd law (law of harmonics): The square of a planet’s orbital period is proportional to its mean distance from the Sun cubed. The mathematical way to describe Kepler’s 3rd law is:

    P^2 ∝ R^3

    where the ∝ symbol means ‘proportional to’. A proportion implies that there exists some constant, k, that relates the period, P, and the radius, R, such that

    P^2 = k R^3

    We can determine k by expressing the formula in units of the Earth and its orbit around the Sun, such that

    (1 yr)^2 = k (1 A.U.)^3

    so k is equal to one, as long as we use units of years and A.U.’s (the Astronomical Unit, i.e. the distance of the Earth from the Sun). With k = 1, Kepler’s 3rd law becomes

    P^2 = R^3

    The 3rd law is used to develop a “yardstick” for the Solar System, expressing the distance to all the planets relative to Earth’s orbit by just knowing their period (timing how long it takes for them to go around the Sun).
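
As a worked example of that yardstick (taking Mars’s period of about 1.88 years as an assumed input), the short sketch below converts a period in years into a distance in A.U. using P^2 = R^3.

    def distance_from_period(period_years):
        """Kepler's 3rd law with k = 1: P^2 = R^3, so R = P^(2/3) in astronomical units."""
        return period_years ** (2.0 / 3.0)

    def period_from_distance(r_au):
        """The inverse relation: P = R^(3/2) in years."""
        return r_au ** 1.5

    print(f"Mars (P ~ 1.88 yr):  R ~ {distance_from_period(1.88):.2f} A.U.")   # ~1.52 A.U.
    print(f"Earth check (R = 1): P = {period_from_distance(1.0):.1f} yr")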


Many years after Kepler, it was shown that orbits actually come in many flavors: ellipses, circles, parabolas and hyperbolas, a family of curves called conic sections. There are five basic types of simple orbits: radial, ballistic, stable, polar and geosynchronous.

For an escape orbit, the velocity is sufficient to escape the gravitational pull of the planet, i.e. the major axis is infinite, as with the Voyager spacecraft.

The direction a body travels in orbit can be direct, or prograde, in which the spacecraft moves in the same direction as the planet rotates, or retrograde, going in a direction opposite the planet’s rotation.

The semi-major axis of an orbit is determined by the kinetic energy acquired by the rocket at burnout. This is equivalent to the burnout velocity. For low burnout velocities (below 25,000 ft/sec) the orbit is ballistic, meaning the payload falls back to the surface of the Earth. Burnout velocities above 25,000 ft/sec achieve stable orbit. At 35,000 ft/sec, the orbit reaches the distance of the Moon.

The amount of burnout velocity also determines the orbit type, an ellipse, a parabola or a hyperbolic path.

Satellites use a wide variety of orbits to fulfill their missions. The orbit chosen for a satellite is a compromise between the mission requirements, the capabilities of the rocket used to launch the satellite and orbital mechanics.

  • The orbital period. This increases with the mean altitude of the orbit, so a satellite in a low earth orbit moves faster than a satellite in a geostationary orbit. Also the velocity of a satellite in an eccentric orbit varies along the orbit, being fastest at perigee & slowest at apogee (Kepler’s second law of equal areas).
  • Inclination. The angle between the plane of the satellite orbit and the equator.
  • Eccentricity: A perfectly circular orbit has an eccentricity of zero, an elliptical orbit an eccentricity between 0 and 1, a parabolic orbit an eccentricity of exactly 1, and a hyperbolic orbit an eccentricity greater than 1. The low point of an orbit is known as perigee, whilst the high point is apogee. The major axis is the vector connecting the perigee to the apogee (see the short sketch after this list).
  • The ascending node is where the orbit crosses the equator in a northbound direction (i.e. the direction of the satellite motion). Likewise, the descending node is where the orbit crosses the equator in a southbound direction.
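
For the eccentricity bullet above, a minimal sketch of how perigee and apogee follow from the semi-major axis a and eccentricity e (r_perigee = a(1 − e), r_apogee = a(1 + e)); the Molniya-like numbers are purely illustrative assumptions.

    def perigee_apogee(semi_major_axis_km, eccentricity):
        """Closest and farthest points of an elliptical orbit, measured from the Earth's center."""
        r_perigee = semi_major_axis_km * (1.0 - eccentricity)
        r_apogee = semi_major_axis_km * (1.0 + eccentricity)
        return r_perigee, r_apogee

    # Illustrative values roughly like a Molniya-type orbit: a ~ 26,560 km, e ~ 0.74.
    rp, ra = perigee_apogee(26560.0, 0.74)
    print(f"perigee radius ~ {rp:.0f} km, apogee radius ~ {ra:.0f} km")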

Low Earth Orbit:

Weather and spy satellites use over-the-pole orbits so that the Earth turns under them once per day, i.e. they get total coverage of the Earth’s surface.

Landsat 7 is an earth resources spacecraft which images the earth’s surface in visible and infrared light. Therefore this satellite orbit is optimised for earth observation. For this reason a near polar orbit of 700 km altitude, 98.8° inclination and 98-minute period is used, which ensures that the satellite can (at least in theory) observe the entire globe. Several other features of this orbit make it especially useful for remote sensing satellites.

  • Circle of visibility = yellow circle around satellite indicating the region of the earth visible from the satellite.
  • Part of orbit in sunlight = yellow.
  • Part of orbit in shadow = red.
  • Dayside of earth = light blue.
  • Nightside of earth = dark blue, after the terminator three lines indicate the boundaries of civil, nautical & astronomical twilight.

General view of Landsat 7 orbit.

Left: View perpendicular to plane of orbit
Right: View of orbit from ascending node

In theory an orbit should remain fixed in space whilst the earth rotates beneath the satellite. In reality the earth is slightly bulged and the effect of this bulge is to shift the point of perigee and the ascending node for any orbit which has an inclination other than 90°. This effect is known as nodal regression, the result of which is that the plane of the orbit rotates or precesses.

Ground tracks. Red dots along the ground track show the position of the satellite at regular intervals. Closely spaced dots indicate slow speed, widely spaced dots high speed.

However, this effect is used to advantage here to shift the orbit at exactly the same rate as the daily change in position of the sun over any point of the earth. So the satellite always passes over the earth on the sunlit part of its orbit at the same local time of day (for example at 9 am local time). This ensures that lighting conditions are similar (ignoring seasonal differences) for images taken of the same spot on the earth at different times. Additionally the orbit is resonant with the rotation period of the earth, meaning that the satellite passes over the same point on the earth at the same time of day at regular intervals (which may be daily or every 2 or more days depending on the resonance). In the case of Landsat there are 14.5 orbits per day or 29 orbits every 2 days.
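
A rough sketch of that balancing act, using the standard first-order J2 nodal-regression formula with assumed Landsat-like numbers (a circular orbit near 700 km altitude and about 98.2° inclination); a sun-synchronous orbit needs its node to precess by roughly +0.986° per day to keep pace with the Sun.

    import math

    MU = 398600.4418    # Earth's gravitational parameter, km^3/s^2
    R_E = 6378.137      # Earth's equatorial radius, km
    J2 = 1.08263e-3     # Earth's oblateness coefficient

    def nodal_regression_deg_per_day(altitude_km, inclination_deg, e=0.0):
        """First-order J2 drift rate of the ascending node, in degrees per day."""
        a = R_E + altitude_km                    # semi-major axis (circular orbit assumed)
        n = math.sqrt(MU / a**3)                 # mean motion, rad/s
        p = a * (1.0 - e**2)                     # semi-latus rectum
        i = math.radians(inclination_deg)
        node_rate = -1.5 * n * J2 * (R_E / p)**2 * math.cos(i)   # rad/s
        return math.degrees(node_rate) * 86400.0

    # A Landsat-like sun-synchronous orbit: ~700 km altitude, ~98.2 deg inclination (assumed values).
    print(f"{nodal_regression_deg_per_day(700.0, 98.2):.3f} deg/day")   # ~ +0.99, close to the needed rate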

Geosynchronous Orbits (GEO):

Communication satellites use geosynchronous orbits for continuous coverage of one region of the globe, i.e. the orbital period is exactly one day. This turns out to be an altitude of roughly 22,000 miles (about 36,000 km).

A geosynchronous orbit is an orbit which has an orbital period close to that of the earth’s rotation. A geostationary orbit is a special case of the geosynchronous orbit where the inclination is zero and the period is equal to the rotation period of the earth (approx 1436 minutes), corresponding to a circular orbit of approx. 35,700 km altitude. A satellite in this orbit appears essentially stationary in the sky, which is why this orbit is used extensively for telecommunications & weather satellites. In reality lunar & solar gravitational influences perturb the satellite’s orbit, so that through the day the satellite’s position shifts slightly.
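
Those round numbers can be checked with Kepler’s third law in its Newtonian form, T² = 4π²a³/μ; the sketch below solves it for the orbital radius, assuming one sidereal day and a standard value for Earth’s gravitational parameter.

    import math

    MU = 398600.4418     # km^3/s^2, Earth's gravitational parameter
    R_E = 6378.137       # km, Earth's equatorial radius
    T = 86164.1          # s, one sidereal day (~1436 minutes)

    # Kepler's third law, Newtonian form: T^2 = 4*pi^2 * a^3 / MU  ->  a = (MU * T^2 / (4*pi^2))^(1/3)
    a = (MU * T**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

    print(f"orbital radius ~ {a:.0f} km")         # ~42,164 km from the Earth's center
    print(f"altitude       ~ {a - R_E:.0f} km")   # ~35,786 km above the surface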

Below is shown the orbit of the TDRS-7 satellite, one of a series of NASA satellites which used to provide a near-continuous communications link with the Space Shuttle, International Space Station & other spacecraft such as the Hubble Space Telescope.

General view of TDRS-7 orbit

View of orbit from ascending node

Compared with the LEO orbit of Mir, a much larger portion of the earth’s surface is visible from the TDRS-7 spacecraft. The zone of visibility of the spacecraft has been highlighted by a cone. Approximately 40% of the earth’s surface can be viewed at any one time from geostationary altitude. Additionally, the spacecraft orbit is in sunlight apart from a small zone which passes into the earth’s shadow. Actually, geostationary satellites only experience eclipses at two periods of the year – for a few weeks at a time at the spring and autumn equinoxes. The reason for this is simple. The earth’s rotation axis is inclined with respect to the ecliptic, hence the earth’s shadow cone misses the plane of a zero inclination geostationary orbit apart from the times when the sun’s declination is close to zero. This occurs twice a year, once at the spring equinox and once at the autumn equinox.

Ground tracks. Red dots along the ground track show the position of the satellite at regular intervals. Closely spaced dots indicate slow speed, widely spaced dots high speed.

As can be seen from this graphic a perfectly geostationary satellite stays over the same spot on the equator all day. However, if we were to look closely we would see that the satellite does appear to change position, generally describing a small figure of 8 or an arc due to the effect of lunar / solar perturbations dragging the satellite into a slightly elliptical, slightly inclined orbit. There are many non-operational satellites in “graveyard” orbits slightly above or below a true geostationary orbit. Since their orbital period is slightly more or less than the earth’s rotation period these satellites appear to drift slowly around the earth.

Anti-de Sitter/Conformal Field Theory

The AdS/CFT correspondence is one of the largest areas of research in string theory. AdS/CFT stands for Anti-de Sitter/Conformal Field Theory, an expression that’s not particularly elucidating.

AdS/CFT is a particular, and deeply surprising, example of a duality. It relates two very different theories and at first sight seems obviously wrong. It states that there is a duality between theories of gravity in five dimensions and quantum field theories (QFTs) in four dimensions. This correspondence was first formulated by Juan Maldacena in 1997, and is generally thought to be the single most important result in string theory in the last twenty years.

The original example of AdS/CFT linked two very special theories. The gravitational side involved a particular extension of gravity (type IIB supergravity) on a particular geometry (5-dimensional Anti-de-Sitter space). The QFT was the unique theory with the largest possible amount of supersymmetry. There’s a specific dictionary that translates between the theories.

This relationship has no formal mathematical proof. However a very large number of checks have been performed. These checks involve two calculations, using different techniques and methods, of quantities related by the dictionary. Continual agreement of these calculations constitutes strong evidence for the correspondence.

The first example has by now been extended to many other cases, and AdS/CFT is more generally referred to as the gauge-gravity correspondence. Formally this is the statement that gravitational theories in (N+1) dimensions can be entirely and completely equivalent to non-gravitational quantum field theories in N dimensions.

The AdS/CFT correspondence has a very useful property. When the gravitational theory is hard to solve, the QFT is easy to solve, and vice-versa! This opens the door to previously intractable problems in QFT through simple calculations in gravity theories.

Moreover AdS/CFT allows a conceptual reworking of the classic problems of general relativity. Indeed if general relativity can be equivalent to a QFT, then neither one is deeper than the other. Finally physicists can use it to develop new intuitions for both QFT and general relativity.

Nuclear Fission/Fusion And Anti-Matter


  • since quantum events do not have a “cause”, this also means that all possible quantum events must and will happen
  • without cause and effect, conservation laws can be violated, although only on very short timescales (things have to add up in the end)
  • violation of mass/energy allowed for the understanding of the source of nuclear power in the Universe, fission and fusion
One of the surprising results of quantum physics is that if a physical event is not specifically forbidden by a quantum rule, then it can and will happen. While this may seem strange, it is a direct result of the uncertainty principle. Things that are strict laws in the macroscopic world, such as the conservation of mass and energy, can be broken in the quantum world with the caveat that they can only be broken for very small intervals of time (less than a Planck time). The violation of conservation laws led to one of the greatest breakthroughs of the early 20th century, the understanding of radioactive decay (fission) and the source of the power in stars (fusion).

Nuclear fission is the breakdown of large atomic nuclei into smaller elements. This can happen spontaneously (radioactive decay) or be induced by the collision with a free neutron. Spontaneous fission is due to the fact that the wave function of a large nucleus is ‘fuzzier’ than the wave function of a small particle like the alpha particle. The uncertainty principle states that, sometimes, an alpha particle (2 protons and 2 neutrons) can tunnel outside the nucleus and escape.

  • fission is the splitting of atomic nuclei, either spontaneously or by collision (induced)
  • fusion is the merger of atomic particles to form new particles
Induced fission occurs when a free neutron strikes a nucleus and deforms it. Under classical physics, the nucleus would just reform. However, under quantum physics there is a finite probability that the deformed nucleus will tunnel into two new nuclei and release some neutrons in the process, to produce a chain reaction.

Fusion is the production of heavier elements by the fusing of lighter elements. The process requires high temperatures in order to produce sufficiently high velocities for the two light elements to overcome each other’s electrostatic barriers.

  • quantum tunneling and uncertainty are required for these processes
  • and quantum physics, even though centered on probabilities, is our most accurate science in its predictions
Even for the high temperatures in the center of a star, fusion requires the quantum tunneling of a neutron or proton to overcome the repulsive electrostatic forces of the atomic nucleus. Notice that both fission and fusion release energy by converting some of the nuclear mass into gamma-rays; this is the famous formulation by Einstein that E = mc^2.

Although it deals with probabilities and uncertainties, quantum mechanics has been spectacularly successful in explaining otherwise inaccessible atomic phenomena and in meeting every experimental test. Its predictions are the most precise and the best checked of any in physics; some of them have been tested and found accurate to better than one part per billion.
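
As a worked example of the mass-to-energy conversion mentioned above (with the deuterium-tritium reaction chosen only as a familiar illustration, and standard atomic masses assumed), the sketch below computes the energy released per fusion from the mass defect.

    # Energy released in D + T -> He-4 + n, from the mass defect and E = m c^2.
    # The masses below are standard atomic mass values in unified atomic mass units (u).
    m_D, m_T, m_He4, m_n = 2.014102, 3.016049, 4.002602, 1.008665   # u
    U_TO_MEV = 931.494          # energy equivalent of 1 u of mass, in MeV

    mass_defect = (m_D + m_T) - (m_He4 + m_n)    # ~0.0189 u of mass "disappears"
    energy_MeV = mass_defect * U_TO_MEV

    print(f"mass defect     ~ {mass_defect:.5f} u")
    print(f"energy released ~ {energy_MeV:.1f} MeV per fusion")   # ~17.6 MeV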


  • symmetry in quantum physics led to the prediction of opposite matter, or antimatter
  • matter and antimatter can combine to form pure energy, and the opposite is true, energy can combine to form matter/antimatter pairs
A combination of quantum mechanics and relativity allows us to examine subatomic processes in a new light. Symmetry is very important to physical theories. Thus, the existence of a type of ‘opposite’ matter was hypothesized soon after the development of quantum physics. ‘Opposite’ matter is called antimatter. Particles of antimatter have the same mass and characteristics as regular matter, but are opposite in charge. When matter and antimatter come in contact they are both instantaneously converted into pure energy, in the form of photons. Antimatter is produced all the time by the collision of high energy photons, a process called pair production, where an electron and its antimatter twin (the positron) are created from energy (E = mc^2). A typical spacetime diagram of pair production looks like the following:

  • spacetime diagrams provide a backwards time interpretation for antimatter, symmetry in space and time
Positrons only survive for a short time since they are attracted to other electrons and annihilate. Since quantum mechanics allows the usual rules of energy, time and space to be violated, another way of looking at pair production is to say that the positron does not exist, but rather is an electron traveling backwards in time. Since it is going backwards in time, its charge would be reversed and its spacetime diagram would look like the following:

  • the quantum world leads to new ways of looking at existence and reality
In this interpretation, the collision of an electron and two photons causes the electron to go backward in time till it meets another pair of photons, then reverses itself again. The world of quantum physics allows for many such strange views of subatomic interactions.
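
Returning to the annihilation energetics above, a small sketch of the arithmetic: each electron or positron carries a rest-mass energy of m_e c^2, so annihilation yields two photons of about 511 keV each, and pair production from photons needs at least twice that. The constants are standard values.

    # Electron-positron annihilation: rest mass becomes photon energy via E = m c^2.
    m_e = 9.109e-31    # kg, electron (and positron) rest mass
    c = 2.998e8        # m/s, speed of light
    eV = 1.602e-19     # J per electron-volt

    E_rest = m_e * c**2
    print(f"each annihilation photon carries ~ {E_rest / eV / 1e3:.0f} keV")      # ~511 keV
    print(f"pair production needs at least   ~ {2 * E_rest / eV / 1e6:.2f} MeV")  # ~1.02 MeV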

Superposition and Schrodinger’s Equation + Cat

Quantum Mechanics:

  • quantum mechanics is to the microscopic world what classical mechanics and calculus are to the macroscopic world
  • it is the operational process of calculating quantum physics phenomena
  • its primary task is to bring order and prediction to the uncertainty of the quantum world, its main tool is Schrodinger’s equation
The field of quantum mechanics concerns the description of phenomena on small scales where classical physics breaks down. The biggest difference between the classical and microscopic realm is that the quantum world cannot be perceived directly, but rather through the use of instruments. And a key assumption of quantum physics is that quantum mechanical principles must reduce to Newtonian principles at the macroscopic level (there is a continuity between quantum and Newtonian mechanics).

Quantum mechanics was capable of bringing order to the uncertainty of the microscopic world by treatment of the wave function with new mathematics. Key to this idea was the fact that relative probabilities of different possible states are still determined by laws. Thus, there is a difference between the role of chance in quantum mechanics and the unrestricted chaos of a lawless Universe.

Every quantum particle is characterized by a wave function. In 1925 Erwin Schrodinger developed the differential equation which describes the evolution of those wave functions. By using the Schrodinger equation, scientists can find the wave function which solves a particular problem in quantum mechanics. Unfortunately, it is usually impossible to find an exact solution to the equation, so certain assumptions are used in order to obtain an approximate answer for the particular problem.
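
As a concrete illustration of what solving the Schrodinger equation buys you, the sketch below uses one of the few exactly solvable cases, a particle confined to a one-dimensional box, whose energy levels are E_n = n^2 h^2 / (8 m L^2); the 1-nanometer box width is an assumed, illustrative value.

    # Exact energy levels of a particle in a 1-D box of width L: E_n = n^2 h^2 / (8 m L^2).
    h = 6.626e-34      # J s, Planck's constant
    m_e = 9.109e-31    # kg, electron mass
    eV = 1.602e-19     # J per electron-volt
    L = 1e-9           # m, an assumed 1-nanometer box

    for n in (1, 2, 3):
        E_n = n**2 * h**2 / (8.0 * m_e * L**2)
        print(f"n = {n}: E = {E_n / eV:.2f} eV")   # roughly 0.38, 1.5 and 3.4 eV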

  • the key difference between quantum and classical mechanics is the role of probability and chance
  • quantum objects are described by probability fields; however, this does not mean they are indeterminate, only uncertain
The difference between quantum mechanics and Newtonian mechanics is the role of probability and statistics. While the uncertainty principle means that quantum objects have to be described by probability fields, this doesn’t mean that the microscopic world fails to conform to deterministic laws. In fact it does. And measurement is an act by which the measurer and the measured interact to produce a result, although this is not simply the determination of a preexisting property.

The quantum description of reality is objective (weak form) in the sense that everyone armed with a quantum physics education can do the same experiments and come to the same conclusions. Strong objectivity, as in classical physics, requires that the picture of the world yielded by the sum total of all experimental results be not just a picture or model, but identical with the objective world, something that exists outside of us and prior to any measurement we might have of it. Quantum physics does not have this characteristic due to its built-in indeterminacy.

For centuries, scientists have gotten used to the idea that something like strong objectivity is the foundation of knowledge. So much so that we have come to believe that it is an essential part of the scientific method and that without this most solid kind of objectivity science would be pointless and arbitrary. However, the Copenhagen interpretation of quantum physics (see below) denies that there is any such thing as a true and unambiguous reality at the bottom of everything. Reality is what you measure it to be, and no more. No matter how uncomfortable science is with this viewpoint, quantum physics is extremely accurate and is the foundation of modern physics (perhaps then an objective view of reality is not essential to the conduct of physics). And concepts, such as cause and effect, survive only as a consequence of the collective behavior of large quantum systems.

Schrodinger’s Cat and Quantum Reality:

  • an example of the weirdness of the quantum world is given by the famous Schrodinger cat paradox
In 1935 Schrodinger, who was responsible for formulating much of the wave mechanics in quantum physics, published an essay describing the conceptual problems in quantum mechanics. A brief paragraph in this essay described the now-famous cat paradox.
  • the paradox is phrased such that a quantum event determines if a cat is killed or not
  • from a quantum perspective, the whole system state is tied to the wave function of the quantum event, i.e. the cat is both dead and alive at the same time
One can even set up quite ridiculous cases where quantum physics rebels against common sense. For example, consider a cat penned up in a steel chamber, along with the following diabolical device (which must be secured against direct interference by the cat). In the device is a Geiger counter with a tiny bit of radioactive substance, so small that perhaps in the course of one hour only one of the atoms decays, but also, with equal probability, perhaps none. If the decay happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The first atomic decay would have poisoned it. The wave function for the entire system would express this by having in it the living and the dead cat mixed or smeared out in equal parts.

  • the paradox in some sense is not a paradox, but instead points out the tension between the microscopic and macroscopic worlds and the importance of the observer in a quantum scenario
  • quantum objects exist in superposition, many states, as shown by interference
  • the observer collapses the wave function
It is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation. That prevents us from so naively accepting as valid a “blurred model” for representing reality. In itself it would not embody anything unclear or contradictory. There is a difference between a shaky or out-of-focus photograph and a snapshot of clouds and fog banks. We know that superpositions of possible outcomes must exist simultaneously at a microscopic level because we can observe interference effects from them. We know (at least most of us know) that the cat in the box is dead, alive or dying, and not in a smeared out state between the alternatives. When and how does the model of many microscopic possibilities resolve itself into a particular macroscopic state? When and how does the fog bank of microscopic possibilities transform itself into the picture we have of a definite macroscopic state? That is the collapse of the wave function problem, and Schrodinger’s cat is a simple and elegant explanation of that problem.

Macroscopic/Microscopic World Interface:

  • events in the microscopic world can happen *without* cause = indeterminacy
  • phenomenon such as tunneling shows that quantum physics leaks into the macroscopic world
The macroscopic world is Newtonian and deterministic for local events (note however that even the macroscopic world suffers from chaos). On the other hand, in the microscopic quantum world radical indeterminacy limits any certainty surrounding the unfolding of physical events. Many things in the Newtonian world are unpredictable since we can never obtain all the factors affecting a physical system. But quantum theory is much more unsettling in that events often happen without cause (e.g. radioactive decay).

Note that the indeterminacy of the microscopic world has little effect on macroscopic objects. This is due to the fact that the wave function for large objects is extremely small compared to the size of the macroscopic world. Your personal wave function is much smaller than any currently measurable sizes. And the indeterminacy of the quantum world is not complete, because it is possible to assign probabilities to the wave function.
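
One way to make that point concrete is the de Broglie wavelength, λ = h/(mv), which sets the scale over which an object's wave nature matters; the sketch below compares an everyday object with an electron, using illustrative masses and speeds.

    # de Broglie wavelength lambda = h / (m v): tiny for everyday objects, atom-sized for electrons.
    h = 6.626e-34    # J s, Planck's constant

    def de_broglie_m(mass_kg, speed_m_s):
        return h / (mass_kg * speed_m_s)

    print(f"walking person (70 kg at 1 m/s): {de_broglie_m(70.0, 1.0):.1e} m")       # ~9e-36 m
    print(f"electron (at 1e6 m/s):           {de_broglie_m(9.109e-31, 1e6):.1e} m")  # ~7e-10 m, atomic scale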

But, as the Schrodinger’s Cat paradox shows us, the probability rules of the microscopic world can leak into the macroscopic world. The paradox of Schrodinger’s cat has provoked a great deal of debate among theoretical physicists and philosophers. Although some thinkers have argued that the cat actually does exist in two superposed states, most contend that superposition only occurs when a quantum system is isolated from the rest of its environment. Various explanations have been advanced to account for this paradox – including the idea that the cat, or simply the animal’s physical environment (such as the photons in the box), can act as an observer.

The question is, at what point, or scale, do the probabilistic rules of the quantum realm give way to the deterministic laws that govern the macroscopic world? This question has been brought into vivid relief by the recent work where an NIST group confined a charged beryllium atom in a tiny electromagnetic cage and then cooled it with a laser to its lowest energy state. In this state the position of the atom and its “spin” (a quantum property that is only metaphorically analogous to spin in the ordinary sense) could be ascertained to within a very high degree of accuracy, limited by Heisenberg’s uncertainty principle.

  • decoherence prevents a macroscopic Schrodinger cat paradox
  • new technology allows the manipulation of objects at the quantum level
  • future research will investigate areas such as quantum teleportation and quantum computing
The workers then stimulated the atom with a laser just enough to change its wave function; according to the new wave function of the atom, it now had a 50 percent probability of being in a “spin-up” state in its initial position and an equal probability of being in a “spin-down” state in a position as much as 80 nanometers away, a vast distance indeed for the atomic realm. In effect, the atom was in two different places, as well as two different spin states, at the same time – an atomic analog of a cat both living and dead.

The clinching evidence that the NIST researchers had achieved their goal came from their observation of an interference pattern; that phenomenon is a telltale sign that a single beryllium atom produced two distinct wave functions that interfered with each other.

The modern view of quantum mechanics states that Schrodinger’s cat, or any macroscopic object, does not exist as a superposition of states, due to decoherence. A pristine wave function is coherent, i.e. undisturbed by observation. But Schrodinger’s cat is not a pristine wave function; it is constantly interacting with other objects, such as air molecules in the box, or the box itself. Thus a macroscopic object becomes decoherent through many atomic interactions with its surrounding environment.

Decoherence explains why we do not routinely see quantum superpositions in the world around us. It is not because quantum mechanics intrinsically stops working for objects larger than some magic size. Instead, macroscopic objects such as cats and cards are almost impossible to keep isolated to the extent needed to prevent decoherence. Microscopic objects, in contrast, are more easily isolated from their surroundings so that they retain their quantum secrets and quantum behavior.


Uncertainty Principle

  • the uncertainty principle states that the position and velocity cannot both be measured exactly at the same time (the same holds for other pairs, such as energy and time)
  • uncertainty principle derives from the measurement problem, the intimate connection between the wave and particle nature of quantum objects
  • the change in a velocity of a particle becomes more ill defined as the wave function is confined to a smaller region
Classical physics was on loose footing with problems of wave/particle duality, but was caught completely off-guard with the discovery of the uncertainty principle. The uncertainty principle, also called the Heisenberg Uncertainty Principle or Indeterminacy Principle, articulated in 1927 by the German physicist Werner Heisenberg, states that the position and the velocity of an object cannot both be measured exactly, at the same time, even in theory. The very concepts of exact position and exact velocity together, in fact, have no meaning in nature.

Ordinary experience provides no clue of this principle. It is easy to measure both the position and the velocity of, say, an automobile, because the uncertainties implied by this principle for ordinary objects are too small to be observed. The complete rule stipulates that the product of the uncertainties in position and velocity is equal to or greater than a tiny physical quantity, or constant: about 10^-34 joule-second, the value of Planck’s constant h. Only for the exceedingly small masses of atoms and subatomic particles does the product of the uncertainties become significant.

Any attempt to measure precisely the velocity of a subatomic particle, such as an electron, will knock it about in an unpredictable way, so that a simultaneous measurement of its position has no validity. This result has nothing to do with inadequacies in the measuring instruments, the technique, or the observer; it arises out of the intimate connection in nature between particles and waves in the realm of subatomic dimensions.

Every particle has a wave associated with it; each particle actually exhibits wavelike behavior. The particle is most likely to be found in those places where the undulations of the wave are greatest, or most intense. The more intense the undulations of the associated wave become, however, the more ill defined becomes the wavelength, which in turn determines the momentum of the particle. So a strictly localized wave has an indeterminate wavelength; its associated particle, while having a definite position, has no certain velocity. A particle wave having a well-defined wavelength, on the other hand, is spread out; the associated particle, while having a rather precise velocity, may be almost anywhere. A quite accurate measurement of one observable involves a relatively large uncertainty in the measurement of the other.

The uncertainty principle is alternatively expressed in terms of a particle’s momentum and position. The momentum of a particle is equal to the product of its mass times its velocity. Thus, the product of the uncertainties in the momentum and the position of a particle equals h/(2π) or more. The principle applies to other related (conjugate) pairs of observables, such as energy and time: the product of the uncertainty in an energy measurement and the uncertainty in the time interval during which the measurement is made also equals h/(2π) or more. The same relation holds, for an unstable atom or nucleus, between the uncertainty in the quantity of energy radiated and the uncertainty in the lifetime of the unstable system as it makes a transition to a more stable state.

  • the wave nature to particles means a particle is a wave packet, the composite of many waves
  • many waves = many momentums, observation makes one momentum out of many
  • exact knowledge of complementarity pairs (position, energy, time) is impossible
The uncertainty principle, developed by W. Heisenberg, is a statement of the effects of wave-particle duality on the properties of subatomic objects. Consider the concept of momentum in the wave-like microscopic world. The momentum of a wave is given by its wavelength. A wave packet like a photon or electron is a composite of many waves. Therefore, it must be made of many momentums. But how can an object have many momentums?

Of course, once a measurement of the particle is made, a single momentum is observed. But, like fuzzy position, momentum before the observation is intrinsically uncertain. This is what is known as the uncertainty principle: certain quantities, such as position, energy and time, are unknown, except by probabilities. In its purest form, the uncertainty principle states that accurate knowledge of complementarity pairs is impossible. For example, you can measure the location of an electron, but not its momentum (energy) at the same time.

  • complementarity also means that different experiments yield different results (e.g. the two slit experiment)
  • therefore, a single reality can not be applied at the quantum level
A characteristic feature of quantum physics is the principle of complementarity, which “implies the impossibility of any sharp separation between the behavior of atomic objects and the interaction with the measuring instruments which serve to define the conditions under which the phenomena appear.” As a result, “evidence obtained under different experimental conditions cannot be comprehended within a single picture, but must be regarded as complementary in the sense that only the totality of the phenomena exhausts the possible information about the objects.” This interpretation of the meaning of quantum physics, which implied an altered view of the meaning of physical explanation, gradually came to be accepted by the majority of physicists during the 1930’s.

Mathematically we describe the uncertainty principle as the following, where `x’ is position and `p’ is momentum:

    Δx · Δp ≳ h

  • the mathematical form of the uncertainty principle relates complementary pairs to Planck’s constant
  • knowledge is not unlimited, built-in indeterminacy exists, but only in the microscopic world, all collapses to determinism in the macroscopic world
This is perhaps the most famous equation next to E = mc^2 in physics. It basically says that the combination of the error in position times the error in momentum must always be greater than Planck’s constant. So, you can measure the position of an electron to some accuracy, but then its momentum will be inside a very large range of values. Likewise, you can measure the momentum precisely, but then its position is unknown.

Notice that this is not the measurement problem in another form; the position, energy (momentum) and time of a quantum particle are actually undefined until a measurement is made (then the wave function collapses).

Also notice that the uncertainty principle is unimportant to macroscopic objects since Planck’s constant, h, is so small (10^-34). For example, the uncertainty in position of a thrown baseball is 10^-30 millimeters.
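
A rough sketch of the same comparison, using the order-of-magnitude form Δp ≈ h/Δx rather than the sharper textbook bound: confining an electron to an atom-sized region forces a huge velocity spread, while localizing a baseball to a micron forces an immeasurably small one. The masses and sizes are assumed, illustrative values.

    # Order-of-magnitude uncertainty estimates using dp ~ h / dx.
    h = 6.626e-34    # J s, Planck's constant

    def velocity_uncertainty(mass_kg, dx_m):
        """Rough velocity spread forced by confining an object of the given mass to a region dx."""
        return h / (mass_kg * dx_m)    # m/s

    print(f"electron in an atom (dx ~ 1e-10 m): dv ~ {velocity_uncertainty(9.109e-31, 1e-10):.1e} m/s")  # ~7e6 m/s
    print(f"baseball to a micron (dx ~ 1e-6 m): dv ~ {velocity_uncertainty(0.145, 1e-6):.1e} m/s")       # ~5e-27 m/s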

The depth of the uncertainty principle is realized when we ask the question; is our knowledge of reality unlimited? The answer is no, because the uncertainty principle states that there is a built-in uncertainty, indeterminacy, unpredictability to Nature.

   It is often stated that of all the theories proposed in this 
   century, the silliest is quantum theory.  Some say the only 
   thing that quantum theory has going for it, in fact, is that it 
   is unquestionably correct. 

  - R. Feynman


Atom and Wave Particle Duality

Bohr Atom:

  • classical physics fails to describe the properties of atoms, Planck’s constant served to bridge the gap between the classical world and the new physics
  • Bohr proposed a quantized shell model for the atom using the same basic structure as Rutherford, but restricting the behavior of electrons to quantized orbits
Perhaps the foremost scientist of the 20th century was Niels Bohr, the first to apply Planck’s quantum idea to problems in atomic physics. In the early 1900’s, Bohr proposed a quantum mechanical description of the atom to replace the early model of Rutherford.

In 1913 Bohr proposed his quantized shell model of the atom to explain how electrons can have stable orbits around the nucleus. The motion of the electrons in the Rutherford model was unstable because, according to classical mechanics and electromagnetic theory, any charged particle moving on a curved path emits electromagnetic radiation; thus, the electrons would lose energy and spiral into the nucleus. To remedy the stability problem, Bohr modified the Rutherford model by requiring that the electrons move in orbits of fixed size and energy. The energy of an electron depends on the size of the orbit and is lower for smaller orbits. Radiation can occur only when the electron jumps from one orbit to another. The atom will be completely stable in the state with the smallest orbit, since there is no orbit of lower energy into which the electron can jump.

Bohr’s starting point was to realize that classical mechanics by itself could never explain the atom’s stability. A stable atom has a certain size so that any equation describing it must contain some fundamental constant or combination of constants with a dimension of length. The classical fundamental constants–namely, the charges and the masses of the electron and the nucleus–cannot be combined to make a length. Bohr noticed, however, that the quantum constant formulated by the German physicist Max Planck has dimensions which, when combined with the mass and charge of the electron, produce a measure of length. Numerically, the measure is close to the known size of atoms. This encouraged Bohr to use Planck’s constant in searching for a theory of the atom.
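
Bohr’s dimensional argument is easy to check numerically: combining Planck’s constant with the electron’s mass and charge (plus the Coulomb constant) yields a length of atomic size, the Bohr radius. A minimal sketch, assuming standard SI values for the constants:

    # Combine h, the electron mass and the electron charge into a length: the Bohr radius
    import math

    h   = 6.626e-34     # Planck's constant (J s)
    m_e = 9.109e-31     # electron mass (kg)
    e   = 1.602e-19     # electron charge (C)
    k   = 8.988e9       # Coulomb constant (N m^2 / C^2)

    hbar = h / (2 * math.pi)
    a0 = hbar**2 / (m_e * k * e**2)    # the only length these constants can make

    print(f"a0 = {a0:.2e} m")          # about 5.3e-11 m, the size of a hydrogen atom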

  • Bohr’s calculation produces an accurate map of the hydrogen atom’s energy levels
  • changes in electron orbits require the release or gain of energy in the form of photons
  • Bohr’s atom perfectly explains the spectra of stars as gaps due to the absorption of photons of particular wavelengths that match the electron orbits of the various elements
  • larger formulations explain all the properties outlined by Kirchhoff’s laws
Planck had introduced his constant in 1900 in a formula explaining the light radiation emitted from heated bodies. According to classical theory, comparable amounts of light energy should be produced at all frequencies. This is not only contrary to observation but also implies the absurd result that the total energy radiated by a heated body should be infinite. Planck postulated that energy can only be emitted or absorbed in discrete amounts, which he called quanta (the Latin word for “how much”). The energy quantum is related to the frequency of the light by a new fundamental constant, h. When a body is heated, its radiant energy in a particular frequency range is, according to classical theory, proportional to the temperature of the body. With Planck’s hypothesis, however, the radiation can occur only in quantum amounts of energy. If the radiant energy is less than the quantum of energy, the amount of light in that frequency range will be reduced. Planck’s formula correctly describes radiation from heated bodies. Planck’s constant has the dimensions of action, which may be expressed as units of energy multiplied by time, units of momentum multiplied by length, or units of angular momentum. For example, Planck’s constant can be written as h = 6.6×10⁻³⁴ joule seconds.

Using Planck’s constant, Bohr obtained an accurate formula for the energy levels of the hydrogen atom. He postulated that the angular momentum of the electron is quantized, i.e., it can have only discrete values. He assumed that otherwise electrons obey the laws of classical mechanics by traveling around the nucleus in circular orbits. Because of the quantization, the electron orbits have fixed sizes and energies. The orbits are labeled by an integer, the quantum number n.

With his model, Bohr explained how electrons could jump from one orbit to another only by emitting or absorbing energy in fixed quanta. For example, if an electron jumps one orbit closer to the nucleus, it must emit energy equal to the difference of the energies of the two orbits. Conversely, when the electron jumps to a larger orbit, it must absorb a quantum of light equal in energy to the difference in orbits. The Bohr model basically assigned discrete orbits to the electron, set by multiples of Planck’s constant, rather than allowing the continuum of energies permitted by classical physics.

The power of the Bohr model was its ability to predict the spectra of light emitted by atoms; in particular, its ability to explain the spectral lines of atoms as the absorption and emission of photons by electrons in quantized orbits.
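
As a concrete illustration, the Bohr energy levels of hydrogen are E_n = −13.6 eV / n², and a jump between levels emits a photon of wavelength λ = hc/ΔE. A minimal sketch (the 3→2 jump reproduces the familiar red hydrogen line near 656 nm):

    # Bohr energy levels of hydrogen and the wavelengths of emitted photons
    h  = 6.626e-34     # Planck's constant (J s)
    c  = 2.998e8       # speed of light (m/s)
    eV = 1.602e-19     # joules per electron volt

    def energy(n):
        """Energy of the n-th Bohr orbit of hydrogen, in eV."""
        return -13.6 / n**2

    def wavelength_nm(n_upper, n_lower):
        """Wavelength (nm) of the photon emitted in the jump n_upper -> n_lower."""
        dE = (energy(n_upper) - energy(n_lower)) * eV   # energy difference in joules
        return h * c / dE * 1e9

    print(wavelength_nm(3, 2))   # ~656 nm, the red Balmer line (H-alpha)
    print(wavelength_nm(4, 2))   # ~486 nm, the blue-green Balmer line (H-beta)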

  • Heisenberg and Schroedinger formalize Bohr’s model and produce quantum mechanics
  • quantum mechanics is an all-encompassing science that crosses over into many fields
Our current understanding of atomic structure was formalized by Heisenberg and Schroedinger in the mid-1920’s, where the discreteness of the allowed energy states emerges from more general aspects of the theory rather than being imposed as in Bohr’s model. The Heisenberg/Schroedinger quantum mechanics has consistent fundamental principles, such as the wave character of matter and the incorporation of the uncertainty principle. In principle, all of atomic and molecular physics, including the structure of atoms and their dynamics, the periodic table of elements and their chemical behavior, as well as the spectroscopic, electrical, and other physical properties of atoms and molecules, can be accounted for by quantum mechanics => fundamental science.

de Broglie Matter Waves:


  • early quantum physics did not ask the question of `why’ quantum effects are found in the microscopic world
Perhaps the key question, once Bohr offered his quantized orbits as an explanation for spectral lines, is: why does an electron follow quantized orbits? The response to this question arrived in the Ph.D. thesis of Louis de Broglie in 1923. de Broglie argued that since light can display both wave and particle properties, then perhaps matter can also be both a particle and a wave.

  • One way of thinking of a matter wave (or a photon) is to think of a wave packet. Normal waves look like this:

  • having no beginning and no end. A composition of several waves of different wavelengths can produce a wave packet that looks like this:
  • the wave packet interpretation requires the particle to have no set position
  • the momentum of a particle is inversely proportional to the wavelength of the particle (p = h/λ)
So a photon, or a free moving electron, can be thought of as a wave packet, having both wave-like properties and also the single position and size we associate with a particle. There are some slight problems, such as the fact that the wave packet doesn’t really stop at a finite distance from its peak; it goes on for ever and ever. Does this mean an electron exists at all places in its trajectory? de Broglie also produced a simple formula relating the wavelength of a matter particle to its momentum (λ = h/p), so energy is also connected to the wave property of matter.
  • Lastly, the wave nature of the electron makes for an elegant explanation of quantized orbits around the atom. Consider what a wave looks like around an orbit, as shown below
  • only certain wavelengths will fit into an orbit, so quantization is due to the wavelike nature of particles
The electron matter wave is both finite and unbounded (remember the 1st lecture on math). But only certain wavelengths will `fit’ into an orbit. If the wavelength is longer or shorter, then the ends do not connect. Thus, de Broglie explains the Bohr atom in that only those orbits which match the natural wavelength of the electron can exist. If an electron is in some sense a wave, then in order to fit into an orbit around a nucleus, the size of the orbit must correspond to a whole number of wavelengths.
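
This fit of whole wavelengths to orbits can be checked numerically: for the Bohr orbits (radius n²·a0, with the orbital speed fixed by the Coulomb force), the circumference divided by the electron’s de Broglie wavelength comes out to exactly n. A minimal sketch, assuming standard constant values:

    # Check that n de Broglie wavelengths fit around the n-th Bohr orbit
    import math

    h   = 6.626e-34    # Planck's constant (J s)
    m_e = 9.109e-31    # electron mass (kg)
    e   = 1.602e-19    # electron charge (C)
    k   = 8.988e9      # Coulomb constant (N m^2 / C^2)
    a0  = (h / (2 * math.pi))**2 / (m_e * k * e**2)    # Bohr radius

    for n in (1, 2, 3):
        r = n**2 * a0                          # radius of the n-th Bohr orbit
        v = math.sqrt(k * e**2 / (m_e * r))    # orbital speed from the Coulomb force
        wavelength = h / (m_e * v)             # de Broglie wavelength of the electron
        print(n, round(2 * math.pi * r / wavelength, 3))   # circumference / wavelength = n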

  • the wavelike nature also means that a particle’s existence is spread out, a probability field
Notice also that this means the electron does not exist at one single spot in its orbit; it has a wave nature and exists at all places in the allowed orbit. Thus, a physicist speaks of allowed orbits and allowed transitions to produce particular photons (that make up the fingerprint pattern of spectral lines). And the Bohr atom really looks like the following diagram:

  • the idea of atoms being solid billiard ball type objects fails with quantum physics
  • quantum effects fade on larger scales since macroscopic objects have high momentum values and therefore small wavelengths
While de Broglie waves were difficult to accept after centuries of thinking of particles as solid things with definite sizes and positions, electron waves were confirmed in the laboratory by running electron beams through slits and demonstrating that interference patterns formed. How does the de Broglie idea fit into the macroscopic world? The wavelength shrinks in inverse proportion to the momentum of the object, so the greater the mass (and momentum) of the object involved, the shorter the waves. The wavelength of a walking person, for example, is around 10⁻³⁵ meters, far too short to be measured (see the sketch below). This is why people don’t `tunnel’ through chairs when they sit down.
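
A minimal numerical sketch of λ = h/(mv), assuming an electron moving at about 1% of light speed and a 70 kg person walking at 1 m/s (illustrative values, not taken from the text):

    # de Broglie wavelength lambda = h / (m v) for a microscopic and a macroscopic object
    h = 6.626e-34      # Planck's constant (J s)

    def de_broglie(mass_kg, speed_m_s):
        """de Broglie wavelength in meters."""
        return h / (mass_kg * speed_m_s)

    print(de_broglie(9.109e-31, 3e6))   # electron at ~1% of c: ~2.4e-10 m, atom-sized
    print(de_broglie(70.0, 1.0))        # walking person: ~9e-36 m, hopelessly small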

Probability Fields:

  • wave interpretation requires a statistical or probability mathematical description of the position of a particle
  • where the wave represents the probability of finding the particle at a particular point
The idea that the electron is a wave around the atom, instead of a particle in orbit, raises the question of `where’ the electron is at any particular moment. The answer, by experimentation, is that the electron can be anywhere around the atom. But `where’ is not evenly distributed: the electron as a wave has a maximum chance of being observed where the wave has the highest amplitude. Thus, the electron has the highest probability of being found at a certain orbit. Whereas probability is often used in physics to describe the behavior of many objects, this is the first instance of an individual object, an electron, being assigned a probability for a Newtonian characteristic such as position. Thus, an accurate description of an electron orbit is one where we have a probability field that surrounds the nucleus, as shown below:


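For the simplest case, the hydrogen ground state, this probability field can be written down explicitly: the chance of finding the electron at radius r is proportional to r²·e^(−2r/a0), which peaks at the Bohr radius. A minimal numerical sketch (a crude scan over radius, just to locate the peak):

    # Radial probability of the hydrogen ground state: P(r) ~ r^2 * exp(-2 r / a0)
    import math

    a0 = 5.29e-11    # Bohr radius (m)

    def radial_probability(r):
        return r**2 * math.exp(-2 * r / a0)    # unnormalized

    radii = [i * 1e-13 for i in range(1, 3000)]              # scan from 0.0001 to 0.3 nm
    most_probable = max(radii, key=radial_probability)
    print(f"most probable radius = {most_probable:.2e} m")   # equals the Bohr radius a0
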
  • for higher orbits the probability field becomes distorted
For more complicated orbits, and higher electron shells, the probability field becomes distorted by other electrons and their fields, like the following example:


  • meaning of existence has an elusive nature in the quantum world
Thus, for the first time, the concept of existence begins to take on an elusive character at the subatomic level.

The Birth Of Quantum Mechanics

  • accelerating electron produces EM radiation (light), loses energy and spirals into nucleus, i.e. atom should not work
The UV catastrophe and the dilemma of spectral lines were already serious problems for attempts to understand how light and matter interact. Planck also noticed another fatal flaw in our physics: the electron in orbit around the nucleus is accelerating. Acceleration means a changing electric field (the electron has charge), which means photons should be emitted. But then the electron would lose energy and fall into the nucleus. Therefore, atoms shouldn’t exist!

  • Planck makes `quantum’ assumption to resolve this problem
  • a quantum is a discrete, and smallest, unit of energy
  • all forms of energy are transferred in quanta, not continuously
To resolve this problem, Planck made a wild assumption: energy, at the sub-atomic level, can only be transferred in small units, called quanta. Due to his insight, we call this unit Planck’s constant (h). The word quantum derives from quantity and refers to a small packet of action or process, the smallest unit of either that can be associated with a single event in the microscopic world. A quantum, in physics, is a discrete natural unit, or packet, of energy, charge, angular momentum, or other physical property. Light, for example, appearing in some respects as a continuous electromagnetic wave, on the submicroscopic level is emitted and absorbed in discrete amounts, or quanta; and for light of a given wavelength, the magnitude of all the quanta emitted or absorbed is the same in both energy and momentum. These particle-like packets of light are called photons, a term also applicable to quanta of other forms of electromagnetic energy such as X rays and gamma rays.
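
To get a feel for how small one quantum is, here is a minimal sketch computing the energy of a single quantum of green light, E = hf, and how many such quanta an ordinary 1-watt source emits each second (the 550 nm wavelength and 1 W power are assumed illustrative values):

    # Energy of one quantum of light, E = h * f, and the number of quanta per second
    # from a 1-watt source; this is why light looks perfectly continuous to us
    h = 6.626e-34          # Planck's constant (J s)
    c = 2.998e8            # speed of light (m/s)

    wavelength = 550e-9    # green light (m)
    f = c / wavelength     # frequency (Hz)
    E_quantum = h * f      # energy of one quantum (J)

    power = 1.0            # watts (joules per second)
    print(f"one quantum of green light: {E_quantum:.2e} J")        # ~3.6e-19 J
    print(f"quanta per second from 1 W: {power / E_quantum:.2e}")  # ~2.8e18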

All phenomena in submicroscopic systems (the realm of quantum mechanics) exhibit quantization: observable quantities are restricted to a natural set of discrete values. When the values are multiples of a constant least amount, that amount is referred to as a quantum of the observable. Thus Planck’s constant h is the quantum of action, and ħ (i.e., h/2π) is the quantum of angular momentum, or spin.

  • electron transition from orbit to orbit must be in discrete quantum jumps
  • experiments show that there is no `in-between’ for quantum transitions = a new kind of reality
  • despite strangeness, experiments confirm quantum predictions and resolves UV catastrophe
Changes of energy, such as the transition of an electron from one orbit to another around the nucleus of an atom, are made in discrete quanta. Quanta are not divisible. The term quantum leap refers to the abrupt movement from one discrete energy level to another, with no smooth transition. There is no “in-between”. The quantization, or “jumpiness”, of action as depicted in quantum physics differs sharply from classical physics, which represented motion as smooth, continuous change. Quantization limits how energy can be transferred to photons and resolves the UV catastrophe problem.

Wave-Particle Dualism:

  • The wave-like nature of light explains most of its properties:
    1. reflection/refraction
    2. diffraction/interference
    3. Doppler effect
  • however, a particle description is suggested by the photoelectric effect, the release of electrons by a beam of energetic blue/UV light
  • wavelike descriptions of light fail to explain the lack of the photoelectric effect for red light
The results from spectroscopy (emission and absorption spectra) can only be explained if light has a particle nature, as shown by Bohr’s atom and the photon description of light. This dualism in the nature of light is best demonstrated by the photoelectric effect, where a weak UV light produces a current flow (releases electrons) but a strong red light does not release electrons no matter how intense the red light is.

An unusual phenomenon was discovered in the early 1900’s. If a beam of light is pointed at the negative end of a pair of charged plates, a current flow is measured. A current is simply a flow of electrons in a metal, such as a wire. Thus, the beam of light must be liberating electrons from one metal plate, which are attracted to the other plate by electrostatic forces. This results in a current flow.

In classical physics, one would expect the current flow to be proportional to the strength of the beam of light (more light = more electrons liberated = more current). However, the observed phenomenon was that the current flow was basically constant with light strength, yet varied strongly with the wavelength of light, such that there was a sharp cutoff and no current flow at long wavelengths.

Einstein successfully explained the photoelectric effect within the context of the new physics of the time, quantum physics. In his scientific paper, he showed that light is made of packets of energy (quanta) called photons. Each photon carries a specific energy related to its wavelength, such that photons of short wavelength (blue light) carry more energy than long wavelength (red light) photons. To release an electron from a metal plate requires a minimum energy, which can only be transferred by a photon with energy equal to or greater than that threshold (i.e. the wavelength of the light has to be sufficiently short). Each photon of blue light released an electron, but all red photons were too weak. The result: no matter how much red light was shone on the metal plate, there was no current.
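
A minimal numerical sketch of Einstein’s argument, assuming a work function of about 2.3 eV (roughly that of sodium; the text does not specify the metal): one blue or UV photon clears the threshold, while a red photon falls short no matter how many of them arrive.

    # Photon energy E = h c / lambda, compared against an assumed work function
    h  = 6.626e-34    # Planck's constant (J s)
    c  = 2.998e8      # speed of light (m/s)
    eV = 1.602e-19    # joules per electron volt

    work_function = 2.3    # eV, assumed value (roughly sodium)

    def photon_energy_eV(wavelength_nm):
        return h * c / (wavelength_nm * 1e-9) / eV

    for color, wavelength in [("red", 700), ("blue", 450), ("UV", 300)]:
        E = photon_energy_eV(wavelength)
        print(f"{color:4s} {wavelength} nm: {E:.2f} eV -> ejects electron: {E >= work_function}")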

The photoelectric effect earned Einstein the Nobel Prize, and introduced the term “photon” of light into our terminology.

  • the possession of both particle and wave properties by light is called wave-particle dualism, and it continues the strange character of the new science of quantum physics
  • wave-particle dualism is extended to matter particles, i.e. electrons act as waves
Einstein explained that light exists in a particle-like state as packets of energy (quanta) called photons. The photoelectric effect occurs because the packet of energy carried by each individual red photon is too weak to knock an electron off the atom, no matter how many red photons you beam onto the cathode. But the individual UV photons were each strong enough to release an electron and cause a current flow. It is one of the strange, but fundamental, concepts in modern physics that light has both a wave and a particle state (but not at the same time), called wave-particle dualism.

Wave/particle duality is the possession by physical entities (such as light and electrons) of both wavelike and particle-like characteristics. On the basis of experimental evidence, the German physicist Albert Einstein first showed (1905) that light, which had been considered a form of electromagnetic waves, must also be thought of as particle-like, or localized in packets of discrete energy. The French physicist Louis de Broglie proposed (1924) that electrons and other discrete bits of matter, which until then had been conceived only as material particles, also have wave properties such as wavelength and frequency. Later (1927) the wave nature of electrons was experimentally established. An understanding of the complementary relation between the wave aspects and the particle aspects of the same phenomenon was announced in 1928.

Dualism is not such a strange concept; consider the following picture: are the swirls moving, or not, or both?