Superposition and Schrodinger’s Equation+Cat

Quantum Mechanics:

  • quantum mechanics is to the microscopic world what classical mechanics and calculus are to the macroscopic world
  • it is the operational process of calculating quantum physics phenomena
  • its primary task is to bring order and prediction to the uncertainty of the quantum world; its main tool is Schrodinger’s equation
The field of quantum mechanics concerns the description of phenomena on small scales where classical physics breaks down. The biggest difference between the classical and microscopic realms is that the quantum world cannot be perceived directly, but rather only through the use of instruments. A key assumption of quantum physics is that quantum mechanical principles must reduce to Newtonian principles at the macroscopic level (there is a continuity between quantum and Newtonian mechanics).

Quantum mechanics was capable of bringing order to the uncertainty of the microscopic world by treating the wave function with new mathematics. Key to this idea was the fact that relative probabilities of different possible states are still determined by laws. Thus, there is a difference between the role of chance in quantum mechanics and the unrestricted chaos of a lawless Universe.

Every quantum particle is characterized by a wave function. In 1925 Erwin Schrodinger developed the differential equation which describes the evolution of those wave functions. By using the Schrodinger equation, scientists can find the wave function which solves a particular problem in quantum mechanics. Unfortunately, it is usually impossible to find an exact solution to the equation, so certain assumptions are made in order to obtain an approximate answer for the particular problem.

  • the key difference between quantum and classical mechanics is the role of probability and chance
  • quantum objects are described by probability fields; however, this does not mean they are indeterminate, only uncertain
The difference between quantum mechanics and Newtonian mechanics is the role of probability and statistics. While the uncertainty principle means that quantum objects have to be described by probability fields, this does not mean that the microscopic world fails to conform to deterministic laws; in fact, it does conform. Measurement is an act by which the measurer and the measured interact to produce a result, although this is not simply the determination of a preexisting property.

The quantum description of reality is objective (weak form) in the sense that everyone armed with a quantum physics education can do the same experiments and come to the same conclusions. Strong objectivity, as in classical physics, requires that the picture of the world yielded by the sum total of all experimental results be not just a picture or model, but identical with the objective world, something that exists outside of us and prior to any measurement we might make of it. Quantum physics does not have this characteristic due to its built-in indeterminacy.

For centuries, scientists have gotten used to the idea that something like strong objectivity is the foundation of knowledge. So much so that we have come to believe that it is an essential part of the scientific method and that without this most solid kind of objectivity science would be pointless and arbitrary. However, the Copenhagen interpretation of quantum physics (see below) denies that there is any such thing as a true and unambiguous reality at the bottom of everything. Reality is what you measure it to be, and no more. No matter how uncomfortable science is with this viewpoint, quantum physics is extremely accurate and is the foundation of modern physics (perhaps then an objective view of reality is not essential to the conduct of physics). And concepts, such as cause and effect, survive only as a consequence of the collective behavior of large quantum systems.

Schrodinger’s Cat and Quantum Reality:

  • an example of the weirdness of the quantum world is given by the famous Schrodinger cat paradox
In 1935 Schrodinger, who was responsible for formulating much of the wave mechanics in quantum physics, published an essay describing the conceptual problems in quantum mechanics. A brief paragraph in this essay described the now-famous cat paradox.
  • the paradox is phrased such that a quantum event determines if a cat is killed or not
  • from a quantum perspective, the whole system state is tied to the wave function of the quantum event, i.e. the cat is both dead and alive at the same time
One can even set up quite ridiculous cases where quantum physics rebels against common sense. For example, consider a cat penned up in a steel chamber, along with the following diabolical device (which must be secured against direct interference by the cat). In the device is a Geiger counter with a tiny bit of radioactive substance, so small that perhaps in the course of one hour only one of the atoms decays, but also, with equal probability, perhaps none. If the decay happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed; the first atomic decay would have poisoned it. The wave function for the entire system would express this by having in it the living and the dead cat mixed or smeared out in equal parts.

  • the paradox in some sense is not a paradox, but instead points out the tension between the microscopic and macroscopic worlds and the importance of the observer in a quantum scenario
  • quantum objects exist in superposition, many states, as shown by interference
  • the observer collapses the wave function
It is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation. That prevents us from so naively accepting as valid a “blurred model” for representing reality. In itself, a blurred model would not embody anything unclear or contradictory; there is a difference between a shaky or out-of-focus photograph and a snapshot of clouds and fog banks. We know that superpositions of possible outcomes must exist simultaneously at the microscopic level because we can observe interference effects from them. We know (at least most of us know) that the cat in the box is dead, alive or dying, and not in a smeared-out state between the alternatives. When and how does the model of many microscopic possibilities resolve itself into a particular macroscopic state? When and how does the fog bank of microscopic possibilities transform itself into the definite macroscopic state we observe? That is the collapse of the wave function problem, and Schrodinger’s cat is a simple and elegant illustration of that problem.
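The collapse described above can be caricatured in code. This is a toy sketch, not anything from the text: the function names and the equal-amplitude cat example are my own. Outcome probabilities come from the squared amplitudes of the wave function, and a "measurement" picks exactly one definite outcome.

```python
import random

# Toy sketch: a wave function assigns an amplitude to each possible
# state; the probability of observing a state is |amplitude|**2, and a
# measurement "collapses" the superposition to one definite outcome.
def collapse(amplitudes, rng):
    """Pick one state index, weighted by squared amplitudes."""
    probs = [abs(a) ** 2 for a in amplitudes]
    r = rng.random() * sum(probs)
    for state, p in enumerate(probs):
        r -= p
        if r < 0:
            return state
    return len(probs) - 1

# Equal superposition of |alive> (state 0) and |dead> (state 1):
amps = [2 ** -0.5, 2 ** -0.5]
outcomes = [collapse(amps, random.Random(seed)) for seed in range(10_000)]
print(outcomes.count(1) / len(outcomes))  # close to 0.5
```

Any single run of `collapse` gives a definite answer; only the statistics over many runs recover the 50/50 smearing of the wave function.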

Macroscopic/Microscopic World Interface:

  • events in the microscopic world can happen *without* cause = indeterminacy
  • phenomena such as tunneling show that quantum physics leaks into the macroscopic world
The macroscopic world is Newtonian and deterministic for local events (note however that even the macroscopic world suffers from chaos). On the other hand, in the microscopic quantum world, radical indeterminacy limits any certainty surrounding the unfolding of physical events. Many things in the Newtonian world are unpredictable since we can never obtain all the factors affecting a physical system. But quantum theory is much more unsettling in that events often happen without cause (e.g. radioactive decay).

Note that the indeterminacy of the microscopic world has little effect on macroscopic objects. This is due to the fact that the wavelength associated with a large object is extremely small compared to the size of the macroscopic world. Your personal wavelength is much smaller than any currently measurable size. And the indeterminacy of the quantum world is not complete, because it is possible to assign probabilities to the wave function.

But, as the Schrodinger’s Cat paradox shows us, the probability rules of the microscopic world can leak into the macroscopic world. The paradox of Schrodinger’s cat has provoked a great deal of debate among theoretical physicists and philosophers. Although some thinkers have argued that the cat actually does exist in two superposed states, most contend that superposition only occurs when a quantum system is isolated from the rest of its environment. Various explanations have been advanced to account for this paradox, including the idea that the cat, or simply the animal’s physical environment (such as the photons in the box), can act as an observer.

The question is: at what point, or scale, do the probabilistic rules of the quantum realm give way to the deterministic laws that govern the macroscopic world? This question has been brought into vivid relief by recent work in which an NIST group confined a charged beryllium atom in a tiny electromagnetic cage and then cooled it with a laser to its lowest energy state. In this state the position of the atom and its “spin” (a quantum property that is only metaphorically analogous to spin in the ordinary sense) could be ascertained to within a very high degree of accuracy, limited by Heisenberg’s uncertainty principle.

  • decoherence prevents a macroscopic Schrodinger cat paradox
  • new technology allows the manipulation of objects at the quantum level
  • future research will investigate areas such as quantum teleportation and quantum computing
The workers then stimulated the atom with a laser just enough to change its wave function; according to the new wave function of the atom, it now had a 50 percent probability of being in a “spin-up” state in its initial position and an equal probability of being in a “spin-down” state in a position as much as 80 nanometers away, a vast distance indeed for the atomic realm. In effect, the atom was in two different places, as well as two different spin states, at the same time, an atomic analog of a cat both living and dead.

The clinching evidence that the NIST researchers had achieved their goal came from their observation of an interference pattern; that phenomenon is a telltale sign that the single beryllium atom produced two distinct wave functions that interfered with each other.

The modern view of quantum mechanics states that Schrodinger’s cat, or any macroscopic object, does not exist in a superposition of states due to decoherence. A pristine wave function is coherent, i.e. undisturbed by observation. But Schrodinger’s cat is not a pristine wave function; it is constantly interacting with other objects, such as air molecules in the box, or the box itself. Thus a macroscopic object becomes decoherent through many atomic interactions with its surrounding environment.

Decoherence explains why we do not routinely see quantum superpositions in the world around us. It is not because quantum mechanics intrinsically stops working for objects larger than some magic size. Instead, macroscopic objects such as cats and cards are almost impossible to keep isolated to the extent needed to prevent decoherence. Microscopic objects, in contrast, are more easily isolated from their surroundings so that they retain their quantum secrets and quantum behavior.


Uncertainty Principle

  • the uncertainty principle states that the position and velocity of an object cannot both be measured, exactly, at the same time (the principle actually applies to conjugate pairs, such as position/momentum and energy/time)
  • uncertainty principle derives from the measurement problem, the intimate connection between the wave and particle nature of quantum objects
  • the velocity of a particle becomes more ill-defined as the wave function is confined to a smaller region
Classical physics was on loose footing with problems of wave/particle duality, but was caught completely off-guard by the discovery of the uncertainty principle.

The uncertainty principle, also called the Heisenberg uncertainty principle or indeterminacy principle, articulated in 1927 by the German physicist Werner Heisenberg, states that the position and the velocity of an object cannot both be measured exactly at the same time, even in theory. The very concepts of exact position and exact velocity together, in fact, have no meaning in nature.

Ordinary experience provides no clue of this principle. It is easy to measure both the position and the velocity of, say, an automobile, because the uncertainties implied by this principle for ordinary objects are too small to be observed. The complete rule stipulates that the product of the uncertainties in position and velocity is equal to or greater than a tiny physical quantity of order Planck’s constant h (about 10^-34 joule-seconds). Only for the exceedingly small masses of atoms and subatomic particles does the product of the uncertainties become significant.

Any attempt to measure precisely the velocity of a subatomic particle, such as an electron, will knock it about in an unpredictable way, so that a simultaneous measurement of its position has no validity. This result has nothing to do with inadequacies in the measuring instruments, the technique, or the observer; it arises out of the intimate connection in nature between particles and waves in the realm of subatomic dimensions.

Every particle has a wave associated with it; each particle actually exhibits wavelike behavior. The particle is most likely to be found in those places where the undulations of the wave are greatest, or most intense. The more intense the undulations of the associated wave become, however, the more ill defined becomes the wavelength, which in turn determines the momentum of the particle. So a strictly localized wave has an indeterminate wavelength; its associated particle, while having a definite position, has no certain velocity. A particle wave having a well-defined wavelength, on the other hand, is spread out; the associated particle, while having a rather precise velocity, may be almost anywhere. A quite accurate measurement of one observable involves a relatively large uncertainty in the measurement of the other.

The uncertainty principle is alternatively expressed in terms of a particle’s momentum and position. The momentum of a particle is equal to the product of its mass times its velocity. Thus, the product of the uncertainties in the momentum and the position of a particle equals h/(4π) or more. The principle applies to other related (conjugate) pairs of observables, such as energy and time: the product of the uncertainty in an energy measurement and the uncertainty in the time interval during which the measurement is made also equals h/(4π) or more. The same relation holds, for an unstable atom or nucleus, between the uncertainty in the quantity of energy radiated and the uncertainty in the lifetime of the unstable system as it makes a transition to a more stable state.

  • the wave nature to particles means a particle is a wave packet, the composite of many waves
  • many waves = many momentums, observation makes one momentum out of many
  • exact knowledge of both members of a complementary pair (position/momentum, energy/time) is impossible
The uncertainty principle, developed by W. Heisenberg, is a statement of the effects of wave-particle duality on the properties of subatomic objects. Consider the concept of momentum in the wave-like microscopic world. The momentum of a wave is given by its wavelength. A wave packet like a photon or electron is a composite of many waves. Therefore, it must be made of many momentums. But how can an object have many momentums?

Of course, once a measurement of the particle is made, a single momentum is observed. But, like fuzzy position, momentum before the observation is intrinsically uncertain. This is what is known as the uncertainty principle: certain quantities, such as position, energy and time, are unknown, except by probabilities. In its purest form, the uncertainty principle states that accurate knowledge of both members of a complementary pair is impossible. For example, you can measure the location of an electron, but not its momentum (energy) at the same time.
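The "many waves = many momentums" idea can be sketched numerically. The following calculation is my own construction, not from the text: summing many plane waves whose wavenumbers k (i.e. momenta) are spread around a central value produces a localized packet, and widening the spread of momenta narrows the packet, exactly the trade-off the uncertainty principle describes.

```python
import cmath, math

# Toy numeric sketch: build a wave packet from many plane waves whose
# wavenumbers k are spread around k0 with Gaussian weights, then
# measure the packet's spatial spread.
def packet_width(sigma_k, k0=10.0, n_waves=200):
    """Standard deviation in x of |psi(x)|**2 for the resulting packet."""
    xs = [0.02 * i - 10.0 for i in range(1001)]
    ks = [k0 + (j / (n_waves - 1) - 0.5) * 8 * sigma_k for j in range(n_waves)]
    weights = [math.exp(-((k - k0) ** 2) / (2 * sigma_k ** 2)) for k in ks]
    dens = []
    for x in xs:
        psi = sum(w * cmath.exp(1j * k * x) for w, k in zip(weights, ks))
        dens.append(abs(psi) ** 2)
    norm = sum(dens)
    mean_x = sum(x * d for x, d in zip(xs, dens)) / norm
    var_x = sum((x - mean_x) ** 2 * d for x, d in zip(xs, dens)) / norm
    return math.sqrt(var_x)

# Doubling the spread of momenta roughly halves the spread in position:
w_narrow_k = packet_width(0.5)
w_wide_k = packet_width(1.0)
print(round(w_narrow_k / w_wide_k, 1))  # ~2.0
```

A single well-defined momentum (sigma_k → 0) gives an infinitely spread-out wave; a sharply localized packet needs a wide range of momenta.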

  • complementarity also means that different experiments yield different results (e.g. the two slit experiment)
  • therefore, a single reality can not be applied at the quantum level
A characteristic feature of quantum physics is the principle of complementarity, which “implies the impossibility of any sharp separation between the behavior of atomic objects and the interaction with the measuring instruments which serve to define the conditions under which the phenomena appear.” As a result, “evidence obtained under different experimental conditions cannot be comprehended within a single picture, but must be regarded as complementary in the sense that only the totality of the phenomena exhausts the possible information about the objects.” This interpretation of the meaning of quantum physics, which implied an altered view of the meaning of physical explanation, gradually came to be accepted by the majority of physicists during the 1930’s.

Mathematically we describe the uncertainty principle as the following, where `x’ is position and `p’ is momentum:

    Δx · Δp ≥ h / 4π

  • the mathematical form of the uncertainty principle relates complementary uncertainties to Planck’s constant
  • knowledge is not unlimited, built-in indeterminacy exists, but only in the microscopic world, all collapses to determinism in the macroscopic world
This is perhaps the most famous equation next to E=mc^2 in physics. It basically says that the combination of the error in position times the error in momentum must always be greater than Planck’s constant (over 4π). So, you can measure the position of an electron to some accuracy, but then its momentum will lie within a very large range of values. Likewise, you can measure the momentum precisely, but then its position is unknown.

Notice that this is not just the measurement problem in another form; the combination of position, energy (momentum) and time are actually undefined for a quantum particle until a measurement is made (at which point the wave function collapses).

Also notice that the uncertainty principle is unimportant for macroscopic objects since Planck’s constant, h, is so small (about 10^-34). For example, the uncertainty in position of a thrown baseball is about 10^-30 millimeters.
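As a rough numerical check of the claim above (the masses and the velocity uncertainty chosen here are my own illustrative values, and the helper name is hypothetical):

```python
import math

h = 6.626e-34  # Planck's constant, J*s

def min_position_uncertainty(mass_kg, dv):
    """Smallest dx (meters) allowed once velocity is known to within dv,
    using dx * dp >= h / (4*pi) with dp = m * dv."""
    return h / (4 * math.pi * mass_kg * dv)

# Baseball: 0.145 kg, velocity known to 1 mm/s
dx_ball = min_position_uncertainty(0.145, 1e-3)
# Electron: 9.11e-31 kg, velocity known to the same 1 mm/s
dx_electron = min_position_uncertainty(9.11e-31, 1e-3)

print(f"{dx_ball:.1e} m")      # absurdly small for the baseball
print(f"{dx_electron:.1e} m")  # centimeters for the electron
```

The same velocity uncertainty that pins a baseball down to a nucleus-dwarfing 10^-31 meters leaves an electron smeared over centimeters.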

The depth of the uncertainty principle is realized when we ask the question; is our knowledge of reality unlimited? The answer is no, because the uncertainty principle states that there is a built-in uncertainty, indeterminacy, unpredictability to Nature.

   It is often stated that of all the theories proposed in this
   century, the silliest is quantum theory.  Some say that the only
   thing that quantum theory has going for it, in fact, is that it
   is unquestionably correct.

  - R. Feynman


Atom and Wave Particle Duality

Bohr Atom:

  • classical physics fails to describe the properties of atoms, Planck’s constant served to bridge the gap between the classical world and the new physics
  • Bohr proposed a quantized shell model for the atom using the same basic structure as Rutherford, but restricting the behavior of electrons to quantized orbits
Perhaps the foremost scientist of the 20th century was Niels Bohr, the first to apply Planck’s quantum idea to problems in atomic physics. In the early 1900’s, Bohr proposed a quantum mechanical description of the atom to replace the early model of Rutherford.

In 1913 Bohr proposed his quantized shell model of the atom to explain how electrons can have stable orbits around the nucleus. The motion of the electrons in the Rutherford model was unstable because, according to classical mechanics and electromagnetic theory, any charged particle moving on a curved path emits electromagnetic radiation; thus, the electrons would lose energy and spiral into the nucleus. To remedy the stability problem, Bohr modified the Rutherford model by requiring that the electrons move in orbits of fixed size and energy. The energy of an electron depends on the size of the orbit and is lower for smaller orbits. Radiation can occur only when the electron jumps from one orbit to another. The atom will be completely stable in the state with the smallest orbit, since there is no orbit of lower energy into which the electron can jump.

Bohr’s starting point was to realize that classical mechanics by itself could never explain the atom’s stability. A stable atom has a certain size so that any equation describing it must contain some fundamental constant or combination of constants with a dimension of length. The classical fundamental constants–namely, the charges and the masses of the electron and the nucleus–cannot be combined to make a length. Bohr noticed, however, that the quantum constant formulated by the German physicist Max Planck has dimensions which, when combined with the mass and charge of the electron, produce a measure of length. Numerically, the measure is close to the known size of atoms. This encouraged Bohr to use Planck’s constant in searching for a theory of the atom.

  • Bohr’s calculations produce an accurate map of the hydrogen atom energy levels
  • changes in electron orbits require the release or gain of energy in the form of photons
  • Bohr’s atom perfectly explains the spectra of stars as gaps due to the absorption of photons whose wavelengths match the electron orbits of the various elements
  • larger formulations explain all the properties outlined by Kirchhoff’s laws
Planck had introduced his constant in 1900 in a formula explaining the light radiation emitted from heated bodies. According to classical theory, comparable amounts of light energy should be produced at all frequencies. This is not only contrary to observation but also implies the absurd result that the total energy radiated by a heated body should be infinite. Planck postulated that energy can only be emitted or absorbed in discrete amounts, which he called quanta (the Latin word for “how much”). The energy quantum is related to the frequency of the light by a new fundamental constant, h. When a body is heated, its radiant energy in a particular frequency range is, according to classical theory, proportional to the temperature of the body. With Planck’s hypothesis, however, the radiation can occur only in quantum amounts of energy. If the radiant energy is less than the quantum of energy, the amount of light in that frequency range will be reduced. Planck’s formula correctly describes radiation from heated bodies. Planck’s constant has the dimensions of action, which may be expressed as units of energy multiplied by time, units of momentum multiplied by length, or units of angular momentum. For example, Planck’s constant can be written as h = 6.6×10^-34 joule-seconds.

Using Planck’s constant, Bohr obtained an accurate formula for the energy levels of the hydrogen atom. He postulated that the angular momentum of the electron is quantized, i.e., it can have only discrete values. He assumed that otherwise electrons obey the laws of classical mechanics by traveling around the nucleus in circular orbits. Because of the quantization, the electron orbits have fixed sizes and energies. The orbits are labeled by an integer, the quantum number n.

With his model, Bohr explained how electrons could jump from one orbit to another only by emitting or absorbing energy in fixed quanta. For example, if an electron jumps one orbit closer to the nucleus, it must emit energy equal to the difference of the energies of the two orbits. Conversely, when the electron jumps to a larger orbit, it must absorb a quantum of light equal in energy to the difference in orbits. The Bohr model basically assigned discrete orbits for the electron, multiples of Planck’s constant, rather than allowing a continuum of energies as allowed by classical physics.

The power in the Bohr model was its ability to predict the spectra of light emitted by atoms. In particular, its ability to explain the spectral lines of atoms as the absorption and emission of photons by the electrons in quantized orbits.
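That predictive power can be sketched numerically using the standard Bohr result that the hydrogen energy levels are E_n = -13.6 eV / n^2 (the constants and helper names below are my own illustrative choices):

```python
# Bohr-model sketch: a photon emitted in a jump n_hi -> n_lo carries
# the energy difference of the two orbits; its wavelength is h*c/E.
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron volt

def energy_level(n):
    """Bohr energy of hydrogen level n, in eV."""
    return -13.6 / n ** 2

def photon_wavelength_nm(n_hi, n_lo):
    """Wavelength (nm) of the photon emitted jumping from n_hi to n_lo."""
    e_photon = (energy_level(n_hi) - energy_level(n_lo)) * eV  # joules
    return h * c / e_photon * 1e9

# Balmer series (jumps down to n=2) lands in the visible band:
print(f"{photon_wavelength_nm(3, 2):.1f} nm")  # H-alpha, ~656 nm (red)
print(f"{photon_wavelength_nm(4, 2):.1f} nm")  # H-beta, ~486 nm (blue-green)
```

These are exactly the wavelengths of the hydrogen absorption lines seen in stellar spectra, which is why the Bohr model was so persuasive.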

  • Heisenberg and Schroedinger formalize Bohr’s model and produce quantum mechanics
  • quantum mechanics is an all encompassing science that crosses over into many fields
Our current understanding of atomic structure was formalized by Heisenberg and Schroedinger in the mid-1920’s, where the discreteness of the allowed energy states emerges from more general aspects, rather than being imposed as in Bohr’s model. The Heisenberg/Schroedinger quantum mechanics has consistent fundamental principles, such as the wave character of matter and the incorporation of the uncertainty principle.

In principle, all of atomic and molecular physics, including the structure of atoms and their dynamics, the periodic table of elements and their chemical behavior, as well as the spectroscopic, electrical, and other physical properties of atoms and molecules, can be accounted for by quantum mechanics, making it a truly fundamental science.

de Broglie Matter Waves:


  • early quantum physics did not ask the question of `why’ quantum effects are found in the microscopic world
Perhaps one of the key questions, when Bohr offered his quantized orbits as an explanation of the UV catastrophe and spectral lines, is: why does an electron follow quantized orbits? The answer to this question arrived with the Ph.D. thesis of Louis de Broglie in 1923. de Broglie argued that since light can display both wave and particle properties, perhaps matter can also be both a particle and a wave.

  • One way of thinking of a matter wave (or a photon) is to think of a wave packet. Normal waves look like this:

  • having no beginning and no end. A composition of several waves of different wavelength can produce a wave packet that looks like this:
  • the wave packet interpretation requires the particle to have no set position
  • momentum of a particle is inversely proportional to the wavelength of the particle
So a photon, or a free moving electron, can be thought of as a wave packet, having both wave-like properties and also the single position and size we associate with a particle. There are some slight problems, such as the fact that the wave packet doesn’t really stop at a finite distance from its peak; it goes on for ever and ever. Does this mean an electron exists at all places in its trajectory?

de Broglie also produced a simple formula stating that the wavelength of a matter particle is inversely proportional to the momentum of the particle (λ = h/p). So energy is also connected to the wave property of matter.
  • Lastly, the wave nature of the electron makes for an elegant explanation to quantized orbits around the atom. Consider what a wave looks like around an orbit, as shown below
  • only certain wavelengths will fit into orbit, so quantization is due to the wavelike nature of particles
The electron matter wave is both finite and unbounded (remember the 1st lecture on math). But only certain wavelengths will `fit’ into an orbit. If the wavelength is longer or shorter, then the ends do not connect. Thus, de Broglie explains the Bohr atom in that only certain orbits can exist to match the natural wavelength of the electron. If an electron is in some sense a wave, then in order to fit into an orbit around a nucleus, the size of the orbit must correspond to a whole number of wavelengths.
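The "whole number of wavelengths" condition is easy to check numerically. This toy sketch (arbitrary units, my own function name) tests which orbit radii are allowed, i.e. which satisfy 2πr = nλ:

```python
import math

def wavelengths_around_orbit(r, wavelength):
    """Number of electron wavelengths that fit around an orbit of radius r."""
    return 2 * math.pi * r / wavelength

lam = 1.0  # electron wavelength, arbitrary units
for r in [0.8, 3 / (2 * math.pi), 2.0]:
    n = wavelengths_around_orbit(r, lam)
    allowed = abs(n - round(n)) < 1e-9
    print(f"r = {r:.3f}: {n:.3f} wavelengths, allowed orbit: {allowed}")
```

Only the middle radius holds an exact whole number (three) of wavelengths; for the other two, the wave fails to close on itself, so no stable orbit exists there. Combined with λ = h/(m·v), the condition 2πr = nλ reproduces Bohr's quantized angular momentum m·v·r = n·h/(2π).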

  • wavelike nature also means that a particle’s existence is spread out, a probability field
Notice also that this means the electron does not exist at one single spot in its orbit, it has a wave nature and exists at all places in the allowed orbit. Thus, a physicist speaks of allowed orbits and allowed transitions to produce particular photons (that make up the fingerprint pattern of spectral lines). And the Bohr atom really looks like the following diagram:

  • the idea of atoms being solid billiard ball type objects fails with quantum physics
  • quantum effects fade on larger scales since macroscopic objects have high momentum values and therefore small wavelengths
While de Broglie waves were difficult to accept after centuries of thinking of particles as solid things with definite sizes and positions, electron waves were confirmed in the laboratory by running electron beams through slits and demonstrating that interference patterns formed.

How does the de Broglie idea fit into the macroscopic world? The length of the wave diminishes in proportion to the momentum of the object. So the greater the mass of the object involved, the shorter the waves. The wavelength of a walking person, for example, is around 10^-35 meters, far too short to be measured. This is why people don’t `tunnel’ through chairs when they sit down.
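The comparison above can be sketched with de Broglie's formula λ = h/(m·v); the specific masses and speeds below are my own illustrative picks:

```python
# de Broglie wavelength lambda = h / (m * v) for two very different objects.
h = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    return h / (mass_kg * speed_m_s)

lam_electron = de_broglie_wavelength(9.11e-31, 1.0e6)  # fast electron
lam_person = de_broglie_wavelength(70.0, 1.0)          # walking person

print(f"{lam_electron:.1e} m")  # ~7e-10 m: atomic scale, measurable
print(f"{lam_person:.1e} m")    # ~1e-35 m: utterly unmeasurable
```

The electron's wavelength is comparable to atomic spacings, which is why electron beams diffract through crystals; the person's is some 25 orders of magnitude smaller than anything measurable, which is why chairs are safe.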

Probability Fields:

  • wave interpretation requires a statistical or probability mathematical description of the position of a particle
  • where wave represents the probability of finding the particle at a particular point
The idea that an electron is a wave around the atom, instead of a particle in orbit, begs the question of `where’ the electron is at any particular moment. The answer, by experimentation, is that the electron can be anywhere around the atom. But ‘where’ is not evenly distributed. The electron as a wave has a maximum chance of being observed where the wave has the highest amplitude. Thus, the electron has the highest probability of existing at a certain orbit.

While probability is often used in physics to describe the behavior of many objects, this is the first instance of an individual object, an electron, being assigned a probability for a Newtonian characteristic such as position. Thus, an accurate description of an electron orbit is one where we have a probability field that surrounds the nucleus, as shown below:


  • for higher orbits the probability field becomes distorted
For more complicated orbits, and higher electron shells, the probability field becomes distorted by other electrons and their fields, like the following example:


  • meaning of existence has an elusive nature in the quantum world
Thus, for the first time, the concept of existence begins to take on an elusive character at the subatomic level.
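The probability-field picture can be made concrete with a small sketch. This is my own illustration, using the textbook result that the hydrogen ground-state radial probability density is proportional to r²·exp(-2r/a0), which peaks at one Bohr radius:

```python
import math

# Textbook result used here: the ground-state radial probability
# density of hydrogen is P(r) proportional to r**2 * exp(-2*r/a0).
a0 = 1.0  # Bohr radius, in units of itself

def radial_probability(r):
    return r ** 2 * math.exp(-2 * r / a0)

# Scan radii from 0.01 to 10 Bohr radii and find the most likely one.
rs = [0.01 * i for i in range(1, 1001)]
peak_r = max(rs, key=radial_probability)
print(round(peak_r, 2))  # most probable radius: ~1 Bohr radius
```

The electron can be found at any radius, but the probability field is densest at the "orbit" of the old Bohr picture; the sharp circular orbit has dissolved into a fuzzy shell.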

The Birth Of Quantum Mechanics

  • accelerating electron produces EM radiation (light), loses energy and spirals into nucleus, i.e. atom should not work
The UV catastrophe and the dilemma of spectral lines were already serious problems for attempts to understand how light and matter interact. Planck also noticed another fatal flaw in our physics by demonstrating that an electron in orbit around the nucleus accelerates. Acceleration means a changing electric field (the electron has charge), which means photons should be emitted. But then the electron would lose energy and fall into the nucleus. Therefore, atoms shouldn’t exist!

  • Planck makes `quantum’ assumption to resolve this problem
  • a quantum is a discrete, and smallest, unit of energy
  • all forms of energy are transferred in quanta, not continuously
To resolve this problem, Planck made a wild assumption that energy, at the sub-atomic level, can only be transferred in small units, called quanta. Due to his insight, we call this unit Planck’s constant (h). The word quantum derives from quantity and refers to a small packet of action or process, the smallest unit of either that can be associated with a single event in the microscopic world.

A quantum, in physics, is a discrete natural unit, or packet, of energy, charge, angular momentum, or other physical property. Light, for example, appearing in some respects as a continuous electromagnetic wave, on the submicroscopic level is emitted and absorbed in discrete amounts, or quanta; and for light of a given wavelength, the magnitude of all the quanta emitted or absorbed is the same in both energy and momentum. These particle-like packets of light are called photons, a term also applicable to quanta of other forms of electromagnetic energy such as X rays and gamma rays.

All phenomena in submicroscopic systems (the realm of quantum mechanics) exhibit quantization: observable quantities are restricted to a natural set of discrete values. When the values are multiples of a constant least amount, that amount is referred to as a quantum of the observable. Thus Planck's constant h is the quantum of action, and ħ (i.e., h/2π) is the quantum of angular momentum, or spin.
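The numerical scale of these quanta can be sketched in a few lines of Python (the constant values are standard; the green-light frequency is an illustrative choice):

```python
import math

h = 6.626e-34             # Planck's constant, J*s: the quantum of action
hbar = h / (2 * math.pi)  # h/2*pi: the quantum of angular momentum (spin)

# Energy of one quantum (photon) of green light, E = h * f
f_green = 5.6e14          # Hz, an illustrative frequency for green light
E = h * f_green

print(f"hbar = {hbar:.4e} J*s")             # ~1.05e-34 J*s
print(f"one green photon: E = {E:.3e} J")   # ~3.7e-19 J, a tiny packet
```

The smallness of these numbers is why quantization is invisible at everyday scales: any macroscopic energy exchange involves an astronomically large number of quanta.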

  • electron transitions from orbit to orbit must be in discrete quantum jumps
  • experiments show that there is no `in between' for quantum transitions = new kind of reality
  • despite its strangeness, experiments confirm quantum predictions and resolve the UV catastrophe
Changes of energy, such as the transition of an electron from one orbit to another around the nucleus of an atom, are done in discrete quanta. Quanta are not divisible. The term quantum leap refers to the abrupt movement from one discrete energy level to another, with no smooth transition. There is no "in between".

The quantization, or "jumpiness", of action as depicted in quantum physics differs sharply from classical physics, which represented motion as smooth, continuous change. Quantization limits the energy transferred to photons and resolves the UV catastrophe problem.

Wave-Particle Dualism:

  • The wave-like nature of light explains most of its properties:
    1. reflection/refraction
    2. diffraction/interference
    3. Doppler effect
  • however, a particle description is suggested by the photoelectric effect, the release of electrons by a beam of energetic blue/UV light
  • wavelike descriptions of light fail to explain the lack of the photoelectric effect for red light
The results from spectroscopy (emission and absorption spectra) can only be explained if light has a particle nature, as shown by Bohr's atom and the photon description of light. This dualism in the nature of light is best demonstrated by the photoelectric effect, where a weak UV light produces a current flow (releases electrons) but a strong red light does not release electrons, no matter how intense the red light.

An unusual phenomenon was discovered in the early 1900’s. If a beam of light is pointed at the negative end of a pair of charged plates, a current flow is measured. A current is simply a flow of electrons in a metal, such as a wire. Thus, the beam of light must be liberating electrons from one metal plate, which are attracted to the other plate by electrostatic forces. This results in a current flow.

In classical physics, one would expect the current flow to be proportional to the strength of the beam of light (more light = more electrons liberated = more current). However, the observed current flow was basically constant with light strength, yet varied strongly with the wavelength of light, such that there was a sharp cutoff and no current flow at long wavelengths.

Einstein successfully explained the photoelectric effect within the context of the new physics of the time, quantum physics. In his scientific paper, he showed that light was made of packets of energy called photons. Each photon carries a specific energy related to its wavelength, such that photons of short wavelength (blue light) carry more energy than long wavelength (red light) photons. To release an electron from a metal plate required a minimum energy, which could only be transferred by a photon of energy equal to or greater than that threshold (i.e. the wavelength of the light had to be sufficiently short). Each photon of blue light released an electron, but all red photons were too weak. The result was that no matter how much red light shone on the metal plate, there was no current.

The photoelectric effect earned Einstein the Nobel Prize, and introduced the term "photon" of light into our terminology.
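Einstein's threshold argument can be sketched numerically. Assuming E = hc/λ and a hypothetical work function of 2.3 eV for the metal plate (the actual threshold depends on the metal), UV and blue photons clear the threshold while red photons never do:

```python
h = 6.626e-34    # Planck's constant, J*s
c = 3.0e8        # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

def photon_energy_eV(wavelength_nm):
    """Energy of one photon, E = h*c/lambda, converted to eV."""
    return h * c / (wavelength_nm * 1e-9) / eV

work_function = 2.3  # eV, hypothetical threshold for this metal

for color, lam in [("UV", 250), ("blue", 450), ("red", 700)]:
    E = photon_energy_eV(lam)
    print(f"{color:4s} ({lam} nm): {E:.2f} eV -> ejects electron: {E >= work_function}")
```

No matter how many ~1.8 eV red photons arrive, none individually crosses the 2.3 eV threshold, so the current stays zero; intensity cannot substitute for per-photon energy.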

  • the particle and wave properties of light are together called wave-particle dualism, which continues the strange character of the new science of quantum physics
  • wave-particle dualism extends to matter particles, i.e. electrons act as waves
Einstein explained that light exists in a particle-like state as packets of energy (quanta) called photons. The photoelectric effect occurs because the packet of energy carried by each individual red photon is too weak to knock electrons off the atoms, no matter how many red photons you beam onto the cathode. But the individual UV photons were each strong enough to release an electron and cause a current flow.

It is one of the strange, but fundamental, concepts in modern physics that light has both a wave and a particle state (but not at the same time), called wave-particle dualism.

Wave/particle duality is the possession by physical entities (such as light and electrons) of both wavelike and particle-like characteristics. On the basis of experimental evidence, the German physicist Albert Einstein first showed (1905) that light, which had been considered a form of electromagnetic waves, must also be thought of as particle-like, or localized in packets of discrete energy. The French physicist Louis de Broglie proposed (1924) that electrons and other discrete bits of matter, which until then had been conceived only as material particles, also have wave properties such as wavelength and frequency. Later (1927) the wave nature of electrons was experimentally established. An understanding of the complementary relation between the wave aspects and the particle aspects of the same phenomenon was announced in 1928.

Dualism is not such a strange concept. Consider the following picture: are the swirls moving, or not, or both?



Satellites Orbiting Earth

How a Satellite Works

Satellites are very complex machines that require precise mathematical calculations in order to function. A satellite has tracking systems and very sophisticated computer systems on board. Accuracy in orbit and speed is required to keep the satellite from crashing back down to Earth. There are several different types of orbits that a satellite can take: some are stationary and some are elliptical.

Low Earth Orbit

A satellite is in "Low Earth Orbit" when it circles in an elliptical orbit close to Earth. Satellites in low orbit are just hundreds of miles away. These satellites travel at high speeds, which keeps gravity from pulling them back to Earth. Low orbit satellites travel approximately 17,000 miles per hour and circle the Earth in about an hour and a half.
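The quoted speed and period follow from balancing gravity against circular motion, v = sqrt(GM/r). A rough check in Python (400 km is an assumed, typical low-orbit altitude):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of Earth, kg
R = 6.371e6     # radius of Earth, m

altitude = 400e3                 # m, an assumed typical low orbit
r = R + altitude
v = math.sqrt(G * M / r)         # circular orbital speed, m/s
T = 2 * math.pi * r / v          # orbital period, s

print(f"speed  ~ {v * 2.237:.0f} mph")    # roughly 17,000 mph
print(f"period ~ {T / 60:.0f} minutes")   # roughly 90 minutes
```

Both figures match the rounded values in the text above.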

Polar Orbit


This is how a satellite travels in a polar orbit. These orbits eventually pass the entire surface of the Earth.

Polar orbiting satellites circle the planet in a north-south direction as Earth spins beneath them in an east-west direction. Polar orbits enable satellites to scan the entire surface of the Earth, like peeling an orange peel in a circular motion from top to bottom. Remote sensing satellites, weather satellites, and government satellites are almost always in polar orbit because of this coverage; polar orbits cover the Earth's surface thoroughly. All polar orbiting satellites intersect the North Pole at the same point: while one polar orbit satellite is over America, another is passing over the North Pole, so the North Pole receives a constant flow of UHF and higher microwaves. The illustration shows that the common passing point for polar orbiting satellites is over the North Pole.

A polar orbiting satellite will pass over the Earth's equator at a different longitude on each of its orbits; however, polar orbiting satellites pass over the North Pole every time. Polar orbits are often used for earth mapping, earth observation, weather satellites, and reconnaissance satellites. This orbit has a disadvantage: no one spot of the Earth's surface can be sensed continuously from a satellite in a polar orbit.

This is from U.S. Army Information Systems Engineering Command.

“In order to fulfill the military need for protected communication service, especially low probability of intercept/detection (LPI/LPD), to units operating north of 65 degree northern latitude, the space communications architecture includes the polar satellite system capability. An acceptable approach to achieving this goal is to fly a low capacity EHF system in a highly elliptical orbit, either as a hosted payload or as a “free-flyer,” to provide service during a transition period, nominally 1997-2010. A single, hosted EHF payload is already planned. Providing this service 24 hours-a-day requires a two satellite constellation at high earth orbit (HEO). Beyond 2010, the LPI/LPD polar service could continue to be provided by a high elliptical orbit HEO EHF payload, or by the future UHF systems.”


“Geo Synchronous” Orbit


This is how a satellite travels in a “Geo Synchronous” orbit. Equatorial orbits are also called “Geostationary”. These satellites follow the rotation of the Earth.

A satellite in a "Geo Synchronous" orbit hovers over one spot, following the Earth's spin along the equator. Earth takes 24 hours to spin on its axis. In the illustration you can see that a "Geo Synchronous" orbit follows the equator and never covers the North or South Poles. The footprints of "Geo Synchronous" orbiting satellites do not cover the polar regions, so communication satellites in "Geo Synchronous" orbits cannot be accessed in the northern and southern polar regions.

Because a "Geo Synchronous" satellite does not move from the area that it covers, these satellites are used for telecommunications, GPS, television broadcasting, government, and internet. Because they are stationary, their orbits are much farther from the Earth than those of polar orbiting satellites; a stationary satellite too close to the Earth would fall back more quickly. There are said to be about 300 "Geo Synchronous" satellites in orbit right now. Of course, these are only the satellites that the public is allowed to know about, not those that are governmentally classified.
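The altitude of a geosynchronous orbit follows from Kepler's third law with an orbital period of one day; a sketch (using the sidereal day, Earth's true rotation period):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of Earth, kg
R_earth = 6.371e6  # radius of Earth, m
T = 86164.1        # s, one sidereal day

# Kepler's third law for a circular orbit: r^3 = G*M*T^2 / (4*pi^2)
r = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_earth) / 1000

print(f"geostationary altitude ~ {altitude_km:.0f} km")  # ~35,800 km
```

That altitude, about 35,800 km (22,000 miles), is indeed far beyond typical polar orbits of a few hundred miles.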

Satellite Anatomy


This is the Anatomy of a Satellite.

A satellite is made up of several instruments that work together to operate the satellite during its mission. This illustration to the left demonstrates the parts of a satellite.

The command and data system controls all of the satellite's functions. It is a very complex computer system that handles all of the satellite's flight operations, where the satellite points, and any other mathematical operations.

The pointing control directs the satellite so that it keeps a steady flight path. This system is a complex sensor instrument that keeps the satellite pointing in the same direction. The satellite uses devices called "momentum wheels" that adjust the satellite's orientation into its proper place. Scientific observation satellites have more precise pointing systems than do communications satellites.

The communications system has a transmitter, a receiver, and various antennas to transmit data to the Earth. On Earth, ground control sends instructions and data to the satellite's computer through the antenna. Pictures, data, television, radio, and much other data are sent by the satellite back to practically everyone on Earth.

The power system that powers and operates the satellite is an efficient solar panel array that obtains energy from the Sun's rays. Solar arrays make electricity from the sunlight and store the electricity in rechargeable batteries.

The payload is what a satellite needs to perform its job. A weather satellite would have a payload that consists of an image sensor, digital camera, telescope, and other thermal and weather sensing devices.

The thermal control is the protection required to prevent damage to the satellite's instrumentation and components. Satellites are exposed to extreme temperature changes, ranging from 120 degrees below zero to 180 degrees above zero. Heat distribution units and thermal blankets protect the electronics and components from temperature damage.

Satellite Footprints

A single satellite footprint

Here you can see one footprint covers an enormous area.

Geostationary satellites have a very broad view of Earth. The footprint of one EchoStar broadcast satellite covers almost all of North America. They stay over the same location on Earth, so we always know where they are. Direct contact with the satellite can be made because equatorial satellites are fixed.

Many communications satellites travel in equatorial orbits, including those that relay TV signals into our homes; the footprint of one such satellite can cover all of North America.

The multipath effect that occurs when satellite transmissions are obstructed by topographical features also provides insight on microwave global warming. Microwaves are being bombarded upon our planet. Our planet absorbs and obstructs the waves from space; microwaves penetrate all of our atmosphere and bounce and echo off of the Earth. Imagine the footprint overlaps being produced by the thousands of satellites in orbit right now.


Here you can see the overlapping footprints that satellites make. Each satellite covers an enormous area.

The closer the satellite is to something, the more power is exerted on it; the farther the waves have to travel, the less power they have. Because the atmosphere is so much closer to the satellite than the ground is, a stronger beam of energy passes through the clouds and atmosphere, causing a higher rate of warming in the atmosphere than on the surface of the Earth.
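The distance argument here is the inverse-square law: an isotropically spreading transmission delivers power per unit area of P/(4πr²), so nearer layers intercept a denser beam. A quick comparison (the transmitter power and distances are illustrative assumptions, not figures from the text):

```python
import math

def power_density(P_watts, r_meters):
    """Power per unit area at distance r from an isotropic source, W/m^2."""
    return P_watts / (4 * math.pi * r_meters**2)

P = 100.0                  # W, an illustrative transmitter power
upper_atmosphere = 35.0e6  # m from a geostationary satellite (illustrative)
ground = 35.8e6            # m, roughly the distance on to the ground

ratio = power_density(P, upper_atmosphere) / power_density(P, ground)
print(f"density ratio, upper atmosphere vs ground: {ratio:.3f}")
```

For a geostationary satellite the extra distance to the ground is a small fraction of the total, so the difference in power density is only a few percent.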

The illustration to the right shows how eight satellites microwave an enormous part of our Earth. When the radio signals reflect off of surrounding terrain (buildings, canyon walls, hard ground), multipath issues occur due to multiple waves doubling over themselves. These delayed signals can cause poor reception. Ultimately, the water, ice, and Earth are absorbing and reflecting microwaves in many different directions. Microwaves passing through Earth's atmosphere cause radio frequency heating at the molecular level.

System spectral efficiency

“In wireless networks, the system spectral efficiency is a measure of the quantity of users or services that can be simultaneously supported by a limited radio frequency bandwidth in a defined geographic area.” The capacity of a wireless network can be measured by calculating the maximum simultaneous phone calls over a 1 MHz frequency spectrum. This is measured in Erlangs/MHz/cell, Erlangs/MHz/sector, Erlangs/MHz/site, or Erlangs/MHz/km² measurements. Modern day cell phones take advantage of this type of transmission. These cell phones transmit a microwave transmission that is twice the frequency of a microwave oven in your home.
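A minimal sketch of the quoted definition, using hypothetical traffic numbers (Erlangs of carried calls, bandwidth, and cell count are all illustrative assumptions):

```python
def system_spectral_efficiency(busy_hour_erlangs, bandwidth_mhz, n_cells):
    """Traffic carried per unit bandwidth per cell, in Erlangs/MHz/cell."""
    return busy_hour_erlangs / bandwidth_mhz / n_cells

# Hypothetical network: 840 Erlangs of traffic over 5 MHz across 21 cells
eff = system_spectral_efficiency(840, 5, 21)
print(f"{eff:.1f} Erlangs/MHz/cell")  # 8.0
```

The per-sector, per-site, and per-km² variants divide the same traffic by a different spatial unit.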


This is a misconception of how microwave frequencies travel.

An example of spectral efficiency can be found in the satellite RADARSAT-1. In 1995 RADARSAT-1, an Earth observation satellite from Canada, was launched into orbit above the Earth. RADARSAT-1 provides scientific and commercial images of the Earth, used in agriculture, geology, hydrology, arctic surveillance, oceanography, cartography, ice and ocean monitoring, forestry, detecting ocean oil slicks, and many other applications. This satellite uses continuous high-power microwave transmissions. Its Synthetic Aperture Radar (SAR) is a type of sensor that images the Earth at a single microwave frequency of 5.3 GHz. SAR systems transmit microwaves towards the surface of the Earth and record the reflections from the surface. This satellite can image the Earth at any time and in any atmospheric condition.


This is how microwave frequencies actually travel.

A common misconception about microwave transmissions is that the transmission beams directly into the receiving antenna (see the misconception illustration). This, however, is not true. Transmissions spread through the air spherically. The waves travel in every direction until they find a receiver or some dielectric material to pass into.

When a microwave transmission is sent to a receiving satellite dish, the transmission spreads spherically (see the illustration of how microwaves travel). The signal passes through all parts of that sphere until it finds a connection. All microwaves not received by an antenna pass into the dielectric material in the earth; dielectric material is primarily water and ice.

The Building Blocks Of Nature

  • particle physics is the search for the fundamental building blocks of Nature, a reductionist goal
  • elementary particles should be structureless, resulting in simple interactions
One of the primary goals in modern physics is to answer the question "What is the Universe made of?" Often that question reduces to "What is matter and what holds it together?" This continues the line of investigation started by Democritus, Dalton and Rutherford.

Modern physics speaks of fundamental building blocks of Nature, where fundamental takes on a reductionist meaning of simple and structureless. Many of the particles we have discussed so far appear simple in their properties. All electrons have exactly the same characteristics (mass, charge, etc.), so we call the electron fundamental because electrons are all identical and interchangeable.

The search for the origin of matter means the understanding of elementary particles. And with the advent of holism, the understanding of elementary particles requires an understanding of not only their characteristics, but how they interact and relate to other particles and forces of Nature, the field of physics called particle physics.

  • more advanced technology lead to the discovery of hundreds of new particles, forcing the search for some underlying principles to unite the chain of particles to something simpler
The study of particles is also a story of advanced technology, beginning with the search for the primary constituent of matter. More than 200 subatomic particles have been discovered so far, all detected in sophisticated particle accelerators. However, most are not fundamental; most are composed of other, simpler particles. For example, Rutherford showed that the atom was composed of a nucleus and orbiting electrons. Later physicists showed that the nucleus was composed of neutrons and protons. More recent work has shown that protons and neutrons are composed of quarks.

Generations of Matter:

  • the two most fundamental types of particles are quarks and leptons
  • the quarks and leptons are divided into 6 flavors corresponding to three generations of matter
  • quarks (and antiquarks) have electric charges in units of 1/3 or 2/3
A quark is any of a group of subatomic particles believed to be among the fundamental constituents of matter. In much the same way that protons and neutrons make up atomic nuclei, these particles themselves are thought to consist of quarks. Quarks constitute all hadrons (baryons and mesons)–i.e., all particles that interact by means of the strong force, the force that binds the components of the nucleus.

According to prevailing theory, quarks have mass and exhibit a spin (i.e., a type of intrinsic angular momentum corresponding to a rotation around an axis through the particle). Quarks appear to be truly fundamental. They have no apparent structure; that is, they cannot be resolved into something smaller. Quarks always seem to occur in combination with other quarks or antiquarks, never alone. For years physicists have attempted to knock a quark out of a baryon in experiments with particle accelerators to observe it in a free state but have not yet succeeded in doing so.

Throughout the 1960s theoretical physicists, trying to account for the ever-growing number of subatomic particles observed in experiments, considered the possibility that protons and neutrons were composed of smaller units of matter. In 1961 two physicists, Murray Gell-Mann of the United States and Yuval Ne`eman of Israel, proposed a particle classification scheme called the Eightfold Way, based on the mathematical symmetry group SU(3), that described strongly interacting particles in terms of building blocks. In 1964 Gell-Mann introduced the concept of quarks as a physical basis for the scheme, adopting the fanciful term from a passage in James Joyce’s novel Finnegans Wake. (The American physicist George Zweig developed a similar theory independently that same year and called his fundamental particles “aces.”) Gell-Mann’s model provided a simple picture in which all mesons are shown as consisting of a quark and an antiquark and all baryons as composed of three quarks. It postulated the existence of three types of quarks, distinguished by distinctive “flavours.” These three quark types are now commonly designated as “up” (u), “down” (d), and “strange” (s). Each carries a fractional electric charge (i.e., a charge less than that of the electron). The up and down quarks are thought to make up protons and neutrons and are thus the ones observed in ordinary matter. Strange quarks occur as components of K mesons and various other extremely short-lived subatomic particles that were first observed in cosmic rays but that play no part in ordinary matter.

Most problems with quarks were resolved by the introduction of the concept of color, as formulated in quantum chromodynamics (QCD). In this theory of strong interactions, developed in 1977, the term color has nothing to do with the colors of the everyday world but rather represents a special quantum property of quarks. The colors red, green, and blue are ascribed to quarks, and their opposites, minus-red, minus-green, and minus-blue, to antiquarks. According to QCD, all combinations of quarks must contain equal mixtures of these imaginary colors so that they will cancel out one another, with the resulting particle having no net color. A baryon, for example, always consists of a combination of one red, one green, and one blue quark. The property of color in strong interactions plays a role analogous to an electric charge in electromagnetic interactions. Charge implies the exchange of photons between charged particles. Similarly, color involves the exchange of massless particles called gluons among quarks. Just as photons carry electromagnetic force, gluons transmit the forces that bind quarks together. Quarks change their color as they emit and absorb gluons, and the exchange of gluons maintains proper quark color distribution.

  • leptons are a separate class since they do not interact with quarks by the strong force
  • leptons have charges in units of 1 or 0
A lepton is any member of a class of fermions that respond only to electromagnetic, weak, and gravitational forces and do not take part in strong interactions. Like all fermions, leptons have a half-integral spin. (In quantum-mechanical terms, spin constitutes the property of intrinsic angular momentum.) Leptons obey the Pauli exclusion principle, which prohibits any two identical fermions in a given population from occupying the same quantum state. Leptons are said to be fundamental particles; that is, they do not appear to be made up of smaller units of matter.

Leptons can either carry one unit of electric charge or be neutral. The charged leptons are the electrons, muons, and taus. Each of these types has a negative charge and a distinct mass. Electrons, the lightest leptons, have a mass only 0.0005 that of a proton. Muons are heavier, having more than 200 times as much mass as electrons. Taus, in turn, are approximately 3,700 times more massive than electrons. Each charged lepton has an associated neutral partner, or neutrino (i.e., electron-, muon-, and tau-neutrino), that has no electric charge and no significant mass. Moreover, all leptons, including the neutrinos, have antiparticles called antileptons. The mass of the antileptons is identical to that of the leptons, but all of the other properties are reversed.

  • the up and down quark, electron and neutrino (leptons) work together to form normal, everyday matter
  • note that for every quark or lepton there is a corresponding antiparticle. For example, there is an up antiquark, an anti-electron (called a positron) and an anti-neutrino
The electron is the lightest stable subatomic particle known. It carries a negative charge, which is considered the basic charge of electricity.

An electron is nearly massless. It has a rest mass of 9.1×10⁻²⁸ gram, which is only 0.0005 the mass of a proton. The electron reacts only by the electromagnetic, weak, and gravitational forces; it does not respond to the short-range strong nuclear force that acts between quarks and binds protons and neutrons in the atomic nucleus. The electron has an antimatter counterpart called the positron. This antiparticle has precisely the same mass and spin, but it carries a positive charge. If it meets an electron, both are annihilated in a burst of energy. Positrons are rare on the Earth, being produced only in high-energy processes (e.g., by cosmic rays), and live only for brief intervals before annihilation by electrons that abound everywhere.

The electron was the first subatomic particle discovered. It was identified in 1897 by the British physicist J.J. Thomson during investigations of cathode rays. His discovery of electrons, which he initially called corpuscles, played a pivotal role in revolutionizing knowledge of atomic structure.

Under ordinary conditions, electrons are bound to the positively charged nuclei of atoms by the attraction between opposite electric charges. In a neutral atom the number of electrons is identical to the number of positive charges on the nucleus. Any atom, however, may have more or fewer electrons than positive charges and thus be negatively or positively charged as a whole; these charged atoms are known as ions. Not all electrons are associated with atoms. Some occur in a free state with ions in the form of matter known as plasma.



Copenhagen Interpretation And Quantum Multiverses

  • wave-particle duality is a manifestation of quantum entities
Wave-particle duality does not mean that a photon or subatomic particle is both a wave and particle simultaneously, but that it can manifest either a wave or a particle aspect depending on circumstances. Complementarity, uncertainty, and the statistical interpretation of Schrodinger's wave function were all related. Together they formed a logical interpretation of the physical meaning of quantum mechanics known as the "Copenhagen interpretation".

  • The Copenhagen Interpretation has three primary parts:
    • The wave function is a complete description of a wave/particle. Any information that cannot be derived from the wave function does not exist. For example, a wave is spread over a broad region and therefore does not have a specific location.
    • When a measurement of the wave/particle is made, its wave function collapses. In the case of momentum, a wave packet is made of many waves, each with its own momentum value. Measurement reduces the wave packet to a single wave and a single momentum.
    • If two properties are related by an uncertainty relation, no measurement can simultaneously determine both properties to a precision greater than the uncertainty relation allows. So, if we measure a wave/particle's position, its momentum becomes uncertain.
Central to the Copenhagen Interpretation is the principle known as complementarity: the wave and particle natures of objects can be regarded as complementary aspects of a single reality, like the two sides of a coin. An electron, for example, can behave sometimes as a wave and sometimes as a particle, but never both together, just as a tossed coin may fall either heads or tails up, but not both at once.

One must resist the temptation to regard matter or photon waves as waves of some material substance like sound or water waves. The correct interpretation, proposed by Born in the 1920's, is that the waves are measures of probability. Waves of probability relate to the uncertainty principle in that we cannot be certain what any given particle will do; only betting odds can be given. This fundamental limitation represents a breakdown of determinism in nature. It means that identical electrons in identical experiments may do different things. But, statistically, the outcome of the experiment is predictable.

Bohr, the leader of the Copenhagen Interpretation, admonished those who would ask what an electron really is, a wave or a particle. He denounced the question as meaningless or without context (such as `what is north of the north pole?’). To observe the properties of an electron is to conduct some sort of measurement. Experiments designed to measure waves will see the wave aspect of electrons. Those experiments designed to measure particle properties will see electrons as particles. No experiment can ever measure both aspects simultaneously and so we never see a mixture of wave and particle.

  • probabilities in the macroscopic world reflect a lack of knowledge
  • the quantum world is pure probability
The adoption of the Copenhagen Interpretation for quantum phenomena poses a sharp divide between classical or macroscopic physics and quantum or microscopic physics. In the macroscopic world events appear to be deterministic. Every event has a cause. Often, the cause is difficult to determine directly; for example, an apple falls from a tree because its stem weakens. We cannot tell exactly when it will fall, but we know some direct mechanical action is the cause, and if we had precise knowledge of the state of its fibers we would know when and why. Thus, we resort to probabilities as a substitute for exact knowledge of the acting causes.

However, a conceptual abyss separates classical from quantum physics. In the quantum world, probabilities are not a substitute for detailed knowledge of hidden, relevant details; there are no relevant details, just pure chance. The classical world is deterministic, the quantum world purely probabilistic. And the probabilistic nature of quantum physics has been confirmed by numerous experiments.

Hidden Variables Hypothesis:

  • macroscopic physics states that all variables are there, just hard to measure
  • Copenhagen Interpretation states that variables are not there, randomness is fundamental
In general, quantum theory predicts only the probability of a certain result. Consider the case of radioactivity. Imagine a box of atoms with identical nuclei that can undergo decay with the emission of an alpha particle. In a given time interval, a certain fraction will decay. The theory may tell precisely what that fraction will be, but it cannot predict which particular nuclei will decay. The theory asserts that, at the beginning of the time interval, all the nuclei are in an identical state and that the decay is a completely random process.

Even in classical physics, many processes appear random. For example, one says that, when a roulette wheel is spun, the ball will drop at random into one of the numbered compartments in the wheel. Based on this belief, the casino owner and the players give and accept identical odds against each number for each throw. However, the fact is that the winning number could be predicted if one noted the exact location of the wheel when the croupier released the ball, the initial speed of the wheel, and various other physical parameters. It is only ignorance of the initial conditions and the difficulty of doing the calculations that makes the outcome appear to be random. In quantum mechanics, on the other hand, the randomness is asserted to be absolutely fundamental. The theory says that, though one nucleus decayed and the other did not, they were previously in the identical state.
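The decay example can be illustrated with a short Monte Carlo sketch: which nuclei decay is pure chance, but the decayed fraction is lawful and predictable (the 10% per-interval decay probability is an illustrative choice):

```python
import random
random.seed(1)   # fix the seed so this run is reproducible

p_decay = 0.10   # illustrative probability each nucleus decays in the interval
N = 100_000      # identical nuclei

decayed = sum(random.random() < p_decay for _ in range(N))
fraction = decayed / N
print(f"fraction decayed = {fraction:.3f}")  # close to 0.10
# Rerun with a different seed: a DIFFERENT set of nuclei decays,
# but the fraction stays near 0.10 -- chance governed by law.
```

This captures the point of the surrounding text: individual outcomes are random, yet the statistics obey a strict law.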

  • indeterminacy was unpopular (not platonic)
  • Bell hypothesis is that quantum variables exist, but are hidden, special forces required
  • hidden variables are not testable, poor science
Many eminent physicists, including Einstein, could not accept this indeterminacy. They rejected the notion that the nuclei were initially in the identical state. Instead, they postulated that there must be some other property, presently unknown but existing nonetheless, that is different for the two nuclei. This type of unknown property is termed a hidden variable; if it existed, it would restore determinacy to physics.

If the initial values of the hidden variables were known, it would be possible to predict which nuclei would decay. Such a theory would, of course, also have to account for the wealth of experimental data which conventional quantum mechanics explains from a few simple assumptions. For example, the electron would definitely have to go through only one slit in the two-slit experiment. To explain that interference occurs only when the other slit is open, it is necessary to postulate a special force on the electron which exists only when that slit is open. Such artificial additions make hidden variable theories unattractive, and there is little support for them among physicists.

The Copenhagen view of understanding the physical world stresses the importance of basing theory on what can be observed and measured experimentally. It therefore rejects the idea of hidden variables as quantities that cannot be measured. The Copenhagen view is that the indeterminacy observed in nature is fundamental and does not reflect an inadequacy in present scientific knowledge. One should therefore accept the indeterminacy without trying to “explain” it and see what consequences come from it.

Many-Worlds Hypothesis:

  • collapse of the wave function still presents a problem for deterministic physics
  • solution is to not collapse the wave function, rather split reality
  • many worlds hypothesis allows for the existence of all quantum states; observation splits the worlds containing the states
The many possibilities carried by quantum superpositions are spread out over space and time. However, Newtonian physics is an accurate description of ordinary experience. What is the relationship between the strange quantum world and the classical world of common sense? Clearly the difference appears when we measure or observe a quantum system; whatever the process is, it occurs at that moment. The “how and why” of this process is unsolved, and many believe modern physics will be incomplete until it is resolved.

By the 1950’s, the ongoing parade of successes had made it abundantly clear that quantum theory was far more than a short-lived temporary fix. And so, in the mid 1950’s, a Princeton graduate student named Hugh Everett III decided to revisit the collapse postulate in his Ph.D. thesis. Everett’s idea is known as the relative-state, many-histories or many-universes interpretation or metatheory of quantum theory. Everett himself called it the “relative-state metatheory” or the “theory of the universal wavefunction”, but it is generally called “many-worlds”.

Many-worlds is a re-formulation of quantum theory which treats the process of observation or measurement entirely within the wave mechanics of quantum theory, rather than as an additional assumption, as in the Copenhagen interpretation. Everett considered the wavefunction a real object. Many-worlds is a return to the classical, pre-quantum view of the universe in which all the mathematical entities of a physical theory are real. For example, the electromagnetic fields of James Clerk Maxwell or the atoms of Dalton were considered real objects in classical physics. Everett treats the wavefunction in a similar fashion. Everett also assumed that the wavefunction obeys the same wave equation during observation or measurement as at all other times. This is the central assumption of many-worlds: that the wave equation is obeyed universally and at all times.

Quantum systems, like particles, that interact become entangled. If one of the systems is an observer and the interaction an observation, then the effect of the observation is to split the observer into a number of copies, each copy observing just one of the possible results of a measurement and unaware of the other results and of all its observer copies. Interactions between systems and their environments, including communication between different observers in the same world, transmit the correlations that induce local splitting or decoherence into non-interfering branches of the universal wavefunction. Thus the entire world is split, quite rapidly, into a host of mutually unobservable but equally real worlds.

According to many-worlds all the possible outcomes of a quantum interaction are realised. The wavefunction, instead of collapsing at the moment of observation, carries on evolving in a deterministic fashion, embracing all possibilities embedded within it. All outcomes exist simultaneously but do not interfere further with each other, each single prior world having split into mutually unobservable but equally real worlds.

  • macroscopic systems exhibit irreversible behavior (entropy) that prevents the reconnection of past worlds and presents the observed world as real to individuals
  • many worlds does not allow communication between the worlds, but their existence can be tested in two-slit experiments (the other worlds are doing the interfering) and with reversible mind experiments (nano-AI’s)
Worlds, or branches of the universal wavefunction, split when different components of a quantum superposition “decohere” from each other. Decoherence refers to the loss of coherency or absence of interference effects between the elements of the superposition. For two branches or worlds to interfere with each other, all the atoms, subatomic particles, photons and other degrees of freedom in each world have to be in the same state, which usually means they all must be in the same place or significantly overlap in both worlds, simultaneously.

For small microscopic systems it is quite possible for all their atomic components to overlap at some future point. In the double-slit experiment, for instance, it only requires that the divergent paths of the diffracted particle overlap again at some space-time point for an interference pattern to form, because only the single particle has been split.

Such future coincidence of positions in all the components is virtually impossible in more complex, macroscopic systems because all the constituent particles have to overlap with their counterparts simultaneously. Any system complex enough to be described by thermodynamics and exhibit irreversible behaviour is a system complex enough to exclude, for all practical purposes, any possibility of future interference between its decoherent branches. An irreversible process is one in, or linked to, a system with a large number of internal, unconstrained degrees of freedom. Once the irreversible process has started then alterations of the values of the many degrees of freedom leaves an imprint which can’t be removed. If we try to intervene to restore the original status quo the intervention causes more disruption elsewhere.

Ms Kitty example: there is no “where” for the cat, nor is it the case that both states are true; the wave function is the description of the cat. The worlds already exist; there is no splitting.


M-Theory : The Grand Masterpiece

There are five different superstring theories, each ten-dimensional, all seemingly incompatible. But in 1995, Edward Witten proposed that the five theories were actually all part of a large, mysterious and uncharted framework that he dubbed M-theory.

We don’t have the full equations for M-theory, but there are many hints as to how it works. Witten showed that the five theories are linked to each other via dualities: one formulation at strong coupling is identical to another at weak coupling. M-theory is the complete skeleton whilst the five superstring models are individual bones.

M-theory doesn’t have ten spacetime dimensions, but eleven – ten space and one time! Now there isn’t a string theory in eleven dimensions, but there is a supersymmetric theory of gravity, called supergravity. Witten showed that there was a continuous path between the ten-dimensional string theories and the eleven-dimensional theory of supergravity; supergravity is part of the M-theory web.

Our understanding of M-theory is by no means complete. It seems to be the single unifying structure into which all string theories fit. Dualities allow us to relate some of the fringes, where interactions are very weak or very strong. But the middle of the web remains impenetrable.

Some of the duality calculations are surprising, and impressive. Nonetheless we can only see the edges of the picture, and we grasp little of its mathematics. We have yet to derive any concrete predictions, and experimental evidence of the required extra dimensions remains elusive. Like an artistic masterpiece with a hole through the middle, it gives us a tantalising glimpse of what might be the ultimate unifying theory.


M-theory is not just populated by strings, but also by membranes called D-branes. These are multi-dimensional surfaces that move through the eleven dimensions of M-theory. We can have D-branes of up to nine spatial dimensions (though that’s a little hard to visualise)! A point is a D0-brane, a string a D1-brane, a sheet a D2-brane and so on.

Eleven-dimensional M-theory can look exactly like ten-dimensional string theory. This happens when one of the eleven dimensions is extremely small and circular. A two-dimensional D-brane wrapped around this extra dimension will look like a cylinder. But if the circular dimension is tiny then this cylinder will be very thin. As a result the D-brane will appear to be a one-dimensional string moving in ten dimensions (see picture).

In recent years D-branes have become increasingly important to research. They are natural places for fixed endpoints of open strings to live. And strings living on D-branes give rise to the same kind of forces that appear in the Standard Model.

But there is an even more potent reason driving interest in D-branes: they are non-perturbative objects. D-branes allow physicists to do calculations that transcend the approximate methods of perturbation theory. Thus we can uncover elements of the theory in regimes where interactions are strong. Historically this was uncharted terrain.

D-branes are a central ingredient in modern research. They can be used to construct cosmological models within string theory. Researchers in brane cosmology build models of inflation based on brane collisions. And the study of D-branes has shed light on some of the most elusive elements in the universe, black holes. Finally, D-branes played an essential role in formulating the AdS/CFT correspondence.

Nuclear Bombs : History, Creation, Ingredients, Chemical Composition, Fusion, Types and Detonation

American nuclear technology evolved rapidly between 1944 and 1950, moving from the primitive Fat Man and Little Boy to more sophisticated, lighter, more powerful, and more efficient designs. Much design effort shifted from fission to thermonuclear weapons after President Truman decided that the United States should proceed to develop a hydrogen bomb, a task which occupied the Los Alamos Laboratory from 1950 through 1952. The “George” shot of Operation Greenhouse (May 9, 1951) confirmed for the first time that a fission device could produce the conditions needed to ignite a thermonuclear reaction. The “Mike” test of Operation Ivy, 1 November, 1952, was the first explosion of a true two-stage thermonuclear device.

From 1952 until the early years of the ICBM era [roughly to the development of the first multiple independently targeted reentry vehicles (MIRVs) in the late 1960’s], new concepts in both fission primary and fusion secondary design were developed rapidly. However, after the introduction of the principal families of weapons in the modern stockpile (approximately the mid 1970’s), the rate of design innovations and truly new concepts slowed as nuclear weapon technology became a mature science. It is believed that other nations’ experiences have been roughly similar, although the United States probably has the greatest breadth of experience with innovative designs simply because of the more than 1,100 nuclear detonations it has conducted. The number of useful variations on the themes of primary and secondary design is finite, and designers’ final choices are frequently constrained by considerations of weapon size, weight, safety, and the availability of special materials.

Nuclear weaponry has advanced considerably since 1945, as can be seen at an unclassified level by comparing the size and weight of “Fat Man” with the far smaller, lighter, and more powerful weapons carried by modern ballistic missiles. Most nations of the world, including those of proliferation interest, have subscribed to the 1963 Limited Test Ban Treaty, which requires that nuclear explosions only take place underground. Underground testing can be detected by seismic means and by observing radioactive effluent in the atmosphere. It is probably easier to detect and identify a small nuclear test in the atmosphere than it is to detect and identify a similarly sized underground test. In either case, highly specialized instrumentation is required if a nuclear test explosion is to yield useful data to the nation carrying out the experiment.

US nuclear weapons technology is mature and might not have shown many more qualitative advances over the long haul, even absent a test ban. The same is roughly true for Russia, the UK, and possibly for France. The design of the nuclear device for a specific nuclear weapon is constrained by several factors. The most important of these are the weight the delivery vehicle can carry plus the size of the space available in which to carry the weapon (e.g., the diameter and length of a nosecone or the length and width of a bomb bay). The required yield of the device is established by the target vulnerability. The possible yield is set by the state of nuclear weapon technology and by the availability of special materials. Finally, the choices of specific design details of the device are determined by the taste of its designers, who will be influenced by their experience and the traditions of their organization.

Fission Weapons

An ordinary “atomic” bomb of the kinds used in World War II uses the process of nuclear fission to release the binding energy in certain nuclei. The energy release is rapid and, because of the large amounts of energy locked in nuclei, violent. The principal materials used for fission weapons are U-235 and Pu-239, which are termed fissile because they can be split into two roughly equal-mass fragments when struck by a neutron of even low energy. When a large enough mass of either material is assembled, a self-sustaining chain reaction results after the first fission is produced.

The minimum mass of fissile material that can sustain a nuclear chain reaction is called a critical mass and depends on the density, shape, and type of fissile material, as well as the effectiveness of any surrounding material (called a reflector or tamper) at reflecting neutrons back into the fissioning mass. Critical masses in spherical geometry for weapon-grade materials are as follows:

                 Uranium-235    Plutonium-239

Bare sphere:     56 kg          11 kg
Thick tamper:    15 kg          5 kg

The critical mass of compressed fissile material decreases as the inverse square of the density achieved. Since critical mass decreases rapidly as density increases, the implosion technique can make do with substantially less nuclear material than the gun-assembly method. The “Fat Man” atomic bomb that destroyed Nagasaki in 1945 used 6.2 kilograms of plutonium and produced an explosive yield of 21-23 kilotons [a 1987 reassessment of the Japanese bombings placed the yield at 21 Kt]. Until January 1994, the Department of Energy (DOE) estimated that 8 kilograms would typically be needed to make a small nuclear weapon. Subsequently, however, DOE reduced the estimate of the amount of plutonium needed to 4 kilograms. Some US scientists believe that 1 kilogram of plutonium will suffice.
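The inverse-square scaling stated above can be written as a one-line calculation. This is a sketch of the scaling rule only, and the two-fold compression is a hypothetical illustration:

```python
def critical_mass(m0_kg: float, compression: float) -> float:
    """Critical mass after compression, using the inverse-square
    density scaling stated in the text: m_c ~ 1 / density**2.

    m0_kg:       critical mass at normal density (kg)
    compression: achieved density divided by normal density
    """
    return m0_kg / compression ** 2

# Illustrative only: the 11 kg Pu-239 bare-sphere figure from the
# table above, with a hypothetical two-fold compression.
print(critical_mass(11, 2.0))  # -> 2.75
```

A two-fold increase in density cuts the required mass by a factor of four, which is why implosion designs need far less material than gun assembly.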

If any more material is added to a critical mass, a condition of supercriticality results. The chain reaction in a supercritical mass increases rapidly in intensity until the heat generated by the nuclear reactions causes the mass to expand so greatly that the assembly is no longer critical.

Fission weapons require a system to assemble a supercritical mass from a sub-critical mass in a very short time. Two classic assembly systems have been used, gun and implosion. In the simpler gun-type device, two subcritical masses are brought together by using a mechanism similar to an artillery gun to shoot one mass (the projectile) at the other mass (the target). The Hiroshima weapon was gun-assembled and used U-235 as a fuel. Gun-assembled weapons using highly enriched uranium are considered the easiest of all nuclear devices to construct and the most foolproof.


In the gun device, two pieces of fissionable material, each less than a critical mass, are brought together very rapidly to form a single supercritical one. This gun-type assembly may be achieved in a tubular device in which a high explosive is used to blow one subcritical piece of fissionable material from one end of the tube into another subcritical piece held at the opposite end of the tube.

Manhattan Project scientists were so confident in the performance of the “Little Boy” uranium bomb that the device was not even tested before it was used. This 15-kt weapon was airdropped on 06 August 1945 at Hiroshima, Japan. The device contained 64.1 kg of highly enriched uranium, with an average enrichment of 80%. The six bombs built by the Republic of South Africa were gun-assembled and used 50 kg of uranium enriched to between 80 percent and 93 percent in the isotope U-235.

Compared with the implosion approach, this method assembles the masses relatively slowly and at normal densities; it is practical only with highly enriched uranium. If plutonium, even weapon-grade, were used in a gun-assembly design, neutrons released from spontaneous fission of its even-numbered isotopes would likely trigger the nuclear chain reaction too soon, resulting in a “fizzle” of dramatically reduced yield.
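A quick back-of-envelope check ties the Little Boy figures to the critical-mass table above; the numbers come from the text, and the comparison is illustrative only:

```python
# Numbers quoted in the text for Little Boy; the comparison with
# the critical-mass table is an illustrative back-of-envelope check.
total_kg = 64.1     # highly enriched uranium loaded
enrichment = 0.80   # average U-235 fraction

u235_kg = total_kg * enrichment
print(f"U-235 content: {u235_kg:.1f} kg")  # -> U-235 content: 51.3 kg

# Less than the 56 kg bare-sphere critical mass, but far more than
# the 15 kg tamped value -- consistent with a tamped gun-assembled
# design reaching supercriticality.
```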


Because of the short time interval between spontaneous neutron emissions (and, therefore, the large number of background neutrons) found in plutonium, owing to the decay by spontaneous fission of the isotope Pu-240, Manhattan Project scientists devised the implosion method of assembly, in which high explosives are arranged to form an imploding shock wave that compresses the fissile material to supercriticality.

The core of fissile material is formed into a supercritical mass by chemical high explosives (HE) or propellants. When the high explosive is detonated, an inwardly directed implosion wave is produced. This wave compresses the sphere of fissionable material. The decrease in the surface-to-volume ratio of this compressed mass, plus its increased density, makes the mass supercritical. The HE is exploded by detonators timed electronically by a fuzing system, which may use altitude sensors or other means of control.

The nuclear chain-reaction is normally started by an initiator that injects a burst of neutrons into the fissile core at an appropriate moment. The timing of the initiation of the chain reaction is important and must be carefully designed for the weapon to have a predictable yield. A neutron generator emits a burst of neutrons to initiate the chain reaction at the proper moment: near the point of maximum compression in an implosion design, or of full assembly in the gun-barrel design.

A surrounding tamper may help keep the nuclear material assembled for a longer time before it blows itself apart, thus increasing the yield. The tamper often doubles as a neutron reflector.

Implosion systems can be built using either Pu-239 or U-235, but the gun assembly only works for uranium. Implosion weapons are more difficult to build than gun weapons, but they are also more efficient, requiring less special nuclear material (SNM) and producing larger yields. Iraq attempted to build an implosion bomb using U-235. In contrast, North Korea chose to use Pu-239 produced in a nuclear reactor.

Boosted Weapons

To fission more of a given amount of fissile material, a small amount of material that can undergo fusion, deuterium-tritium (D-T) gas, can be placed inside the core of a fission device. Here, just as the fission chain reaction gets underway, the D-T gas undergoes fusion, releasing an intense burst of high-energy neutrons (along with a small amount of fusion energy as well) that fissions the surrounding material more completely. This approach, called boosting, is used in most modern nuclear weapons to maintain their yields while greatly decreasing their overall size and weight.

Enhanced Radiation Weapons

An enhanced radiation (ER) weapon, by special design techniques, has an output in which neutrons and x-rays are made to constitute a substantial portion of the total energy released. For example, a standard fission weapon’s total energy output would be partitioned as follows: 50% as blast; 35% as thermal energy; and 15% as nuclear radiation. An ER weapon’s total energy would be partitioned as follows: 30% as blast; 20% as thermal; and 50% as nuclear radiation. Thus, a 3-kiloton ER weapon will produce the nuclear radiation of a 10-kiloton fission weapon and the blast and thermal radiation of a 1-kiloton fission device. However, the energy distribution percentages of nuclear weapons are a function of yield.
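The partition figures above can be checked with simple arithmetic. This sketch uses only the round-number fractions quoted in the text (which itself notes that real percentages vary with yield):

```python
# Round-number partition fractions quoted in the text (the text
# notes the real percentages vary with yield).
FISSION = {"blast": 0.50, "thermal": 0.35, "radiation": 0.15}
ER = {"blast": 0.30, "thermal": 0.20, "radiation": 0.50}

def partition(yield_kt: float, fractions: dict) -> dict:
    """Split a total yield (kt) into output channels."""
    return {k: yield_kt * f for k, f in fractions.items()}

er_3kt = partition(3, ER)
fission_10kt = partition(10, FISSION)

# A 3-kt ER weapon matches the nuclear radiation output of a
# 10-kt fission weapon:
print(er_3kt["radiation"], fission_10kt["radiation"])  # 1.5 1.5
```

The blast and thermal channels of the 3-kt ER device come out near those of a roughly 1-kt fission weapon, matching the comparison in the text.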

Fusion Weapons

A more powerful but more complex weapon uses the fusion of heavy isotopes of hydrogen, deuterium and tritium, to release large numbers of neutrons when the fusile (sometimes termed “fusionable”) material is compressed by the energy released by a fission device called a primary. Fusion (or thermonuclear) weapons derive a significant amount of their total energy from fusion reactions. The intense temperatures and pressures generated by a fission explosion overcome the strong electrical repulsion that would otherwise keep the positively charged nuclei of the fusion fuel from reacting. The fusion part of the weapon is called a secondary. In general, the x-rays from a fission primary heat and compress material surrounding a secondary fusion stage.

It is inconvenient to carry deuterium and tritium as gases in a thermonuclear weapon, and certainly impractical to carry them as liquefied gases, which requires high pressures and cryogenic temperatures. Instead, one can make a “dry” device in which 6Li is combined with deuterium to form the compound 6LiD (lithium-6 deuteride). Neutrons from a fission “primary” device bombard the 6Li in the compound, liberating tritium, which quickly fuses with the nearby deuterium. The alpha particles, being electrically charged and at high temperatures, contribute directly to forming the nuclear fireball. The neutrons can bombard additional 6Li nuclei or cause the remaining uranium and plutonium in the weapon to undergo fission. This two-stage thermonuclear weapon has explosive yields far greater than can be achieved with one-point safe designs of pure fission weapons, and thermonuclear fusion stages can be ignited in sequence to deliver any desired yield. Such bombs, in theory, can be designed with arbitrarily large yields: the Soviet Union once tested a device with a yield of about 59 megatons.

In a relatively crude sense, 6Li can be thought of as consisting of an alpha particle (4He) and a deuteron (2H) bound together. When bombarded by neutrons, 6Li disintegrates into a triton (3H) and an alpha:

6Li + n = 3H + 4He + Energy.

This is the key to its importance in nuclear weapons physics. The nuclear fusion reaction which ignites most readily is

2H + 3H = 4He + n + 17.6 MeV,

or, phrased in other terms, deuterium plus tritium produces 4He plus a neutron plus 17.6 MeV of free energy:

D + T = 4He + n + 17.6 MeV.

Lithium-7 also contributes to the production of tritium in a thermonuclear secondary, albeit at a lower rate than 6Li. The fusion reactions derived from tritium produced from 7Li contributed many unexpected neutrons (and hence far more energy release than planned) to the final stage of the infamous 1954 Castle/BRAVO atmospheric test, nearly doubling its expected yield.
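The 17.6 MeV figure can be recovered from the D-T mass defect. The atomic mass values below (in atomic mass units) are standard reference numbers, not figures from the text:

```python
# Standard atomic mass values in atomic mass units (u); these are
# reference numbers, not figures from the text.
m_D = 2.014102    # deuterium (2H)
m_T = 3.016049    # tritium (3H)
m_He4 = 4.002602  # helium-4
m_n = 1.008665    # neutron

U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV

# Mass defect of D + T -> 4He + n, converted to energy
defect_u = (m_D + m_T) - (m_He4 + m_n)
q_mev = defect_u * U_TO_MEV

print(f"Q-value: {q_mev:.1f} MeV")  # -> Q-value: 17.6 MeV
```

About 0.4% of the fuel's rest mass is converted to energy, carried mostly by the 14.1 MeV neutron.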

Safing, Arming, Fuzing, and Firing (SAFF)

The ability to make effective use of a nuclear weapon is limited unless the device can be handled safely, taken safely from storage when required, delivered to its intended target, and then detonated at the correct point in space and time to achieve the desired goal. Although the intended scenarios for use of its weapons will strongly influence specific weaponization concepts and approaches, functional capabilities for safing, arming, fuzing, and firing (SAFF) will be fundamental.

Nuclear weapons are particularly destructive, with immediate effects including blast and thermal radiation and delayed effects produced by ionizing radiation, neutrons, and radioactive fallout. They are expensive to build, maintain, and employ, requiring a significant fraction of the total defense resources of a small nation. In a totalitarian state the leader must always worry that they will be used against the government; in a democracy the possibility of an unauthorized or accidental use must never be discounted. A nuclear detonation as the result of an accident would be a local catastrophe.

Because of their destructiveness, nuclear weapons require precautions to prevent accidental detonation during any part of their manufacture and lifetime. And because of their value, the weapons require reliable arming and fuzing mechanisms to ensure that they explode when delivered to target. Therefore, any nuclear power is likely to pay some attention to the issues of safing and safety, arming, fuzing, and firing of its nuclear weapons. The solutions adopted depend upon the level of technology in the proliferant state, the number of weapons in its stockpile, and the political consequences of an accidental detonation.

Whether to protect their investment in nuclear arms or to deny potential access to and use of the weapons by unauthorized persons, proliferators or subnational groups will almost certainly seek special measures to ensure security and operational control of nuclear weapons.
These are likely to include physical security and access control technologies at minimum and may include use control. The techniques used today by the existing western nuclear weapon states represent the culmination of a half-century of evolution in highly classified military programs, and proliferators may well choose simpler solutions, perhaps by adapting physical security, access, and operational controls used in the commercial sector for high-value/high-risk assets.

From the very first nuclear weapons built, safety was a consideration. The two bombs used in the war drops on Hiroshima and Nagasaki posed significant risk of accidental detonation if the B-29 strike aircraft had crashed on takeoff. As a result, critical components were removed from each bomb and installed only after takeoff and initial climb to altitude were completed. Both weapons used similar arming and fuzing components. Arming could be accomplished by removing a safety connector plug and replacing it with a distinctively colored arming connector. Fuzing used redundant systems including a primitive radar and a barometric switch. No provision was incorporated in the weapons themselves to prevent unauthorized use or to protect against misappropriation or theft.

In later years, the United States developed mechanical safing devices. These were later replaced with weapons designed to a goal of less than a one-in-a-million chance of the weapon delivering more than 4 pounds of nuclear yield if the high explosives were detonated at the single most critical possible point. Other nations have adopted different safety criteria and have achieved their safety goals in other ways.

In the 1950’s, to prevent unauthorized use of U.S. weapons stored abroad, permissive action links (PALs) were developed. These began as simple combination locks and evolved into the modern systems, which allow only a few tries to arm the weapon before disabling the physics package should an intruder persist in attempts to defeat the PAL.

Safing: To ensure that the nuclear warhead can be stored, handled, deployed, and employed in a wide spectrum of intended and unintended environmental and threat conditions, with assurance that it will not experience a nuclear detonation. In U.S. practice, safing generally involves multiple mechanical interruptions of both power sources and pyrotechnic/explosive firing trains. The nuclear components may be designed so that an accidental detonation of the high explosives is intrinsically unable to produce a significant (>4 pounds TNT equivalent) nuclear yield; it is simpler to insert mechanical devices into the pit to prevent the assembly of a critical mass or to remove a portion of the fissile material from inside the high explosives. Mechanical safing of a gun-assembled weapon is fairly straightforward; one can simply insert a hardened steel or tungsten rod across a diameter of the gun barrel, disrupting the projectile. All U.S. weapons have been designed to be intrinsically one-point safe in the event of accidental detonation of the high explosives, but it is not anticipated that a new proliferator would take such care.

Arming: Placing the nuclear warhead in a ready operational state, such that it can be initiated under specified firing conditions. Arming generally involves mechanical restoration of the safing interrupts in response to conditions that are unique to the operational environment (launch or deployment) of the system. A further feature is that the environment typically provides the energy source to drive the arming action. If a weapon is safed by inserting mechanical devices into the pit (e.g., chains, coils of wire, bearing balls) to prevent complete implosion, arming involves removal of those devices. It may not always be possible to safe a mechanically armed device once the physical barrier to implosion has been removed.

Fuzing: To ensure optimum weapon effectiveness by detecting that the desired conditions for warhead detonation have been met and to provide an appropriate command signal to the firing set to initiate nuclear detonation. Fuzing generally involves devices to detect the location of the warhead with respect to the target, signal processing and logic, and an output circuit to initiate firing.

Firing: To ensure nuclear detonation by delivering a precise level of precisely timed electrical or pyrotechnic energy to one or more warhead detonating devices. A variety of techniques are used, depending on the warhead design and type of detonation devices.

Depending on the specific military operations to be carried out and the specific delivery system chosen, nuclear weapons pose special technological problems in terms of primary power and power-conditioning, overall weapon integration, and operational control and security.

Not all weapons possessors will face the same problems or opt for the same levels of confidence, particularly in the inherent security of their weapons. The operational objectives will in turn dictate the technological requirements for the SAFF subsystems. Minimal requirements could be met by surface burst (including impact fuzing of a relatively slow-moving warhead) or a crude preset height of burst based on a simple timer, barometric switch, or simple radar altimeter. Modest requirements could be met by more precise height of burst (HOB) based on improved radar triggering or other methods of measuring distance above ground to maximize the radius of selected weapons effects, with point-contact salvage fuzing, and by parachute delivery of bombs to allow deliberate laydown and surface burst. Substantial requirements could be met by variable HOB, including low altitude for ensured destruction of protected strategic targets, along with possible underwater or exoatmospheric capabilities.

Virtually any country or extranational group with the resources to construct a nuclear device has sufficient capability to attain the minimum SAFF capability that would be needed to meet terrorist or minimal national aims. The requirements to achieve a “modest” or “substantial” capability level are much more demanding. Both safety and protection of investment demand a very low probability of failure of safing and arming mechanisms, together with a very high probability of proper initiation of the warhead. All of the recognized nuclear weapons states and many other countries have (or have ready access to) both the design know-how and the components required to implement a significant capability.

In terms of sophistication, safety, and reliability of design, past U.S. weapons programs provide a legacy of world leadership in SAFF and related technology. France and the UK follow closely in overall SAFF design and may actually hold slight leads in specific component technologies. SAFF technologies of other nuclear powers – notably Russia and China – do not compare. Japan and Germany have technological capabilities roughly on a par with the United States, UK, and France, and doubtless have the capability to design and build nuclear SAFF subsystems.

Reliable fuzing and firing systems suitable for nuclear use have been built since 1945 and do not need to incorporate any modern technology. Many kinds of mechanical safing systems have been employed, and several of these require nothing more complex than removable wires or chains or the exchanging of arming/safing connector plugs. Safing a gun-assembled system is especially simple. Arming systems range from hand insertion of critical components in flight to extremely sophisticated instruments which detect specific events in the stockpile-to-target sequence (STS).
Fuzing and firing systems span an equally great range of technical complexity. Any country with the electronics capability to build aircraft radar altimeter equipment should have access to the capability for building a reasonably adequate, simple HOB fuze. China, India, Israel, Taiwan, South Korea, Brazil, Singapore, the Russian Federation and Ukraine, and South Africa have all built conventional weapons with design features that could be adapted to more sophisticated designs, providing variable burst height and rudimentary electronic counter-countermeasure (ECCM) features. With regard to physical security measures and use control, the rapid growth in the availability and performance of low-cost, highly reliable microprocessing equipment has led to a proliferation of electronic lock and security devices suitable for protecting and controlling high-value/at-risk assets. Such technology is likely to meet the needs of most proliferant organizations.

The Celestial Sphere

Humans perceive in Euclidean space -> straight lines and planes. But when distances are not directly perceptible (i.e. very large), the apparent shape that the mind draws is a sphere -> thus, we use a spherical coordinate system for mapping the sky, with the additional advantage that we can project Earth reference points (i.e. the North Pole, South Pole, and equator) onto the sky. Note: the sky is not really a sphere!

From the Earth’s surface we envision a hemisphere and mark the compass points on the horizon. The circle that passes through the south point, the north point, and the point directly overhead (the zenith) is called the meridian.

This system allows one to indicate any position in the sky by two reference coordinates: the time from the meridian and the angle from the horizon. Of course, since the Earth rotates, your coordinates will change after a few minutes.

The horizontal coordinate system (commonly referred to as the alt-az system) is the simplest coordinate system as it is based on the observer’s horizon. The celestial hemisphere viewed by an observer on the Earth is shown in the figure below. The great circle through the zenith Z and the north celestial pole P cuts the horizon NESYW at the north point (N) and the south point (S). The great circle WZE at right angles to the great circle NPZS cuts the horizon at the west point (W) and the east point (E). The arcs ZN, ZW, ZY, etc, are known as verticals.

The two numbers which specify the position of a star, X, in this system are the azimuth, A, and the altitude, a. The altitude of X is the angle measured along the vertical circle through X from the horizon at Y to X. It is measured in degrees. An often-used alternative to altitude is the zenith distance, z, of X, indicated by ZX. Clearly, z = 90° − a. Azimuth may be defined in a number of ways. For the purposes of this course, azimuth will be defined as the angle between the vertical through the north point and the vertical through the star at X, measured eastwards from the north point along the horizon from 0° to 360°. This definition applies to observers in both the northern and the southern hemispheres.
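The altitude/zenith-distance relation and the 0°–360° azimuth convention above can be sketched in a few lines of Python (the function names here are illustrative, not from any standard library):

```python
def zenith_distance(altitude_deg):
    """Zenith distance is the complement of altitude: z = 90 degrees - a."""
    return 90.0 - altitude_deg

def normalize_azimuth(az_deg):
    """Wrap an azimuth into the 0-360 degree range, measured eastwards from the north point."""
    return az_deg % 360.0

# A star 25 degrees above the horizon has a zenith distance of 65 degrees:
print(zenith_distance(25.0))     # 65.0
# An azimuth quoted as -90 degrees is the same direction as 270 degrees (due west):
print(normalize_azimuth(-90.0))  # 270.0
```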

It is often useful to know how high a star is above the horizon and in what direction it can be found – this is the main advantage of the alt-az system. The main disadvantage of the alt-az system is that it is a local coordinate system – i.e. two observers at different points on the Earth’s surface will measure different altitudes and azimuths for the same star at the same time. In addition, an observer will find that the star’s alt-az coordinates change with time as the celestial sphere appears to rotate.

Celestial Sphere:

To determine the positions of stars and planets on the sky in an absolute sense, we project the Earth’s spherical surface onto the sky, called the celestial sphere.

The celestial sphere has a north and a south celestial pole as well as a celestial equator, which are projections of the corresponding reference points on the Earth’s surface. Right Ascension and Declination serve as an absolute coordinate system fixed on the sky, rather than a relative system like the zenith/horizon system. Right Ascension is the equivalent of longitude, only measured in hours, minutes, and seconds (since the Earth rotates through the same units of time). Declination is the equivalent of latitude, measured in degrees north or south of the celestial equator (−90° to +90°). Any point on the celestial sphere (i.e. the position of a star or planet) can be referenced with a unique Right Ascension and Declination.
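Because the sky appears to turn through 360° in 24 hours, one hour of Right Ascension corresponds to 15° of arc. A minimal sketch of the conversion (the function name is my own, not a standard-library call):

```python
def ra_to_degrees(hours, minutes=0.0, seconds=0.0):
    """Convert Right Ascension given in hours, minutes, seconds to degrees.
    The celestial sphere turns 360 degrees in 24 hours, so 1 hour of RA = 15 degrees."""
    return 15.0 * (hours + minutes / 60.0 + seconds / 3600.0)

print(ra_to_degrees(6, 0, 0))   # 90.0  (a quarter turn of the sky)
print(ra_to_degrees(5, 30, 0))  # 82.5
```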

Since the Earth turns on its axis once every 24 hours, the stars trace arcs through the sky parallel to the celestial equator. The appearance of this motion varies depending on where you are located on the Earth’s surface.

Note that the daily rotation of the Earth causes each star and planet to trace a daily circular path around the north celestial pole, a motion referred to as diurnal motion.
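The diurnal circles described above can be made quantitative with the standard spherical-astronomy relation linking a star’s altitude a to the observer’s latitude φ, the star’s declination δ, and its hour angle H (the time-like angle from the meridian): sin a = sin φ sin δ + cos φ cos δ cos H. A minimal Python sketch of this formula (function and parameter names are illustrative):

```python
import math

def star_altitude(latitude_deg, declination_deg, hour_angle_deg):
    """Altitude of a star from observer latitude, star declination, and hour angle,
    using sin(a) = sin(phi)*sin(delta) + cos(phi)*cos(delta)*cos(H)."""
    phi = math.radians(latitude_deg)
    delta = math.radians(declination_deg)
    H = math.radians(hour_angle_deg)
    sin_a = (math.sin(phi) * math.sin(delta)
             + math.cos(phi) * math.cos(delta) * math.cos(H))
    return math.degrees(math.asin(sin_a))

# A star on the celestial equator (delta = 0) crossing the meridian (H = 0),
# seen from latitude 40 N, culminates at altitude 90 - 40 = 50 degrees:
print(round(star_altitude(40.0, 0.0, 0.0), 1))    # 50.0
# The celestial pole itself (delta = 90) always sits at an altitude
# equal to the observer's latitude, whatever the hour angle:
print(round(star_altitude(40.0, 90.0, 123.0), 1))  # 40.0
```

Twelve hours later (H = 180°) the same equatorial star is 50° below the horizon, which is why the visibility of the diurnal circles depends on the observer’s latitude.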