Orbits

Planetary Configurations:

The planets outside of the Earth’s orbit (Mars, Jupiter, Saturn, Uranus, Neptune) are called superior planets.

Likewise, the planets inside of the Earth’s orbit (Mercury, Venus) are called inferior planets.

Other configurations are:

  • Object at greatest western elongation = “morning star”
  • Object at greatest eastern elongation = “evening star”
  • only inferior planets show the full range of phases (from crescent to full)
  • transit = passage of an inferior planet across the Sun

Galileo’s laws of Motion:

Aside from his numerous inventions, Galileo also laid down the first accurate laws of motion for masses. Galileo realized that all bodies accelerate at the same rate regardless of their size or mass. Everyday experience tells you differently because a feather falls slower than a cannonball. Galileo’s genius lay in spotting that the differences that occur in the everyday world are an incidental complication (in this case, air friction) and are irrelevant to the real underlying properties (that is, gravity). He was able to abstract from the complexity of real-life situations the simplicity of an idealized law of gravity.

Key among his investigations are:

  • developed the concept of motion in terms of velocity (speed and direction) through the use of inclined planes.
  • developed the idea of force, as a cause for motion.
  • determined that the natural state of an object is rest or uniform motion, i.e. objects always have a velocity, sometimes that velocity has a magnitude of zero = rest.
  • objects resist change in motion, which is called inertia.

Galileo also showed that objects fall with the same speed regardless of their mass. The fact that a feather falls more slowly than a steel ball is due to the amount of air resistance that a feather experiences (a lot) versus the steel ball (very little).

Hammer and Feather on Moon


Kepler’s laws of Planetary Motion:

Kepler developed, using Tycho Brahe’s observations, the first kinematic description of orbits; Newton would later develop a dynamic description that involves the underlying influence (gravity).

  • 1st law (law of elliptic orbits): Each planet moves in an elliptical orbit with the Sun at one focus.

      Ellipses that are highly flattened have high eccentricity. Ellipses that are close to a circle have low eccentricity.
  • 2nd law (law of equal areas): a line connecting the Sun and a planet (called the radius vector) sweeps out equal areas in equal times. Objects travel fastest at the low point of their orbit, and travel slowest at the high point of their orbit.
  • 3rd law (law of harmonics): The square of a planet’s orbital period is proportional to the cube of its mean distance from the Sun. The mathematical way to describe Kepler’s 3rd law is

        P² ∝ R³

    where the ∝ symbol means `proportional to’. Proportions are expressions that imply there exists some constant, k, that relates the period, P, and the radius, R, such that

        P² = kR³

    We can determine k by expressing the formula in units of the Earth and its orbit around the Sun, such that

        (1 yr)² = k (1 A.U.)³

    so k is equal to one, as long as we use units of years and A.U.’s (the Astronomical Unit, i.e. the distance of the Earth from the Sun). With k = 1, Kepler’s 3rd law becomes

        P² = R³

    The 3rd law is used to develop a “yardstick” for the Solar System, expressing the distance to all the planets relative to Earth’s orbit by just knowing their period (timing how long it takes for them to go around the Sun); a short numerical check is sketched below.
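As a sketch of how the 3rd law acts as a yardstick, the snippet below (Python, assuming periods are given in years so that k = 1) recovers each planet’s mean distance as R = P^(2/3). The planet periods are approximate values used only for illustration.

    # Kepler's 3rd law with k = 1 (periods in years, distances in A.U.):
    # given a planet's orbital period, recover its mean distance from the Sun.

    def mean_distance_au(period_years):
        """Return the semi-major axis in A.U. from the period in years (P^2 = R^3)."""
        return period_years ** (2.0 / 3.0)

    # Approximate orbital periods in years.
    for name, P in [("Mercury", 0.241), ("Earth", 1.0), ("Mars", 1.881), ("Jupiter", 11.86)]:
        print(f"{name:8s} P = {P:6.3f} yr  ->  R = {mean_distance_au(P):5.2f} A.U.")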


Orbits:

Many years after Kepler, it was shown that orbits actually come in many flavors: ellipses, circles, parabolas and hyperbolas, a family of curves called conic sections. There are five basic types of simple orbits: radial, ballistic, stable, polar and geosynchronous.

For an escape orbit, the velocity is sufficient to escape the gravitational pull of the planet, i.e. the major axis is infinite, as with the Voyager spacecraft.

The direction a body travels in orbit can be direct, or prograde, in which the spacecraft moves in the same direction as the planet rotates, or retrograde, going in a direction opposite the planet’s rotation.

The semi-major axis of an orbit is determined by the kinetic energy acquired by the rocket at burnout, which is equivalent to specifying the burnout velocity. For low burnout velocities (below 25,000 ft/sec) the orbit is ballistic, meaning the payload falls back to the surface of the Earth. Burnout velocities above 25,000 ft/sec achieve stable orbit. At 35,000 ft/sec, the orbit reaches the distance of the Moon.

The burnout velocity also determines the orbit type: an ellipse, a parabola or a hyperbola. A rough check of these velocity thresholds is sketched below.
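The burnout-velocity figures quoted above can be roughly checked with the two standard formulas v_circ = sqrt(GM/r) and v_esc = sqrt(2GM/r). The sketch below assumes a burnout altitude of about 200 km; the exact numbers shift with altitude.

    import math

    # Rough check of the burnout-velocity thresholds quoted above,
    # assuming burnout at ~200 km altitude.
    GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6        # mean Earth radius, m
    M_PER_FT = 0.3048

    r = R_EARTH + 200e3                    # orbital radius at 200 km altitude
    v_circ = math.sqrt(GM / r)             # speed for a stable circular orbit
    v_esc = math.sqrt(2 * GM / r)          # speed for a parabolic escape orbit

    print(f"circular orbit speed ~ {v_circ / M_PER_FT:,.0f} ft/s")   # ~25,600 ft/s
    print(f"escape speed         ~ {v_esc / M_PER_FT:,.0f} ft/s")    # ~36,100 ft/s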

Satellites use a wide variety of orbits to fulfill their missions. The orbit chosen for a satellite is a compromise between the mission requirements, the capabilities of the rocket used to launch the satellite and orbital mechanics.

  • The orbital period. This increases with the mean altitude of the orbit, so a satellite in a low earth orbit moves faster than a satellite in a geostationary orbit (see the short sketch after this list). Also the velocity of a satellite in an eccentric orbit varies along the orbit, being fastest at perigee & slowest at apogee (Kepler’s second law of equal areas).
  • Inclination. The angle between the plane of the satellite’s orbit and the plane of the equator.
  • Eccentricity: A perfectly circular orbit has an eccentricity of zero, an elliptical orbit an eccentricity between 0 and 1, a parabolic orbit an eccentricity of exactly 1 and a hyperbolic orbit an eccentricity greater than 1. The low point of an orbit is known as perigee, whilst the high point is apogee. The major axis is the line connecting the perigee to the apogee.
  • The ascending node is where the orbit crosses the equator in a northbound direction (i.e. the direction of the satellite’s motion). Likewise, the descending node is where the orbit crosses the equator in a southbound direction.
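To make the first point concrete, the sketch below computes the circular-orbit period T = 2π·sqrt(a³/GM) for a low orbit and for geostationary altitude; the constants are standard values and the altitudes are just representative examples.

    import math

    # Period of a circular orbit from its altitude: higher orbits have longer periods.
    GM = 3.986004418e14      # m^3/s^2
    R_EARTH = 6.378e6        # equatorial radius, m

    def period_minutes(altitude_km):
        a = R_EARTH + altitude_km * 1e3          # radius of a circular orbit
        return 2 * math.pi * math.sqrt(a**3 / GM) / 60.0

    print(f"700 km (low earth orbit)  : {period_minutes(700):7.1f} min")    # ~99 min
    print(f"35,786 km (geostationary) : {period_minutes(35786):7.1f} min")  # ~1436 min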

Low Earth Orbit:

Weather and spy satellites use over-the-pole orbits so that the Earth turns under them once per day, giving total coverage of the Earth’s surface.

Landsat 7 is an earth resources spacecraft which images the earth’s surface in visible and infrared light. Therefore this satellite orbit is optimised for earth observation. For this reason a near polar orbit of 700 km altitude, 98.8° inclination and 98 minute period is used, which ensures that the satellite can (at least in theory) observe the entire globe. Several other features of this orbit make it especially useful for remote sensing satellites.

  • Circle of visibility = yellow circle around satellite indicating the region of the earth visible from the satellite.
  • Part of orbit in sunlight = yellow.
  • Part of orbit in shadow = red.
  • Dayside of earth = light blue.
  • Nightside of earth = dark blue, after the terminator three lines indicate the boundaries of civil, nautical & astronomical twilight.

General view of Landsat 7 orbit.

Left: View perpendicular to plane of orbit
Right: View of orbit from ascending node

In theory an orbit should remain fixed in space whilst the earth rotates beneath the satellite. In reality the earth is slightly bulged and the effect of this bulge is to shift the point of perigee and the ascending node for any orbit which has an inclination other than 90°. This effect is known as nodal regression, the result of which is that the plane of the orbit rotates or precesses.

Ground tracks. Red dots along the ground track show the position of the satellite at regular intervals. Closely spaced dots indicate slow speed, widely spaced dots high speed.

However, this effect is used to advantage here to shift the orbit at exactly the same rate as the daily change in position of the sun over any point of the earth. So the satellite always passes over the earth on the sunlit part of its orbit at the same local time of day (for example at 9 am local time). This ensures that lighting conditions are similar (ignoring seasonal differences) for images taken of the same spot on the earth at different times. Additionally the orbit is resonant with the rotation period of the earth, meaning that the satellite passes over the same point on the earth at the same time of day at regular intervals (which may be daily or every 2 or more days depending on the resonance). In the case of Landsat there are 14.5 orbits per day or 29 orbits every 2 days.
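The precession rate itself can be estimated from the standard first-order (J2) formula for nodal regression, dΩ/dt = -1.5·n·J2·(Re/a)²·cos(i). The sketch below, with an assumed 700 km altitude and roughly 98° inclination, gives a drift close to the +0.9856° per day needed to track the Sun.

    import math

    # Nodal regression from the Earth's equatorial bulge (first-order J2 term).
    GM = 3.986004418e14          # m^3/s^2
    R_E = 6.378137e6             # equatorial radius, m
    J2 = 1.08263e-3              # Earth's oblateness coefficient

    def nodal_regression_deg_per_day(altitude_km, inclination_deg):
        a = R_E + altitude_km * 1e3
        n = math.sqrt(GM / a**3)                 # mean motion, rad/s
        rate = -1.5 * n * J2 * (R_E / a)**2 * math.cos(math.radians(inclination_deg))
        return math.degrees(rate) * 86400.0      # degrees per day

    # A Landsat-like orbit precesses eastward at roughly the sun-synchronous rate.
    print(nodal_regression_deg_per_day(700, 98.2))   # ~ +0.99 deg/day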


Geosynchronous Orbits (GEO):

Communication satellites use geosynchronous orbits for continuous coverage of one region of the globe, i.e. the orbital period is exactly one day. This turns out to be approximately 22,300 miles (35,800 km) up.

A geosynchronous orbit is an orbit which has an orbital period close to that of the earth’s rotation. A geostationary orbit is a special case of the geosynchronous orbit where the inclination = 0 and the period is equal to the rotation period of the earth (approx 1436 minutes), corresponding to a circular orbit of approx. 35,700 km altitude. A satellite in this orbit appears essentially stationary in the sky, which is why this orbit is used extensively for telecommunications & weather satellites. In reality lunar & solar gravitational influences perturb the satellite’s orbit, so that through the day the satellite’s position shifts slightly.
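The quoted geostationary altitude follows directly from Kepler’s 3rd law: set the period equal to one sidereal day (~1436 minutes) and solve T = 2π·sqrt(a³/GM) for a. A minimal check:

    import math

    # Radius of the orbit whose period is one sidereal day.
    GM = 3.986004418e14              # m^3/s^2
    R_EARTH = 6.378e6                # m
    T_SIDEREAL = 86164.0             # one sidereal day, seconds (~1436 min)

    a = (GM * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
    print(f"orbital radius : {a / 1e3:,.0f} km")             # ~42,200 km
    print(f"altitude       : {(a - R_EARTH) / 1e3:,.0f} km") # ~35,800 km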

Below is shown the orbit of the TDRS-7 satellite, one of a series of NASA satellites which used to provide a near continuous communications link with the Space Shuttle, International Space Station & other spacecraft such as the Hubble Space Telescope.

General view of TDRS-7 orbit

View of orbit from ascending node

Compared with the LEO orbit of Mir, a much larger portion of the earth’s surface is visible from the TDRS-7 spacecraft. The zone of visibility of the spacecraft has been highlighted by a cone. Approximately 40% of the earth’s surface can be viewed at any one time from geostationary altitude. Additionally, the spacecraft orbit is in sunlight apart from a small zone which passes into the earth’s shadow. Actually, geostationary satellites only experience eclipses at two periods of the year – for a few weeks at a time at the spring and autumn equinoxes. The reason for this is simple. The earth’s rotation axis is inclined with respect to the ecliptic, hence the earth’s shadow cone misses the plane of a zero inclination geostationary orbit apart from the times when the sun’s declination is close to zero. This occurs twice a year, once at the spring equinox and once at the autumn equinox.

Ground tracks. Red dots along the ground track show the position of the satellite at regular intervals. Closely spaced dots indicate slow speed, widely spaced dots high speed.

As can be seen from this graphic a perfectly geostationary satellite stays over the same spot on the equator all day. However, if we were to look closely we would see that the satellite does appear to change position, generally describing a small figure of 8 or an arc due to the effect of lunar / solar perturbations dragging the satellite into a slightly elliptical, slightly inclined orbit. There are many non-operational satellites in “graveyard” orbits slightly above or below a true geostationary orbit. Since the orbital period is slightly more or less than the earth’s rotation period these satellites appear to drift slowly around the earth.


Anti-de Sitter/Conformal Field Theory

The AdS/CFT correspondence is one of the largest areas of research in string theory. AdS/CFT stands for Anti-de Sitter/Conformal Field Theory, an expression that’s not particularly elucidating.

AdS/CFT is a particular, and deeply surprising, example of a duality. It relates two very different theories and at first sight seems obviously wrong. It states that there is a duality between theories of gravity in five dimensions and quantum field theories (QFTs) in four dimensions. This correspondence was first formulated by Juan Maldacena in 1997, and is generally thought to be the single most important result in string theory in the last twenty years.

The original example of AdS/CFT linked two very special theories. The gravitational side involved a particular extension of gravity (type IIB supergravity) on a particular geometry (5-dimensional Anti-de-Sitter space). The QFT was the unique theory with the largest possible amount of supersymmetry. There’s a specific dictionary that translates between the theories.

This relationship has no formal mathematical proof. However a very large number of checks have been performed. These checks involve two calculations, using different techniques and methods, of quantities related by the dictionary. Continual agreement of these calculations constitutes strong evidence for the correspondence.

The first example has by now been extended to many other cases, and AdS/CFT is more generally referred to as the gauge-gravity correspondence. Formally this is the statement that gravitational theories in (N+1) dimensions can be entirely and completely equivalent to non-gravitational quantum field theories in N dimensions.

The AdS/CFT correspondence has a very useful property. When the gravitational theory is hard to solve, the QFT is easy to solve, and vice-versa! This opens the door to previously intractable problems in QFT through simple calculations in gravity theories.

Moreover AdS/CFT allows a conceptual reworking of the classic problems of general relativity. Indeed if general relativity can be equivalent to a QFT, then neither one is deeper than the other. Finally physicists can use it to develop new intuitions for both QFT and general relativity.

Nuclear Fission/Fusion And Anti-Matter

Fission/Fusion:

  • since quantum events do not have a “cause”, this also means that all possible quantum events must and will happen
  • without cause and effect, conservation laws can be violated, although only on very short timescales (things have to add up in the end)
  • violation of mass/energy conservation allowed for the understanding of the source of nuclear power in the Universe, fission and fusion
One of the surprising results of quantum physics is that if a physical event is not specifically forbidden by a quantum rule, then it can and will happen. While this may seem strange, it is a direct result of the uncertainty principle. Things that are strict laws in the macroscopic world, such as the conservation of mass and energy, can be broken in the quantum world with the caveat that they can only be broken for very small intervals of time (as allowed by the uncertainty principle). The violation of conservation laws led to one of the greatest breakthroughs of the early 20th century, the understanding of radioactive decay (fission) and the source of the power in stars (fusion).

Nuclear fission is the breakdown of large atomic nuclei into smaller elements. This can happen spontaneously (radioactive decay) or be induced by the collision with a free neutron. Spontaneous fission is due to the fact that the wave function of a large nucleus is ‘fuzzier’ than the wave function of a small particle like the alpha particle. The uncertainty principle states that, sometimes, an alpha particle (2 protons and 2 neutrons) can tunnel outside the nucleus and escape.

  • fission is the splitting of atomic nuclei, either spontaneously or by collision (induced)
  • fusion is the merger of atomic particles to form new particles
Induced fission occurs when a free neutron strikes a nucleus and deforms it. Under classical physics, the nucleus would just reform. However, under quantum physics there is a finite probability that the deformed nucleus will tunnel into two new nuclei and release some neutrons in the process, to produce a chain reaction.

Fusion is the production of heavier elements by the fusing of lighter elements. The process requires high temperatures in order to produce sufficiently high velocities for the two light elements to overcome each other’s electrostatic barriers.

  • quantum tunneling and uncertainty are required for these processes
  • and quantum physics, even though centered on probabilities, is our most accurate science in its predictions
Even at the high temperatures in the center of a star, fusion requires the quantum tunneling of a neutron or proton to overcome the repulsive electrostatic forces of the atomic nucleus. Notice that both fission and fusion release energy by converting some of the nuclear mass into gamma-rays; this is the famous formulation by Einstein that E = mc².

Although it deals with probabilities and uncertainties, quantum mechanics has been spectacularly successful in explaining otherwise inaccessible atomic phenomena and in meeting every experimental test. Its predictions are the most precise and the best checked of any in physics; some of them have been tested and found accurate to better than one part per billion.
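The scale of the energy release follows from E = mc². The sketch below uses approximate mass-conversion fractions (about 0.7% for hydrogen fusion and roughly 0.1% for uranium fission) purely to illustrate orders of magnitude.

    # Order-of-magnitude illustration of E = m c^2 for fusion and fission.
    C = 2.998e8                      # speed of light, m/s

    def energy_joules(mass_defect_kg):
        return mass_defect_kg * C**2

    # Fusing 1 kg of hydrogen into helium converts roughly 0.7% of the mass.
    print(f"fusion of 1 kg H  : ~{energy_joules(0.007):.1e} J")    # ~6e14 J
    # Fission of 1 kg of uranium-235 converts roughly 0.1% of the mass.
    print(f"fission of 1 kg U : ~{energy_joules(0.001):.1e} J")    # ~9e13 J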

Antimatter:

  • symmetry in quantum physics led to the prediction of opposite matter, or antimatter
  • matter and antimatter can combine to form pure energy, and the opposite is true, energy can combine to form matter/antimatter pairs
A combination of quantum mechanics and relativity allows us to examine subatomic processes in a new light. Symmetry is very important to physical theories. Thus, the existence of a type of `opposite’ matter was hypothesized soon after the development of quantum physics. `Opposite’ matter is called antimatter. Particles of antimatter have the same mass and characteristics as regular matter, but are opposite in charge. When matter and antimatter come in contact they are both instantaneously converted into pure energy, in the form of photons.

Antimatter is produced all the time by the collision of high energy photons, a process called pair production, where an electron and its antimatter twin (the positron) are created from energy (E = mc²). A typical spacetime diagram of pair production looks like the following:

  • spacetime diagrams provide a backwards time interpretation for antimatter, symmetry in space and time
Positrons only survive for a short time since they are attracted to other electrons and annihilate. Since quantum mechanics states that energy, time and space can be violated over short intervals, another way of looking at pair production is to state that the positron does not exist, but rather is an electron traveling backwards in time. Since it is going backwards in time, its charge would be reversed and its spacetime diagram would look like the following:

  • the quantum world leads to new ways of looking at existence and reality
In this interpretation, the collision of an electron and two photons causes the electron to go backward in time till it meets another pair of photons, then reverses itself again. The world of quantum physics allows for many such strange views of subatomic interactions.

Superposition and Schrodinger’s Equation+Cat

Quantum Mechanics:

  • quantum mechanics is to the microscopic world what classical mechanics and calculus are to the macroscopic world
  • it is the operational process of calculating quantum physics phenomena
  • its primary task is to bring order and prediction to the uncertainty of the quantum world, its main tool is Schrodinger’s equation
The field of quantum mechanics concerns the description of phenomena on small scales where classical physics breaks down. The biggest difference between the classical and microscopic realms is that the quantum world cannot be perceived directly, but rather through the use of instruments. And a key assumption of quantum physics is that quantum mechanical principles must reduce to Newtonian principles at the macroscopic level (there is a continuity between quantum and Newtonian mechanics).

Quantum mechanics was capable of bringing order to the uncertainty of the microscopic world by treatment of the wave function with new mathematics. Key to this idea was the fact that relative probabilities of different possible states are still determined by laws. Thus, there is a difference between the role of chance in quantum mechanics and the unrestricted chaos of a lawless Universe.

Every quantum particle is characterized by a wave function. In 1925 Erwin Schrodinger developed the differential equation which describes the evolution of those wave functions. By using Schrodinger’s equation, scientists can find the wave function which solves a particular problem in quantum mechanics. Unfortunately, it is usually impossible to find an exact solution to the equation, so certain assumptions are used in order to obtain an approximate answer for the particular problem.
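As a flavor of how such approximate solutions are obtained in practice, the sketch below solves the time-independent Schrodinger equation numerically for the textbook "particle in a box" by discretizing the second derivative on a grid and diagonalizing the resulting matrix. Units with hbar = m = 1 and a box of length 1 are an arbitrary choice made only to keep the example short.

    import numpy as np

    # Time-independent Schrodinger equation for an infinite square well,
    # -(1/2) * psi'' = E * psi  (hbar = m = 1), solved by finite differences.
    N = 500                              # number of interior grid points
    L = 1.0
    dx = L / (N + 1)

    # Second-derivative operator with psi = 0 at the walls.
    main = np.full(N, -2.0)
    off = np.ones(N - 1)
    laplacian = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2

    H = -0.5 * laplacian                 # Hamiltonian matrix
    energies = np.linalg.eigvalsh(H)     # eigenvalues = allowed energies

    # Compare the lowest levels with the exact result E_n = n^2 * pi^2 / 2.
    for n in range(1, 4):
        exact = n**2 * np.pi**2 / 2.0
        print(f"n={n}: numerical {energies[n - 1]:8.3f}   exact {exact:8.3f}")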

  • the key difference between quantum and classical mechanics is the role of probability and chance
  • quantum objects are described by probability fields, however, this does not mean they are indeterminate, only uncertain
The difference between quantum mechanics and Newtonian mechanics is the role of probability and statistics. While the uncertainty principle means that quantum objects have to be described by probability fields, this doesn’t mean that the microscopic world fails to conform to deterministic laws; in fact it does conform. And measurement is an act by which the measurer and the measured interact to produce a result, although this is not simply the determination of a preexisting property.

The quantum description of reality is objective (weak form) in the sense that everyone armed with a quantum physics education can do the same experiments and come to the same conclusions. Strong objectivity, as in classical physics, requires that the picture of the world yielded by the sum total of all experimental results be not just a picture or model, but identical with the objective world, something that exists outside of us and prior to any measurement we might have of it. Quantum physics does not have this characteristic due to its built-in indeterminacy.

For centuries, scientists have gotten used to the idea that something like strong objectivity is the foundation of knowledge. So much so that we have come to believe that it is an essential part of the scientific method and that without this most solid kind of objectivity science would be pointless and arbitrary. However, the Copenhagen interpretation of quantum physics (see below) denies that there is any such thing as a true and unambiguous reality at the bottom of everything. Reality is what you measure it to be, and no more. No matter how uncomfortable science is with this viewpoint, quantum physics is extremely accurate and is the foundation of modern physics (perhaps then an objective view of reality is not essential to the conduct of physics). And concepts, such as cause and effect, survive only as a consequence of the collective behavior of large quantum systems.


Schrodinger’s Cat and Quantum Reality:

  • an example of the weirdness of the quantum world is given by the famous Schrodinger cat paradox
In 1935 Schrodinger, who was responsible for formulating much of the wave mechanics in quantum physics, published an essay describing the conceptual problems in quantum mechanics. A brief paragraph in this essay described the, now famous, cat paradox.
  • the paradox is phrased such that a quantum event determines if a cat is killed or not
  • from a quantum perspective, the whole system state is tied to the wave function of the quantum event, i.e. the cat is both dead and alive at the same time
One can even set up quite ridiculous cases where quantum physics rebels against common sense. For example, consider a cat penned up in a steel chamber, along with the following diabolical device (which must be secured against direct interference by the cat). In the device is a Geiger counter with a tiny bit of radioactive substance, so small that perhaps in the course of one hour only one of the atoms decays, but also, with equal probability, perhaps none. If the decay happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The first atomic decay would have poisoned it. The wave function for the entire system would express this by having in it the living and the dead cat mixed or smeared out in equal parts.

  • the paradox in some sense is not a paradox, but instead points out the tension between the microscopic and macroscopic worlds and the importance of the observer in a quantum scenario
  • quantum objects exist in superposition, many states, as shown by interference
  • the observer collapses the wave function
It is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation. That prevents us from so naively accepting as valid a “blurred model” for representing reality. In itself it would not embody anything unclear or contradictory. There is a difference between a shaky or out-of-focus photograph and a snapshot of clouds and fog banks. We know that superpositions of possible outcomes must exist simultaneously at a microscopic level because we can observe interference effects from them. We know (at least most of us know) that the cat in the box is dead, alive or dying and not in a smeared out state between the alternatives. When and how does the model of many microscopic possibilities resolve itself into a particular macroscopic state? When and how does the fog bank of microscopic possibilities transform itself into the blurred picture we have of a definite macroscopic state? That is the collapse of the wave function problem, and Schrodinger’s cat is a simple and elegant illustration of that problem.

Macroscopic/Microscopic World Interface:

  • events in the microscopic world can happen *without* cause = indeterminacy
  • phenomena such as tunneling show that quantum physics leaks into the macroscopic world
The macroscopic world is Newtonian and deterministic for local events (note however that even the macroscopic world suffers from chaos). On the other hand, in the microscopic quantum world radical indeterminacy limits any certainty surrounding the unfolding of physical events. Many things in the Newtonian world are unpredictable since we can never obtain all the factors affecting a physical system. But quantum theory is much more unsettling in that events often happen without cause (e.g. radioactive decay).

Note that the indeterminacy of the microscopic world has little effect on macroscopic objects. This is due to the fact that the wave function for large objects is extremely small compared to the size of the macroscopic world. Your personal wave function is much smaller than any currently measurable sizes. And the indeterminacy of the quantum world is not complete because it is possible to assign probabilities to the wave function.

But, as the Schrodinger’s Cat paradox shows us, the probability rules of the microscopic world can leak into the macroscopic world. The paradox of Schrodinger’s cat has provoked a great deal of debate among theoretical physicists and philosophers. Although some thinkers have argued that the cat actually does exist in two superposed states, most contend that superposition only occurs when a quantum system is isolated from the rest of its environment. Various explanations have been advanced to account for this paradox–including the idea that the cat, or simply the animal’s physical environment (such as the photons in the box), can act as an observer.

The question is, at what point, or scale, do the probabilistic rules of the quantum realm give way to the deterministic laws that govern the macroscopic world? This question has been brought into vivid relief by the recent work where an NIST group confined a charged beryllium atom in a tiny electromagnetic cage and then cooled it with a laser to its lowest energy state. In this state the position of the atom and its “spin” (a quantum property that is only metaphorically analogous to spin in the ordinary sense) could be ascertained to within a very high degree of accuracy, limited by Heisenberg’s uncertainty principle.

  • decoherence prevents a macroscopic Schrodinger cat paradox
  • new technology allows the manipulation of objects at the quantum level
  • future research will investigate areas such as quantum teleportation and quantum computing
The workers then stimulated the atom with a laser just enough to change its wave function; according to the new wave function of the atom, it now had a 50 percent probability of being in a “spin-up” state in its initial position and an equal probability of being in a “spin-down” state in a position as much as 80 nanometers away, a vast distance indeed for the atomic realm. In effect, the atom was in two different places, as well as two different spin states, at the same time–an atomic analog of a cat both living and dead.

The clinching evidence that the NIST researchers had achieved their goal came from their observation of an interference pattern; that phenomenon is a telltale sign that a single beryllium atom produced two distinct wave functions that interfered with each other.

The modern view of quantum mechanics states that Schrodinger’s cat, or any macroscopic object, does not exist as a superposition of states due to decoherence. A pristine wave function is coherent, i.e. undisturbed by observation. But Schrodinger’s cat is not a pristine wave function; it is constantly interacting with other objects, such as air molecules in the box, or the box itself. Thus a macroscopic object becomes decoherent through many atomic interactions with its surrounding environment.

Decoherence explains why we do not routinely see quantum superpositions in the world around us. It is not because quantum mechanics intrinsically stops working for objects larger than some magic size. Instead, macroscopic objects such as cats and cards are almost impossible to keep isolated to the extent needed to prevent decoherence. Microscopic objects, in contrast, are more easily isolated from their surroundings so that they retain their quantum secrets and quantum behavior.

 

Uncertainty Principle

  • the uncertainty principle states that the position and velocity of an object cannot both be measured exactly at the same time (the same holds for other pairs, such as energy and time)
  • uncertainty principle derives from the measurement problem, the intimate connection between the wave and particle nature of quantum objects
  • the velocity of a particle becomes more ill-defined as the wave function is confined to a smaller region
Classical physics was on loose footing with problems of wave/particle duality, but was caught completely off-guard by the discovery of the uncertainty principle. The uncertainty principle, also called the Heisenberg Uncertainty Principle or Indeterminacy Principle, articulated in 1927 by the German physicist Werner Heisenberg, states that the position and the velocity of an object cannot both be measured exactly at the same time, even in theory. The very concepts of exact position and exact velocity together, in fact, have no meaning in nature.

Ordinary experience provides no clue of this principle. It is easy to measure both the position and the velocity of, say, an automobile, because the uncertainties implied by this principle for ordinary objects are too small to be observed. The complete rule stipulates that the product of the uncertainties in position and velocity is equal to or greater than a tiny physical quantity, or constant (about 10⁻³⁴ joule-second, the value of Planck’s constant h). Only for the exceedingly small masses of atoms and subatomic particles does the product of the uncertainties become significant.

Any attempt to measure precisely the velocity of a subatomic particle, such as an electron, will knock it about in an unpredictable way, so that a simultaneous measurement of its position has no validity. This result has nothing to do with inadequacies in the measuring instruments, the technique, or the observer; it arises out of the intimate connection in nature between particles and waves in the realm of subatomic dimensions.

Every particle has a wave associated with it; each particle actually exhibits wavelike behavior. The particle is most likely to be found in those places where the undulations of the wave are greatest, or most intense. The more intense the undulations of the associated wave become, however, the more ill defined becomes the wavelength, which in turn determines the momentum of the particle. So a strictly localized wave has an indeterminate wavelength; its associated particle, while having a definite position, has no certain velocity. A particle wave having a well-defined wavelength, on the other hand, is spread out; the associated particle, while having a rather precise velocity, may be almost anywhere. A quite accurate measurement of one observable involves a relatively large uncertainty in the measurement of the other.

The uncertainty principle is alternatively expressed in terms of a particle’s momentum and position. The momentum of a particle is equal to the product of its mass times its velocity. Thus, the product of the uncertainties in the momentum and the position of a particle equals h/(2π) or more. The principle applies to other related (conjugate) pairs of observables, such as energy and time: the product of the uncertainty in an energy measurement and the uncertainty in the time interval during which the measurement is made also equals h/(2π) or more. The same relation holds, for an unstable atom or nucleus, between the uncertainty in the quantity of energy radiated and the uncertainty in the lifetime of the unstable system as it makes a transition to a more stable state.

  • the wave nature of particles means a particle is a wave packet, the composite of many waves
  • many waves = many momentums, observation makes one momentum out of many
  • exact knowledge of complementarity pairs (position, energy, time) is impossible
The uncertainty principle, developed by W. Heisenberg, is a statement of the effects of wave-particle duality on the properties of subatomic objects. Consider the concept of momentum in the wave-like microscopic world. The momentum of a wave is given by its wavelength. A wave packet like a photon or electron is a composite of many waves. Therefore, it must be made of many momentums. But how can an object have many momentums?

Of course, once a measurement of the particle is made, a single momentum is observed. But, like fuzzy position, momentum before the observation is intrinsically uncertain. This is what is known as the uncertainty principle, that certain quantities, such as position, energy and time, are unknown, except by probabilities. In its purest form, the uncertainty principle states that accurate knowledge of complementarity pairs is impossible. For example, you can measure the location of an electron, but not its momentum (energy) at the same time.

  • complementarity also means that different experiments yield different results (e.g. the two slit experiment)
  • therefore, a single reality can not be applied at the quantum level
A characteristic feature of quantum physics is the principle of complementarity, which “implies the impossibility of any sharp separation between the behavior of atomic objects and the interaction with the measuring instruments which serve to define the conditions under which the phenomena appear.” As a result, “evidence obtained under different experimental conditions cannot be comprehended within a single picture, but must be regarded as complementary in the sense that only the totality of the phenomena exhausts the possible information about the objects.” This interpretation of the meaning of quantum physics, which implied an altered view of the meaning of physical explanation, gradually came to be accepted by the majority of physicists during the 1930’s.

Mathematically we describe the uncertainty principle as the following, where `x’ is position and `p’ is momentum:

    Δx Δp ≥ h/(2π)

  • the mathematical form of the uncertainty principle relates complementarity to Planck’s constant
  • knowledge is not unlimited, built-in indeterminacy exists, but only in the microscopic world, all collapses to determinism in the macroscopic world
This is perhaps the most famous equation next to E = mc² in physics. It basically says that the combination of the error in position times the error in momentum must always be greater than Planck’s constant. So, you can measure the position of an electron to some accuracy, but then its momentum will be inside a very large range of values. Likewise, you can measure the momentum precisely, but then its position is unknown.

Notice that this is not just the measurement problem in another form; the combination of position, energy (momentum) and time are actually undefined for a quantum particle until a measurement is made (then the wave function collapses).

Also notice that the uncertainty principle is unimportant to macroscopic objects since Planck’s constant, h, is so small (10⁻³⁴). For example, the uncertainty in position of a thrown baseball is 10⁻³⁰ millimeters.
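A back-of-the-envelope version of that comparison uses the modern form Δx·Δp ≥ ħ/2. The velocity uncertainty below (1 mm/s) is an arbitrary assumption chosen only to show the contrast in scales.

    # Minimum position uncertainty from dx * dp >= hbar / 2.
    HBAR = 1.055e-34          # J s

    def min_position_uncertainty_m(mass_kg, velocity_uncertainty_m_s):
        return HBAR / (2.0 * mass_kg * velocity_uncertainty_m_s)

    dv = 1e-3                                        # assumed 1 mm/s spread in speed
    print(min_position_uncertainty_m(0.145, dv))     # baseball: ~4e-31 m, utterly negligible
    print(min_position_uncertainty_m(9.11e-31, dv))  # electron: ~0.06 m, enormous on atomic scales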

The depth of the uncertainty principle is realized when we ask the question; is our knowledge of reality unlimited? The answer is no, because the uncertainty principle states that there is a built-in uncertainty, indeterminacy, unpredictability to Nature.


   It is often stated that of all the theories proposed in this 
   century, the silliest is quantum theory.  Some say that the only 
   thing that quantum theory has going for it, in fact, is that it 
   is unquestionably correct. 

  - R. Feynman

 

Atom and Wave Particle Duality

Bohr Atom:

  • classical physics fails to describe the properties of atoms, Planck’s constant served to bridge the gap between the classical world and the new physics
  • Bohr proposed a quantized shell model for the atom using the same basic structure as Rutherford, but restricting the behavior of electrons to quantized orbits
Perhaps the foremost scientist of the 20th century was Niels Bohr, the first to apply Planck’s quantum idea to problems in atomic physics. In the early 1900’s, Bohr proposed a quantum mechanical description of the atom to replace the early model of Rutherford.

In 1913 Bohr proposed his quantized shell model of the atom to explain how electrons can have stable orbits around the nucleus. The motion of the electrons in the Rutherford model was unstable because, according to classical mechanics and electromagnetic theory, any charged particle moving on a curved path emits electromagnetic radiation; thus, the electrons would lose energy and spiral into the nucleus. To remedy the stability problem, Bohr modified the Rutherford model by requiring that the electrons move in orbits of fixed size and energy. The energy of an electron depends on the size of the orbit and is lower for smaller orbits. Radiation can occur only when the electron jumps from one orbit to another. The atom will be completely stable in the state with the smallest orbit, since there is no orbit of lower energy into which the electron can jump.

Bohr’s starting point was to realize that classical mechanics by itself could never explain the atom’s stability. A stable atom has a certain size so that any equation describing it must contain some fundamental constant or combination of constants with a dimension of length. The classical fundamental constants–namely, the charges and the masses of the electron and the nucleus–cannot be combined to make a length. Bohr noticed, however, that the quantum constant formulated by the German physicist Max Planck has dimensions which, when combined with the mass and charge of the electron, produce a measure of length. Numerically, the measure is close to the known size of atoms. This encouraged Bohr to use Planck’s constant in searching for a theory of the atom.

  • Bohr’s calculations produce an accurate map of the hydrogen atom’s energy levels
  • changes in electron orbits require the release or gain of energy in the form of photons
  • Bohr’s atom perfectly explains the spectra in stars as gaps due to the absorption of photons of particular wavelengths that match the electron orbits of the various elements
  • larger formulations explain all the properties outlined by Kirchhoff’s laws
Planck had introduced his constant in 1900 in a formula explaining the light radiation emitted from heated bodies. According to classical theory, comparable amounts of light energy should be produced at all frequencies. This is not only contrary to observation but also implies the absurd result that the total energy radiated by a heated body should be infinite. Planck postulated that energy can only be emitted or absorbed in discrete amounts, which he called quanta (the Latin word for “how much”). The energy quantum is related to the frequency of the light by a new fundamental constant, h. When a body is heated, its radiant energy in a particular frequency range is, according to classical theory, proportional to the temperature of the body. With Planck’s hypothesis, however, the radiation can occur only in quantum amounts of energy. If the radiant energy is less than the quantum of energy, the amount of light in that frequency range will be reduced. Planck’s formula correctly describes radiation from heated bodies. Planck’s constant has the dimensions of action, which may be expressed as units of energy multiplied by time, units of momentum multiplied by length, or units of angular momentum. For example, Planck’s constant can be written as h = 6.6×10⁻³⁴ joule-seconds.

Using Planck’s constant, Bohr obtained an accurate formula for the energy levels of the hydrogen atom. He postulated that the angular momentum of the electron is quantized–i.e., it can have only discrete values. He assumed that otherwise electrons obey the laws of classical mechanics by traveling around the nucleus in circular orbits. Because of the quantization, the electron orbits have fixed sizes and energies. The orbits are labeled by an integer, the quantum number n.

With his model, Bohr explained how electrons could jump from one orbit to another only by emitting or absorbing energy in fixed quanta. For example, if an electron jumps one orbit closer to the nucleus, it must emit energy equal to the difference of the energies of the two orbits. Conversely, when the electron jumps to a larger orbit, it must absorb a quantum of light equal in energy to the difference in orbits. The Bohr model basically assigned discrete orbits for the electron, multiples of Planck’s constant, rather than allowing a continuum of energies as allowed by classical physics.

The power in the Bohr model was its ability to predict the spectra of light emitted by atoms. In particular, its ability to explain the spectral lines of atoms as the absorption and emission of photons by the electrons in quantized orbits.
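A short sketch of that predictive power: the Bohr energy levels E_n = -13.6 eV / n² give the wavelengths of the visible hydrogen (Balmer) lines when an electron drops to the n = 2 orbit. The constants are standard values.

    # Bohr model: hydrogen energy levels and Balmer-series wavelengths.
    H_PLANCK = 6.626e-34      # J s
    C = 2.998e8               # m/s
    EV = 1.602e-19            # J per eV

    def level_ev(n):
        return -13.6 / n**2

    def transition_wavelength_nm(n_upper, n_lower):
        delta_e = (level_ev(n_upper) - level_ev(n_lower)) * EV   # photon energy, J
        return H_PLANCK * C / delta_e * 1e9

    for n in (3, 4, 5):
        print(f"n={n} -> 2 : {transition_wavelength_nm(n, 2):6.1f} nm")
    # ~656 nm (red), ~486 nm (blue-green), ~434 nm (violet)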

  • Heisenberg and Schroedinger formalize Bohr’s model and produce quantum mechanics
  • quantum mechanics is an all encompassing science that crosses over into many fields
Our current understanding of atomic structure was formalized by Heisenberg and Schroedinger in the mid-1920’s, where the discreteness of the allowed energy states emerges from more general aspects, rather than being imposed as in Bohr’s model. The Heisenberg/Schroedinger quantum mechanics has consistent fundamental principles, such as the wave character of matter and the incorporation of the uncertainty principle.

In principle, all of atomic and molecular physics, including the structure of atoms and their dynamics, the periodic table of elements and their chemical behavior, as well as the spectroscopic, electrical, and other physical properties of atoms and molecules, can be accounted for by quantum mechanics => fundamental science.

de Broglie Matter Waves:

 

  • early quantum physics did not ask the question of `why’ quantum effects are found in the microscopic world
Perhaps the key question, when Bohr offered his quantized orbits as an explanation of the UV catastrophe and spectral lines, is: why does an electron follow quantized orbits? The response to this question arrived from the Ph.D. thesis of Louis de Broglie in 1923. de Broglie argued that since light can display wave and particle properties, then perhaps matter can be both a particle and a wave too.
 

  • One way of thinking of a matter wave (or a photon) is to think of a wave packet. Normal waves look like this:
 

  • having no beginning and no end. A composition of several waves of different wavelength can produce a wave packet that looks like this:
  • the wave packet interpretation requires the particle to have no set position
  • momentum of a particle is proportional to the wavelength of the particle
So a photon, or a free moving electron, can be thought of as a wave packet, having both wave-like properties and also the single position and size we associate with a particle. There are some slight problems, such as the fact that the wave packet doesn’t really stop at a finite distance from its peak; it goes on forever and ever. Does this mean an electron exists at all places in its trajectory?

de Broglie also produced a simple formula relating the wavelength of a matter particle to the momentum of the particle. So energy is also connected to the wave property of matter.
  • Lastly, the wave nature of the electron makes for an elegant explanation to quantized orbits around the atom. Consider what a wave looks like around an orbit, as shown below
  • only certain wavelengths will fit into an orbit, so quantization is due to the wavelike nature of particles
The electron matter wave is both finite and unbounded (remember the 1st lecture on math). But only certain wavelengths will `fit’ into an orbit. If the wavelength is longer or shorter, then the ends do not connect. Thus, de Broglie explains the Bohr atom in that only certain orbits can exist to match the natural wavelength of the electron. If an electron is in some sense a wave, then in order to fit into an orbit around a nucleus, the size of the orbit must correspond to a whole number of wavelengths.
 

  • wavelike nature also means that a particle’s existence is spread out, a probability field
Notice also that this means the electron does not exist at one single spot in its orbit, it has a wave nature and exists at all places in the allowed orbit. Thus, a physicist speaks of allowed orbits and allowed transitions to produce particular photons (that make up the fingerprint pattern of spectral lines). And the Bohr atom really looks like the following diagram:

  • the idea of atoms being solid billiard ball type objects fails with quantum physics
  • quantum effects fade on larger scales since macroscopic objects have high momentum values and therefore small wavelengths
While de Broglie waves were difficult to accept after centuries of thinking of particles as solid things with definite size and positions, electron waves were confirmed in the laboratory by running electron beams through slits and demonstrating that interference patterns formed.

How does the de Broglie idea fit into the macroscopic world? The length of the wave diminishes in proportion to the momentum of the object. So the greater the mass of the object involved, the shorter the waves. The wavelength of a walking person, for example, is around 10⁻³⁵ meters, far too short to be measured. This is why people don’t `tunnel’ through chairs when they sit down.
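de Broglie’s relation λ = h/(mv) makes the contrast explicit; the electron speed and the person’s mass and walking speed below are only illustrative assumptions.

    # de Broglie wavelength, lambda = h / (m v).
    H_PLANCK = 6.626e-34      # J s

    def de_broglie_wavelength_m(mass_kg, speed_m_s):
        return H_PLANCK / (mass_kg * speed_m_s)

    print(de_broglie_wavelength_m(9.11e-31, 2.2e6))  # electron in an atom: ~3e-10 m, atom-sized
    print(de_broglie_wavelength_m(70.0, 1.0))        # walking person: ~1e-35 m, unmeasurably small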

Probability Fields:

  • wave interpretation requires a statistical or probability mathematical description of the position of a particle
  • where wave represents the probability of finding the particle at a particular point
The idea that an electron is a wave around the atom, instead of a particle in orbit, raises the question of `where’ the electron is at any particular moment. The answer, by experimentation, is that the electron can be anywhere around the atom. But ‘where’ is not evenly distributed. The electron as a wave has a maximum chance of being observed where the wave has the highest amplitude. Thus, the electron has the highest probability to exist at a certain orbit.

Whereas probability is often used in physics to describe the behavior of many objects, this is the first instance of an individual object, an electron, being assigned a probability for a Newtonian characteristic such as position. Thus, an accurate description of an electron orbit is one where we have a probability field that surrounds the nucleus, as shown below:

 

  • for higher orbits the probability field becomes distorted
For more complicated orbits, and higher electron shells, the probability field becomes distorted by other electrons and their fields, like the following example:

 

  • meaning of existence has an elusive nature in the quantum world
Thus, for the first time, the concept of existence begins to take on an elusive character at the subatomic level.

The Birth Of Quantum Mechanics

  • accelerating electron produces EM radiation (light), loses energy and spirals into nucleus, i.e. atom should not work
The UV catastrophe and the dilemma of spectral lines were already serious problems for attempts to understand how light and matter interact. Planck also noticed another fatal flaw in our physics by demonstrating that the electron in orbit around the nucleus accelerates. Acceleration means a changing electric field (the electron has charge), which means photons should be emitted. But then the electron would lose energy and fall into the nucleus. Therefore, atoms shouldn’t exist!

  • Planck makes `quantum’ assumption to resolve this problem
  • a quantum is a discrete, and smallest, unit of energy
  • all forms of energy are transferred in quanta, not continuously
To resolve this problem, Planck made a wild assumption that energy, at the sub-atomic level, can only be transferred in small units, called quanta. Due to his insight, we call this unit Planck’s constant (h). The word quantum derives from quantity and refers to a small packet of action or process, the smallest unit of either that can be associated with a single event in the microscopic world.

A quantum, in physics, is a discrete natural unit, or packet, of energy, charge, angular momentum, or other physical property. Light, for example, appearing in some respects as a continuous electromagnetic wave, on the submicroscopic level is emitted and absorbed in discrete amounts, or quanta; and for light of a given wavelength, the magnitude of all the quanta emitted or absorbed is the same in both energy and momentum. These particle-like packets of light are called photons, a term also applicable to quanta of other forms of electromagnetic energy such as X rays and gamma rays.

All phenomena in submicroscopic systems (the realm of quantum mechanics) exhibit quantization: observable quantities are restricted to a natural set of discrete values. When the values are multiples of a constant least amount, that amount is referred to as a quantum of the observable. Thus Planck’s constant h is the quantum of action, and ħ (i.e., h/2π) is the quantum of angular momentum, or spin.

  • electron transition from orbit to orbit must be in discrete quantum jumps
  • experiments show that there is no `in between’ for quantum transitions = new kind of reality
  • despite strangeness, experiments confirm quantum predictions and resolves UV catastrophe
Changes of energy, such as the transition of an electron from one orbit to another around the nucleus of an atom, are done in discrete quanta. Quanta are not divisible. The term quantum leap refers to the abrupt movement from one discrete energy level to another, with no smooth transition. There is no “in between”.

The quantization, or “jumpiness”, of action as depicted in quantum physics differs sharply from classical physics, which represented motion as smooth, continuous change. Quantization limits the energy to be transferred to photons and resolves the UV catastrophe problem.

Wave-Particle Dualism:

  • The wave-like nature of light explains most of its properties:
    1. reflection/refraction
    2. diffraction/interference
    3. Doppler effect
  • however, a particle description is suggested by the photoelectric effect, the release of electrons by a beam of energetic blue/UV light
  • wavelike descriptions of light fail to explain the lack of the photoelectric effect for red light
The results from spectroscopy (emission and absorption spectra) can only be explained if light has a particle nature, as shown by Bohr’s atom and the photon description of light.

This dualism in the nature of light is best demonstrated by the photoelectric effect, where a weak UV light produces a current flow (releases electrons) but a strong red light does not release electrons no matter how intense the red light is.

An unusual phenomenon was discovered in the early 1900’s. If a beam of light is pointed at the negative end of a pair of charged plates, a current flow is measured. A current is simply a flow of electrons in a metal, such as a wire. Thus, the beam of light must be liberating electrons from one metal plate, which are attracted to the other plate by electrostatic forces. This results in a current flow.

In classical physics, one would expect the current flow to be proportional to the strength of the beam of light (more light = more electrons liberated = more current). However, the observed phenomenon was that the current flow was basically constant with light strength, yet varied strongly with the wavelength of light, such that there was a sharp cutoff and no current flow for long wavelengths.

Einstein successfully explained the photoelectric effect within the context of the new physics of the time, quantum physics. In his scientific paper, he showed that light was made of packets of energy quanta called photons. Each photon carries a specific energy related to its wavelength, such that photons of short wavelength (blue light) carry more energy than long wavelength (red light) photons. To release an electron from a metal plate required a minimal energy which could only be transferred by a photon of energy equal to or greater than that minimal threshold energy (i.e. the wavelength of the light had to be sufficiently short). Each photon of blue light released an electron. But all red photons were too weak. The result is that no matter how much red light was shone on the metal plate, there was no current.

The photoelectric effect earned Einstein the Nobel Prize, and introduced the term “photon” of light into our terminology.
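The energy bookkeeping behind the effect is simple: a photon carries E = hc/λ, and an electron is released only if that energy exceeds the metal’s work function. The 2.3 eV work function below is a representative value (roughly that of sodium), used here only as an assumption.

    # Photon energy versus a typical metal work function.
    H_PLANCK = 6.626e-34      # J s
    C = 2.998e8               # m/s
    EV = 1.602e-19            # J per eV
    WORK_FUNCTION_EV = 2.3    # assumed threshold, roughly sodium

    def photon_energy_ev(wavelength_nm):
        return H_PLANCK * C / (wavelength_nm * 1e-9) / EV

    for label, wavelength in [("UV", 300), ("blue", 450), ("red", 700)]:
        e = photon_energy_ev(wavelength)
        verdict = "ejects an electron" if e > WORK_FUNCTION_EV else "too weak, no current"
        print(f"{label:4s} {wavelength} nm : {e:4.2f} eV -> {verdict}")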

  • the particle and wave properties of light are called wave-particle dualism and continue the strange character of the new science of quantum physics
  • wave-particle dualism is extended to matter particles, i.e. electrons act as waves
Einstein explained that light exists in a particle-like state as packets of energy (quanta) called photons. The photoelectric effect occurs because the packets of energy carried by individual red photons are too weak to knock the electrons off the atoms, no matter how many red photons you beamed onto the cathode. But the individual UV photons were each strong enough to release the electron and cause a current flow.

It is one of the strange, but fundamental, concepts in modern physics that light has both a wave and particle state (but not at the same time), called wave-particle dualism.

Wave/particle duality is the possession by physical entities (such as light and electrons) of both wavelike and particle-like characteristics. On the basis of experimental evidence, the German physicist Albert Einstein first showed (1905) that light, which had been considered a form of electromagnetic waves, must also be thought of as particle-like, or localized in packets of discrete energy. The French physicist Louis de Broglie proposed (1924) that electrons and other discrete bits of matter, which until then had been conceived only as material particles, also have wave properties such as wavelength and frequency. Later (1927) the wave nature of electrons was experimentally established. An understanding of the complementary relation between the wave aspects and the particle aspects of the same phenomenon was announced in 1928.

Dualism is not such a strange concept. Consider the following picture: are the swirls moving or not, or both?

 

 

Satellites Orbiting Earth

How a Satellite Works

Satellites are very complex machines that require precise mathematical calculations in order for them to function. The satellite has tracking systems and very sophisticated computer systems on board. Accuracy in orbit and speed are required for the satellite to keep from crashing back down to Earth. There are several different types of orbits that the satellite can take. Some orbits are stationary and some are elliptical.

“Satellite Orbit”

Low Earth Orbit

A satellite is in “Low Earth Orbit” when it circles in an elliptical orbit close to Earth. Satellites in low orbit are just hundreds of miles away. These satellites travel at high speeds preventing gravity from pulling them back to Earth. Low Orbit Satellites travel approximately 17,000 miles per hour and circle the Earth in an hour and a half.

Polar Orbit


This is how a satellite travels in a polar orbit. These orbits eventually pass the entire surface of the Earth.

Polar orbiting satellites circle the planet in a north-south direction while the Earth spins beneath them in an east-west direction. Polar orbits therefore enable satellites to scan the entire surface of the Earth, like peeling an orange in a circular motion from top to bottom. Remote sensing satellites, weather satellites, and government satellites are almost always in polar orbit because of this coverage; polar orbits cover the Earth's surface thoroughly. Each polar orbiting satellite follows a fixed track, and all polar orbiting satellites cross over the North Pole. While one polar orbit satellite is over America, another is passing over the North Pole, so the North Pole receives a constant flow of UHF and higher microwaves. The illustration shows that the common passing point for polar orbiting satellites is over the North Pole.

A polar orbiting satellite passes over the Earth's equator at a different longitude on each of its orbits; however, polar orbiting satellites pass over the North Pole every time. Polar orbits are often used for Earth mapping, Earth observation, weather satellites, and reconnaissance satellites. This orbit has a disadvantage: no one spot of the Earth's surface can be sensed continuously from a satellite in a polar orbit.

This is from U.S. Army Information Systems Engineering Command.

“In order to fulfill the military need for protected communication service, especially low probability of intercept/detection (LPI/LPD), to units operating north of 65 degree northern latitude, the space communications architecture includes the polar satellite system capability. An acceptable approach to achieving this goal is to fly a low capacity EHF system in a highly elliptical orbit, either as a hosted payload or as a “free-flyer,” to provide service during a transition period, nominally 1997-2010. A single, hosted EHF payload is already planned. Providing this service 24 hours-a-day requires a two satellite constellation at high earth orbit (HEO). Beyond 2010, the LPI/LPD polar service could continue to be provided by a high elliptical orbit HEO EHF payload, or by the future UHF systems.” (quote from www.fas.org)

THERE IS A CONSTANT 24 HOUR EHF AND HIGHER MICROWAVE TRANSMISSION PASSING OVER THE NORTH POLE!

“Geo Synchronous” Orbit


This is how a satellite travels in a “Geo Synchronous” orbit. Geosynchronous orbits that lie over the equator are also called “geostationary”. These satellites follow the rotation of the Earth.

A satellite in a “Geo Synchronous” orbit hovers over one spot, following the Earth's spin along the equator; the Earth takes 24 hours to spin on its axis. In the illustration you can see that a “Geo Synchronous” orbit follows the equator and never covers the North or South Poles. The footprints of “Geo Synchronous” satellites do not cover the polar regions, so communication satellites in “Geo Synchronous” orbits cannot be accessed in the northern and southern polar regions.

Because a “Geo Synchronous” satellite does not move relative to the area that it covers, these satellites are used for telecommunications, GPS trackers, television broadcasting, government services, and internet. To remain stationary, their orbits must be much farther from the Earth than those of polar orbiting satellites; a satellite closer to the Earth has to circle faster than the Earth rotates and so cannot hover over one spot. There are said to be about 300 “Geo Synchronous” satellites in orbit right now. Of course, these are only the satellites that the public is allowed to know about, the ones that are not governmentally classified.
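
The altitude of a geosynchronous orbit follows directly from Kepler's third law: the orbital radius whose period matches one sidereal day (about 23 h 56 min) comes out to roughly 35,800 km above the surface. A minimal sketch of that calculation:

```python
# Sketch: geostationary orbital radius from Kepler's third law, r = (GM * T^2 / (4*pi^2))**(1/3).
import math

GM_EARTH = 3.986e14     # Earth's gravitational parameter (m^3/s^2)
R_EARTH = 6.371e6       # mean Earth radius (m)
SIDEREAL_DAY = 86164.0  # one rotation of the Earth (s)

r = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000

print(f"orbital radius: {r / 1000:,.0f} km")     # about 42,200 km from Earth's center
print(f"altitude      : {altitude_km:,.0f} km")  # about 35,800 km above the surface
```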

Satellite Anatomy


This is the Anatomy of a Satellite.

A satellite is made up of several instruments that work together to operate the satellite during its mission. This illustration to the left demonstrates the parts of a satellite.

The command and data system controls all of the satellite's functions. This is a very complex computer system that handles all of the satellite's flight operations, where the satellite points, and any other onboard computations.

The pointing control keeps the satellite on a steady flight path and pointing in the same direction, using a complex set of sensors. The satellite uses devices called “momentum wheels” to adjust its orientation and hold its proper attitude. Scientific observation satellites have more precise pointing systems than communications satellites do.

The communications system has a transmitter, a receiver, and various antennas to exchange data with the Earth. On Earth, ground control sends instructions and data to the satellite's computer through the antenna. Pictures, data, television, radio, and much other data are sent by the satellite back to practically everyone on Earth.

The power system needed to power and operate the satellite is an efficient solar panel array that obtains energy from the Sun's rays. The solar arrays make electricity from the sunlight and store it in rechargeable batteries.

The payload is whatever equipment a satellite needs to perform its job. A weather satellite would have a payload consisting of an image sensor, digital camera, telescope, and other thermal and weather sensing devices.

Thermal control is the protection required to prevent damage to the satellite's instrumentation and components. Satellites are exposed to extreme temperature swings, from 120 degrees below zero to 180 degrees above zero. Heat distribution units and thermal blankets protect the electronics and components from temperature damage.

Satellite Footprints

A single satellite footprint

Here you can see one footprint covers an enormous area.

Geostationary satellites have a very broad view of Earth. The footprint of one EchoStar broadcast satellite covers almost all of North America. Because these satellites stay over the same location on Earth, we always know where they are, and direct contact can be made because equatorial satellites are fixed in the sky.

Many communications satellites travel in equatorial orbits, including those that relay TV signals into our homes; the footprint of a single such satellite can cover nearly all of North America.
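
The size of that footprint is easy to bound with simple geometry: from orbital radius r, a satellite can see the spherical cap out to where its line of sight is tangent to the Earth, which is a fraction (1 − R/r)/2 of the total surface. A sketch, reusing the geostationary radius from the earlier calculation:

```python
# Sketch: maximum fraction of the Earth's surface visible from a given orbital radius.
import math

R_EARTH = 6371.0        # km
ORBIT_RADIUS = 42164.0  # km, geostationary radius (assumed, from the earlier sketch)

def visible_fraction(orbit_radius_km: float) -> float:
    """Fraction of Earth's surface inside the horizon circle seen from this radius."""
    return (1.0 - R_EARTH / orbit_radius_km) / 2.0

f = visible_fraction(ORBIT_RADIUS)
area = f * 4 * math.pi * R_EARTH**2
print(f"visible fraction: {f:.1%}")            # roughly 42% of the globe
print(f"visible area    : {area:,.0f} km^2")
```

The usable footprint is somewhat smaller than this geometric maximum, since receivers near the edge see the satellite very low on the horizon.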

The multipath effect that occurs when satellite transmissions are obstructed by topographic features also provides insight into microwave global warming. Microwaves are being bombarded upon our planet; the planet absorbs and obstructs the waves from space. Microwaves penetrate through the whole atmosphere and bounce and echo off of the Earth. Imagine the footprint overlaps being produced by the thousands of satellites in orbit right now.


Here you can see the overlapping footprints that satellites make. Each satellite covers an enormous area.

The closer the satellite is to something, the more power falls on that object; the farther the waves have to travel, the less power they deliver. Because the atmosphere is closer to the satellite than the ground is, a stronger beam of energy passes through the clouds and atmosphere, and this stronger power causes a higher rate of warming in the atmosphere than it does on the surface of the Earth.
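
The distance dependence itself is just the inverse-square law: power from a transmitter spreads over a sphere, so the flux density is S = P / (4πd²). A sketch with an assumed 100 W isotropic transmitter:

```python
# Sketch: inverse-square spreading of an assumed 100 W isotropic transmission.
import math

P_TX = 100.0  # assumed transmit power (W)

def flux_density(distance_m: float) -> float:
    """Power per unit area (W/m^2) at a given distance from an isotropic source."""
    return P_TX / (4 * math.pi * distance_m ** 2)

for label, d_km in [("1,000 km (low orbit)", 1_000), ("35,786 km (geostationary)", 35_786)]:
    print(f"{label:26s}: {flux_density(d_km * 1e3):.2e} W/m^2")
```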

The illustration to the right shows how eight satellites microwave an enormous part of our Earth. When radio signals reflect off of surrounding terrain (buildings, canyon walls, hard ground), multipath issues occur because multiple delayed copies of the wave overlap. These delayed signals can cause poor reception. Ultimately, the water, ice, and earth absorb and reflect microwaves in many different directions, and microwaves passing through the Earth's atmosphere cause radio frequency heating at the molecular level.

System spectral efficiency

“In wireless networks, the system spectral efficiency is a measure of the quantity of users or services that can be simultaneously supported by a limited radio frequency bandwidth in a defined geographic area.” The capacity of a wireless network can be measured by calculating the maximum number of simultaneous phone calls carried over 1 MHz of frequency spectrum. This is measured in Erlangs/MHz/cell, Erlangs/MHz/sector, Erlangs/MHz/site, or Erlangs/MHz/km². Modern cell phones take advantage of this type of transmission; they transmit a microwave signal at roughly twice the frequency of a microwave oven in your home.
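
The Erlang figures come from classical teletraffic theory: for a given number of channels and a target blocking probability, the Erlang B formula gives the offered traffic (in Erlangs) a cell can carry, and dividing by the cell's bandwidth gives Erlangs/MHz/cell. The sketch below uses assumed example numbers (30 channels, 1 MHz, 2% blocking), not values from the text.

```python
# Sketch: capacity in Erlangs/MHz/cell via the Erlang B recursion (assumed example numbers).

def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Blocking probability for the given offered traffic and number of channels."""
    b = 1.0
    for m in range(1, channels + 1):
        b = traffic_erlangs * b / (m + traffic_erlangs * b)
    return b

def max_traffic(channels: int, target_blocking: float) -> float:
    """Largest offered traffic (Erlangs) whose blocking stays below the target."""
    traffic, step = 0.0, 0.1
    while erlang_b(traffic + step, channels) < target_blocking:
        traffic += step
    return traffic

CHANNELS = 30        # assumed voice channels in the cell
BANDWIDTH_MHZ = 1.0  # assumed spectrum allocated to the cell
BLOCKING = 0.02      # assumed 2% blocking target

erlangs = max_traffic(CHANNELS, BLOCKING)
print(f"capacity: {erlangs:.1f} Erlangs -> {erlangs / BANDWIDTH_MHZ:.1f} Erlangs/MHz/cell")
```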


This is a misconception of how microwave frequencies travel.

An example of spectral efficiency can be found in the satellite RADARSAT-1. In 1995, RADARSAT-1, an Earth observation satellite from Canada, was launched into orbit above the Earth. RADARSAT-1 provides scientific and commercial images of the Earth used in agriculture, geology, hydrology, arctic surveillance, oceanography, cartography, ice and ocean monitoring, forestry, detecting ocean oil slicks, and many other applications. This satellite uses continuous high-power microwave transmissions: its Synthetic Aperture Radar (SAR) is a type of sensor that images the Earth at a single microwave frequency of 5.3 GHz. SAR systems transmit microwaves towards the surface of the Earth and record the reflections from the surface, so the satellite can image the Earth at any time and in any atmospheric condition.


This is how microwave frequencies actually travel.

A common misconception about microwave transmissions is that the transmission beams directly and narrowly into the receiving antenna (see the misconception illustration). This, however, is not true. Transmissions spread out into the air spherically, and the waves travel in every direction until they find a receiver or some dielectric material to pass into.

When a microwave transmission is sent to a receiving satellite dish, the transmission spreads out spherically (see the illustration of how microwaves travel). The signal passes through all parts of that sphere until it finds a connection. All microwaves not received by an antenna pass into the dielectric material of the Earth, which is primarily water and ice.

M-Theory: The Grand Masterpiece

There are five different superstring theories, each ten-dimensional, all seemingly incompatible. But in 1995, Edward Witten proposed that the five theories were actually all part of a large, mysterious and uncharted framework that he dubbed M-theory.

We don’t have the full equations for M-theory, but there are many hints as to how it works. Witten showed that the five theories are linked to each other via dualities: one formulation at strong coupling is identical to another at weak coupling. M-theory is the complete skeleton whilst the five superstring models are individual bones.

M-theory doesn’t have ten spacetime dimensions, but eleven: ten of space and one of time! Now there isn’t a string theory in eleven dimensions, but there is a supersymmetric theory of gravity, called supergravity. Witten showed that there was a continuous path between the ten-dimensional string theories and the eleven-dimensional theory of supergravity; supergravity is part of the M-theory web.

Our understanding of M-theory is by no means complete. It seems to be the single unifying structure into which all string theories fit. Dualities allow us to relate some of the fringes, where interactions are very weak or very strong. But the middle of the web remains impenetrable.

Some of the duality calculations are surprising, and impressive. Nonetheless we can only see the edges of the picture, and we grasp little of its mathematics. We have yet to derive any concrete predictions, and experimental evidence of the required extra dimensions remains elusive. Like an artistic masterpiece with a hole through the middle, it gives us a tantalising glimpse of what might be the ultimate unifying theory.

D-Branes

M-theory is not just populated by strings, but also by membranes called D-branes. These are multi-dimensional surfaces that move through the eleven dimensions of M-theory. We can have D-branes of up to nine spatial dimensions (though that’s a little hard to visualise)! A point is a D0-brane, a string a D1-brane, a sheet a D2-brane and so on.

Eleven-dimensional M-theory can look exactly like ten-dimensional string theory. This happens when one of the eleven dimensions is extremely small and circular. A two-dimensional D-brane wrapped around this extra dimension will look like a cylinder. But if the circular dimension is tiny then this cylinder will be very thin. As a result the D-brane will appear to be a one-dimensional string moving in ten dimensions (see picture).

In recent years D-branes have become increasingly important to research. They are natural places for fixed endpoints of open strings to live. And strings living on D-branes give rise to the same kind of forces that appear in the Standard Model.

But there is an even more potent reason driving interest in D-branes: they are non-perturbative objects. D-branes allow physicists to do calculations that transcend the approximate methods of perturbation theory. Thus we can uncover elements of the theory in regimes where interactions are strong. Historically this was uncharted terrain.

D-branes are a central ingredient in modern research. They can be used to construct cosmological models within string theory. Researchers in brane cosmology build models of inflation based on brane collisions. And the study of D-branes has shed light on some of the most elusive elements in the universe, black holes. Finally, D-branes played an essential role in formulating the AdS/CFT correspondence.

Nuclear Bombs: History, Creation, Ingredients, Chemical Composition, Fusion, Types and Detonation

American nuclear technology evolved rapidly between 1944 and 1950, moving from the primitive Fat Man and Little Boy to more sophisticated, lighter, more powerful, and more efficient designs. Much design effort shifted from fission to thermonuclear weapons after President Truman decided that the United States should proceed to develop a hydrogen bomb, a task which occupied the Los Alamos Laboratory from 1950 through 1952. The “George” shot of Operation Greenhouse (May 9, 1951) confirmed for the first time that a fission device could produce the conditions needed to ignite a thermonuclear reaction. The “Mike” test of Operation Ivy, 1 November, 1952, was the first explosion of a true two-stage thermonuclear device.

From 1952 until the early years of the ICBM era [roughly to the development of the first multiple independently targeted reentry vehicles (MIRVs) in the late 1960’s], new concepts in both fission primary and fusion secondary design were developed rapidly. However, after the introduction of the principal families of weapons in the modern stockpile (approximately the mid 1970’s), the rate of design innovations and truly new concepts slowed as nuclear weapon technology became a mature science. It is believed that other nations’ experiences have been roughly similar, although the United States probably has the greatest breadth of experience with innovative designs simply because of the more than 1,100 nuclear detonations it has conducted. The number of useful variations on the themes of primary and secondary design is finite, and designers’ final choices are frequently constrained by considerations of weapon size, weight, safety, and the availability of special materials.

Nuclear weaponry has advanced considerably since 1945, as can be seen at an unclassified level by comparing the size and weight of “Fat Man” with the far smaller, lighter, and more powerful weapons carried by modern ballistic missiles. Most nations of the world, including those of proliferation interest, have subscribed to the 1963 Limited Test Ban Treaty, which requires that nuclear explosions only take place underground. Underground testing can be detected by seismic means and by observing radioactive effluent in the atmosphere. It is probably easier to detect and identify a small nuclear test in the atmosphere than it is to detect and identify a similarly sized underground test. In either case, highly specialized instrumentation is required if a nuclear test explosion is to yield useful data to the nation carrying out the experiment.

US nuclear weapons technology is mature and might not have shown many more qualitative advances over the long haul, even absent a test ban. The same is roughly true for Russia, the UK, and possibly for France. The design of the nuclear device for a specific nuclear weapon is constrained by several factors. The most important of these are the weight the delivery vehicle can carry plus the size of the space available in which to carry the weapon (e.g., the diameter and length of a nosecone or the length and width of a bomb bay). The required yield of the device is established by the target vulnerability. The possible yield is set by the state of nuclear weapon technology and by the availability of special materials. Finally, the choices of specific design details of the device are determined by the taste of its designers, who will be influenced by their experience and the traditions of their organization.

Fission Weapons

An ordinary “atomic” bomb of the kind used in World War II uses the process of nuclear fission to release the binding energy in certain nuclei. The energy release is rapid and, because of the large amounts of energy locked in nuclei, violent. The principal materials used for fission weapons are U-235 and Pu-239, which are termed fissile because they can be split into two roughly equal-mass fragments when struck by a neutron of even low energy. When a large enough mass of either material is assembled, a self-sustaining chain reaction results after the first fission is produced. The minimum mass of fissile material that can sustain a nuclear chain reaction is called a critical mass and depends on the density, shape, and type of fissile material, as well as the effectiveness of any surrounding material (called a reflector or tamper) at reflecting neutrons back into the fissioning mass. Critical masses in spherical geometry for weapon-grade materials are as follows:

                Uranium-235    Plutonium-239
Bare sphere:       56 kg          11 kg
Thick tamper:      15 kg           5 kg

The critical mass of compressed fissile material decreases as the inverse square of the density achieved. Since critical mass decreases rapidly as density increases, the implosion technique can make do with substantially less nuclear material than the gun-assembly method. The “Fat Man” atomic bomb that destroyed Nagasaki in 1945 used 6.2 kilograms of plutonium and produced an explosive yield of 21-23 kilotons [a 1987 reassessment of the Japanese bombings placed the yield at 21 Kt]. Until January 1994, the Department of Energy (DOE) estimated that 8 kilograms would typically be needed to make a small nuclear weapon. Subsequently, however, DOE reduced the estimate of the amount of plutonium needed to 4 kilograms. Some US scientists believe that 1 kilogram of plutonium will suffice.

If any more material is added to a critical mass a condition of supercriticality results. The chain reaction in a supercritical mass increases rapidly in intensity until the heat generated by the nuclear reactions causes the mass to expand so greatly that the assembly is no longer critical.

Fission weapons require a system to assemble a supercritical mass from a sub-critical mass in a very short time. Two classic assembly systems have been used: gun and implosion. In the simpler gun-type device, two subcritical masses are brought together by using a mechanism similar to an artillery gun to shoot one mass (the projectile) at the other mass (the target). The Hiroshima weapon was gun-assembled and used U-235 as its fuel. Gun-assembled weapons using highly enriched uranium are considered the easiest of all nuclear devices to construct and the most foolproof.

Gun-Device

In the gun device, two pieces of fissionable material, each less than a critical mass, are brought together very rapidly to form a single supercritical one. This gun-type assembly may be achieved in a tubular device in which a high explosive is used to blow one subcritical piece of fissionable material from one end of the tube into another subcritical piece held at the opposite end of the tube.

Manhattan Project scientists were so confident in the performance of the “Little Boy” uranium bomb that the device was not even tested before it was used. This 15-kt weapon was airdropped on 06 August 1945 at Hiroshima, Japan. The device contained 64.1 kg of highly enriched uranium, with an average enrichment of 80%. The six bombs built by the Republic of South Africa were gun-assembled and used 50 kg of uranium enriched to between 80 percent and 93 percent in the isotope U-235. Compared with the implosion approach, this method assembles the masses relatively slowly and at normal densities; it is practical only with highly enriched uranium. If plutonium (even weapon-grade) were used in a gun-assembly design, neutrons released from spontaneous fission of its even-numbered isotopes would likely trigger the nuclear chain reaction too soon, resulting in a “fizzle” of dramatically reduced yield.

Implosion-Device

Because of the short time interval between spontaneous neutron emissions in plutonium (and, therefore, the large number of background neutrons), a consequence of the decay by spontaneous fission of the isotope Pu-240, Manhattan Project scientists devised the implosion method of assembly, in which high explosives are arranged to form an imploding shock wave that compresses the fissile material to supercriticality.

The core of fissile material is formed into a supercritical mass by chemical high explosives (HE) or propellants. When the high explosive is detonated, an inwardly directed implosion wave is produced. This wave compresses the sphere of fissionable material. The decrease in the surface-to-volume ratio of this compressed mass, plus its increased density, is then such as to make the mass supercritical. The HE is exploded by detonators timed electronically by a fuzing system, which may use altitude sensors or other means of control.

The nuclear chain reaction is normally started by an initiator that injects a burst of neutrons into the fissile core at an appropriate moment. The timing of the initiation of the chain reaction is important and must be carefully designed for the weapon to have a predictable yield. A neutron generator emits a burst of neutrons to initiate the chain reaction at the proper moment: near the point of maximum compression in an implosion design, or of full assembly in the gun-barrel design.

A surrounding tamper may help keep the nuclear material assembled for a longer time before it blows itself apart, thus increasing the yield. The tamper often doubles as a neutron reflector.

Implosion systems can be built using either Pu-239 or U-235, but the gun assembly works only for uranium. Implosion weapons are more difficult to build than gun weapons, but they are also more efficient, requiring less special nuclear material (SNM) and producing larger yields. Iraq attempted to build an implosion bomb using U-235. In contrast, North Korea chose to use Pu-239 produced in a nuclear reactor.

Boosted Weapons

To fission more of a given amount of fissile material, a small amount of material that can undergo fusion, deuterium-tritium (D-T) gas, can be placed inside the core of a fission device. Here, just as the fission chain reaction gets underway, the D-T gas undergoes fusion, releasing an intense burst of high-energy neutrons (along with a small amount of fusion energy as well) that fissions the surrounding material more completely. This approach, called boosting, is used in most modern nuclear weapons to maintain their yields while greatly decreasing their overall size and weight.

Enhanced Radiation Weapons

An enhanced radiation (ER) weapon, by special design techniques, has an output in which neutrons and x-rays are made to constitute a substantial portion of the total energy released. For example, a standard fission weapon’s total energy output would be partitioned as follows: 50% as blast; 35% as thermal energy; and 15% as nuclear radiation. An ER weapon’s total energy would be partitioned as follows: 30% as blast; 20% as thermal; and 50% as nuclear radiation. Thus, a 3-kiloton ER weapon will produce the nuclear radiation of a 10-kiloton fission weapon and the blast and thermal radiation of a 1-kiloton fission device. However, the energy distribution percentages of nuclear weapons are a function of yield.

Fusion Weapons

A more powerful but more complex weapon uses the fusion of heavy isotopes of hydrogen (deuterium and tritium) to release large numbers of neutrons when the fusile (sometimes termed “fusionable”) material is compressed by the energy released by a fission device called a primary. Fusion (or thermonuclear) weapons derive a significant amount of their total energy from fusion reactions. The intense temperatures and pressures generated by a fission explosion overcome the strong electrical repulsion that would otherwise keep the positively charged nuclei of the fusion fuel from reacting. The fusion part of the weapon is called a secondary. In general, the x-rays from a fission primary heat and compress material surrounding the secondary fusion stage.

It is inconvenient to carry deuterium and tritium as gases in a thermonuclear weapon, and certainly impractical to carry them as liquefied gases, which requires high pressures and cryogenic temperatures. Instead, one can make a “dry” device in which lithium-6 is combined with deuterium to form the compound 6LiD (lithium-6 deuteride). Neutrons from a fission “primary” device bombard the 6Li in the compound, liberating tritium, which quickly fuses with the nearby deuterium. The alpha particles, being electrically charged and at high temperatures, contribute directly to forming the nuclear fireball. The neutrons can bombard additional 6Li nuclei or cause the remaining uranium and plutonium in the weapon to undergo fission. This two-stage thermonuclear weapon has explosive yields far greater than can be achieved with one-point-safe designs of pure fission weapons, and thermonuclear fusion stages can be ignited in sequence to deliver any desired yield. Such bombs, in theory, can be designed with arbitrarily large yields: the Soviet Union once tested a device with a yield of about 59 megatons.

In a relatively crude sense, 6Li can be thought of as consisting of an alpha particle (4He) and a deuteron (2H) bound together. When bombarded by neutrons, 6Li disintegrates into a triton (3H) and an alpha:

6Li + n = 3H + 4He + energy.

This is the key to its importance in nuclear weapons physics. The nuclear fusion reaction which ignites most readily is

2H + 3H = 4He + n + 17.6 MeV,

or, phrased in other terms, deuterium plus tritium produces helium-4 plus a neutron plus 17.6 MeV of free energy:

D + T = 4He + n + 17.6 MeV.

Lithium-7 also contributes to the production of tritium in a thermonuclear secondary, albeit at a lower rate than 6Li. The fusion reactions derived from tritium produced from 7Li contributed many unexpected neutrons (and hence far more energy release than planned) to the final stage of the infamous 1954 Castle/BRAVO atmospheric test, nearly doubling its expected yield.

Safing, Arming, Fuzing, and Firing (SAFF)

The ability to make effective use of a nuclear weapon is limited unless the device can be handled safely, taken safely from storage when required, delivered to its intended target, and then detonated at the correct point in space and time to achieve the desired goal. Although the intended scenarios for use of its weapons will strongly influence specific weaponization concepts and approaches, functional capabilities for safing, arming, fuzing, and firing (SAFF) will be fundamental.

Nuclear weapons are particularly destructive, with immediate effects including blast and thermal radiation and delayed effects produced by ionizing radiation, neutrons, and radioactive fallout. They are expensive to build, maintain, and employ, requiring a significant fraction of the total defense resources of a small nation. In a totalitarian state the leader must always worry that they will be used against the government; in a democracy the possibility of an unauthorized or accidental use must never be discounted. A nuclear detonation as the result of an accident would be a local catastrophe.

Because of their destructiveness, nuclear weapons require precautions to prevent accidental detonation during any part of their manufacture and lifetime. And because of their value, the weapons require reliable arming and fuzing mechanisms to ensure that they explode when delivered to target. Therefore, any nuclear power is likely to pay some attention to the issues of safing and safety, arming, fuzing, and firing of its nuclear weapons. The solutions adopted depend upon the level of technology in the proliferant state, the number of weapons in its stockpile, and the political consequences of an accidental detonation.

Whether to protect their investment in nuclear arms or to deny potential access to and use of the weapons by unauthorized persons, proliferators or subnational groups will almost certainly seek special measures to ensure security and operational control of nuclear weapons. These are likely to include physical security and access control technologies at minimum and may include use control. The techniques used today by the existing western nuclear weapon states represent the culmination of a half-century of evolution in highly classified military programs, and proliferators may well choose simpler solutions, perhaps by adapting physical security, access, and operational controls used in the commercial sector for high-value/high-risk assets.

From the very first nuclear weapons built, safety was a consideration. The two bombs used in the war drops on Hiroshima and Nagasaki posed significant risk of accidental detonation if the B-29 strike aircraft had crashed on takeoff. As a result, critical components were removed from each bomb and installed only after takeoff and initial climb to altitude were completed. Both weapons used similar arming and fuzing components. Arming could be accomplished by removing a safety connector plug and replacing it with a distinctively colored arming connector. Fuzing used redundant systems including a primitive radar and a barometric switch. No provision was incorporated in the weapons themselves to prevent unauthorized use or to protect against misappropriation or theft.

In later years, the United States developed mechanical safing devices. These were later replaced with weapons designed to a goal of less than a one-in-a-million chance of the weapon delivering more than 4 pounds of nuclear yield if the high explosives were detonated at the single most critical possible point. Other nations have adopted different safety criteria and have achieved their safety goals in other ways.

In the 1950’s, to prevent unauthorized use of U.S. weapons stored abroad, permissive action links (PALs) were developed. These began as simple combination locks and evolved into the modern systems, which allow only a few tries to arm the weapon before disabling the physics package should an intruder persist in attempts to defeat the PAL.

Safing: To ensure that the nuclear warhead can be stored, handled, deployed, and employed in a wide spectrum of intended and unintended environmental and threat conditions, with assurance that it will not experience a nuclear detonation. In U.S. practice, safing generally involves multiple mechanical interruptions of both power sources and pyrotechnic/explosive firing trains. The nuclear components may be designed so that an accidental detonation of the high explosives is intrinsically unable to produce a significant (>4 pounds TNT equivalent) nuclear yield; it is simpler to insert mechanical devices into the pit to prevent the assembly of a critical mass, or to remove a portion of the fissile material from inside the high explosives. Mechanical safing of a gun-assembled weapon is fairly straightforward; one can simply insert a hardened steel or tungsten rod across a diameter of the gun barrel, disrupting the projectile. All U.S. weapons have been designed to be intrinsically one-point safe in the event of accidental detonation of the high explosives, but it is not anticipated that a new proliferator would take such care.

Arming: Placing the nuclear warhead in a ready operational state, such that it can be initiated under specified firing conditions. Arming generally involves mechanical restoration of the safing interrupts in response to conditions that are unique to the operational environment (launch or deployment) of the system. A further feature is that the environment typically provides the energy source to drive the arming action. If a weapon is safed by inserting mechanical devices into the pit (e.g., chains, coils of wire, bearing balls) to prevent complete implosion, arming involves removal of those devices. It may not always be possible to safe a mechanically armed device once the physical barrier to implosion has been removed.

Fuzing: To ensure optimum weapon effectiveness by detecting that the desired conditions for warhead detonation have been met and to provide an appropriate command signal to the firing set to initiate nuclear detonation. Fuzing generally involves devices to detect the location of the warhead with respect to the target, signal processing and logic, and an output circuit to initiate firing.

Firing: To ensure nuclear detonation by delivering a precise level of precisely timed electrical or pyrotechnic energy to one or more warhead detonating devices. A variety of techniques are used, depending on the warhead design and type of detonation devices.

Depending on the specific military operations to be carried out and the specific delivery system chosen, nuclear weapons pose special technological problems in terms of primary power and power-conditioning, overall weapon integration, and operational control and security.

Not all weapons possessors will face the same problems or opt for the same levels of confidence, particularly in the inherent security of their weapons. The operational objectives will in turn dictate the technological requirements for the SAFF subsystems. Minimal requirements could be met by a surface burst (including impact fuzing of a relatively slow moving warhead) or a crude preset height of burst based on a simple timer, a barometric switch, or a simple radar altimeter. Modest requirements could be met by more precise height of burst (HOB) based on improved radar triggering or other methods of measuring distance above the ground to maximize the radius of selected weapons effects, with point-contact salvage fuzing, and by parachute delivery of bombs to allow deliberate laydown and surface burst. Substantial requirements could be met by variable HOB, including low-altitude fuzing for ensured destruction of protected strategic targets, along with possible underwater or exoatmospheric capabilities.

Virtually any country or extranational group with the resources to construct a nuclear device has sufficient capability to attain the minimum SAFF capability that would be needed to meet terrorist or minimal national aims. The requirements to achieve a “modest” or “substantial” capability level are much more demanding. Both safety and protection of investment demand very low probability of failure of safing and arming mechanisms, with very high probability of proper initiation of the warhead. All of the recognized nuclear weapons states and many other countries have (or have ready access to) both the design know-how and components required to implement a significant capability.

In terms of sophistication, safety, and reliability of design, past U.S. weapons programs provide a legacy of world leadership in SAFF and related technology. France and the UK follow closely in overall SAFF design and may actually hold slight leads in specific component technologies. SAFF technologies of other nuclear powers – notably Russia and China – do not compare. Japan and Germany have technological capabilities roughly on a par with the United States, UK, and France, and doubtless have the capability to design and build nuclear SAFF subsystems.

Reliable fuzing and firing systems suitable for nuclear use have been built since 1945 and do not need to incorporate any modern technology. Many kinds of mechanical safing systems have been employed, and several of these require nothing more complex than removable wires or chains or the exchanging of arming/safing connector plugs. Safing a gun-assembled system is especially simple. Arming systems range from hand insertion of critical components in flight to extremely sophisticated instruments which detect specific events in the stockpile-to-target sequence (STS). Fuzing and firing systems span an equally great range of technical complexity.

Any country with the electronics capability to build aircraft radar altimeter equipment should have access to the capability for building a reasonably adequate, simple HOB fuze. China, India, Israel, Taiwan, South Korea, Brazil, Singapore, the Russian Federation and the Ukraine, and South Africa all have built conventional weapons with design features that could be adapted to more sophisticated designs, providing variable burst height and rudimentary Electronic Counter-Countermeasure (ECCM) features. With regard to physical security measures and use control, the rapid growth in the availability and performance of low-cost, highly reliable microprocessing equipment has led to a proliferation of electronic lock and security devices suitable for protecting and controlling high-value/at-risk assets. Such technology may likely meet the needs of most proliferant organizations.