# Orbits

Planetary Configurations:

The planets outside of the Earth’s orbit (Mars, Jupiter, Saturn, Uranus, Neptune) are called superior planets.

Likewise, the planets inside of the Earth’s orbit (Mercury, Venus) are called inferior planets.

Other configurations are:

• Object at greatest western elongation = “morning star”
• Object at greatest eastern elongation = “evening star”
• Only inferior planets have phases
• Transit = passage of an inferior planet across the Sun

Galileo’s laws of Motion:

Aside from his numerous inventions, Galileo also laid down the first accurate laws of motion for masses. Galileo realized that all bodies accelerate at the same rate regardless of their size or mass. Everyday experience tells you differently because a feather falls more slowly than a cannonball. Galileo’s genius lay in spotting that the differences that occur in the everyday world are an incidental complication (in this case, air friction) and are irrelevant to the real underlying properties (that is, gravity). He was able to abstract from the complexity of real-life situations the simplicity of an idealized law of gravity.

Key among his investigations are:

• developed the concept of motion in terms of velocity (speed and direction) through the use of inclined planes.
• developed the idea of force, as a cause for motion.
• determined that the natural state of an object is rest or uniform motion, i.e. objects always have a velocity; sometimes that velocity has a magnitude of zero (rest).
• objects resist change in motion, which is called inertia.

Galileo also showed that objects fall with the same speed regardless of their mass. The fact that a feather falls more slowly than a steel ball is due to the amount of air resistance that a feather experiences (a lot) versus the steel ball (very little).
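The point can be made concrete with a short numerical sketch (the masses, height and drag constant below are illustrative assumptions, not values from the text): in a vacuum the fall time is independent of mass, while air drag slows the light object far more.

```python
# Sketch: free fall with and without air resistance, using simple
# Euler integration. Masses and drag constant are assumed values.

G = 9.81  # m/s^2, acceleration due to gravity near Earth's surface

def fall_time(height, mass, drag_coeff):
    """Time (s) to fall `height` metres with quadratic air drag."""
    dt, t, y, v = 0.001, 0.0, height, 0.0
    while y > 0:
        a = G - (drag_coeff / mass) * v * v  # drag opposes the motion
        v += a * dt
        y -= v * dt
        t += dt
    return t

# In a vacuum (drag_coeff = 0) the mass cancels out of the equations,
# so a 5-gram "feather" and a 5-kg "steel ball" land together:
assert abs(fall_time(10, 0.005, 0.0) - fall_time(10, 5.0, 0.0)) < 1e-9

# With air drag, the light feather takes much longer than the heavy ball:
feather = fall_time(10, 0.005, 0.01)   # 5 g, same drag constant
ball = fall_time(10, 5.0, 0.01)        # 5 kg, same drag constant
print(f"feather: {feather:.2f} s, ball: {ball:.2f} s")
```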

Hammer and Feather on Moon

Kepler’s laws of Planetary Motion:

Using Tycho Brahe’s observations, Kepler developed the first kinematic description of orbits; Newton would later develop a dynamic description that involves the underlying influence (gravity).

• 1st law (law of elliptic orbits): Each planet moves in an elliptical orbit with the Sun at one focus.

Ellipses that are highly flattened have high eccentricity. Ellipses that are close to a circle have low eccentricity.
• 2nd law (law of equal areas): a line connecting the Sun and a planet (called the radius vector) sweeps out equal areas in equal times. Objects travel fastest at the low point of their orbit, and travel slowest at the high point of their orbit.
• 3rd law (law of harmonics): The square of a planet’s orbital period is proportional to the cube of its mean distance from the Sun. The mathematical way to describe Kepler’s 3rd law is:

P² ∝ R³

where the ∝ symbol means `proportional to’. Proportions are expressions that imply there exists some constant, k, that relates the period, P, and the radius, R, such that

P² = k R³

We can determine k by expressing the formula in units of the Earth and its orbit around the Sun, such that

(1 yr)² = k (1 A.U.)³

so k is equal to one, as long as we use units of years and A.U.’s (the Astronomical Unit, i.e. the mean distance from the Earth to the Sun). With k = 1, Kepler’s 3rd law becomes

P² = R³

The 3rd law is used to develop a “yardstick” for the Solar System, expressing the distance to all the planets relative to Earth’s orbit by just knowing their period (timing how long it takes for them to go around the Sun).
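As a sketch of this yardstick, Kepler’s 3rd law in Earth units (P in years, R in A.U.) can be inverted to R = P^(2/3). The periods below are standard values, not taken from the text:

```python
# Sketch of Kepler's 3rd-law "yardstick": P^2 = R^3 with P in years
# and R in A.U. implies R = P^(2/3).

def distance_from_period(period_years):
    """Mean distance from the Sun in A.U., from P^2 = R^3."""
    return period_years ** (2.0 / 3.0)

# Standard orbital periods in years (assumed, not from the notes):
for planet, period in [("Mercury", 0.241), ("Venus", 0.615),
                       ("Earth", 1.0), ("Mars", 1.881),
                       ("Jupiter", 11.86), ("Saturn", 29.46)]:
    r = distance_from_period(period)
    print(f"{planet:8s} P = {period:6.3f} yr -> R = {r:5.2f} A.U.")
```

Timing one orbit therefore fixes a planet’s distance in Earth units, with no other measurement needed.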

Orbits:

Many years after Kepler, it was shown that orbits actually come in many flavors: ellipses, circles, parabolas and hyperbolas, a family of curves called conic sections. There are five basic types of simple orbits: radial, ballistic, stable, polar and geosynchronous.

In an escape orbit, the velocity is sufficient to escape the gravitational pull of the planet, i.e. the major axis is infinite; the Voyager spacecraft are on such orbits.

The direction a body travels in orbit can be direct, or prograde, in which the spacecraft moves in the same direction as the planet rotates, or retrograde, going in a direction opposite the planet’s rotation.

The semi-major axis of an orbit is determined by the kinetic energy acquired by the rocket at burnout. This is equivalent to the burnout velocity. For low burnout velocities (below 25,000 ft/sec) the orbit is ballistic, meaning it does not escape the surface of the Earth. Burnout velocities above 25,000 ft/sec achieve stable orbit. At 35,000 ft/sec, the orbit reaches the distance of the Moon.

The amount of burnout velocity also determines the orbit type, an ellipse, a parabola or a hyperbolic path.
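A rough numerical sketch of these thresholds (the altitude and physical constants below are assumed round values, not from the text): the dividing speeds are the circular-orbit speed and the escape speed at the burnout point.

```python
import math

# Sketch: circular and escape speeds at low altitude determine whether
# a burnout velocity gives a ballistic, elliptical (stable), parabolic,
# or hyperbolic path. Constants are standard assumed values.

MU = 3.986e14   # m^3/s^2, Earth's gravitational parameter GM
R = 6.578e6     # m, orbital radius at roughly 200 km altitude

v_circ = math.sqrt(MU / R)        # circular-orbit speed
v_esc = math.sqrt(2 * MU / R)     # escape (parabolic) speed

def orbit_type(v):
    # Simplification: below circular speed the perigee dips toward
    # the surface, so the path is treated as ballistic.
    if v < v_circ:
        return "ballistic (falls back)"
    if v < v_esc:
        return "elliptical (stable orbit)"
    if v == v_esc:
        return "parabolic (escape)"
    return "hyperbolic (escape)"

print(f"circular: {v_circ/1000:.1f} km/s, escape: {v_esc/1000:.1f} km/s")
```

The circular speed works out to about 7.8 km/s (roughly 25,000 ft/sec), consistent with the threshold quoted above.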

Satellites use a wide variety of orbits to fulfill their missions. The orbit chosen for a satellite is a compromise between the mission requirements, the capabilities of the rocket used to launch the satellite, and orbital mechanics.

• The orbital period. This increases with the mean altitude of the orbit, so a satellite in a low earth orbit moves faster than a satellite in a geostationary orbit. Also, the velocity of a satellite in an eccentric orbit varies along the orbit, being fastest at perigee and slowest at apogee (Kepler’s second law of equal areas).
• Inclination. The angle between the plane of the satellite orbit and the equator.
• Eccentricity. A perfectly circular orbit has an eccentricity of zero, an elliptical orbit an eccentricity between 0 and 1, a parabolic orbit an eccentricity of exactly 1, and a hyperbolic orbit an eccentricity greater than 1. The low point of an orbit is known as perigee, whilst the high point is apogee. The major axis is the line connecting the perigee to the apogee.
• The ascending node is where the orbit crosses the equator in a northbound direction (i.e. the direction of the satellite’s motion). Likewise, the descending node is where the orbit crosses the equator in a southbound direction.

Low Earth Orbit:

Weather and spy satellites use polar orbits so that the Earth turns under them once per day, giving total coverage of the Earth’s surface.

Landsat 7 is an earth resources spacecraft which images the earth’s surface in visible and infrared light, so its orbit is optimised for earth observation. For this reason a near-polar orbit (700 km altitude, 98.8° inclination, 98-minute period) is used, which ensures that the satellite can (at least in theory) observe the entire globe. Several other features of this orbit make it especially useful for remote sensing satellites.
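As a quick consistency check (a sketch using assumed standard constants), the quoted ~98-minute period follows directly from Kepler’s 3rd law applied to a circular orbit 700 km up:

```python
import math

# Sketch: the period of a circular orbit at Landsat 7's ~700 km
# altitude, from T = 2*pi*sqrt(a^3 / GM). Constants are standard
# assumed values, not from the notes.

MU = 3.986e14          # m^3/s^2, Earth's GM
R_EARTH = 6.371e6      # m, mean Earth radius
altitude = 700e3       # m

a = R_EARTH + altitude
period_min = 2 * math.pi * math.sqrt(a**3 / MU) / 60
print(f"period ~= {period_min:.1f} minutes")  # close to the quoted ~98 minutes
```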

• Circle of visibility = yellow circle around satellite indicating the region of the earth visible from the satellite.
• Part of orbit in sunlight = yellow.
• Part of orbit in shadow = red.
• Dayside of earth = light blue.
• Nightside of earth = dark blue, after the terminator three lines indicate the boundaries of civil, nautical & astronomical twilight.

General view of Landsat 7 orbit.

Left: View perpendicular to plane of orbit
Right: View of orbit from ascending node

In theory an orbit should remain fixed in space whilst the earth rotates beneath the satellite. In reality the earth is slightly bulged, and the effect of this bulge is to shift the point of perigee and the ascending node for any orbit which has an inclination other than 90°. This effect is known as nodal regression, the result of which is that the plane of the orbit rotates or precesses.

Ground tracks. Red dots along the ground track show the position of the satellite at regular intervals. Closely spaced dots indicate slow speed, widely spaced dots high speed.

However, this effect is used to advantage here to shift the orbit at exactly the same rate as the daily change in position of the sun over any point of the earth. So the satellite always passes over the earth on the sunlit part of its orbit at the same local time of day (for example at 9 am local time). This ensures that lighting conditions are similar (ignoring seasonal differences) for images taken of the same spot on the earth at different times. Additionally the orbit is resonant with the rotation period of the earth, meaning that the satellite passes over the same point on the earth at the same time of day at regular intervals (which may be daily or every 2 or more days depending on the resonance). In the case of Landsat there are 14.5 orbits per day or 29 orbits every 2 days.

Geosynchronous Orbits (GEO):

Communication satellites use geosynchronous orbits for continuous coverage of one region of the globe, i.e. the orbital period is exactly one day. This turns out to be approximately 22,300 miles up.

A geosynchronous orbit is an orbit which has an orbital period close to that of the earth’s rotation. A geostationary orbit is a special case of the geosynchronous orbit where the inclination is 0° and the period is equal to the rotation period of the earth (approx. 1436 minutes), corresponding to a circular orbit of approx. 35,700 km altitude. A satellite in this orbit appears essentially stationary in the sky, which is why this orbit is used extensively for telecommunications and weather satellites. In reality, lunar and solar gravitational influences perturb the satellite’s orbit, so that through the day the satellite’s position shifts slightly.
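The ~35,700 km altitude can be sketched from Kepler’s 3rd law by solving for the orbit whose period equals one sidereal day (the constants below are standard assumed values, not from the text):

```python
import math

# Sketch: geostationary altitude from Kepler's 3rd law, setting the
# orbital period equal to Earth's sidereal rotation period and solving
# a = (GM * T^2 / (4*pi^2))^(1/3). Constants are assumed values.

MU = 3.986e14            # m^3/s^2, Earth's GM
T = 1436 * 60            # s, sidereal day (~1436 minutes)
R_EARTH = 6.371e6        # m, mean Earth radius

a = (MU * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude_km = (a - R_EARTH) / 1000
print(f"geostationary altitude ~= {altitude_km:.0f} km")  # ~35,800 km
```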

Below is shown the orbit of the TDRS-7 satellite, one of a series of NASA satellites which used to provide a near-continuous communications link with the Space Shuttle, the International Space Station and other spacecraft such as the Hubble Space Telescope.

General view of TDRS-7 orbit

View of orbit from ascending node

Compared with the LEO orbit of Mir, a much larger portion of the earth’s surface is visible from the TDRS-7 spacecraft. The zone of visibility of the spacecraft has been highlighted by a cone. Approximately 40% of the earth’s surface can be viewed at any one time from geostationary altitude. Additionally, the spacecraft orbit is in sunlight apart from a small zone which passes into the earth’s shadow. Actually, geostationary satellites only experience eclipses at two periods of the year, for a few weeks at a time at the spring and autumn equinoxes. The reason for this is simple. The earth’s rotation axis is inclined with respect to the ecliptic, hence the earth’s shadow cone misses the plane of a zero-inclination geostationary orbit apart from the times when the sun’s declination is close to zero. This occurs twice a year, once at the spring equinox and once at the autumn equinox.

Ground tracks. Red dots along the ground track show the position of the satellite at regular intervals. Closely spaced dots indicate slow speed, widely spaced dots high speed.

As can be seen from this graphic, a perfectly geostationary satellite stays over the same spot on the equator all day. However, if we were to look closely we would see that the satellite does appear to change position, generally describing a small figure of 8 or an arc due to the effect of lunar/solar perturbations dragging the satellite into a slightly elliptical, slightly inclined orbit. There are many non-operational satellites in “graveyard” orbits slightly above or below a true geostationary orbit. Since their orbital period is slightly more or less than the earth’s rotation period, these satellites appear to drift slowly around the earth.

# Anti-de Sitter/Conformal Field Theory

The AdS/CFT correspondence is one of the largest areas of research in string theory. AdS/CFT stands for Anti-de Sitter/Conformal Field Theory, an expression that’s not particularly elucidating.

AdS/CFT is a particular, and deeply surprising, example of a duality. It relates two very different theories and at first sight seems obviously wrong. It states that there is a duality between theories of gravity in five dimensions and quantum field theories (QFTs) in four dimensions. This correspondence was first formulated by Juan Maldacena in 1997, and is generally thought to be the single most important result in string theory in the last twenty years.

The original example of AdS/CFT linked two very special theories. The gravitational side involved a particular extension of gravity (type IIB supergravity) on a particular geometry (5-dimensional Anti-de-Sitter space). The QFT was the unique theory with the largest possible amount of supersymmetry. There’s a specific dictionary that translates between the theories.

This relationship has no formal mathematical proof. However a very large number of checks have been performed. These checks involve two calculations, using different techniques and methods, of quantities related by the dictionary. Continual agreement of these calculations constitutes strong evidence for the correspondence.

The first example has by now been extended to many other cases, and AdS/CFT is more generally referred to as the gauge-gravity correspondence. Formally this is the statement that gravitational theories in (N+1) dimensions can be entirely and completely equivalent to non-gravitational quantum field theories in N dimensions.

The AdS/CFT correspondence has a very useful property. When the gravitational theory is hard to solve, the QFT is easy to solve, and vice-versa! This opens the door to previously intractable problems in QFT through simple calculations in gravity theories.

Moreover AdS/CFT allows a conceptual reworking of the classic problems of general relativity. Indeed if general relativity can be equivalent to a QFT, then neither one is deeper than the other. Finally physicists can use it to develop new intuitions for both QFT and general relativity.

# Nuclear Fission/Fusion And Anti-Matter

Fission/Fusion:

• since quantum events do not have a “cause”, all possible quantum events must and will happen
• without cause and effect, conservation laws can be violated, although only on very short timescales (things have to add up in the end)
• violation of mass/energy conservation allowed for the understanding of the source of nuclear power in the Universe, fission and fusion

One of the surprising results of quantum physics is that if a physical event is not specifically forbidden by a quantum rule, then it can and will happen. While this may seem strange, it is a direct result of the uncertainty principle. Things that are strict laws in the macroscopic world, such as the conservation of mass and energy, can be broken in the quantum world, with the caveat that they can only be broken for very small intervals of time (less than a Planck time). The violation of conservation laws led to one of the greatest breakthroughs of the early 20th century: the understanding of radioactive decay (fission) and the source of the power in stars (fusion).

Nuclear fission is the breakdown of large atomic nuclei into smaller elements. This can happen spontaneously (radioactive decay) or be induced by the collision with a free neutron. Spontaneous fission is due to the fact that the wave function of a large nucleus is ‘fuzzier’ than the wave function of a small particle like the alpha particle. The uncertainty principle states that, sometimes, an alpha particle (2 protons and 2 neutrons) can tunnel outside the nucleus and escape.

• fission is the splitting of atomic nuclei, either spontaneously or by collision (induced)
• fusion is the merger of atomic particles to form new particles

Induced fission occurs when a free neutron strikes a nucleus and deforms it. Under classical physics, the nucleus would just reform. However, under quantum physics there is a finite probability that the deformed nucleus will tunnel into two new nuclei and release some neutrons in the process, producing a chain reaction.

Fusion is the production of heavier elements by the fusing of lighter elements. The process requires high temperatures in order to produce sufficiently high velocities for the two light elements to overcome each other’s electrostatic barriers.

• quantum tunneling and uncertainty are required for these processes
• quantum physics, even though centered on probabilities, is our most accurate science in its predictions

Even at the high temperatures in the center of a star, fusion requires the quantum tunneling of a neutron or proton to overcome the repulsive electrostatic forces of the atomic nuclei. Notice that both fission and fusion release energy by converting some of the nuclear mass into gamma-rays; this is the famous formulation by Einstein, E=mc². Although it deals with probabilities and uncertainties, quantum mechanics has been spectacularly successful in explaining otherwise inaccessible atomic phenomena and in meeting every experimental test. Its predictions are the most precise and the best checked of any in physics; some of them have been tested and found accurate to better than one part per billion.
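As a sketch of the E=mc² bookkeeping (the particle masses below are standard values, not from the text), hydrogen fusion converts roughly 0.7% of the input mass into energy:

```python
# Sketch of E = mc^2 in hydrogen fusion: four protons fuse into one
# helium-4 nucleus, and the "missing" fraction of the mass is released
# as energy. Masses are standard assumed values in kilograms.

C = 2.998e8            # m/s, speed of light
M_PROTON = 1.6726e-27  # kg
M_HELIUM4 = 6.6447e-27 # kg, helium-4 nucleus

mass_in = 4 * M_PROTON
mass_defect = mass_in - M_HELIUM4      # mass that disappears
energy_joules = mass_defect * C**2     # ... and reappears as energy
fraction = mass_defect / mass_in

print(f"mass converted: {fraction*100:.2f}% -> {energy_joules:.2e} J per He-4")
```

About 0.7% of the mass vanishes per helium nucleus formed, which is the energy budget that powers stars.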

Antimatter:

• symmetry in quantum physics led to the prediction of opposite matter, or antimatter
• matter and antimatter can combine to form pure energy, and the opposite is true: energy can combine to form matter/antimatter pairs

A combination of quantum mechanics and relativity allows us to examine subatomic processes in a new light. Symmetry is very important to physical theories. Thus, the existence of a type of `opposite’ matter was hypothesized soon after the development of quantum physics. `Opposite’ matter is called antimatter. Particles of antimatter have the same mass and characteristics of regular matter, but opposite charge. When matter and antimatter come in contact they are both instantaneously converted into pure energy, in the form of photons.

Antimatter is produced all the time by the collision of high energy photons, a process called pair production, where an electron and its antimatter twin (the positron) are created from energy (E=mc²). A typical spacetime diagram of pair production looks like the following:

• spacetime diagrams provide a backwards-time interpretation for antimatter: symmetry in space and time

Positrons only survive for a short time since they are attracted to other electrons and disintegrate. Since quantum mechanics states that conservation of energy, time and space can be violated, another way of looking at pair production is to state that the positron does not exist, but rather is an electron traveling backwards in time. Since it is going backwards in time, its charge would be reversed and its spacetime diagram would look like the following:

• the quantum world leads to new ways of looking at existence and reality

In this interpretation, the collision of an electron and two photons causes the electron to go backward in time until it meets another pair of photons, then reverses itself again. The world of quantum physics allows for many such strange views of subatomic interactions.

# Superposition and Schrodinger’s Equation+Cat

Quantum Mechanics:

• quantum mechanics is to the microscopic world what classical mechanics and calculus are to the macroscopic world
• it is the operational process of calculating quantum physics phenomena
• its primary task is to bring order and prediction to the uncertainty of the quantum world; its main tool is Schrodinger’s equation

The field of quantum mechanics concerns the description of phenomena on small scales where classical physics breaks down. The biggest difference between the classical and microscopic realms is that the quantum world cannot be perceived directly, but rather through the use of instruments. A key assumption of quantum physics is that quantum mechanical principles must reduce to Newtonian principles at the macroscopic level (there is a continuity between quantum and Newtonian mechanics).

Quantum mechanics was capable of bringing order to the uncertainty of the microscopic world by treatment of the wave function with new mathematics. Key to this idea was the fact that relative probabilities of different possible states are still determined by laws. Thus, there is a difference between the role of chance in quantum mechanics and the unrestricted chaos of a lawless Universe.

Every quantum particle is characterized by a wave function. In 1925 Erwin Schrodinger developed the differential equation which describes the evolution of those wave functions. By using Schrodinger’s equation, scientists can find the wave function which solves a particular problem in quantum mechanics. Unfortunately, it is usually impossible to find an exact solution to the equation, so certain assumptions are used in order to obtain an approximate answer for the particular problem.
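For a concrete case where Schrodinger’s equation does solve exactly (a standard textbook example, not from these notes), consider a particle confined to a one-dimensional box of width L; the allowed energies come out quantized as E_n = n²h²/(8mL²). The box width below is an assumed round number:

```python
import math

# Sketch: energy levels of a particle in a 1-D box, the textbook
# exact solution of Schrodinger's equation. E_n = n^2 h^2 / (8 m L^2).
# Constants are standard assumed values.

H = 6.626e-34          # J*s, Planck's constant
M_ELECTRON = 9.109e-31 # kg, electron mass
L = 1e-9               # m, a 1-nanometre box (assumed width)

def energy_level(n):
    """Energy of the n-th stationary state, in joules."""
    return n**2 * H**2 / (8 * M_ELECTRON * L**2)

for n in (1, 2, 3):
    print(f"E_{n} = {energy_level(n):.2e} J")
```

The discrete, n²-spaced levels illustrate what “solving for the wave function” buys you: the allowed states of the system.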

• the key difference between quantum and classical mechanics is the role of probability and chance
• quantum objects are described by probability fields; however, this does not mean they are indeterminate, only uncertain

The difference between quantum mechanics and Newtonian mechanics is the role of probability and statistics. While the uncertainty principle means that quantum objects have to be described by probability fields, this does not mean that the microscopic world fails to conform to deterministic laws. In fact it does. Measurement is an act by which the measurer and the measured interact to produce a result, although this is not simply the determination of a preexisting property.

The quantum description of reality is objective (weak form) in the sense that everyone armed with a quantum physics education can do the same experiments and come to the same conclusions. Strong objectivity, as in classical physics, requires that the picture of the world yielded by the sum total of all experimental results be not just a picture or model, but identical with the objective world, something that exists outside of us and prior to any measurement we might have of it. Quantum physics does not have this characteristic due to its built-in indeterminacy. For centuries, scientists have gotten used to the idea that something like strong objectivity is the foundation of knowledge, so much so that we have come to believe that it is an essential part of the scientific method, and that without this most solid kind of objectivity science would be pointless and arbitrary. However, the Copenhagen interpretation of quantum physics (see below) denies that there is any such thing as a true and unambiguous reality at the bottom of everything. Reality is what you measure it to be, and no more.

No matter how uncomfortable science is with this viewpoint, quantum physics is extremely accurate and is the foundation of modern physics (perhaps then an objective view of reality is not essential to the conduct of physics). Concepts such as cause and effect survive only as a consequence of the collective behavior of large quantum systems.

Schrodinger’s Cat and Quantum Reality:

• an example of the weirdness of the quantum world is given by the famous Schrodinger cat paradox

In 1935 Schrodinger, who was responsible for formulating much of the wave mechanics in quantum physics, published an essay describing the conceptual problems in quantum mechanics. A brief paragraph in this essay described the now famous cat paradox.
• the paradox is phrased such that a quantum event determines if a cat is killed or not
• from a quantum perspective, the whole system state is tied to the wave function of the quantum event, i.e. the cat is both dead and alive at the same time

One can even set up quite ridiculous cases where quantum physics rebels against common sense. For example, a cat is penned up in a steel chamber, along with the following diabolical device (which must be secured against direct interference by the cat). In the device is a Geiger counter with a tiny bit of radioactive substance, so small that perhaps in the course of one hour only one of the atoms decays, but also, with equal probability, perhaps none. If the decay happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The first atomic decay would have poisoned it. The wave function for the entire system would express this by having in it the living and the dead cat mixed or smeared out in equal parts.

• the paradox in some sense is not a paradox, but instead points out the tension between the microscopic and macroscopic worlds and the importance of the observer in a quantum scenario
• quantum objects exist in superposition (many states), as shown by interference
• the observer collapses the wave function

It is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation. That prevents us from so naively accepting as valid a “blurred model” for representing reality. In itself it would not embody anything unclear or contradictory. There is a difference between a shaky or out-of-focus photograph and a snapshot of clouds and fog banks. We know that superpositions of possible outcomes must exist simultaneously at a microscopic level because we can observe interference effects from them. We know (at least most of us know) that the cat in the box is dead, alive or dying, and not in a smeared-out state between the alternatives. When and how does the model of many microscopic possibilities resolve itself into a particular macroscopic state? When and how does the fog bank of microscopic possibilities transform itself into the picture we have of a definite macroscopic state? That is the collapse of the wave function problem, and Schrodinger’s cat is a simple and elegant illustration of that problem.

Macroscopic/Microscopic World Interface:

• events in the microscopic world can happen *without* cause = indeterminacy
• phenomena such as tunneling show that quantum physics leaks into the macroscopic world

The macroscopic world is Newtonian and deterministic for local events (note, however, that even the macroscopic world suffers from chaos). On the other hand, the radical indeterminacy of the microscopic quantum world limits any certainty surrounding the unfolding of physical events. Many things in the Newtonian world are unpredictable since we can never obtain all the factors affecting a physical system. But quantum theory is much more unsettling in that events often happen without cause (e.g. radioactive decay).

Note that the indeterminacy of the microscopic world has little effect on macroscopic objects. This is due to the fact that the wave function for large objects is extremely small compared to the size of the macroscopic world. Your personal wave function is much smaller than any currently measurable sizes. And the indeterminacy of the quantum world is not complete, because it is possible to assign probabilities to the wave function.

But, as the Schrodinger’s Cat paradox shows us, the probability rules of the microscopic world can leak into the macroscopic world. The paradox of Schrodinger’s cat has provoked a great deal of debate among theoretical physicists and philosophers. Although some thinkers have argued that the cat actually does exist in two superposed states, most contend that superposition only occurs when a quantum system is isolated from the rest of its environment. Various explanations have been advanced to account for this paradox, including the idea that the cat, or simply the animal’s physical environment (such as the photons in the box), can act as an observer. The question is: at what point, or scale, do the probabilistic rules of the quantum realm give way to the deterministic laws that govern the macroscopic world?
This question has been brought into vivid relief by the recent work where an NIST group confined a charged beryllium atom in a tiny electromagnetic cage and then cooled it with a laser to its lowest energy state. In this state the position of the atom and its “spin” (a quantum property that is only metaphorically analogous to spin in the ordinary sense) could be ascertained to within a very high degree of accuracy, limited by Heisenberg’s uncertainty principle.

• decoherence prevents a macroscopic Schrodinger cat paradox
• new technology allows the manipulation of objects at the quantum level
• future research will investigate areas such as quantum teleportation and quantum computing

The workers then stimulated the atom with a laser just enough to change its wave function; according to the new wave function of the atom, it now had a 50 percent probability of being in a “spin-up” state in its initial position and an equal probability of being in a “spin-down” state in a position as much as 80 nanometers away, a vast distance indeed for the atomic realm. In effect, the atom was in two different places, as well as two different spin states, at the same time, an atomic analog of a cat both living and dead. The clinching evidence that the NIST researchers had achieved their goal came from their observation of an interference pattern; that phenomenon is a telltale sign that a single beryllium atom produced two distinct wave functions that interfered with each other.

The modern view of quantum mechanics states that Schrodinger’s cat, or any macroscopic object, does not exist as a superposition of states due to decoherence. A pristine wave function is coherent, i.e. undisturbed by observation. But Schrodinger’s cat is not a pristine wave function; it is constantly interacting with other objects, such as air molecules in the box, or the box itself. Thus a macroscopic object becomes decoherent through many atomic interactions with its surrounding environment. Decoherence explains why we do not routinely see quantum superpositions in the world around us. It is not because quantum mechanics intrinsically stops working for objects larger than some magic size. Instead, macroscopic objects such as cats and cards are almost impossible to keep isolated to the extent needed to prevent decoherence. Microscopic objects, in contrast, are more easily isolated from their surroundings so that they retain their quantum secrets and quantum behavior.

# Uncertainty Principle

• the uncertainty principle states that the position and velocity cannot both be measured, exactly, at the same time (the same holds for other pairs, such as energy and time)
• the uncertainty principle derives from the measurement problem, the intimate connection between the wave and particle nature of quantum objects
• the change in velocity of a particle becomes more ill-defined as the wave function is confined to a smaller region

Classical physics was on loose footing with problems of wave/particle duality, but was caught completely off-guard with the discovery of the uncertainty principle. The uncertainty principle, also called the Heisenberg Uncertainty Principle or Indeterminacy Principle, articulated (1927) by the German physicist Werner Heisenberg, states that the position and the velocity of an object cannot both be measured exactly at the same time, even in theory. The very concepts of exact position and exact velocity together, in fact, have no meaning in nature.

Ordinary experience provides no clue of this principle. It is easy to measure both the position and the velocity of, say, an automobile, because the uncertainties implied by this principle for ordinary objects are too small to be observed. The complete rule stipulates that the product of the uncertainties in position and velocity is equal to or greater than a tiny physical quantity related to h, Planck’s constant (about 10^-34 joule-second). Only for the exceedingly small masses of atoms and subatomic particles does the product of the uncertainties become significant. Any attempt to measure precisely the velocity of a subatomic particle, such as an electron, will knock it about in an unpredictable way, so that a simultaneous measurement of its position has no validity.

This result has nothing to do with inadequacies in the measuring instruments, the technique, or the observer; it arises out of the intimate connection in nature between particles and waves in the realm of subatomic dimensions. Every particle has a wave associated with it; each particle actually exhibits wavelike behavior. The particle is most likely to be found in those places where the undulations of the wave are greatest, or most intense. The more intense the undulations of the associated wave become, however, the more ill defined becomes the wavelength, which in turn determines the momentum of the particle. So a strictly localized wave has an indeterminate wavelength; its associated particle, while having a definite position, has no certain velocity. A particle wave having a well-defined wavelength, on the other hand, is spread out; the associated particle, while having a rather precise velocity, may be almost anywhere. A quite accurate measurement of one observable involves a relatively large uncertainty in the measurement of the other.

The uncertainty principle is alternatively expressed in terms of a particle’s momentum and position. The momentum of a particle is equal to the product of its mass times its velocity. Thus, the product of the uncertainties in the momentum and the position of a particle equals h/(2π) or more. The principle applies to other related (conjugate) pairs of observables, such as energy and time: the product of the uncertainty in an energy measurement and the uncertainty in the time interval during which the measurement is made also equals h/(2π) or more. The same relation holds, for an unstable atom or nucleus, between the uncertainty in the quantity of energy radiated and the uncertainty in the lifetime of the unstable system as it makes a transition to a more stable state.
• the wave nature of particles means a particle is a wave packet, the composite of many waves
• many waves = many momenta; observation picks one momentum out of the many
• exact knowledge of complementarity pairs (position, energy, time) is impossible

The uncertainty principle, developed by W. Heisenberg, is a statement of the effects of wave-particle duality on the properties of subatomic objects. Consider the concept of momentum in the wave-like microscopic world. The momentum of a wave is given by its wavelength. A wave packet like a photon or electron is a composite of many waves. Therefore, it must be made of many momenta. But how can an object have many momenta? Of course, once a measurement of the particle is made, a single momentum is observed. But, like fuzzy position, momentum before the observation is intrinsically uncertain. This is what is known as the uncertainty principle: certain quantities, such as position, energy and time, are unknown, except by probabilities. In its purest form, the uncertainty principle states that accurate knowledge of complementarity pairs is impossible. For example, you can measure the location of an electron, but not its momentum (energy) at the same time.

• complementarity also means that different experiments yield different results (e.g. the two-slit experiment); therefore, a single reality cannot be applied at the quantum level

A characteristic feature of quantum physics is the principle of complementarity, which "implies the impossibility of any sharp separation between the behavior of atomic objects and the interaction with the measuring instruments which serve to define the conditions under which the phenomena appear." As a result, "evidence obtained under different experimental conditions cannot be comprehended within a single picture, but must be regarded as complementary in the sense that only the totality of the phenomena exhausts the possible information about the objects." This interpretation of the meaning of quantum physics, which implied an altered view of the meaning of physical explanation, gradually came to be accepted by the majority of physicists during the 1930's. Mathematically we describe the uncertainty principle as the following, where `x' is position and `p' is momentum:

Δx · Δp ≥ h/(2π)

• the mathematical form of the uncertainty principle relates complementarity pairs to Planck's constant
• knowledge is not unlimited; a built-in indeterminacy exists, but only in the microscopic world; all collapses to determinism in the macroscopic world

This is perhaps the most famous equation next to E=mc² in physics. It basically says that the combination of the error in position times the error in momentum must always be greater than Planck's constant. So, you can measure the position of an electron to some accuracy, but then its momentum will be inside a very large range of values. Likewise, you can measure the momentum precisely, but then its position is unknown. Notice that this is not the measurement problem in another form: the combination of position, energy (momentum) and time are actually undefined for a quantum particle until a measurement is made (then the wave function collapses). Also notice that the uncertainty principle is unimportant to macroscopic objects since Planck's constant, h, is so small (10^-34). For example, the uncertainty in position of a thrown baseball is 10^-30 millimeters. The depth of the uncertainty principle is realized when we ask the question: is our knowledge of reality unlimited? The answer is no, because the uncertainty principle states that there is a built-in uncertainty, indeterminacy, unpredictability to Nature.
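The baseball-versus-electron comparison above can be checked numerically. A minimal sketch, using the common textbook form Δx·Δp ≥ h/(4π); the masses and values here are standard constants, not figures from these notes:

```python
# Rough uncertainty-principle estimate: delta_x * delta_p >= h / (4 * pi).
# Given an uncertainty in velocity, compute the smallest possible
# uncertainty in position for an electron versus a baseball.
import math

h = 6.626e-34  # Planck's constant, J*s

def min_position_uncertainty(mass_kg, velocity_uncertainty_ms):
    """Smallest position uncertainty allowed for a given velocity uncertainty."""
    delta_p = mass_kg * velocity_uncertainty_ms
    return h / (4 * math.pi * delta_p)

electron = min_position_uncertainty(9.11e-31, 1.0)  # ~1e-4 m: significant
baseball = min_position_uncertainty(0.145, 1.0)     # ~1e-34 m: utterly negligible
print(f"electron: {electron:.1e} m, baseball: {baseball:.1e} m")
```

The electron's position blurs over a measurable fraction of a millimeter, while the baseball's is some twenty orders of magnitude below anything observable, which is why the principle never intrudes on everyday life.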

``` It is often stated that of all the theories proposed in this century, the silliest is quantum theory. Some say the only thing that quantum theory has going for it, in fact, is that it is unquestionably correct. - R. Feynman ```

# Atom and Wave Particle Duality

Bohr Atom:

• classical physics fails to describe the properties of atoms; Planck's constant served to bridge the gap between the classical world and the new physics
• Bohr proposed a quantized shell model for the atom using the same basic structure as Rutherford, but restricting the behavior of electrons to quantized orbits

Perhaps the foremost scientist of the 20th century was Niels Bohr, the first to apply Planck's quantum idea to problems in atomic physics. In the early 1900's, Bohr proposed a quantum mechanical description of the atom to replace the early model of Rutherford. In 1913 Bohr proposed his quantized shell model of the atom to explain how electrons can have stable orbits around the nucleus. The motion of the electrons in the Rutherford model was unstable because, according to classical mechanics and electromagnetic theory, any charged particle moving on a curved path emits electromagnetic radiation; thus, the electrons would lose energy and spiral into the nucleus. To remedy the stability problem, Bohr modified the Rutherford model by requiring that the electrons move in orbits of fixed size and energy. The energy of an electron depends on the size of the orbit and is lower for smaller orbits. Radiation can occur only when the electron jumps from one orbit to another. The atom will be completely stable in the state with the smallest orbit, since there is no orbit of lower energy into which the electron can jump. Bohr's starting point was to realize that classical mechanics by itself could never explain the atom's stability. A stable atom has a certain size so that any equation describing it must contain some fundamental constant or combination of constants with a dimension of length. The classical fundamental constants–namely, the charges and the masses of the electron and the nucleus–cannot be combined to make a length.
Bohr noticed, however, that the quantum constant formulated by the German physicist Max Planck has dimensions which, when combined with the mass and charge of the electron, produce a measure of length. Numerically, the measure is close to the known size of atoms. This encouraged Bohr to use Planck’s constant in searching for a theory of the atom.

• Bohr's calculations produce an accurate map of the hydrogen atom energy levels
• changes in electron orbits require the release or gain of energy in the form of photons
• Bohr's atom perfectly explains the spectra in stars as gaps due to the absorption of photons of particular wavelengths that match the electron orbits of the various elements
• larger formulations explain all the properties outlined by Kirchhoff's laws

Planck had introduced his constant in 1900 in a formula explaining the light radiation emitted from heated bodies. According to classical theory, comparable amounts of light energy should be produced at all frequencies. This is not only contrary to observation but also implies the absurd result that the total energy radiated by a heated body should be infinite. Planck postulated that energy can only be emitted or absorbed in discrete amounts, which he called quanta (the Latin word for "how much"). The energy quantum is related to the frequency of the light by a new fundamental constant, h. When a body is heated, its radiant energy in a particular frequency range is, according to classical theory, proportional to the temperature of the body. With Planck's hypothesis, however, the radiation can occur only in quantum amounts of energy. If the radiant energy is less than the quantum of energy, the amount of light in that frequency range will be reduced. Planck's formula correctly describes radiation from heated bodies. Planck's constant has the dimensions of action, which may be expressed as units of energy multiplied by time, units of momentum multiplied by length, or units of angular momentum. For example, Planck's constant can be written as h = 6.6×10^-34 joule seconds. Using Planck's constant, Bohr obtained an accurate formula for the energy levels of the hydrogen atom. He postulated that the angular momentum of the electron is quantized–i.e., it can have only discrete values.
He assumed that otherwise electrons obey the laws of classical mechanics by traveling around the nucleus in circular orbits. Because of the quantization, the electron orbits have fixed sizes and energies. The orbits are labeled by an integer, the quantum number n. With his model, Bohr explained how electrons could jump from one orbit to another only by emitting or absorbing energy in fixed quanta. For example, if an electron jumps one orbit closer to the nucleus, it must emit energy equal to the difference of the energies of the two orbits. Conversely, when the electron jumps to a larger orbit, it must absorb a quantum of light equal in energy to the difference in orbits. The Bohr model basically assigned discrete orbits for the electron, multiples of Planck’s constant, rather than allowing a continuum of energies as allowed by classical physics. The power in the Bohr model was its ability to predict the spectra of light emitted by atoms. In particular, its ability to explain the spectral lines of atoms as the absorption and emission of photons by the electrons in quantized orbits.
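The jump-between-orbits picture can be put into numbers. A short sketch, assuming the standard hydrogen formula E_n = -13.6 eV/n² (the 13.6 eV value is a textbook constant, not stated in these notes):

```python
# Bohr-model energy levels for hydrogen: E_n = -13.6 eV / n^2.
# A jump between orbits emits or absorbs a photon carrying the energy difference.
def energy_level(n):
    """Energy of the nth Bohr orbit, in electron-volts."""
    return -13.6 / n**2

def photon_wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted when the electron drops between orbits."""
    delta_e = energy_level(n_upper) - energy_level(n_lower)  # eV, positive
    hc = 1239.84  # eV*nm, Planck's constant times the speed of light
    return hc / delta_e

# The n=3 -> n=2 jump gives the red H-alpha line prominent in stellar spectra.
print(photon_wavelength_nm(3, 2))  # ~656 nm
```

The same function reproduces the whole Balmer series, which is exactly the "fingerprint" of hydrogen seen as absorption gaps in stellar spectra.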

• Heisenberg and Schroedinger formalize Bohr's model and produce quantum mechanics
• quantum mechanics is an all-encompassing science that crosses over into many fields

Our current understanding of atomic structure was formalized by Heisenberg and Schroedinger in the mid-1920's, where the discreteness of the allowed energy states emerges from more general aspects, rather than being imposed as in Bohr's model. The Heisenberg/Schroedinger quantum mechanics has consistent fundamental principles, such as the wave character of matter and the incorporation of the uncertainty principle. In principle, all of atomic and molecular physics, including the structure of atoms and their dynamics, the periodic table of elements and their chemical behavior, as well as the spectroscopic, electrical, and other physical properties of atoms and molecules, can be accounted for by quantum mechanics => a fundamental science.

de Broglie Matter Waves:

• early quantum physics did not ask the question of `why' quantum effects are found in the microscopic world

Perhaps one of the key questions, once Bohr offered his quantized orbits as an explanation to the UV catastrophe and spectral lines, is: why does an electron follow quantized orbits? The response to this question arrived from the Ph.D. thesis of Louis de Broglie in 1923. de Broglie argued that since light can display wave and particle properties, then perhaps matter can also behave as both a particle and a wave.
One way of thinking of a matter wave (or a photon) is to think of a wave packet. Normal waves look like this:
 having no beginning and no end. A composition of several waves of different wavelength can produce a wave packet that looks like this:
• the wave packet interpretation requires the particle to have no set position
• the momentum of a particle is inversely related to the wavelength of the particle

So a photon, or a free-moving electron, can be thought of as a wave packet, having both wave-like properties and also the single position and size we associate with a particle. There are some slight problems, such as the fact that the wave packet doesn't really stop at a finite distance from its peak; it goes on forever and ever. Does this mean an electron exists at all places in its trajectory? de Broglie also produced a simple formula showing that the wavelength of a matter particle is related to the momentum of the particle. So energy is also connected to the wave property of matter.
• only certain wavelengths will fit into an orbit, so quantization is due to the wavelike nature of particles

Lastly, the wave nature of the electron makes for an elegant explanation of quantized orbits around the atom. Consider what a wave looks like around an orbit, as shown below. The electron matter wave is both finite and unbounded (remember the 1st lecture on math). But only certain wavelengths will `fit' into an orbit. If the wavelength is longer or shorter, then the ends do not connect. Thus, de Broglie explains the Bohr atom in that only certain orbits can exist to match the natural wavelength of the electron. If an electron is in some sense a wave, then in order to fit into an orbit around a nucleus, the size of the orbit must correspond to a whole number of wavelengths.
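The whole-number-of-wavelengths condition (2πr = nλ, with λ = h/mv) can be illustrated numerically. A sketch under standard textbook values; the electron speed and Bohr radius used here are assumed constants, not figures given in the text:

```python
# de Broglie: lambda = h / p = h / (m * v). Only orbits whose circumference
# is a whole number of wavelengths (2 * pi * r = n * lambda) are allowed.
import math

h = 6.626e-34    # Planck's constant, J*s
m_e = 9.109e-31  # electron mass, kg

def de_broglie_wavelength(mass_kg, velocity_ms):
    return h / (mass_kg * velocity_ms)

# Electron in hydrogen's ground state (speed ~2.2e6 m/s, a standard value):
lam = de_broglie_wavelength(m_e, 2.2e6)  # ~3.3e-10 m
bohr_radius = 5.29e-11                   # m, the smallest Bohr orbit
n_waves = 2 * math.pi * bohr_radius / lam
print(f"wavelength: {lam:.2e} m, waves around orbit: {n_waves:.2f}")
```

The ground-state circumference holds almost exactly one electron wavelength, which is de Broglie's explanation for why that orbit, and no smaller one, is allowed.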
• wavelike nature also means that a particle's existence is spread out, a probability field

Notice also that this means the electron does not exist at one single spot in its orbit; it has a wave nature and exists at all places in the allowed orbit. Thus, a physicist speaks of allowed orbits and allowed transitions to produce particular photons (that make up the fingerprint pattern of spectral lines). And the Bohr atom really looks like the following diagram:

• the idea of atoms being solid billiard-ball type objects fails with quantum physics
• quantum effects fade on larger scales since macroscopic objects have high momentum values and therefore small wavelengths

While de Broglie waves were difficult to accept after centuries of thinking of particles as solid things with definite size and positions, electron waves were confirmed in the laboratory by running electron beams through slits and demonstrating that interference patterns formed. How does the de Broglie idea fit into the macroscopic world? The length of the wave diminishes in proportion to the momentum of the object. So the greater the mass of the object involved, the shorter the waves. The wavelength of a walking person, for example, is around 10^-35 meters, far too short ever to be measured. This is why people don't `tunnel' through chairs when they sit down.

Probability Fields:

• the wave interpretation requires a statistical or probability mathematical description of the position of a particle, where the wave represents the probability of finding the particle at a particular point

The idea that an electron is a wave around the atom, instead of a particle in orbit, begs the question of `where' the electron is at any particular moment. The answer, by experimentation, is that the electron can be anywhere around the atom. But `where' is not evenly distributed. The electron as a wave has a maximum chance of being observed where the wave has the highest amplitude. Thus, the electron has the highest probability to exist at a certain orbit. Whereas probability is often used in physics to describe the behavior of many objects, this is the first instance of an individual object, an electron, being assigned a probability for a Newtonian characteristic such as position. Thus, an accurate description of an electron orbit is one where we have a probability field that surrounds the nucleus, as shown below:

• for higher orbits the probability field becomes distorted

For more complicated orbits, and higher electron shells, the probability field becomes distorted by other electrons and their fields, like the following example:

• the meaning of existence has an elusive nature in the quantum world

Thus, for the first time, the concept of existence begins to take on an elusive character at the subatomic level.

# The Birth Of Quantum Mechanics

• an accelerating electron produces EM radiation (light), loses energy and spirals into the nucleus, i.e. the atom should not work

The UV catastrophe and the dilemma of spectral lines were already serious problems for attempts to understand how light and matter interact. Planck also noticed another fatal flaw in our physics by demonstrating that the electron in orbit around the nucleus accelerates. Acceleration means a changing electric field (the electron has charge), which means photons should be emitted. But, then the electron would lose energy and fall into the nucleus. Therefore, atoms shouldn't exist!

• Planck makes the `quantum' assumption to resolve this problem
• a quantum is a discrete, and smallest, unit of energy
• all forms of energy are transferred in quanta, not continuously

To resolve this problem, Planck made a wild assumption that energy, at the sub-atomic level, can only be transferred in small units, called quanta. Due to his insight, we call this unit Planck's constant (h). The word quantum derives from quantity and refers to a small packet of action or process, the smallest unit of either that can be associated with a single event in the microscopic world. A quantum, in physics, is a discrete natural unit, or packet, of energy, charge, angular momentum, or other physical property. Light, for example, appearing in some respects as a continuous electromagnetic wave, on the submicroscopic level is emitted and absorbed in discrete amounts, or quanta; and for light of a given wavelength, the magnitude of all the quanta emitted or absorbed is the same in both energy and momentum. These particle-like packets of light are called photons, a term also applicable to quanta of other forms of electromagnetic energy such as X rays and gamma rays. All phenomena in submicroscopic systems (the realm of quantum mechanics) exhibit quantization: observable quantities are restricted to a natural set of discrete values. When the values are multiples of a constant least amount, that amount is referred to as a quantum of the observable. Thus Planck's constant h is the quantum of action, and ħ (i.e., h/2π) is the quantum of angular momentum, or spin.
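The size of one energy quantum comes from Planck's relation E = hf. A minimal sketch; the visible-light frequency chosen is an illustrative value:

```python
# Planck's relation: one quantum of light carries energy E = h * f.
h = 6.626e-34  # Planck's constant, J*s

def photon_energy(frequency_hz):
    """Energy of a single photon, in joules."""
    return h * frequency_hz

# Visible light has a frequency around 5e14 Hz. Each quantum is so tiny
# that ordinary light appears perfectly continuous.
print(photon_energy(5e14))  # ~3.3e-19 J
```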

• electron transitions from orbit to orbit must be in discrete quantum jumps
• experiments show that there is no `inbetween' for quantum transitions = a new kind of reality
• despite its strangeness, experiment confirms quantum predictions and resolves the UV catastrophe

Changes of energy, such as the transition of an electron from one orbit to another around the nucleus of an atom, are done in discrete quanta. Quanta are not divisible. The term quantum leap refers to the abrupt movement from one discrete energy level to another, with no smooth transition. There is no "inbetween". The quantization, or "jumpiness," of action as depicted in quantum physics differs sharply from classical physics, which represented motion as smooth, continuous change. Quantization limits the energy to be transferred to photons and resolves the UV catastrophe problem.

Wave-Particle Dualism:

• the wave-like nature of light explains most of its properties: reflection/refraction, diffraction/interference, the Doppler effect
• however, a particle description is suggested by the photoelectric effect, the release of electrons by a beam of energetic blue/UV light
• wavelike descriptions of light fail to explain the lack of a photoelectric effect for red light

The results from spectroscopy (emission and absorption spectra) can only be explained if light has a particle nature, as shown by Bohr's atom and the photon description of light. This dualism in the nature of light is best demonstrated by the photoelectric effect, where a weak UV light produces a current flow (releases electrons) but a strong red light does not release electrons no matter how intense the red light. An unusual phenomenon was discovered in the early 1900's. If a beam of light is pointed at the negative end of a pair of charged plates, a current flow is measured. A current is simply a flow of electrons in a metal, such as a wire. Thus, the beam of light must be liberating electrons from one metal plate, which are attracted to the other plate by electrostatic forces. This results in a current flow. In classical physics, one would expect the current flow to be proportional to the strength of the beam of light (more light = more electrons liberated = more current). However, the observed phenomenon was that the current flow was basically constant with light strength, yet varied strongly with the wavelength of light, such that there was a sharp cutoff and no current flow for long wavelengths. Einstein successfully explained the photoelectric effect within the context of the new physics of the time, quantum physics. In his scientific paper, he showed that light was made of packets of energy quanta called photons. Each photon carries a specific energy related to its wavelength, such that photons of short wavelength (blue light) carry more energy than long wavelength (red light) photons.
To release an electron from a metal plate required a minimal energy which could only be transferred by a photon of energy equal to or greater than that minimal threshold energy (i.e. the wavelength of the light had to be sufficiently short). Each photon of blue light released an electron. But all red photons were too weak. The result is that no matter how much red light was shone on the metal plate, there was no current. The photoelectric effect earned Einstein the Nobel Prize, and introduced the term "photon" of light into our terminology.
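The sharp threshold described above is easy to model: compare the photon energy hc/λ against the metal's minimum escape energy. A sketch, assuming an illustrative work function of about 2.3 eV (roughly that of sodium; the notes do not specify a metal):

```python
# Photoelectric effect: an electron escapes only if the photon energy
# h*c/lambda exceeds the metal's work function. Beam intensity is irrelevant.
hc = 1239.84  # eV*nm, Planck's constant times the speed of light

def electron_released(wavelength_nm, work_function_ev):
    """True if a single photon of this wavelength can free an electron."""
    photon_energy_ev = hc / wavelength_nm
    return photon_energy_ev >= work_function_ev

# Assumed work function of ~2.3 eV:
print(electron_released(400, 2.3))  # blue/violet photon: True
print(electron_released(700, 2.3))  # red photon: False, however intense the beam
```

Making the red beam brighter only sends more under-threshold photons; no single one of them can free an electron, so the current stays zero.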

• the particle and wave properties of light are called wave-particle dualism and continue the strange characteristics of the new science of quantum physics
• wave-particle dualism is extended to matter particles, i.e. electrons act as waves

Einstein explained that light exists in a particle-like state as packets of energy (quanta) called photons. The photoelectric effect occurs because the packet of energy carried by each individual red photon is too weak to knock the electrons off the atoms, no matter how many red photons you beamed onto the cathode. But the individual UV photons were each strong enough to release the electron and cause a current flow. It is one of the strange, but fundamental, concepts in modern physics that light has both a wave and particle state (but not at the same time), called wave-particle dualism. Wave/particle duality is the possession by physical entities (such as light and electrons) of both wavelike and particle-like characteristics. On the basis of experimental evidence, the German physicist Albert Einstein first showed (1905) that light, which had been considered a form of electromagnetic waves, must also be thought of as particle-like, or localized in packets of discrete energy. The French physicist Louis de Broglie proposed (1924) that electrons and other discrete bits of matter, which until then had been conceived only as material particles, also have wave properties such as wavelength and frequency. Later (1927) the wave nature of electrons was experimentally established. An understanding of the complementary relation between the wave aspects and the particle aspects of the same phenomenon was announced in 1928. Dualism is not such a strange concept; consider the following picture: are the swirls moving or not, or both?

# How a Satellite Works

Satellites are very complex machines that require precise mathematical calculations in order for them to function. The satellite has tracking systems and very sophisticated computer systems on board. Accuracy in orbit and speed are required to keep the satellite from crashing back down to Earth. There are several different types of orbits that a satellite can take. Some orbits are stationary and some are elliptical.

### Low Earth Orbit

A satellite is in "Low Earth Orbit" when it circles in an elliptical orbit close to Earth. Satellites in low orbit are just hundreds of miles away. These satellites travel at high speeds that keep gravity from pulling them back to Earth. Low orbit satellites travel at approximately 17,000 miles per hour and circle the Earth in about an hour and a half.
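The quoted speed and period follow from the circular-orbit relations v = √(GM/r) and T = 2πr/v. A sketch using standard values for Earth; the 400 km altitude is an assumed example, not a figure from the text:

```python
# Circular-orbit speed and period: v = sqrt(G*M/r), T = 2*pi*r/v.
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # mass of Earth, kg
R_earth = 6.371e6   # mean radius of Earth, m

def orbit(altitude_m):
    """Return (speed in m/s, period in minutes) for a circular orbit."""
    r = R_earth + altitude_m
    v = math.sqrt(G * M_earth / r)
    period_min = 2 * math.pi * r / v / 60
    return v, period_min

v, t = orbit(400e3)  # ~400 km up, a typical low Earth orbit
print(f"speed: {v * 2.23694:.0f} mph, period: {t:.0f} minutes")
```

This reproduces the figures in the text: roughly 17,000 mph and an orbit of about ninety minutes.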

### Polar Orbit

This is how a satellite travels in a polar orbit. These orbits eventually pass over the entire surface of the Earth.

Polar Orbiting Satellites circle the planet in a north-south direction as Earth spins beneath them in an east-west direction. Polar orbits enable satellites to scan the entire surface of the Earth, like peeling an orange peel in a circular motion from top to bottom. Remote sensing satellites, weather satellites, and government satellites are almost always in polar orbit because of this coverage. Polar orbits cover the Earth's surface thoroughly. The polar orbit occupied by a satellite has a constant location over which it passes. ALL POLAR ORBITING SATELLITES INTERSECT THE NORTH POLE AT THE SAME POINT. While one polar orbit satellite is over America, another polar satellite is passing over the North Pole. So the North Pole has a constant flow of UHF and higher microwaves hitting it. The illustration shows that the common passing point for Polar Orbiting Satellites is over the North Pole.

A polar orbiting satellite will pass over the Earth's equator at a different longitude on each of its orbits; however, polar orbiting satellites pass over the North Pole every time. Polar orbits are often used for earth mapping, earth observation, weather satellites, and reconnaissance satellites. This orbit has a disadvantage: no one spot of the Earth's surface can be sensed continuously from a satellite in a polar orbit.

This is from U.S. Army Information Systems Engineering Command.

“In order to fulfill the military need for protected communication service, especially low probability of intercept/detection (LPI/LPD), to units operating north of 65 degree northern latitude, the space communications architecture includes the polar satellite system capability. An acceptable approach to achieving this goal is to fly a low capacity EHF system in a highly elliptical orbit, either as a hosted payload or as a “free-flyer,” to provide service during a transition period, nominally 1997-2010. A single, hosted EHF payload is already planned. Providing this service 24 hours-a-day requires a two satellite constellation at high earth orbit (HEO). Beyond 2010, the LPI/LPD polar service could continue to be provided by a high elliptical orbit HEO EHF payload, or by the future UHF systems.” (quote from www.fas.org)

## THERE IS A CONSTANT 24 HOUR EHF AND HIGHER MICROWAVE TRANSMISSION PASSING OVER THE NORTH POLE!

### “Geo Synchronous” Orbit

This is how a satellite travels in a “Geo Synchronous” orbit. Equatorial orbits are also called “Geostationary”. These satellites follow the rotation of the Earth.

A satellite in a "Geo Synchronous" orbit hovers over one spot and follows the Earth's spin along the equator. Earth takes 24 hours to spin on its axis. In the illustration you can see that a "Geo Synchronous" orbit follows the equator and never covers the North or South Poles. The footprints of "Geo Synchronous" orbiting satellites do not cover the polar regions, so communication satellites in "Geo Synchronous" orbits cannot be accessed in the northern and southern polar regions.

Because the "Geo Synchronous" satellite does not move from the area that it covers, these satellites are used for telecommunications, GPS tracking, television broadcasting, government, and internet. Because they are stationary, their orbits are much farther from the Earth than those of the polar orbiting satellites; a satellite closer to the Earth must orbit faster than the Earth rotates, and so cannot stay fixed over one spot. They say there are about 300 "Geo Synchronous" satellites in orbit right now. Of course, these are the satellites that the public is allowed to know about, that are not governmentally classified.
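The distance of a geosynchronous orbit follows from requiring the orbital period to equal one day, giving r = (GMT²/4π²)^(1/3). A sketch using standard constants:

```python
# Geostationary altitude: the orbital period must equal one sidereal day,
# so the orbital radius is r = (G * M * T^2 / (4 * pi^2)) ** (1/3).
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
M_earth = 5.972e24  # kg
T = 86164.0         # one sidereal day, seconds
R_earth = 6.371e6   # m

r = (G * M_earth * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_earth) / 1000
print(f"altitude: {altitude_km:.0f} km")  # ~35,786 km above the equator
```

That is roughly a hundred times higher than a typical low Earth orbit, which is why geostationary footprints are so large.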

### Satellite Anatomy

This is the Anatomy of a Satellite.

A satellite is made up of several instruments that work together to operate the satellite during its mission. The illustration to the left shows the parts of a satellite.

The command and data system controls all of the satellite functions. This is a very complex computer system that coordinates all of the satellite flight operations, where the satellite points, and any other mathematical operations.

The pointing control directs the satellite so that it keeps a steady flight path. This system is a complex sensor instrument that keeps the satellite pointing in the same direction. The satellite uses spinning devices called "momentum wheels" that adjust the position of the satellite into its proper place. Scientific observation satellites have more precise pointing systems than do communications satellites.

The communications system has a transmitter, a receiver, and various antennas to transmit data to the Earth. On Earth, ground control sends instructions and data to the satellite's computer through the antenna. Pictures, data, television, radio, and many other kinds of data are sent by the satellite back to practically everyone on Earth.

The power system needed to operate the satellite is an efficient solar panel array that obtains energy from the Sun's rays. Solar arrays make electricity from the sunlight and store the electricity in rechargeable batteries.

The payload is what a satellite needs to perform its job. A weather satellite would have a payload that consists of an image sensor, digital camera, telescope, and other thermal and weather sensing devices.

The thermal control is the protection required to prevent damage to the satellite's instrumentation and components. Satellites are exposed to extreme temperature changes, ranging from 120 degrees below zero to 180 degrees above zero. Heat distribution units and thermal blankets protect the electronics and components from temperature damage.

### Satellite Footprints

Here you can see one footprint covers an enormous area.

Geostationary satellites have a very broad view of Earth. The footprint of one EchoStar broadcast satellite covers almost all of North America. They stay over the Earth at the same location, so we always know where they are. Direct contact with the satellite can be made because equatorial satellites are fixed.

Many communications satellites travel in equatorial orbits, including those that relay TV signals into our homes; the footprint of a single satellite can cover all of North America.

The multipath effect that occurs when satellite transmissions are obstructed by topographical entities also provides insight on microwave global warming. Microwaves are being bombarded upon our planet. Our planet absorbs and obstructs the waves from space. Microwaves penetrate through all of our atmosphere and bounce and echo off of the Earth. Imagine the footprint overlaps being produced by the thousands of satellites in orbit right now.

Here you can see the overlapping footprints that satellites make. Each satellite covers an enormous area.

The closer the satellite is to an object, the more power is exerted on that object. The farther the waves have to travel, the less power they will have. Because the atmosphere is so much closer to the satellite than the ground, a stronger beam of energy passes through the clouds and atmosphere. This stronger power causes a higher rate of warming in the atmosphere than it does on the surface of the Earth.
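The distance argument above is the inverse-square law: received power per unit area is S = P/(4πd²). A sketch with an assumed transmitter power; the 100 W figure and the two distances are illustrative values, not from the text:

```python
# Inverse-square law: power flux per unit area is S = P / (4 * pi * d^2).
import math

def flux(power_watts, distance_m):
    """Power per square meter at a given distance from an isotropic source."""
    return power_watts / (4 * math.pi * distance_m**2)

P = 100.0               # W, an assumed transmitter power
near = flux(P, 100e3)   # 100 km away (upper atmosphere)
far = flux(P, 36000e3)  # roughly geostationary distance
print(f"ratio near/far: {near / far:.0f}x")
```

Doubling the distance quarters the flux, so a point 360 times closer to the satellite receives about 130,000 times more power per square meter.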

The illustration to the right shows how eight satellites microwave an enormous part of our Earth. When the radio signals reflect off of surrounding terrain (buildings, canyon walls, hard ground), multipath issues occur due to multiple waves doubling over themselves. These delayed signals can cause poor reception. Ultimately, the water, ice, and Earth are absorbing and reflecting microwaves in many different directions. Microwaves passing through Earth's atmosphere are causing radio frequency heating at the molecular level.

### System spectral efficiency

"In wireless networks, the system spectral efficiency is a measure of the quantity of users or services that can be simultaneously supported by a limited radio frequency bandwidth in a defined geographic area." The capacity of a wireless network can be measured by calculating the maximum simultaneous phone calls over a 1 MHz frequency spectrum. This is measured in Erlangs/MHz/cell, Erlangs/MHz/sector, Erlangs/MHz/site, or Erlangs/MHz/km² measurements. Modern day cell phones take advantage of this type of transmission. These cell phones transmit a microwave transmission that is roughly twice the frequency of a microwave oven in your home.

This is a misconception of how microwave frequencies travel.

An example of a spectral efficiency can be found in the satellite RADARSAT-1. In 1995 RADARSAT-1, an Earth observation satellite from Canada, was launched in an orbit above the Earth. RADARSAT-1 provides images of the Earth, scientific and commercial, used in agriculture, geology, hydrology, arctic surveillance, oceanography, cartography, ice and ocean monitoring, forestry, detecting ocean oil slicks, and many other applications. This satellite uses continuous high microwave transmissions. A Synthetic Aperture Radar (SAR) system is a type of sensor that images the Earth at a single microwave frequency of 5.3 GHz. SAR systems transmit microwaves towards the surface of the Earth and record the reflections from the surface. This satellite can image the Earth at any time and in any atmospheric condition.
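The 5.3 GHz figure corresponds to a wavelength of a few centimeters, via λ = c/f:

```python
# Wavelength of RADARSAT-1's 5.3 GHz SAR signal: lambda = c / f.
c = 2.998e8  # speed of light, m/s
f = 5.3e9    # transmit frequency, Hz

wavelength_cm = c / f * 100
print(f"{wavelength_cm:.1f} cm")  # ~5.7 cm
```

A wavelength this short passes through clouds and rain largely unimpeded, which is why the satellite can image in any atmospheric condition.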

This is how microwave frequencies actually travel.

A common misconception about microwave transmissions is that the signal beams directly into the receiving antenna. (See misconception illustration) This, however, is not true. Transmissions spread out through the air spherically; the waves travel in every direction until they reach a receiver or some dielectric material to pass into.

When a microwave transmission is sent to a receiving satellite dish, the signal spreads out spherically. (See how microwaves travel illustration) The signal passes through every part of that sphere until it finds a connection. Microwaves not captured by an antenna pass into dielectric material in the Earth, which is primarily water and ice.
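The spherical spreading described above has a direct consequence: an isotropic transmitter's power is smeared over a sphere of area 4πr², so the power density at a receiver falls off as the inverse square of distance. A minimal sketch (the 10 W transmitter power is an assumed example value):

```python
import math

# Inverse-square spreading of an isotropic (spherical) transmission:
# power density S = P / (4 * pi * r^2).

def power_density_w_per_m2(tx_power_w, distance_m):
    return tx_power_w / (4 * math.pi * distance_m ** 2)

# Illustrative 10 W transmitter measured at 1 km and 2 km
near = power_density_w_per_m2(10, 1_000)
far = power_density_w_per_m2(10, 2_000)
print(near / far)  # 4.0 -> doubling the distance quarters the density
```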

# The Building Blocks Of Nature

• particle physics is the search for the fundamental building blocks of Nature, a reductionist goal
• elementary particles should be structureless, resulting in simple interactions

One of the primary goals in modern physics is to answer the question “What is the Universe made of?” Often that question reduces to “What is matter and what holds it together?” This continues the line of investigation started by Democritus, Dalton and Rutherford. Modern physics speaks of fundamental building blocks of Nature, where fundamental takes on a reductionist meaning of simple and structureless. Many of the particles we have discussed so far appear simple in their properties. All electrons have the exact same characteristics (mass, charge, etc.), so we call the electron fundamental because all electrons are non-unique. The search for the origin of matter means the understanding of elementary particles. And with the advent of holism, the understanding of elementary particles requires an understanding of not only their characteristics, but how they interact and relate to other particles and forces of Nature, the field of physics called particle physics.

• more advanced technology led to the discovery of hundreds of new particles, forcing the search for some underlying principle to unite the chain of particles into something simpler

The study of particles is also a story of advanced technology, beginning with the search for the primary constituent of matter. More than 200 subatomic particles have been discovered so far, all detected in sophisticated particle accelerators. However, most are not fundamental; most are composed of other, simpler particles. For example, Rutherford showed that the atom was composed of a nucleus and orbiting electrons. Later physicists showed that the nucleus was composed of neutrons and protons. More recent work has shown that protons and neutrons are composed of quarks.

Short History of Elementary Particles

Generations of Matter:

• the two most fundamental types of particles are quarks and leptons
• the quarks and leptons are divided into 6 flavors corresponding to three generations of matter
• quarks (and antiquarks) have electric charges in units of 1/3 or 2/3

A quark is any of a group of subatomic particles believed to be among the fundamental constituents of matter. In much the same way that protons and neutrons make up atomic nuclei, these particles themselves are thought to consist of quarks. Quarks constitute all hadrons (baryons and mesons), i.e., all particles that interact by means of the strong force, the force that binds the components of the nucleus.

According to prevailing theory, quarks have mass and exhibit a spin (i.e., a type of intrinsic angular momentum corresponding to a rotation around an axis through the particle). Quarks appear to be truly fundamental: they have no apparent structure; that is, they cannot be resolved into something smaller. Quarks always seem to occur in combination with other quarks or antiquarks, never alone. For years physicists have attempted to knock a quark out of a baryon in particle-accelerator experiments in order to observe it in a free state, but have not yet succeeded in doing so.

Throughout the 1960s theoretical physicists, trying to account for the ever-growing number of subatomic particles observed in experiments, considered the possibility that protons and neutrons were composed of smaller units of matter. In 1961 two physicists, Murray Gell-Mann of the United States and Yuval Ne'eman of Israel, proposed a particle classification scheme called the Eightfold Way, based on the mathematical symmetry group SU(3), that described strongly interacting particles in terms of building blocks. In 1964 Gell-Mann introduced the concept of quarks as a physical basis for the scheme, adopting the fanciful term from a passage in James Joyce's novel Finnegans Wake.
(The American physicist George Zweig developed a similar theory independently that same year and called his fundamental particles “aces.”) Gell-Mann's model provided a simple picture in which all mesons consist of a quark and an antiquark and all baryons are composed of three quarks. It postulated the existence of three types of quarks, distinguished by distinctive “flavours.” These three quark types are now commonly designated as “up” (u), “down” (d), and “strange” (s). Each carries a fractional electric charge (i.e., a charge less than that of the electron). The up and down quarks make up protons and neutrons and are thus the ones observed in ordinary matter. Strange quarks occur as components of K mesons and various other extremely short-lived subatomic particles that were first observed in cosmic rays but that play no part in ordinary matter.

Most problems with quarks were resolved by the introduction of the concept of color, as formulated in quantum chromodynamics (QCD). In this theory of strong interactions, developed in the early 1970s, the term color has nothing to do with the colors of the everyday world but rather represents a special quantum property of quarks. The colors red, green, and blue are ascribed to quarks, and their opposites, minus-red, minus-green, and minus-blue, to antiquarks. According to QCD, all combinations of quarks must contain equal mixtures of these imaginary colors so that they will cancel out one another, with the resulting particle having no net color. A baryon, for example, always consists of a combination of one red, one green, and one blue quark. The property of color in strong interactions plays a role analogous to that of electric charge in electromagnetic interactions. Charge implies the exchange of photons between charged particles. Similarly, color involves the exchange of massless particles called gluons among quarks.
Just as photons carry electromagnetic force, gluons transmit the forces that bind quarks together. Quarks change their color as they emit and absorb gluons, and the exchange of gluons maintains proper quark color distribution.
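The fractional charges above combine to give the familiar integer charges of ordinary matter: a proton (uud) carries 2/3 + 2/3 − 1/3 = +1, and a neutron (udd) carries 2/3 − 1/3 − 1/3 = 0. A quick sketch of that bookkeeping:

```python
from fractions import Fraction

# Quark electric charges in units of the proton charge
CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}

def baryon_charge(quarks):
    """Total charge of a three-quark combination, e.g. 'uud'."""
    return sum(CHARGE[q] for q in quarks)

print(baryon_charge("uud"))  # 1  (proton)
print(baryon_charge("udd"))  # 0  (neutron)
```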

• leptons are a separate class since they do not interact with quarks by the strong force
• leptons have charges in units of 1 or 0

Leptons are any member of a class of fermions that respond only to electromagnetic, weak, and gravitational forces and do not take part in strong interactions. Like all fermions, leptons have a half-integral spin. (In quantum-mechanical terms, spin constitutes the property of intrinsic angular momentum.) Leptons obey the Pauli exclusion principle, which prohibits any two identical fermions in a given population from occupying the same quantum state. Leptons are said to be fundamental particles; that is, they do not appear to be made up of smaller units of matter.

Leptons can either carry one unit of electric charge or be neutral. The charged leptons are the electrons, muons, and taus. Each of these types has a negative charge and a distinct mass. Electrons, the lightest leptons, have a mass only 0.0005 that of a proton. Muons are heavier, having more than 200 times as much mass as electrons. Taus, in turn, are approximately 3,500 times more massive than electrons. Each charged lepton has an associated neutral partner, or neutrino (i.e., electron-, muon-, and tau-neutrino), that has no electric charge and no significant mass. Moreover, all leptons, including the neutrinos, have antiparticles called antileptons. The mass of the antileptons is identical to that of the leptons, but all of the other properties are reversed.
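The mass ratios quoted above can be checked directly from the approximate rest energies of the charged leptons (values rounded from standard references):

```python
# Approximate charged-lepton rest energies in MeV/c^2
M_ELECTRON = 0.511
M_MUON = 105.66
M_TAU = 1776.9

print(round(M_MUON / M_ELECTRON))  # 207 -> "more than 200 times" the electron
print(round(M_TAU / M_ELECTRON))   # 3477 -> roughly 3,500 times the electron
```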

• the up and down quarks, electron and neutrino (leptons) work together to form normal, everyday matter
• note that for every quark or lepton there is a corresponding antiparticle; for example, there is an up antiquark, an anti-electron (called a positron) and an anti-neutrino

The electron is the lightest stable subatomic particle known. It carries a negative charge, which is considered the basic charge of electricity. An electron is nearly massless: it has a rest mass of 9.1×10⁻²⁸ gram, which is only 0.0005 the mass of a proton. The electron reacts only to the electromagnetic, weak, and gravitational forces; it does not respond to the short-range strong nuclear force that acts between quarks and binds protons and neutrons in the atomic nucleus.

The electron has an antimatter counterpart called the positron. This antiparticle has precisely the same mass and spin, but it carries a positive charge. If it meets an electron, both are annihilated in a burst of energy. Positrons are rare on the Earth, being produced only in high-energy processes (e.g., by cosmic rays), and live only for brief intervals before annihilation by the electrons that abound everywhere.

The electron was the first subatomic particle discovered. It was identified in 1897 by the British physicist J.J. Thomson during investigations of cathode rays. His discovery of electrons, which he initially called corpuscles, played a pivotal role in revolutionizing knowledge of atomic structure. Under ordinary conditions, electrons are bound to the positively charged nuclei of atoms by the attraction between opposite electric charges. In a neutral atom the number of electrons is identical to the number of positive charges on the nucleus. Any atom, however, may have more or fewer electrons than positive charges and thus be negatively or positively charged as a whole; these charged atoms are known as ions. Not all electrons are associated with atoms. Some occur in a free state with ions in the form of matter known as plasma.
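Two of the numbers above, the 0.0005 electron-to-proton mass ratio and the "burst of energy" from electron-positron annihilation (E = 2mc², since both rest masses convert to energy), can be verified with a few lines:

```python
# Electron/proton mass ratio and electron-positron annihilation energy
M_E = 9.109e-31   # electron rest mass, kg
M_P = 1.673e-27   # proton rest mass, kg
C = 2.998e8       # speed of light, m/s

print(f"{M_E / M_P:.4f}")  # 0.0005 -> the ratio quoted above

# Annihilation converts both rest masses entirely to energy: E = 2 m c^2
energy_joules = 2 * M_E * C ** 2
energy_mev = energy_joules / 1.602e-13  # 1 MeV = 1.602e-13 J
print(f"{energy_mev:.3f} MeV")  # 1.022 MeV, emitted as gamma-ray photons
```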

# Copenhagen Interpretation And Quantum Multiverses

• wave-particle duality is a manifestation of quantum entities

Wave-particle duality does not mean that a photon or subatomic particle is both a wave and a particle simultaneously, but that it can manifest either a wave or a particle aspect depending on circumstances. Complementarity, uncertainty, and the statistical interpretation of Schroedinger's wave function were all related. Together they formed a logical interpretation of the physical meaning of quantum mechanics known as the “Copenhagen interpretation.”
The Copenhagen Interpretation has three primary parts:

• The wave function is a complete description of a wave/particle. Any information that cannot be derived from the wave function does not exist. For example, a wave is spread over a broad region and therefore does not have a specific location.
• When a measurement of the wave/particle is made, its wave function collapses. In the case of momentum, a wave packet is made of many waves, each with its own momentum value. Measurement reduces the wave packet to a single wave and a single momentum.
• If two properties are related by an uncertainty relation, no measurement can simultaneously determine both properties to a precision greater than the uncertainty relation allows. So, if we measure a wave/particle's position, its momentum becomes uncertain.

Central to the Copenhagen Interpretation is the principle known as complementarity: the wave and particle natures of objects can be regarded as complementary aspects of a single reality, like the two sides of a coin. An electron, for example, can behave sometimes as a wave and sometimes as a particle, but never both together, just as a tossed coin may fall either heads or tails up, but not both at once.

One must resist the temptation to regard matter or photon waves as waves of some material substance like sound or water waves. The correct interpretation, proposed by Born in the 1920's, is that the waves are measures of probability. Waves of probability relate to the uncertainty principle in that it cannot be certain what any given particle will do; only betting odds can be given. This fundamental limitation represents a breakdown of determinism in Nature. It means that identical electrons in identical experiments may do different things. But, statistically, the outcome of the experiment is predictable. Bohr, the leader of the Copenhagen Interpretation, admonished those who would ask what an electron really is, a wave or a particle.
He denounced the question as meaningless or without context (such as “what is north of the North Pole?”). To observe the properties of an electron is to conduct some sort of measurement. Experiments designed to measure waves will see the wave aspect of electrons. Those designed to measure particle properties will see electrons as particles. No experiment can ever measure both aspects simultaneously, and so we never see a mixture of wave and particle.
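The uncertainty relation discussed above has a standard quantitative form, Δx·Δp ≥ ħ/2: pinning down position forces a minimum spread in momentum. A small sketch of what that implies for an electron confined to an atom-sized region:

```python
# Heisenberg uncertainty: dx * dp >= hbar / 2, so the minimum momentum
# uncertainty for a given position uncertainty is dp = hbar / (2 * dx).
HBAR = 1.055e-34  # reduced Planck constant, J*s

def min_momentum_uncertainty(dx_m):
    return HBAR / (2 * dx_m)

# Electron localized to an atom-sized region (~1e-10 m)
dp = min_momentum_uncertainty(1e-10)
print(f"{dp:.2e} kg*m/s")  # ~5e-25, enormous for a particle this light
```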
• probabilities in the macroscopic world reflect a lack of knowledge
• the quantum world is pure probability

The adoption of the Copenhagen Interpretation for quantum phenomena poses a sharp divide between classical, or macroscopic, physics and quantum, or microscopic, physics. In the macroscopic world events appear to be deterministic: every event has a cause. Often the cause is difficult to determine directly; for example, an apple falls from a tree because its stem weakens. We cannot tell exactly when it will fall, but we know some direct mechanical action is the cause, and if we had precise knowledge of the state of its fibers we would know when and why. Thus, we resort to probabilities as a substitute for exact knowledge of the acting causes.

However, a conceptual abyss separates classical from quantum physics. In the quantum world, probabilities are not a substitute for detailed knowledge of hidden, relevant details; there are no relevant details, just pure chance. The classical world is deterministic; the quantum world is purely probabilistic. And the probabilistic nature of quantum physics has been confirmed by numerous experiments.

Hidden Variables Hypothesis:

• macroscopic physics states that all the variables are there, just hard to measure
• the Copenhagen Interpretation states that the variables are not there; randomness is fundamental

In general, quantum theory predicts only the probability of a certain result. Consider the case of radioactivity. Imagine a box of atoms with identical nuclei that can undergo decay with the emission of an alpha particle. In a given time interval, a certain fraction will decay. The theory may tell precisely what that fraction will be, but it cannot predict which particular nuclei will decay. The theory asserts that, at the beginning of the time interval, all the nuclei are in an identical state and that the decay is a completely random process.

Even in classical physics, many processes appear random. For example, one says that, when a roulette wheel is spun, the ball will drop at random into one of the numbered compartments in the wheel. Based on this belief, the casino owner and the players give and accept identical odds against each number for each throw. However, the fact is that the winning number could be predicted if one noted the exact location of the wheel when the croupier released the ball, the initial speed of the wheel, and various other physical parameters. It is only ignorance of the initial conditions and the difficulty of doing the calculations that makes the outcome appear to be random. In quantum mechanics, on the other hand, the randomness is asserted to be absolutely fundamental. The theory says that, though one nucleus decayed and the other did not, they were previously in the identical state.
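The radioactivity example above can be illustrated with a short Monte Carlo sketch (the 10% per-interval decay probability is an assumed value): the decayed *fraction* is highly predictable, even though *which* nucleus decays is pure chance.

```python
import random

# Monte Carlo sketch of radioactive decay: each nucleus independently
# decays with probability p_decay in one time interval.
random.seed(42)  # fixed seed so the run is repeatable

def decay_step(n_nuclei, p_decay):
    """Return how many of n_nuclei decay in one interval."""
    return sum(1 for _ in range(n_nuclei) if random.random() < p_decay)

n, p = 100_000, 0.10  # 100,000 identical nuclei, 10% decay chance each
decayed = decay_step(n, p)
print(decayed / n)  # very close to 0.10, though each decay was random
```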

• indeterminacy was unpopular (not Platonic)
• the hidden variables hypothesis (later analyzed by Bell) holds that quantum variables exist but are hidden, and that special forces are required
• hidden variables are not testable, which makes for poor science

Many eminent physicists, including Einstein, could not accept this indeterminacy. They rejected the notion that the nuclei were initially in the identical state. Instead, they postulated that there must be some other property, presently unknown but existing nonetheless, that is different for the two nuclei. This type of unknown property is termed a hidden variable; if it existed, it would restore determinacy to physics. If the initial values of the hidden variables were known, it would be possible to predict which nuclei would decay. Such a theory would, of course, also have to account for the wealth of experimental data which conventional quantum mechanics explains from a few simple assumptions. For example, the electron would definitely have to go through only one slit in the two-slit experiment. To explain that interference occurs only when the other slit is open, it is necessary to postulate a special force on the electron which exists only when that slit is open. Such artificial additions make hidden variable theories unattractive, and there is little support for them among physicists.

The Copenhagen view of understanding the physical world stresses the importance of basing theory on what can be observed and measured experimentally. It therefore rejects the idea of hidden variables as quantities that cannot be measured. The Copenhagen view is that the indeterminacy observed in nature is fundamental and does not reflect an inadequacy in present scientific knowledge. One should therefore accept the indeterminacy without trying to “explain” it and see what consequences come from it.

Many-Worlds Hypothesis:

• collapse of the wave function still presents a problem for deterministic physics
• the solution is to not collapse the wave function, but rather to split reality
• the many-worlds hypothesis allows for the existence of all quantum states; observation splits the worlds containing the states

The many possibilities carried by quantum superpositions are spread out over space and time. However, Newtonian physics is an accurate description of ordinary experience. What is the relationship between the strange quantum world and the classical world of common sense? Clearly the difference occurs when we measure or observe a quantum system; whatever the process is, it occurs at that time. The “how and why” of this process is unsolved, and many believe modern physics will be incomplete until it is resolved.

By the 1950's, the ongoing parade of successes had made it abundantly clear that quantum theory was far more than a short-lived temporary fix. And so, in the mid 1950's, a Princeton graduate student named Hugh Everett III decided to revisit the collapse postulate in his Ph.D. thesis. Everett's idea is known as the relative-state, many-histories or many-universes interpretation or metatheory of quantum theory. Everett himself called it the “relative-state metatheory” or the “theory of the universal wavefunction”, but it is generally called “many-worlds”. Many-worlds is a re-formulation of quantum theory which treats the process of observation or measurement entirely within the wave mechanics of quantum theory, rather than as an additional assumption, as in the Copenhagen interpretation. Everett considered the wavefunction a real object. Many-worlds is a return to the classical, pre-quantum view of the universe in which all the mathematical entities of a physical theory are real. For example, the electromagnetic fields of James Clerk Maxwell or the atoms of Dalton were considered real objects in classical physics. Everett treats the wavefunction in a similar fashion.
Everett also assumed that the wavefunction obeyed the same wave equation during observation or measurement as at all other times. This is the central assumption of many-worlds: that the wave equation is obeyed universally and at all times. Quantum systems, like particles, that interact become entangled. If one of the systems is an observer and the interaction an observation, then the effect of the observation is to split the observer into a number of copies, each copy observing just one of the possible results of a measurement and unaware of the other results and of all its observer copies. Interactions between systems and their environments, including communication between different observers in the same world, transmit the correlations that induce local splitting, or decoherence, into non-interfering branches of the universal wavefunction. Thus the entire world is split, quite rapidly, into a host of mutually unobservable but equally real worlds. According to many-worlds, all the possible outcomes of a quantum interaction are realised. The wavefunction, instead of collapsing at the moment of observation, carries on evolving in a deterministic fashion, embracing all possibilities embedded within it. All outcomes exist simultaneously but do not interfere further with each other, each single prior world having split into mutually unobservable but equally real worlds.

• macroscopic systems exhibit irreversible behavior (entropy) that prevents the reconnection of past worlds and presents the observed world as real to individuals
• many-worlds does not allow communication between the worlds, but their existence can be tested in two-slit experiments (the other worlds are doing the interfering) and with reversible mind experiments (nano-AI's)

Worlds, or branches of the universal wavefunction, split when different components of a quantum superposition “decohere” from each other. Decoherence refers to the loss of coherency, or absence of interference effects, between the elements of the superposition. For two branches or worlds to interfere with each other, all the atoms, subatomic particles, photons and other degrees of freedom in each world have to be in the same state, which usually means they all must be in the same place, or significantly overlap, in both worlds simultaneously.

For small microscopic systems it is quite possible for all their atomic components to overlap at some future point. In the double-slit experiment, for instance, it only requires that the divergent paths of the diffracted particle overlap again at some space-time point for an interference pattern to form, because only the single particle has been split. Such future coincidence of positions in all the components is virtually impossible in more complex, macroscopic systems, because all the constituent particles have to overlap with their counterparts simultaneously. Any system complex enough to be described by thermodynamics and exhibit irreversible behaviour is complex enough to exclude, for all practical purposes, any possibility of future interference between its decoherent branches. An irreversible process is one in, or linked to, a system with a large number of internal, unconstrained degrees of freedom. Once an irreversible process has started, alterations of the values of the many degrees of freedom leave an imprint which can't be removed.
If we try to intervene to restore the original status quo, the intervention causes more disruption elsewhere.

In the Ms Kitty (Schrödinger's cat) example, there is no “where” for the cat, nor are both outcomes simultaneously true; the wave function is the complete description of the cat. The worlds already exist; there is no splitting.
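The double-slit interference invoked above follows from adding the two path amplitudes rather than the two probabilities. For equal single-slit intensities I₀ and a phase difference δ between the paths, the combined intensity is I = 4·I₀·cos²(δ/2), giving bright and dark fringes. A minimal sketch:

```python
import math

# Two-slit interference: amplitudes add, so intensity at phase
# difference delta is I = 4 * I0 * cos^2(delta / 2).

def two_slit_intensity(delta, i0=1.0):
    return 4 * i0 * math.cos(delta / 2) ** 2

print(two_slit_intensity(0))                   # 4.0 -> constructive (bright fringe)
print(round(two_slit_intensity(math.pi), 10))  # 0.0 -> destructive (dark fringe)
```

Note that classically adding probabilities would give a flat 2·I₀ everywhere; the fringes are the signature of interfering branches.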