Satellites Orbiting Earth

How a Satellite Works

Satellites are complex machines that depend on precise mathematical calculations in order to function. Each satellite carries tracking systems and very sophisticated computer systems on board. Its orbit and speed must be maintained accurately to keep it from crashing back down to Earth. There are several different types of orbit that a satellite can take: some are stationary relative to the ground and some are elliptical.

Low Earth Orbit

A satellite is in low Earth orbit when it circles in a near-circular orbit close to Earth, just a few hundred miles above the surface. These satellites travel at high speed, which keeps gravity from pulling them back down to Earth. Low orbit satellites travel at approximately 17,000 miles per hour and circle the Earth in about an hour and a half.
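As a rough check on those figures, the speed and period of a circular orbit follow from Newtonian gravity: v = sqrt(GM/r) and T = 2πr/v. The minimal sketch below (Python) assumes a nominal 250-mile altitude and standard values for G and the Earth's mass and radius; none of these numbers come from the text.

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24      # mass of the Earth, kg
R_EARTH = 6_371_000.0   # mean radius of the Earth, m
MILE = 1609.344         # metres per statute mile

def circular_orbit(altitude_m):
    """Return (speed in m/s, period in s) for a circular orbit at the given altitude."""
    r = R_EARTH + altitude_m
    v = math.sqrt(G * M_EARTH / r)   # orbital speed
    T = 2 * math.pi * r / v          # orbital period
    return v, T

# Nominal low-Earth-orbit altitude of 250 miles (an assumed example value).
v, T = circular_orbit(250 * MILE)
print(f"speed  ~ {v / MILE * 3600:,.0f} mph")   # ~17,200 mph
print(f"period ~ {T / 60:.0f} minutes")         # ~92 minutes
```

The result lands close to the roughly 17,000 mph and ninety-minute figures quoted above.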

Polar Orbit

This is how a satellite travels in a polar orbit. These orbits eventually pass over the entire surface of the Earth.

Polar orbiting satellites circle the planet in a north-south direction while the Earth spins beneath them in an east-west direction. Polar orbits enable satellites to scan the entire surface of the Earth, like peeling an orange in a circular motion from top to bottom. Remote sensing satellites, weather satellites, and government satellites are almost always in polar orbit because of this coverage: polar orbits sweep the Earth's surface thoroughly. The polar orbit occupied by a satellite has a constant ground track that it passes over, and all polar orbiting satellites cross over the North Pole at the same point. While one polar satellite is over America, another is passing over the North Pole, so the North Pole receives a constant flow of UHF and higher-frequency microwaves. The illustration shows that the common passing point for polar orbiting satellites is over the North Pole.

A polar orbiting satellite passes over the Earth's equator at a different longitude on each of its orbits; however, it passes over the North Pole every time. Polar orbits are often used for Earth mapping, Earth observation, weather monitoring, and reconnaissance. This orbit has a disadvantage: no single spot on the Earth's surface can be sensed continuously from a satellite in a polar orbit.

This is from the U.S. Army Information Systems Engineering Command:

“In order to fulfill the military need for protected communication service, especially low probability of intercept/detection (LPI/LPD), to units operating north of 65 degree northern latitude, the space communications architecture includes the polar satellite system capability. An acceptable approach to achieving this goal is to fly a low capacity EHF system in a highly elliptical orbit, either as a hosted payload or as a “free-flyer,” to provide service during a transition period, nominally 1997-2010. A single, hosted EHF payload is already planned. Providing this service 24 hours-a-day requires a two satellite constellation at high earth orbit (HEO). Beyond 2010, the LPI/LPD polar service could continue to be provided by a high elliptical orbit HEO EHF payload, or by the future UHF systems.” (quote from www.fas.org)

THERE IS A CONSTANT 24 HOUR EHF AND HIGHER MICROWAVE TRANSMISSION PASSING OVER THE NORTH POLE!

Geosynchronous Orbit

This is how a satellite travels in a geosynchronous (equatorial) orbit. Equatorial geosynchronous orbits are also called geostationary. These satellites follow the rotation of the Earth.

A satellite in a geosynchronous orbit hovers over one spot, following the Earth's spin along the equator; the Earth takes 24 hours to rotate on its axis, and the satellite's orbital period matches that rotation. In the illustration you can see that a geosynchronous orbit follows the equator and never covers the North or South Poles. The footprints of geosynchronous satellites do not reach the polar regions, so communication satellites in geosynchronous orbits cannot be accessed in the far northern and southern polar regions.

Because a geostationary satellite does not move relative to the area it covers, these satellites are used for telecommunications, GPS trackers, television broadcasting, government services, and internet access. Because they must appear stationary, their orbits are much farther from the Earth than those of polar orbiting satellites; a satellite any closer to the Earth would circle faster than the Earth rotates and could not stay over one spot. They say there are about 300 geostationary satellites in orbit right now. Of course, these are only the satellites that the public is allowed to know about, the ones that are not governmentally classified.
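The required distance can be worked out from Kepler's third law by asking at what radius a circular orbit's period equals one rotation of the Earth. A minimal sketch in Python (the sidereal day of 86,164 s and the standard values for G and the Earth's mass and radius are assumed, not taken from the text):

```python
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
M_EARTH = 5.972e24       # kg
R_EARTH = 6_371_000.0    # m
SIDEREAL_DAY = 86_164.0  # one rotation of the Earth, seconds

# Kepler's third law for a circular orbit: T^2 = 4*pi^2 * r^3 / (G*M)
r = (G * M_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
print(f"geostationary altitude ~ {altitude_km:,.0f} km")  # ~35,800 km (about 22,200 miles)
```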

Satellite Anatomy

This is the Anatomy of a Satellite.

A satellite is made up of several instruments that work together to operate the satellite during its mission. The illustration to the left shows the parts of a satellite.

The command and data system controls all of the satellite's functions. It is a very complex computer system that manages the satellite's flight operations, where the satellite points, and the other onboard processing the mission requires.

The pointing control system directs the satellite so that it keeps a steady flight path and stays pointed in the right direction. It combines precise sensors with devices called momentum wheels, which rotate to adjust the satellite's orientation and hold it in its proper attitude. Scientific observation satellites generally need more precise pointing systems than communications satellites do.

The communications system has a transmitter, a receiver, and various antennas for exchanging data with the Earth. Ground control sends instructions and data up to the satellite's computer through these antennas, and the satellite sends pictures, television, radio, and many other kinds of data back down to practically everyone on Earth.

The power system that runs the satellite is an efficient solar panel array that obtains energy from the Sun's rays. The solar arrays make electricity from sunlight and store it in rechargeable batteries.

The payload is whatever a satellite needs to perform its job. A weather satellite, for example, would have a payload consisting of image sensors, a digital camera, a telescope, and other thermal and weather sensing devices.

Thermal control is the protection required to prevent damage to the satellite's instrumentation and components. Satellites are exposed to extreme temperature swings, ranging from roughly 120 degrees below zero to 180 degrees above zero, so heat distribution units and thermal blankets protect the electronics and components from temperature damage.

Satellite Footprints

A single satellite footprint

Here you can see one footprint covers an enormous area.

Geostationary satellites have a very broad view of Earth. The footprint of a single EchoStar broadcast satellite covers almost all of North America. Because these satellites stay over the same location on the Earth, we always know where they are, and direct contact can be maintained with them precisely because equatorial satellites are fixed.

Many communications satellites travel in equatorial orbits, including those that relay TV signals into our homes, and the footprint of a single such satellite can cover all of North America.

The multipath effect that occurs when satellite transmissions are obstructed by terrain also provides insight into microwave global warming. Microwaves are constantly beamed at our planet, and the Earth absorbs and obstructs these waves from space. Microwaves penetrate the whole atmosphere and bounce and echo off the Earth. Imagine the footprint overlaps produced by the thousands of satellites in orbit right now.

Here you can see the overlapping footprints that satellites make. Each satellite covers an enormous area.

The closer an object is to the satellite, the more power falls on it; the farther the waves have to travel, the less power they will have. Because the upper atmosphere is closer to the satellite than the ground is, a stronger beam of energy passes through the clouds and atmosphere, and this stronger power causes a higher rate of warming in the atmosphere than at the surface of the Earth.
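The falloff described here is the inverse-square law: the power from an isotropic transmitter spreads over a sphere of area 4πd², so doubling the distance quarters the power flux. A minimal sketch (Python); the 100 W transmit power and the two distances are assumed example numbers, not figures from the text:

```python
import math

def flux_w_per_m2(tx_power_w, distance_m):
    """Free-space power flux density of an isotropic radiator at a given distance."""
    return tx_power_w / (4 * math.pi * distance_m**2)

P = 100.0              # assumed transmit power, watts
d_cloud = 750_000.0    # assumed distance from satellite to the upper atmosphere, m
d_ground = 800_000.0   # assumed distance from satellite to the ground, m

print(flux_w_per_m2(P, d_cloud))    # higher flux at the nearer point
print(flux_w_per_m2(P, d_ground))   # lower flux at the farther point
```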

The illustration to the right shows how eight satellites microwave an enormous part of our Earth. When radio signals reflect off surrounding terrain (buildings, canyon walls, hard ground), multipath issues occur as multiple copies of the wave overlap; these delayed signals degrade reception. Ultimately, the water, ice, and earth absorb and reflect microwaves in many different directions, and microwaves passing through the Earth's atmosphere cause radio-frequency heating at the molecular level.

System spectral efficiency

“In wireless networks, the system spectral efficiency is a measure of the quantity of users or services that can be simultaneously supported by a limited radio frequency bandwidth in a defined geographic area.” The capacity of a wireless network can be measured by calculating the maximum number of simultaneous phone calls carried per MHz of frequency spectrum. This is expressed in Erlangs/MHz/cell, Erlangs/MHz/sector, Erlangs/MHz/site, or Erlangs/MHz/km². Modern cell phones take advantage of this type of transmission; they transmit microwaves at frequencies on the order of the 2.45 GHz used by a microwave oven in your home.
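To make the Erlang figure concrete, system spectral efficiency can be estimated as the busy-hour traffic carried divided by the bandwidth and the number of cells sharing it. A minimal sketch under assumed example numbers (none of these figures come from the text):

```python
# System spectral efficiency in Erlangs/MHz/cell.
# One Erlang = one channel continuously occupied (e.g. one ongoing call).

carried_traffic_erlangs = 600.0   # simultaneous calls carried in the busy hour (assumed)
bandwidth_mhz = 5.0               # licensed spectrum (assumed)
number_of_cells = 40              # cells sharing that spectrum in the area (assumed)

efficiency = carried_traffic_erlangs / (bandwidth_mhz * number_of_cells)
print(f"{efficiency:.2f} Erlangs/MHz/cell")   # 3.00 Erlangs/MHz/cell
```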

This is a misconception of how microwave frequencies travel.

An example of spectral efficiency can be found in the satellite RADARSAT-1, an Earth observation satellite from Canada launched into orbit in 1995. RADARSAT-1 provides scientific and commercial images of the Earth used in agriculture, geology, hydrology, Arctic surveillance, oceanography, cartography, ice and ocean monitoring, forestry, detection of ocean oil slicks, and many other applications. This satellite uses continuous high-frequency microwave transmissions: its Synthetic Aperture Radar (SAR) is a sensor that images the Earth at a single microwave frequency of 5.3 GHz. SAR systems transmit microwaves towards the surface of the Earth and record the reflections from the surface. The satellite can image the Earth at any time of day and in any atmospheric conditions.
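For reference, the 5.3 GHz frequency quoted for RADARSAT-1's SAR corresponds to a wavelength of roughly 5.7 cm (C-band), from λ = c / f. A minimal sketch:

```python
C = 299_792_458.0   # speed of light, m/s
f = 5.3e9           # SAR frequency quoted in the text, Hz

wavelength_cm = C / f * 100
print(f"{wavelength_cm:.1f} cm")   # ~5.7 cm (C-band)
```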

This is how microwave frequencies actually travel.

A common misconception about microwave transmissions is that the signal beams directly into the receiving antenna (see the misconception illustration). This, however, is not true. Transmissions spread through the air spherically; the waves travel in every direction until they find a receiver or some dielectric material to pass into.

When a microwave transmission is sent to a receiving satellite dish, it spreads out spherically (see the illustration of how microwaves travel). The signal passes through every part of that sphere until it finds a connection. Any microwaves not received by an antenna pass into the dielectric material of the Earth, which is primarily water and ice.

Nuclear Bombs: History, Creation, Ingredients, Chemical Composition, Fusion, Types, and Detonation

American nuclear technology evolved rapidly between 1944 and 1950, moving from the primitive Fat Man and Little Boy to more sophisticated, lighter, more powerful, and more efficient designs. Much design effort shifted from fission to thermonuclear weapons after President Truman decided that the United States should proceed to develop a hydrogen bomb, a task which occupied the Los Alamos Laboratory from 1950 through 1952. The “George” shot of Operation Greenhouse (May 9, 1951) confirmed for the first time that a fission device could produce the conditions needed to ignite a thermonuclear reaction. The “Mike” test of Operation Ivy, 1 November, 1952, was the first explosion of a true two-stage thermonuclear device.

From 1952 until the early years of the ICBM era [roughly to the development of the first multiple independently targeted reentry vehicles (MIRVs) in the late 1960’s], new concepts in both fission primary and fusion secondary design were developed rapidly. However, after the introduction of the principal families of weapons in the modern stockpile (approximately the mid 1970’s), the rate of design innovations and truly new concepts slowed as nuclear weapon technology became a mature science. It is believed that other nations’ experiences have been roughly similar, although the United States probably has the greatest breadth of experience with innovative designs simply because of the more than 1,100 nuclear detonations it has conducted. The number of useful variations on the themes of primary and secondary design is finite, and designers’ final choices are frequently constrained by considerations of weapon size, weight, safety, and the availability of special materials.

Nuclear weaponry has advanced considerably since 1945, as can be seen at an unclassified level by comparing the size and weight of “Fat Man” with the far smaller, lighter, and more powerful weapons carried by modern ballistic missiles. Most nations of the world, including those of proliferation interest, have subscribed to the 1963 Limited Test Ban Treaty, which requires that nuclear explosions only take place underground. Underground testing can be detected by seismic means and by observing radioactive effluent in the atmosphere. It is probably easier to detect and identify a small nuclear test in the atmosphere than it is to detect and identify a similarly sized underground test. In either case, highly specialized instrumentation is required if a nuclear test explosion is to yield useful data to the nation carrying out the experiment.

US nuclear weapons technology is mature and might not have shown many more qualitative advances over the long haul, even absent a test ban. The same is roughly true for Russia, the UK, and possibly for France. The design of the nuclear device for a specific nuclear weapon is constrained by several factors. The most important of these are the weight the delivery vehicle can carry plus the size of the space available in which to carry the weapon (e.g., the diameter and length of a nosecone or the length and width of a bomb bay). The required yield of the device is established by the target vulnerability. The possible yield is set by the state of nuclear weapon technology and by the availability of special materials. Finally, the choices of specific design details of the device are determined by the taste of its designers, who will be influenced by their experience and the traditions of their organization.

Fission Weapons

An ordinary “atomic” bomb of the kinds used in World War II uses the process of nuclear fission to release the binding energy in certain nuclei. The energy release is rapid and, because of the large amounts of energy locked in nuclei, violent. The principal materials used for fission weapons are U-235 and Pu-239, which are termed fissile because they can be split into two roughly equal-mass fragments when struck by a neutron of even low energies. When a large enough mass of either material is assembled, a self-sustaining chain reaction results after the first fission is produced. The minimum mass of fissile material that can sustain a nuclear chain reaction is called a critical mass and depends on the density, shape, and type of fissile material, as well as the effectiveness of any surrounding material (called a reflector or tamper) at reflecting neutrons back into the fissioning mass. Critical masses in spherical geometry for weapon-grade materials are as follows:

                 Uranium-235    Plutonium-239
Bare sphere:        56 kg          11 kg
Thick tamper:       15 kg           5 kg

The critical mass of compressed fissile material decreases as the inverse square of the density achieved. Since critical mass decreases rapidly as density increases, the implosion technique can make do with substantially less nuclear material than the gun-assembly method. The “Fat Man” atomic bomb that destroyed Nagasaki in 1945 used 6.2 kilograms of plutonium and produced an explosive yield of 21-23 kilotons [a 1987 reassessment of the Japanese bombings placed the yield at 21 Kt]. Until January 1994, the Department of Energy (DOE) estimated that 8 kilograms would typically be needed to make a small nuclear weapon. Subsequently, however, DOE reduced the estimate of the amount of plutonium needed to 4 kilograms. Some US scientists believe that 1 kilogram of plutonium will suffice.
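The inverse-square scaling quoted above can be written m_c(ρ) = m_c(ρ₀) · (ρ₀/ρ)². The sketch below applies it to the bare-sphere plutonium figure from the table; the compression factors are assumed illustrative values, not figures from the text:

```python
def critical_mass(m0_kg, compression):
    """Critical mass after compressing the material by the given density ratio.

    m0_kg       : critical mass at normal density
    compression : rho / rho0, achieved density relative to normal density
    """
    return m0_kg / compression**2   # inverse-square scaling with density

m0 = 11.0  # bare-sphere critical mass of Pu-239 at normal density, kg (from the table above)
for c in (1.0, 1.5, 2.0):
    print(f"compression x{c:.1f}: critical mass ~ {critical_mass(m0, c):.1f} kg")
# x1.0: 11.0 kg, x1.5: ~4.9 kg, x2.0: ~2.8 kg
```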

If any more material is added to a critical mass a condition of supercriticality results. The chain reaction in a supercritical mass increases rapidly in intensity until the heat generated by the nuclear reactions causes the mass to expand so greatly that the assembly is no longer critical.

Fission weapons require a system to assemble a supercritical mass from a sub-critical mass in a very short time. Two classic assembly systems have been used, gun and implosion. In the simpler gun-type device, two subcritical masses are brought together by using a mechanism similar to an artillery gun to shoot one mass (the projectile) at the other mass (the target). The Hiroshima weapon was gun-assembled and used U-235 as a fuel. Gun-assembled weapons using highly enriched uranium are considered the easiest of all nuclear devices to construct and the most foolproof.

Gun-Device

In the gun device, two pieces of fissionable material, each less than a critical mass, are brought together very rapidly to form a single supercritical one. This gun-type assembly may be achieved in a tubular device in which a high explosive is used to blow one subcritical piece of fissionable material from one end of the tube into another subcritical piece held at the opposite end of the tube.

Manhattan Project scientists were so confident in the performance of the “Little Boy” uranium bomb that the device was not even tested before it was used. This 15-kt weapon was airdropped on 06 August 1945 at Hiroshima, Japan. The device contained 64.1 kg of highly enriched uranium, with an average enrichment of 80%. The six bombs built by the Republic of South Africa were gun-assembled and used 50 kg of uranium enriched to between 80 percent and 93 percent in the isotope U-235. Compared with the implosion approach, this method assembles the masses relatively slowly and at normal densities; it is practical only with highly enriched uranium. If plutonium (even weapon-grade) were used in a gun-assembly design, neutrons released from spontaneous fission of its even-numbered isotopes would likely trigger the nuclear chain reaction too soon, resulting in a “fizzle” of dramatically reduced yield.

Implosion-Device

Plutonium contains the isotope Pu-240, which decays by spontaneous fission; the resulting short interval between spontaneous neutron emissions produces a large number of background neutrons. For this reason, Manhattan Project scientists devised the implosion method of assembly, in which high explosives are arranged to form an imploding shock wave that compresses the fissile material to supercriticality.

The core of fissile material is formed into a supercritical mass by chemical high explosives (HE) or propellants. When the high explosive is detonated, an inwardly directed implosion wave is produced. This wave compresses the sphere of fissionable material; the decrease in the surface-to-volume ratio of the compressed mass, plus its increased density, makes the mass supercritical. The HE is exploded by detonators timed electronically by a fuzing system, which may use altitude sensors or other means of control.

The nuclear chain reaction is normally started by an initiator that injects a burst of neutrons into the fissile core at an appropriate moment. The timing of the initiation of the chain reaction is important and must be carefully designed for the weapon to have a predictable yield. A neutron generator emits a burst of neutrons to initiate the chain reaction at the proper moment: near the point of maximum compression in an implosion design, or of full assembly in the gun-barrel design.

A surrounding tamper may help keep the nuclear material assembled for a longer time before it blows itself apart, thus increasing the yield. The tamper often doubles as a neutron reflector.

Implosion systems can be built using either Pu-239 or U-235, but the gun assembly only works for uranium. Implosion weapons are more difficult to build than gun weapons, but they are also more efficient, requiring less special nuclear material (SNM) and producing larger yields. Iraq attempted to build an implosion bomb using U-235. In contrast, North Korea chose to use Pu-239 produced in a nuclear reactor.

Boosted Weapons

To fission more of a given amount of fissile material, a small amount of material that can undergo fusion, deuterium-tritium (D-T) gas, can be placed inside the core of a fission device. Just as the fission chain reaction gets underway, the D-T gas undergoes fusion, releasing an intense burst of high-energy neutrons (along with a small amount of fusion energy as well) that fissions the surrounding material more completely. This approach, called boosting, is used in most modern nuclear weapons to maintain their yields while greatly decreasing their overall size and weight.

Enhanced Radiation Weapons

An enhanced radiation (ER) weapon, by special design techniques, has an output in which neutrons and x-rays are made to constitute a substantial portion of the total energy released. For example, a standard fission weapon’s total energy output would be partitioned as follows: 50% as blast; 35% as thermal energy; and 15% as nuclear radiation. An ER weapon’s total energy would be partitioned as follows: 30% as blast; 20% as thermal; and 50% as nuclear radiation. Thus, a 3-kiloton ER weapon will produce the nuclear radiation of a 10-kiloton fission weapon and the blast and thermal radiation of a 1-kiloton fission device. However, the energy distribution percentages of nuclear weapons are a function of yield.
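The 3-kiloton versus 10-kiloton comparison follows from the partition percentages just quoted; the sketch below simply multiplies them out (yields in kilotons). The radiation figures match exactly, while the blast and thermal figures come out in the neighbourhood of a 1-kiloton device.

```python
# Energy partition fractions quoted in the text.
standard = {"blast": 0.50, "thermal": 0.35, "radiation": 0.15}
enhanced = {"blast": 0.30, "thermal": 0.20, "radiation": 0.50}

def partition(yield_kt, fractions):
    """Split a total yield (kt) into blast, thermal, and radiation components."""
    return {k: yield_kt * f for k, f in fractions.items()}

print(partition(3, enhanced))    # 3 kt ER weapon:      1.5 kt as nuclear radiation
print(partition(10, standard))   # 10 kt fission weapon: 1.5 kt as nuclear radiation
print(partition(1, standard))    # 1 kt fission weapon:  0.5 kt blast, 0.35 kt thermal
```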

Fusion Weapons

A more powerful but more complex weapon uses the fusion of heavy isotopes of hydrogen, deuterium and tritium, to release large numbers of neutrons when the fusile (sometimes termed “fusionable”) material is compressed by the energy released by a fission device called a primary. Fusion (or thermonuclear) weapons derive a significant amount of their total energy from fusion reactions. The intense temperatures and pressures generated by a fission explosion overcome the strong electrical repulsion that would otherwise keep the positively charged nuclei of the fusion fuel from reacting. The fusion part of the weapon is called a secondary. In general, the x-rays from a fission primary heat and compress material surrounding a secondary fusion stage.

It is inconvenient to carry deuterium and tritium as gases in a thermonuclear weapon, and certainly impractical to carry them as liquefied gases, which requires high pressures and cryogenic temperatures. Instead, one can make a “dry” device in which 6Li is combined with deuterium to form the compound 6LiD (lithium-6 deuteride). Neutrons from a fission “primary” device bombard the 6Li in the compound, liberating tritium, which quickly fuses with the nearby deuterium. The alpha particles, being electrically charged and at high temperatures, contribute directly to forming the nuclear fireball. The neutrons can bombard additional 6Li nuclei or cause the remaining uranium and plutonium in the weapon to undergo fission. This two-stage thermonuclear weapon has explosive yields far greater than can be achieved with one-point-safe designs of pure fission weapons, and thermonuclear fusion stages can be ignited in sequence to deliver any desired yield. Such bombs, in theory, can be designed with arbitrarily large yields: the Soviet Union once tested a device with a yield of about 59 megatons.

In a relatively crude sense, 6Li can be thought of as consisting of an alpha particle (4He) and a deuteron (2H) bound together. When bombarded by neutrons, 6Li disintegrates into a triton (3H) and an alpha:

6Li + n = 3H + 4He + energy.

This is the key to its importance in nuclear weapons physics. The nuclear fusion reaction which ignites most readily is

2H + 3H = 4He + n + 17.6 MeV,

or, phrased in other terms, deuterium plus tritium produces helium-4 plus a neutron plus 17.6 MeV of free energy:

D + T = 4He + n + 17.6 MeV.

Lithium-7 also contributes to the production of tritium in a thermonuclear secondary, albeit at a lower rate than 6Li. The fusion reactions derived from tritium produced from 7Li contributed many unexpected neutrons (and hence far more energy release than planned) to the final stage of the infamous 1954 Castle/BRAVO atmospheric test, nearly doubling its expected yield.
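The 17.6 MeV figure can be checked from the mass defect of the reaction, Q = (m_D + m_T − m_He4 − m_n)·c². A minimal sketch using standard tabulated atomic masses (values assumed here, not taken from the text):

```python
U_TO_MEV = 931.494   # energy equivalent of one atomic mass unit, MeV

# Atomic masses in unified atomic mass units (standard tabulated values).
m_D   = 2.014102     # deuterium (2H)
m_T   = 3.016049     # tritium   (3H)
m_He4 = 4.002602     # helium-4  (4He)
m_n   = 1.008665     # free neutron

q_value = (m_D + m_T - m_He4 - m_n) * U_TO_MEV
print(f"Q(D + T -> 4He + n) ~ {q_value:.1f} MeV")   # ~17.6 MeV
```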

Safing, Arming, Fuzing, and Firing (SAFF)

The ability to make effective use of a nuclear weapon is limited unless the device can be handled safely, taken safely from storage when required, delivered to its intended target, and then detonated at the correct point in space and time to achieve the desired goal. Although the intended scenarios for use of its weapons will strongly influence specific weaponization concepts and approaches, functional capabilities for safing, arming, fuzing, and firing (SAFF) will be fundamental.

Nuclear weapons are particularly destructive, with immediate effects including blast and thermal radiation and delayed effects produced by ionizing radiation, neutrons, and radioactive fallout. They are expensive to build, maintain, and employ, requiring a significant fraction of the total defense resources of a small nation. In a totalitarian state the leader must always worry that they will be used against the government; in a democracy the possibility of an unauthorized or accidental use must never be discounted. A nuclear detonation as the result of an accident would be a local catastrophe.

Because of their destructiveness, nuclear weapons require precautions to prevent accidental detonation during any part of their manufacture and lifetime. And because of their value, the weapons require reliable arming and fuzing mechanisms to ensure that they explode when delivered to target. Therefore, any nuclear power is likely to pay some attention to the issues of safing and safety, arming, fuzing, and firing of its nuclear weapons. The solutions adopted depend upon the level of technology in the proliferant state, the number of weapons in its stockpile, and the political consequences of an accidental detonation.

Whether to protect their investment in nuclear arms or to deny potential access to and use of the weapons by unauthorized persons, proliferators or subnational groups will almost certainly seek special measures to ensure security and operational control of nuclear weapons. These are likely to include physical security and access control technologies at minimum and may include use control. The techniques used today by the existing western nuclear weapon states represent the culmination of a half-century of evolution in highly classified military programs, and proliferators may well choose simpler solutions, perhaps by adapting physical security, access, and operational controls used in the commercial sector for high-value/high-risk assets.

From the very first nuclear weapons built, safety was a consideration. The two bombs used in the war drops on Hiroshima and Nagasaki posed significant risk of accidental detonation if the B-29 strike aircraft had crashed on takeoff. As a result, critical components were removed from each bomb and installed only after takeoff and initial climb to altitude were completed. Both weapons used similar arming and fuzing components. Arming could be accomplished by removing a safety connector plug and replacing it with a distinctively colored arming connector. Fuzing used redundant systems including a primitive radar and a barometric switch. No provision was incorporated in the weapons themselves to prevent unauthorized use or to protect against misappropriation or theft.

In later years, the United States developed mechanical safing devices. These were later replaced with weapons designed to a goal of less than a 1-in-1-million chance of the weapon delivering more than 4 pounds of nuclear yield if the high explosives were detonated at the single most critical possible point. Other nations have adopted different safety criteria and have achieved their safety goals in other ways.

In the 1950s, to prevent unauthorized use of U.S. weapons stored abroad, permissive action links (PALs) were developed. These began as simple combination locks and evolved into the modern systems which allow only a few attempts to arm the weapon before disabling the physics package should an intruder persist in trying to defeat the PAL.

Safing: To ensure that the nuclear warhead can be stored, handled, deployed, and employed in a wide spectrum of intended and unintended environmental and threat conditions, with assurance that it will not experience a nuclear detonation. In U.S. practice, safing generally involves multiple mechanical interruptions of both power sources and pyrotechnic/explosive firing trains. The nuclear components may be designed so that an accidental detonation of the high explosives is intrinsically unable to produce a significant (>4 pounds TNT equivalent) nuclear yield; it is simpler to insert mechanical devices into the pit to prevent the assembly of a critical mass, or to remove a portion of the fissile material from inside the high explosives. Mechanical safing of a gun-assembled weapon is fairly straightforward; one can simply insert a hardened steel or tungsten rod across a diameter of the gun barrel, disrupting the projectile. All U.S. weapons have been designed to be intrinsically one-point safe in the event of accidental detonation of the high explosives, but it is not anticipated that a new proliferator would take such care.

Arming: Placing the nuclear warhead in a ready operational state, such that it can be initiated under specified firing conditions. Arming generally involves mechanical restoration of the safing interrupts in response to conditions that are unique to the operational environment (launch or deployment) of the system. A further feature is that the environment typically provides the energy source to drive the arming action. If a weapon is safed by inserting mechanical devices into the pit (e.g., chains, coils of wire, bearing balls) to prevent complete implosion, arming involves removal of those devices. It may not always be possible to safe a mechanically armed device once the physical barrier to implosion has been removed.

Fuzing: To ensure optimum weapon effectiveness by detecting that the desired conditions for warhead detonation have been met and to provide an appropriate command signal to the firing set to initiate nuclear detonation. Fuzing generally involves devices to detect the location of the warhead with respect to the target, signal processing and logic, and an output circuit to initiate firing.

Firing: To ensure nuclear detonation by delivering a precise level of precisely timed electrical or pyrotechnic energy to one or more warhead detonating devices. A variety of techniques are used, depending on the warhead design and type of detonation devices.

Depending on the specific military operations to be carried out and the specific delivery system chosen, nuclear weapons pose special technological problems in terms of primary power and power-conditioning, overall weapon integration, and operational control and security.

Not all weapons possessors will face the same problems or opt for the same levels of confidence, particularly in the inherent security of their weapons. The operational objectives will in turn dictate the technological requirements for the SAFF subsystems. Minimal requirements could be met by surface burst (including impact fuzing of a relatively slow-moving warhead) or a crude preset height of burst based on a simple timer, barometric switch, or simple radar altimeter. Modest requirements could be met by more precise height of burst (HOB) based on improved radar triggering or other methods of measuring distance above ground to maximize the radius of selected weapons effects, with point-contact salvage fuzing, and by parachute delivery of bombs to allow deliberate laydown and surface burst. Substantial requirements could be met by variable HOB, including low-altitude for ensured destruction of protected strategic targets, along with possible underwater or exoatmospheric capabilities.

Virtually any country or extranational group with the resources to construct a nuclear device has sufficient capability to attain the minimum SAFF capability that would be needed to meet terrorist or minimal national aims. The requirements to achieve a “modest” or “substantial” capability level are much more demanding. Both safety and protection of investment demand very low probability of failure of safing and arming mechanisms, with very high probability of proper initiation of the warhead. All of the recognized nuclear weapons states and many other countries have (or have ready access to) both the design know-how and components required to implement a significant capability.

In terms of sophistication, safety, and reliability of design, past U.S. weapons programs provide a legacy of world leadership in SAFF and related technology. France and the UK follow closely in overall SAFF design and may actually hold slight leads in specific component technologies. SAFF technologies of other nuclear powers – notably Russia and China – do not compare. Japan and Germany have technological capabilities roughly on a par with the United States, UK, and France, and doubtless have the capability to design and build nuclear SAFF subsystems.

Reliable fuzing and firing systems suitable for nuclear use have been built since 1945 and do not need to incorporate any modern technology. Many kinds of mechanical safing systems have been employed, and several of these require nothing more complex than removable wires or chains or the exchanging of arming/safing connector plugs. Safing a gun-assembled system is especially simple. Arming systems range from hand insertion of critical components in flight to extremely sophisticated instruments which detect specific events in the stockpile to target sequence (STS). Fuzing and firing systems span an equally great range of technical complexity.

Any country with the electronics capability to build aircraft radar altimeter equipment should have access to the capability for building a reasonably adequate, simple HOB fuze. China, India, Israel, Taiwan, South Korea, Brazil, Singapore, the Russian Federation and the Ukraine, and South Africa all have built conventional weapons with design features that could be adapted to more sophisticated designs, providing variable burst height and rudimentary Electronic Counter Counter Measure (ECCM) features. With regard to physical security measures and use control, the rapid growth in the availability and performance of low-cost, highly reliable microprocessing equipment has led to a proliferation of electronic lock and security devices suitable for protecting and controlling high-value/at-risk assets. Such technology may likely meet the needs of most proliferant organizations.

The Celestial Sphere



Celestial Sphere:

To determine the positions of stars and planets on the sky in an absolute sense, we project the Earth’s spherical surface onto the sky, called the celestial sphere.

The celestial sphere has a north and south celestial pole as well as a celestial equator, which are projections of the corresponding reference points on the Earth's surface. Right Ascension and Declination serve as an absolute coordinate system fixed on the sky, rather than a relative system like the zenith/horizon system. Right Ascension is the equivalent of longitude, only measured in hours, minutes, and seconds (since the Earth rotates through the sky in those units of time). Declination is the equivalent of latitude, measured in degrees from the celestial equator (0° to ±90°). Any point on the celestial sphere (i.e. the position of a star or planet) can be referenced with a unique Right Ascension and Declination.

The celestial sphere has a north and south celestial pole as well as a celestial equator, which are projected from reference points on the Earth's surface. Since the Earth turns on its axis once every 24 hours, the stars trace arcs through the sky parallel to the celestial equator. The appearance of this motion will vary depending on where you are located on the Earth's surface.

Note that the daily rotation of the Earth causes each star and planet to make a daily circular path around the north celestial pole referred to as the diurnal motion.

Some Important Geometry (Astronomy)

Great Circles:

The shortest path between two points on a plane is a straight line. On the surface of a sphere, however, there are no straight lines. The shortest path between two points on the surface of a sphere is given by the arc of the great circle passing through the two points. A great circle is defined to be the intersection with a sphere of a plane containing the center of the sphere.

Two great circles

If the plane does not contain the center of the sphere, its intersection with the sphere is known as a small circle. In more everyday language, if we take an apple, assume it is a sphere, and cut it in half, we slice through a great circle. If we make a mistake, miss the center and hence cut the apple into two unequal parts, we will have sliced through a small circle.

Two small circles


Spherical Triangles:

If we wish to connect three points on a plane using the shortest possible route, we would draw straight lines and hence create a triangle. For a sphere, the shortest distance between two points is a great circle. By analogy, if we wish to connect three points on the surface of a sphere using the shortest possible route, we would draw arcs of great circles and hence create a spherical triangle. To avoid ambiguities, a triangle drawn on the surface of a sphere is only a spherical triangle if it has all of the following properties:

  • The three sides are all arcs of great circles.
  • Any two sides are together greater than the third side.
  • The sum of the three angles is greater than 180°.
  • Each spherical angle is less than 180°.

Hence, in figure below, triangle PAB is not a spherical triangle (as the side AB is an arc of a small circle), but triangle PCD is a spherical triangle (as the side CD is an arc of a great circle). You can see that the above definition of a spherical triangle also rules out the “triangle” PCED as a spherical triangle, as the vertex angle P is greater than 180° and the sum of the sides PC and PD is less than CED.

The figure below shows a spherical triangle, formed by three intersecting great circles, with arcs of length (a,b,c) and vertex angles of (A,B,C).

Note that the angle between two sides of a spherical triangle is defined as the angle between the tangents to the two great circle arcs, as shown in the figure below for vertex angle B.
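As a quick check of the angle-sum property, consider the spherical triangle whose vertices are the north pole and two points on the equator 90° of longitude apart: each side is a quarter of a great circle and each vertex angle comes out to 90°, for a sum of 270°. The sketch below recovers the vertex angles from the side lengths with the spherical law of cosines, cos a = cos b cos c + sin b sin c cos A (the specific triangle is an assumed example):

```python
import math

def vertex_angle(a, b, c):
    """Vertex angle (radians) opposite side a, for a spherical triangle with sides a, b, c in radians."""
    cos_A = (math.cos(a) - math.cos(b) * math.cos(c)) / (math.sin(b) * math.sin(c))
    return math.acos(cos_A)

# Triangle: the north pole plus two equatorial points 90 degrees of longitude apart.
a = b = c = math.radians(90)   # every side is a quarter of a great circle

angles = [vertex_angle(a, b, c) for _ in range(3)]   # all three vertex angles
print(math.degrees(sum(angles)))   # 270.0  (> 180 degrees, as required of a spherical triangle)
```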


Earth’s Surface:

The rotation of the Earth on its axis presents us with an obvious means of defining a coordinate system for the surface of the Earth. The two points where the rotation axis meets the surface of the Earth are known as the north pole and the south pole and the great circle perpendicular to the rotation axis and lying half-way between the poles is known as the equator. Great circles which pass through the two poles are known as meridians and small circles which lie parallel to the equator are known as parallels or latitude lines.

The latitude of a point is the angular distance north or south of the equator, measured along the meridian passing through the point. A related term is the co-latitude, which is defined as the angular distance between a point and the closest pole as measured along the meridian passing through the point. In other words, co-latitude = 90° – latitude.

Distance on the Earth’s surface is usually measured in nautical miles, where one nautical mile is defined as the distance subtending an angle of one minute of arc at the Earth’s center. A speed of one nautical mile per hour is known as one knot and is the unit in which the speed of a boat or an aircraft is usually measured.
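Because one nautical mile subtends one arcminute at the Earth's centre, the great-circle distance between two points is just the central angle between them expressed in arcminutes. A minimal sketch (Python); the London and New York coordinates are approximate illustrative values, not figures from the text:

```python
import math

def great_circle_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles between two points given in degrees."""
    p1, l1, p2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
    # Central angle from the spherical law of cosines.
    central = math.acos(math.sin(p1) * math.sin(p2) +
                        math.cos(p1) * math.cos(p2) * math.cos(l2 - l1))
    return math.degrees(central) * 60.0   # 1 arcminute of central angle = 1 nautical mile

# Approximate coordinates of London and New York (illustrative values).
print(f"{great_circle_nm(51.5, -0.1, 40.7, -74.0):,.0f} nautical miles")  # ~3,000 nm
```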


Horizon System:

Humans perceive in Euclidean space -> straight lines and planes. But when distances are not directly perceptible (i.e. very large), then the apparent shape that the mind draws is a sphere -> thus, we use a spherical coordinate system for mapping the sky, with the additional advantage that we can project Earth reference points (i.e. North Pole, South Pole, equator) onto the sky. Note: the sky is not really a sphere!

From the Earth's surface we envision a hemisphere and mark the compass points on the horizon. The circle that passes through the south point, the north point, and the point directly overhead (the zenith) is called the meridian.

This system allows one to indicate any position in the sky by two reference points, the time from the meridian and the angle from the horizon. Of course, since the Earth rotates, your coordinates will change after a few minutes.

The horizontal coordinate system (commonly referred to as the alt-az system) is the simplest coordinate system as it is based on the observer’s horizon. The celestial hemisphere viewed by an observer on the Earth is shown in the figure below. The great circle through the zenith Z and the north celestial pole P cuts the horizon NESYW at the north point (N) and the south point (S). The great circle WZE at right angles to the great circle NPZS cuts the horizon at the west point (W) and the east point (E). The arcs ZN, ZW, ZY, etc, are known as verticals.

The two numbers which specify the position of a star, X, in this system are the azimuth, A, and the altitude, a. The altitude of X is the angle measured along the vertical circle through X from the horizon at Y to X. It is measured in degrees. An often-used alternative to altitude is the zenith distance, z, of X, indicated by ZX. Clearly, z = 90 – a. Azimuth may be defined in a number of ways. For the purposes of this course, azimuth will be defined as the angle between the vertical through the north point and the vertical through the star at X, measured eastwards from the north point along the horizon from 0 to 360°. This definition applies to observers in both the northern and the southern hemispheres.

It is often useful to know how high a star is above the horizon and in what direction it can be found – this is the main advantage of the alt-az system. The main disadvantage of the alt-az system is that it is a local coordinate system – i.e. two observers at different points on the Earth's surface will measure different altitudes and azimuths for the same star at the same time. In addition, an observer will find that the star's alt-az coordinates change with time as the celestial sphere appears to rotate. Despite these problems, most modern research telescopes use alt-az mounts, as shown in the figure above, owing to their lower cost and greater stability. This means that computer control systems which can transform alt-az coordinates to equatorial coordinates are required.
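A sketch of the kind of transformation such a control system performs is shown below: given the observer's latitude and a star's declination and hour angle, the standard spherical-astronomy formulas give the altitude and the azimuth measured eastwards from north, as defined above. The example site and star values are assumptions chosen only for illustration.

```python
import math

def equatorial_to_altaz(latitude_deg, declination_deg, hour_angle_deg):
    """Convert declination and hour angle to altitude and azimuth (degrees).

    Azimuth is measured eastwards from the north point, 0-360 degrees.
    """
    phi = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    ha = math.radians(hour_angle_deg)

    sin_alt = math.sin(dec) * math.sin(phi) + math.cos(dec) * math.cos(phi) * math.cos(ha)
    alt = math.asin(sin_alt)

    cos_az = (math.sin(dec) - math.sin(alt) * math.sin(phi)) / (math.cos(alt) * math.cos(phi))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if math.sin(ha) > 0:          # star west of the meridian -> azimuth between 180 and 360
        az = 2 * math.pi - az

    return math.degrees(alt), math.degrees(az)

# Example: observer at latitude 52 N, star at declination +20, one hour past the meridian.
alt, az = equatorial_to_altaz(52.0, 20.0, 15.0)   # 1 hour of hour angle = 15 degrees
print(f"altitude ~ {alt:.1f} deg, azimuth ~ {az:.1f} deg, zenith distance ~ {90 - alt:.1f} deg")
```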

Dualities (String Theory)

Sometimes different mathematical theories describe the same physics. We call this situation a duality. In many cases, calculations which are very complicated in one theory become much easier in the other.

Usefully, string theory is awash with dualities. These variously offer us new perspectives on reality, improve our ability to compute hard sums and unite disparate areas of physics. Much of modern research focuses on using these dualities to better understand a broad spectrum of topics.

T-duality is the simplest to appreciate. Remember that string theory requires six extra dimensions tightly curled up in space. Naively, one would think that these dimensions could be arbitrarily big or arbitrarily small, with different physics holding in each case. However something strange happens when you make these dimensions very small. Of paramount importance is a tiny quantity known as the Planck length, which we denote by a.

How does the radius of a circular dimension affect the physics of string theory? We can appreciate how this works with a thought experiment.

Set up a circular extra dimension the size of the Planck length. Start contracting the circle and measure the resulting physics. Your readings will vary depending on the size of the dimension. Now repeat the experiment, but with a crucial difference; instead of contracting your circle, expand it.

Observing the physics again, you realise that it's exactly the same as for a contracting dimension! There is a duality between the two scenarios. Mathematically it can be proven that extra dimensions with radii r and a²/r produce the same physics: they are identical theories.
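The statement can be made concrete with the closed-string spectrum on a circle: schematically (ignoring oscillator contributions, and in units where ħ = c = 1), the momentum and winding contributions to the mass-squared are (n/r)² and (w·r/a²)², and swapping r for a²/r while exchanging the momentum number n with the winding number w leaves the spectrum untouched. A minimal numerical sketch, with a set to 1 and an arbitrary example radius:

```python
def mass_squared(n, w, r, a=1.0):
    """Momentum + winding contribution to the closed-string mass-squared on a circle of radius r.

    n : integer momentum number, w : integer winding number,
    a : the self-dual (string-scale) length, set to 1 here for illustration.
    """
    return (n / r) ** 2 + (w * r / a ** 2) ** 2

r = 2.7   # arbitrary example radius, in units of a
for n, w in [(1, 0), (0, 1), (2, 3)]:
    original = mass_squared(n, w, r)
    dual     = mass_squared(w, n, 1.0 / r)   # radius a^2/r (= 1/r here), with n and w exchanged
    print(f"n={n}, w={w}: {original:.4f} vs dual {dual:.4f}")   # identical values
```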

An extension of T-duality produces mirror symmetry. In many string theory models, the extra dimensions form a six dimensional shape called a Calabi-Yau manifold. Sadly there are millions of different Calabi-Yau surfaces, each with a slightly different structure. The properties of the Calabi-Yau manifolds affect the expected four-dimensional physics. So we must pin down the correct possibility for the physics we observe.

This requires a lot of calculation. And maths is hard in six dimensions, as you might guess! But here’s where mirror symmetry comes in. In the late 1980s it became clear that Calabi-Yau shapes come in pairs. For any given pair, both lead to the same physics. We have a duality! Physicists could chop and change between mirror pairs, making computations more tractable.

Our third duality is more fundamental: it underpins the success of M-theory. We’ll refer to it as S-duality. All quantum field theories contain a coupling constant, which determines the strength of interactions between particles. String theory is no exception. The value of the coupling constant vastly affects the behaviour predicted.

During the First Superstring Revolution physicists realised that there were five different brands of string theory. At first it seemed like they were all completely separate. But the discovery of various S-dualities sparked a paradigm shift. These dualities related the different flavours of string theory through a framework called M-theory.

More precisely physicists paired up the different types of string model, like so. Take two distinct string theories, A and B. They each have an adjustable coupling constant. If A has a large coupling constant and B a small one, then they predict exactly the same physics. The end result was that the many different string theories were united under a single banner.

Finally we come to the hottest guy in town. The AdS-CFT correspondence is a conjectured duality which has been around for barely a decade. Subtle yet powerful, it has profound implications for string theory as a tool in research. It’s such an important idea that it requires a full explanation.

Einstein's General Relativity (Fully Explained)

Spacetime and Energy

  • relativity unifies space, time, mass and energy
Special relativity and E=mc2 led to the most powerful unification of physical concepts since the time of Newton. The previously separate ideas of space, time, energy and mass were linked by special relativity, although without a clear understanding of how they were linked.

  • explanation provided by general relativity, where a complete theory of gravity is provided by using the geometry of spacetime
The how and why remained in the domain of what is called general relativity, a complete theory of gravity using the geometry of spacetime. The origin of general relativity lies in Einstein's attempt to apply special relativity in accelerated frames of reference. Remember that the conclusions of relativity were derived for inertial frames, i.e. ones that move only at a uniform velocity. Adding acceleration was a complication that took Einstein 10 years to formulate.

Equivalence Principle:

  • equivalence principle equates accelerating and gravity effects
The equivalence principle was Einstein's 'Newton's apple' insight into gravitation. His thought experiment was the following: imagine two elevators, one at rest on the Earth's surface, one accelerating in space. To an observer inside the elevator (no windows) there is no physical experiment that he/she could perform to differentiate between the two scenarios. The equivalence principle is a fundamental law of physics that states that gravitational and inertial forces are of a similar nature and often indistinguishable. In the Newtonian form it asserts, in effect, that, within a windowless laboratory freely falling in a uniform gravitational field, experimenters would be unaware that the laboratory is in a state of nonuniform motion. All dynamical experiments yield the same results as obtained in an inertial state of uniform motion unaffected by gravity.

  • although a simple and common sense assumption, the equivalence principle has strange consequences
  • for example, photons will be affected by gravity, even though they have zero mass
An immediate consequence of the equivalence principle is that gravity bends light. To visualize why this is true, imagine a photon crossing the elevator accelerating into space. As the photon crosses the elevator, the floor is accelerated upward and the photon appears to fall downward. The same must be true in a gravitational field by the equivalence principle. The principle of equivalence renders the gravitational field fundamentally different from all other force fields encountered in nature. The new theory of gravitation, the general theory of relativity, adopts this characteristic of the gravitational field as its foundation.

  • two classical tests of general relativity:
  • the first is the deflection of starlight by the Sun’s gravity as measured by the 1919 solar eclipse experiment
There were two classical tests of general relativity. The first was that light should be deflected by passing close to a massive body; the first opportunity to observe this occurred during a total eclipse of the Sun in 1919. Measurements of stellar positions near the darkened solar limb proved Einstein was right. Direct confirmation of gravitational lensing was later obtained by the Hubble Space Telescope.


General Relativity:

  • general relativity combines special relativity with the equivalence principle
  • general relativity first resolves the problem of the instantaneous transfer of gravity under Newton’s theory by stating that gravity propagates at the speed of light
The second part of relativity is the theory of general relativity, which rests on two empirical findings that Einstein elevated to the status of basic postulates. The first postulate is the relativity principle: local physics is governed by the theory of special relativity. The second postulate is the equivalence principle: there is no way for an observer to distinguish locally between gravity and acceleration. The general theory of relativity derives its origin from the need to extend the new space and time concepts of the special theory of relativity from the domain of electric and magnetic phenomena to all of physics and, particularly, to the theory of gravitation. As space and time relations underlie all physical phenomena, it is conceptually intolerable to have to use mutually contradictory notions of space and time in dealing with different kinds of interactions, particularly in view of the fact that the same particles may interact with each other in several different ways: electromagnetically, gravitationally, and by way of so-called nuclear forces.

Newton’s explanation of gravitational interactions must be considered one of the most successful physical theories of all time. It accounts for the motions of all the constituents of the solar system with uncanny accuracy, permitting, for instance, the prediction of eclipses hundreds of years ahead. But Newton’s theory visualizes the gravitational pull that the Sun exerts on the planets and the pull that the planets in turn exert on their moons and on each other as taking place instantaneously over the vast distances of interplanetary space, whereas according to relativistic notions of space and time any and all interactions cannot spread faster than the speed of light. The difference may be unimportant, for practical reasons, as all of the members of the solar system move at relative speeds far less than 1/1,000 of the speed of light; nevertheless, relativistic space-time and Newton’s instantaneous action at a distance are fundamentally incompatible. Hence Einstein set out to develop a theory of gravitation that would be consistent with relativity.

  • remembering that mass changes with motion, and that mass causes gravity, Einstein links mass, gravity and spacetime with the geometry of spacetime
Proceeding on the basis of the experience gained from Maxwell's theory of the electric field, Einstein postulated the existence of a gravitational field that propagates at the speed of light, c, and that will mediate an attraction as closely as possible equal to the attraction obtained from Newton's theory. From the outset it was clear that mathematically a field theory of gravitation would be more involved than that of electricity and magnetism. Whereas the sources of the electric field, the electric charges of particles, have values independent of the state of motion of the instruments by which these charges are measured, the source of the gravitational field, the mass of a particle, varies with the speed of the particle relative to the frame of reference in which it is determined and hence will have different values in different frames of reference. This complicating factor introduces into the task of constructing a relativistic theory of the gravitational field a measure of ambiguity, which Einstein resolved eventually by invoking the principle of equivalence. Einstein discovered that there is a relationship between mass, gravity and spacetime. Mass distorts spacetime, causing it to curve. Gravity can be described as motion caused in curved spacetime.

  • gravity as the geometry of spacetime returns physics to a geometric picture reminiscent of the ancient Greeks
  • however, spacetime is not Euclidean
  • matter tells spacetime how to curve, and spacetime tells matter how to move (orbits)
Thus, the primary result from general relativity is that gravitation is a purely geometric consequence of the properties of spacetime. Special relativity destroyed classical physics’ view of absolute space and time; general relativity dismantles the idea that spacetime is described by Euclidean or plane geometry. In this sense, general relativity is a field theory, relating Newton’s law of gravity to the field nature of spacetime, which can be curved. Gravity in general relativity is described in terms of curved spacetime. The idea that spacetime is distorted by motion, as in special relativity, is extended to gravity by the equivalence principle. Gravity comes from matter, so the presence of matter causes distortions or warps in spacetime. Matter tells spacetime how to curve, and spacetime tells matter how to move (orbits).
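That slogan is summarized compactly by Einstein's field equation, which is not written out in the source but is the standard statement of the theory: the left side describes the curvature (geometry) of spacetime, and the right side describes the matter and energy content telling it how to curve.

```latex
G_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```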

  • the 2nd test was the prediction of time dilation in a gravitational field, first shown with atomic clocks in the mid-1970s (note the need for advanced technology to test general relativity)
  • the effects of general relativity require sensitive instruments under the condition of weak fields, i.e. conditions where the escape velocity is much, much less than the speed of light
  • strong fields are found in extreme situations such as near neutron stars or black holes
The second test is that general relativity predicts a time dilation in a gravitational field, so that, relative to someone outside of the field, clocks (or atomic processes) run more slowly. This was confirmed with atomic clocks flown on airplanes in the mid-1970s.

The general theory of relativity is constructed so that its results are approximately the same as those of Newton’s theories as long as the velocities of all bodies interacting with each other gravitationally are small compared with the speed of light, i.e. as long as the gravitational fields involved are weak. The latter requirement may be stated roughly in terms of the escape velocity. A gravitational field is considered strong if the escape velocity approaches the speed of light, weak if it is much smaller. All gravitational fields encountered in the solar system are weak in this sense.

Notice that at low speeds and weak gravitational fields, general and special relativity reduce to Newtonian physics, i.e. everyday experience.
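As a small numerical sketch of what "weak field" means (the constants are standard values, not from the text): Earth's escape velocity is tens of thousands of times smaller than c, and the corresponding gravitational time dilation at the surface is under a part per billion.

```python
from math import sqrt

# Minimal sketch of the weak-field criterion: a field is "weak" when the escape
# velocity is far below c, and then gravitational time dilation is a tiny correction.
# All constants are standard assumed values.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0    # speed of light, m/s
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

v_escape = sqrt(2 * G * M_EARTH / R_EARTH)
dilation = sqrt(1 - 2 * G * M_EARTH / (R_EARTH * C**2))  # surface clock rate vs a distant clock

print(f"Escape velocity from Earth: {v_escape / 1000:.1f} km/s ({v_escape / C:.1e} of c)")
print(f"A surface clock ticks at {dilation:.12f} of the rate of a faraway clock")
```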


Black Holes:

  • as gravity increases the escape velocity increases
  • when escape velocity exceeds the speed of light a black hole forms
The fact that light is bent by a gravitational field brings up the following thought experiment. Imagine adding mass to a body. As the mass increases, so does the gravitational pull, and objects require more energy to reach escape velocity. When the mass is high enough that the velocity needed to escape exceeds the speed of light, we say that a black hole has been created.

  • since photons have zero mass, a better definition of a black hole is given by curvature
  • a black hole is an object of infinite curvature, a hole in spacetime
  • the Schwarzschild radius defines the event horizon, the point of no return around the black hole
Another way of defining a black hole is that for a given mass, there is a radius such that, if all the mass is compressed within it, the curvature of spacetime becomes infinite and the object is surrounded by an event horizon. This radius is called the Schwarzschild radius and varies with the mass of the object (large-mass objects have large Schwarzschild radii, small-mass objects have small Schwarzschild radii). The Schwarzschild radius is the radius below which the gravitational attraction between the particles of a body must cause it to undergo irreversible gravitational collapse. This phenomenon is thought to be the final fate of the more massive stars.

The gravitational radius (R) of an object of mass M is given by the following formula, in which G is the universal gravitational constant and c is the speed of light: R = 2GM/c². For a mass as small as a human being, the gravitational radius is of the order of 10⁻²³ cm, much smaller than the nucleus of an atom; for a typical star such as the Sun, it is about 3 km (2 miles).
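A minimal check of the formula against the two examples quoted above (assuming standard values for G and c, a 70 kg person, and one solar mass):

```python
# Sketch verifying R = 2GM/c^2 for the two cases the text quotes.
# Constants and masses are standard assumed values, not taken from the text.

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 299_792_458.0    # m/s

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius (m) inside which mass_kg would be hidden behind an event horizon."""
    return 2 * G * mass_kg / C**2

print(f"70 kg person: {schwarzschild_radius(70) * 100:.1e} cm")        # ~1e-23 cm
print(f"The Sun:      {schwarzschild_radius(1.989e30) / 1000:.1f} km")  # ~3 km
```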

  • a black hole is still detectable by its distortion of local spacetime and the deflection of starlight
The Schwarzschild radius marks the point where the event horizon forms; below this radius no light escapes. The visual image of a black hole is one of a dark spot in space with no radiation emitted. Any radiation falling on the black hole is not reflected but absorbed, and starlight from behind the black hole is lensed.

  • the structure of a black hole contains only an event horizon and a singularity
Even though a black hole is invisible, it has properties and structure. The boundary surrounding the black hole at the Schwarzschild radius is called the event horizon; events below this limit are not observed. Since the forces of matter cannot overcome the force of gravity, all the mass of a black hole is compressed to a point of infinite density at the very center, called the singularity.

  • the size of a black hole is set by its mass
A black hole can come in any size. Stellar-mass black holes are thought to form in supernova events and have radii of about 5 km. Galactic black holes in the cores of some galaxies, with millions of solar masses and radii the size of a solar system, are built up over time by cannibalizing stars. Mini black holes, formed in the early Universe (due to tremendous pressures), range down to the masses of asteroids, with radii the size of a grain of sand.

  • spacetime is severely distorted near the event horizon, and extreme effects are seen
Note that a black hole is the ultimate entropy sink since all information or objects that enter a black hole never return. If an observer entered a black hole to look for the missing information, he/she would be unable to communicate their findings outside the event horizon.

Quantum Gravity:

When it was discovered in the early twentieth century that Newtonian physics, although it had stood unchallenged for hundreds of years, failed to answer basic questions about time and space, such as ‘Is the universe infinite?’ or ‘Is time eternal?’, a new basis for physics was needed.

This led to the development of Quantum Theory by Bohr, Schrödinger and Heisenberg and Relativity Theory by Einstein. This was the first step in the development of a new basis for physics. Both theories, however, are incomplete and limited in their ability to answer many questions. Quantum physics deals with the behaviour of very small objects, such as atoms, explaining, for example, why they do not disintegrate as Newtonian physics would predict. The theory of relativity, on the other hand, deals with much larger scales, such as celestial bodies.

Both theories fail when confronted with the other’s ‘domain’, and are therefore limited in their ability to describe the universe. One must unify these theories and make them compatible with one another. The resulting theory would be able to describe the behavior of the universe, from quarks and atoms to entire galaxies. This is the quantum theory of gravity.

There are two fundamental areas of modern physics, each describing the universe on a different scale. First we have quantum mechanics, which talks about atoms, molecules and fundamental particles. Then we have general relativity, which tells us that gravity is the bending and warping of space-time. There has been much work on finding a theory that combines these two pillars of physics.

There are three main approaches to quantum gravity; all have their problems.

1) Loop quantum gravity.
2) String Theory.
3) Others: Penrose’s spin networks, Connes’ non-commutative geometry, etc.

1) Loop quantum gravity is a way to quantise spacetime while keeping what general relativity taught us. It is independent of a background gravitational field or metric, as it should be if we are dealing with gravity. Also, it is formulated in 4 dimensions. The main problem is that the other forces in nature (electromagnetic, strong and weak) cannot be included in the formulation. Nor is it clear how loop quantum gravity is related to general relativity.

2) Then we have string theory. String theory is a quantum theory whose fundamental objects are one-dimensional strings rather than point-like particles. String theory is “large enough” to include the standard model and necessarily includes gravity. The problems are threefold: first, the theory is background dependent, being formulated with respect to a background metric. Secondly, no one knows what the physical vacuum of string theory is, so it has little predictive power. Thirdly, string theory must be formulated in 11 dimensions; what happened to the other 7 that we cannot see? (Also, string theory is supersymmetric and predicts a host of new particles.)

3) Then we have other approaches, such as non-commutative geometry. This assumes that our space-time coordinates no longer commute, i.e. xy − yx is not zero. This formulation relies heavily on operator algebras.
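As a toy illustration of what non-commuting quantities look like (this uses ordinary matrices as an analogy only, not the actual non-commutative space-time coordinate algebra of these theories):

```python
import numpy as np

# Toy illustration of non-commutativity: for these two matrices, xy - yx is not zero.
# This is an analogy for non-commuting coordinates, not a model of any specific theory.

x = np.array([[0, 1],
              [0, 0]])
y = np.array([[0, 0],
              [1, 0]])

commutator = x @ y - y @ x
print(commutator)   # [[ 1  0]
                    #  [ 0 -1]]  -- non-zero, so x and y do not commute
```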

All of these theories have several things in common, which are accepted as being part of quantum gravity at about the Planck scale:

i) Space-time is discrete and non-commutative.
ii) Holography and the Bekenstein bound.

i) This is “simply” applying quantum mechanics to space-time. In quantum mechanics all the physical observables are discrete.

ii) The holographic principle was first realised by Hawking. He realised that the entropy of a black hole is proportional to the surface area of the horizon and not to the volume. That is, all the information about a black hole is on the surface of the horizon. It is like a hologram: you only need to look at the 2-D surface to know everything you can about the black hole.

Bekenstein showed that there is a maximum amount of information that can pass through a surface. It is quantised in Planck units.
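A small sketch of the area scaling behind these ideas (the Bekenstein-Hawking entropy formula S = k·A/(4·l_p²) is standard, though not written out in the source; the constants and the solar mass are assumed values): the entropy of a solar-mass black hole effectively counts Planck-sized patches on its horizon.

```python
from math import pi

# Sketch of the area scaling behind the holographic idea: black-hole entropy
# counts Planck-sized patches on the horizon, S = k_B * A / (4 * l_p^2).
# All constants are standard assumed values, not taken from the text.

G    = 6.674e-11       # m^3 kg^-1 s^-2
C    = 299_792_458.0   # m/s
HBAR = 1.055e-34       # J s
M_SUN = 1.989e30       # kg

r_s = 2 * G * M_SUN / C**2       # Schwarzschild radius of a solar-mass hole
area = 4 * pi * r_s**2           # horizon area, m^2
planck_area = HBAR * G / C**3    # l_p^2, m^2

entropy_in_kB = area / (4 * planck_area)
print(f"Horizon area: {area:.2e} m^2")
print(f"Entropy: about {entropy_in_kB:.1e} k_B  (set by the area, not the volume)")
```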

Einstein’s Special Relativity Fully Explained!

Special Theory of Relativity:

  • experiments with the electromagnetic wave properties of light find contradictions with the Newtonian view of space and time
  • Michelson-Morley experiment shows speed of light is constant regardless of motion of observer (!)
By the late 1800’s, it was becoming obvious that there were some serious problems for Newtonian physics concerning the need for absolute space and time when referring to events or interactions (frames of reference). In particular, the newly formulated theory of electromagnetic waves required that light propagation occur in a medium.

In a Newtonian Universe, there should be no difference in space or time regardless of where you are or how fast you are moving. In all places, a meter is a meter and a second is a second. And you should be able to travel as fast as you want, with enough acceleration.

In the 1880s, two physicists (Michelson and Morley) attempted to measure the Earth’s velocity around the Sun with respect to Newtonian absolute space and time. This would also test how light waves propagated, since all waves must move through a medium. For light, this medium was called the aether.

The result of the Michelson-Morley experiment was that the velocity of light was constant regardless of how the apparatus was oriented with respect to the Earth’s motion. This implied that there was no aether and, thus, no absolute space. Thus, objects, or coordinate systems, moving with constant velocity (called inertial frames) were relative only to themselves.

In Newtonian mechanics, quantities such as speed and distance may be transformed from one frame of reference to another, provided that the frames are in uniform motion (i.e. not accelerating).

 

  • Einstein makes the constant speed of light the key premise of special relativity
Considering the results of the Michelson-Morley experiment led Einstein to develop the theory of special relativity. The key premise of special relativity is that the speed of light (called c, about 186,000 miles per second) is constant in all frames of reference, regardless of their motion. What this means can best be demonstrated by the following scenario: imagine a spaceship moving at nearly the speed of light that switches on a headlight; the crew on board and an observer at rest both measure the emitted light traveling at exactly c.
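A minimal numerical version of that scenario, using the standard relativistic velocity-addition rule (the formula and constants are textbook assumptions, not taken from this text): combining any sub-light speed with another never reaches c, while combining anything with c still gives exactly c.

```python
# Sketch of relativistic velocity addition, illustrating the constancy of c.

C = 299_792_458.0  # speed of light, m/s

def add_velocities(u: float, v: float) -> float:
    """Speed an outside observer measures for something moving at u
    inside a frame that itself moves at v (one spatial axis)."""
    return (u + v) / (1 + u * v / C**2)

# A ship at 0.9c launches a probe forward at 0.9c: the result stays below c.
print(f"0.9c + 0.9c -> {add_velocities(0.9 * C, 0.9 * C) / C:.4f} c")   # 0.9945 c, not 1.8 c

# The same ship turns on a headlight: everyone still measures c.
print(f"c + 0.9c    -> {add_velocities(C, 0.9 * C) / C:.4f} c")         # 1.0000 c
```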

  • special relativity interprets light as a particle called a photon
  • photon moves at speed of light and has zero mass
  • speed of light is an absolute limit, objects with mass must move at less than speed of light
This eliminates the paradox, with respect to Newtonian physics and electromagnetism, of what a light ray ‘looks like’ when the observer is moving at the speed of light. The solution is that only massless photons can move at the speed of light, and that matter must remain below the speed of light regardless of how much acceleration is applied. In special relativity, there is a natural upper limit to velocity, the speed of light, and the speed of light is the same in all directions with respect to any frame. A surprising result of the speed-of-light limit is that clocks can run at different rates, simply because they are traveling at different velocities.

  • space and time are variable concepts in relativity
  • time dilation = passage of time slows for objects moving close to the speed of light
This means that time (and space) vary for frames of reference moving at different velocities with respect to each other. The change in time is called time dilation, where frames moving near the speed of light have slow clocks.
 

  • Likewise, space is shortened in high-velocity frames, which is called Lorentz contraction (see the sketch below)
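Both effects are governed by the same Lorentz factor; the sketch below (speeds chosen purely for illustration) shows how small the effects are at modest fractions of c and how quickly they grow as v approaches c.

```python
from math import sqrt

# Sketch of the Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2): moving clocks run
# slow by gamma (time dilation) and moving lengths shrink by 1/gamma (Lorentz
# contraction). Speeds are illustrative choices, not values from the text.

def gamma(v_over_c: float) -> float:
    return 1.0 / sqrt(1.0 - v_over_c**2)

for v in (0.1, 0.5, 0.9, 0.99):
    g = gamma(v)
    print(f"v = {v:.2f} c: 1 s on board looks like {g:.3f} s outside; "
          f"a 1 m rod appears {1 / g:.3f} m long")
```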


  • relativity leads to some strange consequences, such as the twin paradox
  • however, all these predictions have been confirmed numerous times by experiment
Time dilation leads to the famous Twins Paradox, which is not a paradox but rather a simple fact of special relativity. Since clocks run slower in frames of reference at high velocity, one can imagine a scenario where twins age at different rates when separated at birth due to a trip to the stars.

It is important to note that all the predictions of special relativity, length contraction, time dilation and the twin paradox, have been confirmed by direct experiments, mostly using sub-atomic particles in high energy accelerators. The effects of relativity are dramatic, but only when speeds approach the speed of light. At normal velocities, the changes to clocks and rulers are too small to be measured.


Spacetime:

  • relativity links where and when (space and time) into a 4 dimensional continuum called spacetime
  • positions in spacetime are events
  • trajectories through spacetime are called world lines
Special relativity demonstrated that there is a relationship between spatial coordinates and temporal coordinates: we can no longer reference where without some reference to when. Although time remains physically distinct from space, time and the three dimensional space coordinates are so intimately bound together in their properties that it only makes sense to describe them jointly as a four dimensional continuum.

Einstein introduced a new concept, that there is an inherent connection between the geometry of the Universe and its temporal properties. The result is a four dimensional (three of space, one of time) continuum called spacetime, which can best be demonstrated through the use of Minkowski diagrams and world lines.
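One standard way to make this joint description concrete (the formula is textbook special relativity, not spelled out in the source) is the invariant interval between two events, on which all inertial observers agree even though they disagree about the separate space and time pieces:

```latex
\Delta s^{2} = -c^{2}\,\Delta t^{2} + \Delta x^{2} + \Delta y^{2} + \Delta z^{2}
```

The minus sign in front of the time term is what distinguishes spacetime geometry from ordinary Euclidean geometry.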

  • determinism is hardened with the concept of spacetime since time now becomes tied to space
  • just as all space is `out there’, so is all time
Spacetime makes sense from special relativity since it was shown that spatial coordinates (Lorentz contraction) and temporal coordinates (time dilation) vary between frames of reference. Notice that under spacetime, time does not `happen’ as perceived by humans, but rather all time exists, stretched out like space in its entirety. Time is simply `there’.


Mass-Energy Equivalence:

  • if space and time are variable notions, then momentum must also be relative
  • in order to preserve conservation of energy, mass must be connected to momentum (i.e. energy)
Since special relativity demonstrates that space and time are variable concepts, velocity (which is distance divided by time) becomes a variable as well. If velocity changes from reference frame to reference frame, then concepts that involve velocity must also be relative. One such concept is momentum, the measure of an object’s motion. Momentum as defined by Newton cannot be conserved from frame to frame under special relativity. A new quantity had to be defined, called relativistic momentum, which is conserved, but only if the mass of the object is incorporated into the momentum equation.
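The relativistic momentum referred to here has a standard textbook form (not written out in the source); for speeds far below c the factor γ is essentially 1, so it reduces to Newton's p = mv:

```latex
p = \gamma\, m\, v, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```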

This has a big impact on classical physics because it means there is an equivalence between mass and energy, summarized by the famous Einstein equation E = mc².

 

  • mass increases as one nears the speed of light, which explains the limit to the speed of light for material objects: you would need infinite energy to accelerate an ever-increasing mass
The implications of this were not realized for many years. For example, the production of energy in nuclear reactions (i.e. fission and fusion) was shown to be the conversion of a small amount of atomic mass into energy. This led to the development of nuclear power and weapons.

As an object is accelerated close to the speed of light, relativistic effects begin to dominate. In particular, adding more energy to an object will not make it go faster, since the speed of light is the limit. The energy has to go somewhere, so it is added to the mass of the object, as observed from the rest frame. Thus, we say that the observed mass of the object goes up with increased velocity. So a spaceship would appear to gain the mass of a city, then a planet, then a star, as its velocity increased.
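To get a feel for the scale of mass-energy conversion, here is a small sketch using standard constants and a round 1-gram example mass (the numbers are illustrative, not figures from the text):

```python
# Sketch of the scale of E = mc^2; constants and the example mass are assumed values.

C = 299_792_458.0          # speed of light, m/s
KILOTON_TNT = 4.184e12     # joules per kiloton of TNT (conventional definition)

mass = 0.001               # 1 gram, in kg
energy = mass * C**2       # E = m c^2

print(f"Converting 1 gram of mass releases {energy:.2e} J")
print(f"That is roughly {energy / KILOTON_TNT:.0f} kilotons of TNT equivalent")
```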

  • mass-energy equivalence is perhaps the most fundamental discovery of the 20th century
  • photons have momentum, i.e. light exerts pressure (hence solar sails)
Likewise, the equivalence of mass and energy allowed Einstein to predict that the photon has momentum, even though its mass is zero. This allows the development of light sails and photoelectric detectors.
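A short sketch of what photon momentum means in practice for a light sail (the solar intensity and sail area below are assumed, illustrative values, not from the text):

```python
# Sketch of light pressure on a reflective sail near Earth.
# Solar intensity and sail area are assumed illustrative values.

C = 299_792_458.0         # m/s
SOLAR_INTENSITY = 1361.0  # W/m^2, approximate solar constant near Earth
SAIL_AREA = 1000.0        # m^2, a hypothetical sail

# Each photon carries momentum p = E / c, so a beam of power P delivers momentum
# at a rate P / c; a perfectly reflecting surface doubles that.
pressure = 2 * SOLAR_INTENSITY / C    # N/m^2
force = pressure * SAIL_AREA          # N

print(f"Radiation pressure near Earth: {pressure:.1e} Pa")
print(f"Force on a {SAIL_AREA:.0f} m^2 reflective sail: {force:.3f} N")
```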