Satellites Orbiting Earth

How a Satellite Works

Satellites are complex machines that depend on precise mathematical calculations in order to function. Each satellite carries tracking systems and very sophisticated computer systems on board. Accuracy in orbit and speed is required to keep the satellite from crashing back down to Earth. There are several different types of orbit that a satellite can take: some are stationary and some are elliptical.

Low Earth Orbit

A satellite is in “Low Earth Orbit” when it circles close to Earth, just hundreds of miles up. These satellites travel at high speed, which keeps gravity from pulling them back down to Earth. Low orbit satellites travel at approximately 17,000 miles per hour and circle the Earth in about an hour and a half.
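Those two numbers can be checked with the standard circular-orbit formula, v = sqrt(GM/r). Below is a minimal Python sketch; the 300 km altitude is an assumed example for illustration, not a figure from this article:

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24   # mass of Earth, kg
    R_EARTH = 6.371e6    # mean radius of Earth, m

    altitude = 300e3                  # assumed example LEO altitude, m
    r = R_EARTH + altitude            # orbital radius from Earth's center
    v = math.sqrt(G * M_EARTH / r)    # circular orbital speed, m/s
    T = 2 * math.pi * r / v           # orbital period, s

    print(f"speed : {v * 2.23694:,.0f} mph")   # roughly 17,000 mph
    print(f"period: {T / 60:.0f} minutes")     # roughly 90 minutes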

Polar Orbit


This is how a satellite travels in a polar orbit. These orbits eventually pass over the entire surface of the Earth.

Polar orbiting satellites circle the planet in a north-south direction while Earth spins beneath them in an east-west direction. Polar orbits enable satellites to scan the entire surface of the Earth, like peeling an orange in a circular motion from top to bottom. Remote sensing satellites, weather satellites, and government satellites are almost always in polar orbit because of this coverage. Polar orbits cover the Earth’s surface thoroughly, and the polar orbit occupied by a satellite has one constant location that it passes over. ALL POLAR ORBITING SATELLITES INTERSECT THE NORTH POLE AT THE SAME POINT. While one polar orbiting satellite is over America, another is passing over the North Pole, so the North Pole receives a constant flow of UHF and higher microwaves. The illustration shows that the common passing point for polar orbiting satellites is over the North Pole.

A polar orbiting satellite passes over the Earth’s equator at a different longitude on each of its orbits; however, it passes over the North Pole every time. Polar orbits are often used for Earth mapping, Earth observation, weather satellites, and reconnaissance satellites. This orbit has a disadvantage: no one spot of the Earth’s surface can be sensed continuously from a satellite in a polar orbit.

This is from U.S. Army Information Systems Engineering Command.

“In order to fulfill the military need for protected communication service, especially low probability of intercept/detection (LPI/LPD), to units operating north of 65 degree northern latitude, the space communications architecture includes the polar satellite system capability. An acceptable approach to achieving this goal is to fly a low capacity EHF system in a highly elliptical orbit, either as a hosted payload or as a “free-flyer,” to provide service during a transition period, nominally 1997-2010. A single, hosted EHF payload is already planned. Providing this service 24 hours-a-day requires a two satellite constellation at high earth orbit (HEO). Beyond 2010, the LPI/LPD polar service could continue to be provided by a high elliptical orbit HEO EHF payload, or by the future UHF systems.” (quote from www.fas.org)

THERE IS A CONSTANT 24 HOUR EHF AND HIGHER MICROWAVE TRANSMISSION PASSING OVER THE NORTH POLE!

“Geo Synchronous” Orbit


This is how a satellite travels in a “Geo Synchronous” orbit. Equatorial orbits are also called “Geostationary”. These satellites follow the rotation of the Earth.

A satellite in a “Geo Synchronous” orbit hovers over one spot, following the Earth’s spin along the equator. Earth takes 24 hours to spin on its axis. In the illustration you can see that a “Geo Synchronous” orbit follows the equator and never covers the North or South Poles. The footprints of “Geo Synchronous” satellites do not cover the polar regions, so communication satellites in these orbits cannot be accessed in the northern and southern polar regions.

Because a “Geo Synchronous” satellite does not move from the area it covers, these satellites are used for telecommunications, GPS tracking, television broadcasting, government, and internet service. Because they are stationary, their orbits are much farther from the Earth than those of polar orbiting satellites; a satellite closer to the Earth must orbit faster than the Earth rotates, so it cannot stay fixed over one spot. There are said to be about 300 “Geo Synchronous” satellites in orbit right now. Of course, these are only the satellites that the public is allowed to know about, the ones that are not governmentally classified.
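The height of such an orbit follows from Kepler’s third law: to circle the Earth exactly once per rotation, a satellite must sit at one particular radius. A small Python sketch using standard constants (no figures from this article):

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24   # mass of Earth, kg
    R_EARTH = 6.371e6    # mean radius of Earth, m

    T = 86164.0          # one rotation of Earth (sidereal day), seconds

    # Kepler's third law for a circular orbit: r^3 = G * M * T^2 / (4 * pi^2)
    r = (G * M_EARTH * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

    print(f"orbital radius: {r / 1e3:,.0f} km")              # ~42,200 km
    print(f"altitude      : {(r - R_EARTH) / 1e3:,.0f} km")  # ~35,800 km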

Satellite Anatomy


This is the Anatomy of a Satellite.

A satellite is made up of several instruments that work together to operate the satellite during its mission. The illustration to the left shows the parts of a satellite.

The command and data system controls all of the satellite’s functions. It is a very complex computer system that handles all of the satellite’s flight operations, where the satellite points, and any other onboard computations.

The pointing control directs the satellite so that it keeps a steady flight path. This system is a complex sensor instrument that keeps the satellite pointing in the same direction. The satellite uses spinning devices called “momentum wheels” to adjust its orientation and hold its proper position. Scientific observation satellites have more precise pointing systems than communications satellites.

The communications system has a transmitter, a receiver, and various antennas to transmit data to the Earth. On Earth, ground control sends instructions and data to the satellite’s computer through the antenna. Pictures, television, radio, and many other kinds of data are sent by the satellite back to practically everyone on Earth.

The power system that powers and operates the satellite is an efficient solar panel array that obtains energy from the Sun’s rays. The solar arrays make electricity from the sunlight and store it in rechargeable batteries.

The payload is whatever a satellite needs to perform its job. A weather satellite’s payload might consist of an image sensor, digital camera, telescope, and other thermal and weather sensing devices.

The thermal control is the protection required to prevent damage to the satellite’s instrumentation and components. Satellites are exposed to extreme temperature changes, ranging from 120 degrees below zero to 180 degrees above. Heat distribution units and thermal blankets protect the electronics and components from temperature damage.

Satellite Footprints

A single satellite footprint

Here you can see one footprint covers an enormous area.

Geostationary satellites have a very broad view of Earth. The footprint of one EchoStar broadcast satellite covers almost all of North America. Because these satellites stay over the same location on Earth, we always know where they are, and direct contact can always be made because equatorial satellites are fixed.

Many communications satellites travel in equatorial orbits, including those that relay TV signals into our homes; the footprint of just one such satellite covers all of North America.

The multipath effect that occurs when satellite transmissions are obstructed by topographical features also provides insight into microwave global warming. Microwaves are being bombarded upon our planet. Our planet absorbs and obstructs the waves from space. Microwaves penetrate all of our atmosphere and bounce and echo off of the Earth. Imagine the footprint overlaps being produced by the thousands of satellites in orbit right now.


Here you can see the overlapping footprints that satellites make. Each satellite covers an enormous area.

The closer the satellite is to something, the more power is exerted on the object; the farther the waves travel, the less power they have. Because the atmosphere is so much closer to the satellite than the surface is, a stronger beam of energy passes through the clouds and atmosphere, and this stronger power causes a higher rate of warming in the atmosphere than on the surface of the Earth.
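The distance claim is the familiar inverse-square law: a transmission spreads over a sphere, so its power density falls with the square of the distance. A minimal sketch, with made-up power and distances purely for illustration:

    import math

    def power_density(p_watts, d_meters):
        # Inverse-square law: the power spreads over a sphere of area 4*pi*d^2.
        return p_watts / (4 * math.pi * d_meters ** 2)

    P = 100.0                        # assumed transmitter power, watts
    for d_km in (500, 1000, 2000):   # assumed example distances
        s = power_density(P, d_km * 1e3)
        print(f"{d_km:>5} km : {s:.3e} W/m^2")
    # Doubling the distance cuts the power density to one quarter.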

The illustration to the right shows how eight satellites microwave an enormous part of our Earth. When the radio signals reflect off of surrounding terrain (buildings, canyon walls, hard ground), multipath issues occur as multiple waves double over themselves, and these delayed signals can cause poor reception. Ultimately, the water, ice, and earth are absorbing and reflecting microwaves in many different directions. Microwaves passing through Earth’s atmosphere cause radio frequency heating at the molecular level.

System spectral efficiency

“In wireless networks, the system spectral efficiency is a measure of the quantity of users or services that can be simultaneously supported by a limited radio frequency bandwidth in a defined geographic area.” The capacity of a wireless network can be measured by calculating the maximum simultaneous phone calls supported over 1 MHz of frequency spectrum, expressed in Erlangs/MHz/cell, Erlangs/MHz/sector, Erlangs/MHz/site, or Erlangs/MHz/km². Modern cell phones take advantage of this type of transmission; they transmit a microwave transmission that is twice the frequency of the microwave oven in your home.
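As a rough illustration of the Erlangs/MHz/cell measurement, here is a sketch with entirely hypothetical numbers:

    # All figures below are hypothetical, for illustration only.
    busy_hour_traffic = 180.0   # simultaneous calls carried, in Erlangs
    bandwidth_mhz = 5.0         # licensed spectrum, MHz
    num_cells = 12              # cells in the service area

    efficiency = busy_hour_traffic / (bandwidth_mhz * num_cells)
    print(f"system spectral efficiency: {efficiency:.2f} Erlangs/MHz/cell")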


This is a misconception of how microwave frequencies travel.

An example of spectral efficiency can be found in the satellite RADARSAT-1. Launched in 1995, RADARSAT-1 is an Earth observation satellite from Canada. It provides scientific and commercial images of the Earth used in agriculture, geology, hydrology, arctic surveillance, oceanography, cartography, ice and ocean monitoring, forestry, detecting ocean oil slicks, and many other applications. This satellite uses continuous high-frequency microwave transmissions: its Synthetic Aperture Radar (SAR) is a type of sensor that images the Earth at a single microwave frequency of 5.3 GHz. SAR systems transmit microwaves towards the surface of the Earth and record the reflections from the surface, so the satellite can image the Earth at any time and in any atmospheric condition.
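For a sense of scale, the 5.3 GHz figure corresponds to a wavelength of a few centimeters, squarely in the microwave band:

    c = 299_792_458.0   # speed of light, m/s
    f = 5.3e9           # RADARSAT-1 SAR frequency, Hz

    wavelength = c / f
    print(f"wavelength: {wavelength * 100:.1f} cm")   # about 5.7 cm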


This is how microwave frequencies actually travel.

A common misconception about microwave transmissions is that the transmission beams directly into the receiving antenna (see the misconception illustration). This, however, is not true. Transmissions spread into the air in a spherical pattern, and the waves travel in every direction until they find a receiver or some dielectric material to pass into.

When a microwave transmission is sent to a receiving satellite dish, the transmission spreads out spherically (see the illustration of how microwaves travel). The signal passes through all parts of that sphere until it finds a connection. All microwaves not received by an antenna pass into the dielectric material of the earth, which is primarily water and ice.

Nuclear Bombs: History, Creation, Ingredients, Chemical Composition, Fusion, Types and Detonation

American nuclear technology evolved rapidly between 1944 and 1950, moving from the primitive Fat Man and Little Boy to more sophisticated, lighter, more powerful, and more efficient designs. Much design effort shifted from fission to thermonuclear weapons after President Truman decided that the United States should proceed to develop a hydrogen bomb, a task which occupied the Los Alamos Laboratory from 1950 through 1952. The “George” shot of Operation Greenhouse (May 9, 1951) confirmed for the first time that a fission device could produce the conditions needed to ignite a thermonuclear reaction. The “Mike” test of Operation Ivy, 1 November, 1952, was the first explosion of a true two-stage thermonuclear device.

From 1952 until the early years of the ICBM era [roughly to the development of the first multiple independently targeted reentry vehicles (MIRVs) in the late 1960’s], new concepts in both fission primary and fusion secondary design were developed rapidly. However, after the introduction of the principal families of weapons in the modern stockpile (approximately the mid 1970’s), the rate of design innovations and truly new concepts slowed as nuclear weapon technology became a mature science. It is believed that other nations’ experiences have been roughly similar, although the United States probably has the greatest breadth of experience with innovative designs simply because of the more than 1,100 nuclear detonations it has conducted. The number of useful variations on the themes of primary and secondary design is finite, and designers’ final choices are frequently constrained by considerations of weapon size, weight, safety, and the availability of special materials.

Nuclear weaponry has advanced considerably since 1945, as can be seen at an unclassified level by comparing the size and weight of “Fat Man” with the far smaller, lighter, and more powerful weapons carried by modern ballistic missiles. Most nations of the world, including those of proliferation interest, have subscribed to the 1963 Limited Test Ban Treaty, which requires that nuclear explosions only take place underground. Underground testing can be detected by seismic means and by observing radioactive effluent in the atmosphere. It is probably easier to detect and identify a small nuclear test in the atmosphere than it is to detect and identify a similarly sized underground test. In either case, highly specialized instrumentation is required if a nuclear test explosion is to yield useful data to the nation carrying out the experiment.

US nuclear weapons technology is mature and might not have shown many more qualitative advances over the long haul, even absent a test ban. The same is roughly true for Russia, the UK, and possibly for France. The design of the nuclear device for a specific nuclear weapon is constrained by several factors. The most important of these are the weight the delivery vehicle can carry plus the size of the space available in which to carry the weapon (e.g., the diameter and length of a nosecone or the length and width of a bomb bay). The required yield of the device is established by the target vulnerability. The possible yield is set by the state of nuclear weapon technology and by the availability of special materials. Finally, the choices of specific design details of the device are determined by the taste of its designers, who will be influenced by their experience and the traditions of their organization.

Fission Weapons

An ordinary “atomic” bomb of the kind used in World War II uses the process of nuclear fission to release the binding energy in certain nuclei. The energy release is rapid and, because of the large amounts of energy locked in nuclei, violent. The principal materials used for fission weapons are U-235 and Pu-239, which are termed fissile because they can be split into two roughly equal-mass fragments when struck by a neutron of even low energy. When a large enough mass of either material is assembled, a self-sustaining chain reaction results after the first fission is produced. The minimum mass of fissile material that can sustain a nuclear chain reaction is called a critical mass and depends on the density, shape, and type of fissile material, as well as the effectiveness of any surrounding material (called a reflector or tamper) at reflecting neutrons back into the fissioning mass. Critical masses in spherical geometry for weapon-grade materials are as follows:

               Uranium-235    Plutonium-239
Bare sphere:   56 kg          11 kg
Thick tamper:  15 kg          5 kg

The critical mass of compressed fissile material decreases as the inverse square of the density achieved. Since critical mass decreases rapidly as density increases, the implosion technique can make do with substantially less nuclear material than the gun-assembly method. The “Fat Man” atomic bomb that destroyed Nagasaki in 1945 used 6.2 kilograms of plutonium and produced an explosive yield of 21-23 kilotons [a 1987 reassessment of the Japanese bombings placed the yield at 21 kt]. Until January 1994, the Department of Energy (DOE) estimated that 8 kilograms would typically be needed to make a small nuclear weapon. Subsequently, however, DOE reduced the estimate to 4 kilograms, and some US scientists believe that 1 kilogram of plutonium will suffice.
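The inverse-square rule above fits in one line of code. Using the 11 kg bare-sphere figure for plutonium from the table, here is a sketch of how compression shrinks the critical mass:

    def critical_mass(m0_kg, density_ratio):
        # Critical mass falls as the inverse square of the density achieved:
        # m_c = m_0 / (rho / rho_0)^2
        return m0_kg / density_ratio ** 2

    m0 = 11.0  # bare-sphere critical mass of Pu-239 from the table above, kg
    for ratio in (1.0, 1.5, 2.0):
        print(f"compression x{ratio}: {critical_mass(m0, ratio):.1f} kg")
    # Doubling the density cuts the critical mass to about a quarter (~2.8 kg),
    # which is why implosion needs less material than gun assembly.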

If any more material is added to a critical mass, a condition of supercriticality results. The chain reaction in a supercritical mass increases rapidly in intensity until the heat generated by the nuclear reactions causes the mass to expand so greatly that the assembly is no longer critical.

Fission weapons require a system to assemble a supercritical mass from a sub-critical mass in a very short time. Two classic assembly systems have been used: gun and implosion. In the simpler gun-type device, two subcritical masses are brought together by using a mechanism similar to an artillery gun to shoot one mass (the projectile) at the other mass (the target). The Hiroshima weapon was gun-assembled and used U-235 as fuel. Gun-assembled weapons using highly enriched uranium are considered the easiest of all nuclear devices to construct and the most foolproof.

Gun-Device

In the gun device, two pieces of fissionable material, each less than a critical mass, are brought together very rapidly to form a single supercritical one. This gun-type assembly may be achieved in a tubular device in which a high explosive is used to blow one subcritical piece of fissionable material from one end of the tube into another subcritical piece held at the opposite end of the tube.

Manhattan Project scientists were so confident in the performance of the “Little Boy” uranium bomb that the device was not even tested before it was used. This 15-kt weapon was airdropped on 06 August 1945 at Hiroshima, Japan. The device contained 64.1 kg of highly enriched uranium, with an average enrichment of 80%. The six bombs built by the Republic of South Africa were gun-assembled and used 50 kg of uranium enriched to between 80 percent and 93 percent in the isotope U-235. Compared with the implosion approach, this method assembles the masses relatively slowly and at normal densities; it is practical only with highly enriched uranium. If plutonium, even weapon-grade, were used in a gun-assembly design, neutrons released from spontaneous fission of its even-numbered isotopes would likely trigger the nuclear chain reaction too soon, resulting in a “fizzle” of dramatically reduced yield.

Implosion-Device

Plutonium decays by spontaneous fission of the isotope Pu-240, so the time interval between spontaneous neutron emissions is short and the number of background neutrons is large. Because of this, Manhattan Project scientists devised the implosion method of assembly, in which high explosives are arranged to form an imploding shock wave that compresses the fissile material to supercriticality.

The core of fissile material is formed into a supercritical mass by chemical high explosives (HE) or propellants. When the high explosive is detonated, an inwardly directed implosion wave is produced. This wave compresses the sphere of fissionable material. The decrease in surface-to-volume ratio of this compressed mass, plus its increased density, makes the mass supercritical. The HE is exploded by detonators timed electronically by a fuzing system, which may use altitude sensors or other means of control.

The nuclear chain reaction is normally started by an initiator that injects a burst of neutrons into the fissile core at an appropriate moment. The timing of the initiation of the chain reaction is important and must be carefully designed for the weapon to have a predictable yield. A neutron generator emits a burst of neutrons to initiate the chain reaction at the proper moment: near the point of maximum compression in an implosion design, or of full assembly in the gun-barrel design.

A surrounding tamper may help keep the nuclear material assembled for a longer time before it blows itself apart, thus increasing the yield. The tamper often doubles as a neutron reflector.

Implosion systems can be built using either Pu-239 or U-235, but gun assembly works only with uranium. Implosion weapons are more difficult to build than gun weapons, but they are also more efficient, requiring less special nuclear material (SNM) and producing larger yields. Iraq attempted to build an implosion bomb using U-235; North Korea, in contrast, chose to use Pu-239 produced in a nuclear reactor.

Boosted Weapons

To fission more of a given amount of fissile material, a small amount of material that can undergo fusion, deuterium-tritium (D-T) gas, can be placed inside the core of a fission device. Just as the fission chain reaction gets underway, the D-T gas undergoes fusion, releasing an intense burst of high-energy neutrons (along with a small amount of fusion energy as well) that fissions the surrounding material more completely. This approach, called boosting, is used in most modern nuclear weapons to maintain their yields while greatly decreasing their overall size and weight.

Enhanced Radiation Weapons

An enhanced radiation (ER) weapon, by special design techniques, has an output in which neutrons and x-rays are made to constitute a substantial portion of the total energy released. For example, a standard fission weapon’s total energy output would be partitioned as follows: 50% as blast; 35% as thermal energy; and 15% as nuclear radiation. An ER weapon’s total energy would be partitioned as follows: 30% as blast; 20% as thermal; and 50% as nuclear radiation. Thus, a 3-kiloton ER weapon will produce the nuclear radiation of a 10-kiloton fission weapon and the blast and thermal radiation of a 1-kiloton fission device. However, the energy distribution percentages of nuclear weapons are a function of yield.
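The 3-kiloton comparison is simple arithmetic on those percentages; a short sketch to verify it:

    def output_kt(yield_kt, fractions):
        # Split a total yield (kt) into blast/thermal/radiation components.
        return {k: round(yield_kt * v, 2) for k, v in fractions.items()}

    standard = {"blast": 0.50, "thermal": 0.35, "radiation": 0.15}
    enhanced = {"blast": 0.30, "thermal": 0.20, "radiation": 0.50}

    print(output_kt(3, enhanced))    # 3-kt ER weapon: 1.5 kt of nuclear radiation
    print(output_kt(10, standard))   # 10-kt fission weapon: also 1.5 kt of radiation
    print(output_kt(1, standard))    # 1-kt fission weapon, the article's blast/thermal comparison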

Fusion Weapons

A more powerful but more complex weapon uses the fusion of heavy isotopes of hydrogen, deuterium and tritium, to release large numbers of neutrons when the fusile (sometimes termed “fusionable”) material is compressed by the energy released by a fission device called a primary. Fusion (or thermonuclear) weapons derive a significant amount of their total energy from fusion reactions. The intense temperatures and pressures generated by a fission explosion overcome the strong electrical repulsion that would otherwise keep the positively charged nuclei of the fusion fuel from reacting. The fusion part of the weapon is called a secondary. In general, the x-rays from a fission primary heat and compress material surrounding the secondary fusion stage.

It is inconvenient to carry deuterium and tritium as gases in a thermonuclear weapon, and certainly impractical to carry them as liquefied gases, which requires high pressures and cryogenic temperatures. Instead, one can make a “dry” device in which 6Li is combined with deuterium to form the compound 6LiD (lithium-6 deuteride). Neutrons from a fission “primary” device bombard the 6Li in the compound, liberating tritium, which quickly fuses with the nearby deuterium. The alpha particles, being electrically charged and at high temperatures, contribute directly to forming the nuclear fireball. The neutrons can bombard additional 6Li nuclei or cause the remaining uranium and plutonium in the weapon to undergo fission. This two-stage thermonuclear weapon has explosive yields far greater than can be achieved with one-point-safe designs of pure fission weapons, and thermonuclear fusion stages can be ignited in sequence to deliver any desired yield. Such bombs, in theory, can be designed with arbitrarily large yields: the Soviet Union once tested a device with a yield of about 59 megatons.

In a relatively crude sense, 6Li can be thought of as consisting of an alpha particle (4He) and a deuteron (2H) bound together. When bombarded by neutrons, 6Li disintegrates into a triton (3H) and an alpha:

    6Li + n = 3H + 4He + energy.

This is the key to its importance in nuclear weapons physics. The nuclear fusion reaction which ignites most readily is

    2H + 3H = 4He + n + 17.6 MeV,

or, phrased in other terms, deuterium plus tritium produces helium-4 plus a neutron plus 17.6 MeV of free energy:

    D + T = 4He + n + 17.6 MeV.

Lithium-7 also contributes to the production of tritium in a thermonuclear secondary, albeit at a lower rate than 6Li. The fusion reactions derived from tritium produced from 7Li contributed many unexpected neutrons (and hence far more energy release than planned) to the final stage of the infamous 1954 Castle/BRAVO atmospheric test, nearly doubling its expected yield.
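The 17.6 MeV figure can be recovered from the mass defect of the D-T reaction, using standard published atomic masses and E = mc^2:

    # Masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.
    m_D  = 2.014102   # deuterium
    m_T  = 3.016049   # tritium
    m_He = 4.002602   # helium-4
    m_n  = 1.008665   # neutron

    delta_m = (m_D + m_T) - (m_He + m_n)   # mass defect, u
    energy_mev = delta_m * 931.494
    print(f"D + T releases about {energy_mev:.1f} MeV")   # ~17.6 MeV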

Safing, Arming, Fuzing, and Firing (SAFF)

The ability to make effective use of a nuclear weapon is limited unless the device can be handled safely, taken safely from storage when required, delivered to its intended target, and then detonated at the correct point in space and time to achieve the desired goal. Although the intended scenarios for use of its weapons will strongly influence specific weaponization concepts and approaches, functional capabilities for safing, arming, fuzing, and firing (SAFF) will be fundamental.

Nuclear weapons are particularly destructive, with immediate effects including blast and thermal radiation and delayed effects produced by ionizing radiation, neutrons, and radioactive fallout. They are expensive to build, maintain, and employ, requiring a significant fraction of the total defense resources of a small nation. In a totalitarian state the leader must always worry that they will be used against the government; in a democracy the possibility of an unauthorized or accidental use must never be discounted. A nuclear detonation as the result of an accident would be a local catastrophe.

Because of their destructiveness, nuclear weapons require precautions to prevent accidental detonation during any part of their manufacture and lifetime. And because of their value, the weapons require reliable arming and fuzing mechanisms to ensure that they explode when delivered to target. Therefore, any nuclear power is likely to pay some attention to the issues of safing and safety, arming, fuzing, and firing of its nuclear weapons. The solutions adopted depend upon the level of technology in the proliferant state, the number of weapons in its stockpile, and the political consequences of an accidental detonation.

Whether to protect their investment in nuclear arms or to deny potential access to and use of the weapons by unauthorized persons, proliferators or subnational groups will almost certainly seek special measures to ensure security and operational control of nuclear weapons. These are likely to include physical security and access control technologies at minimum and may include use control. The techniques used today by the existing western nuclear weapon states represent the culmination of a half-century of evolution in highly classified military programs, and proliferators may well choose simpler solutions, perhaps by adapting physical security, access, and operational controls used in the commercial sector for high-value/high-risk assets.

From the very first nuclear weapons built, safety was a consideration. The two bombs used in the war drops on Hiroshima and Nagasaki posed significant risk of accidental detonation if the B-29 strike aircraft had crashed on takeoff. As a result, critical components were removed from each bomb and installed only after takeoff and initial climb to altitude were completed. Both weapons used similar arming and fuzing components. Arming could be accomplished by removing a safety connector plug and replacing it with a distinctively colored arming connector. Fuzing used redundant systems including a primitive radar and a barometric switch. No provision was incorporated in the weapons themselves to prevent unauthorized use or to protect against misappropriation or theft.

In later years, the United States developed mechanical safing devices. These were later replaced with weapons designed to a goal of less than a 1-in-1-million chance of the weapon delivering more than 4 pounds of nuclear yield if the high explosives were detonated at the single most critical possible point. Other nations have adopted different safety criteria and have achieved their safety goals in other ways.

In the 1950’s, to prevent unauthorized use of U.S. weapons stored abroad, permissive action links (PALs) were developed. These began as simple combination locks and evolved into the modern systems, which allow only a few tries to arm the weapon before disabling the physics package should an intruder persist in attempts to defeat the PAL.

Safing: To ensure that the nuclear warhead can be stored, handled, deployed, and employed in a wide spectrum of intended and unintended environmental and threat conditions, with assurance that it will not experience a nuclear detonation. In U.S. practice, safing generally involves multiple mechanical interruptions of both power sources and pyrotechnic/explosive firing trains. The nuclear components may be designed so that an accidental detonation of the high explosives is intrinsically unable to produce a significant (>4 pounds TNT equivalent) nuclear yield; it is simpler, however, to insert mechanical devices into the pit to prevent the assembly of a critical mass, or to remove a portion of the fissile material from inside the high explosives. Mechanical safing of a gun-assembled weapon is fairly straightforward; one can simply insert a hardened steel or tungsten rod across a diameter of the gun barrel, disrupting the projectile. All U.S. weapons have been designed to be intrinsically one-point safe in the event of accidental detonation of the high explosives, but it is not anticipated that a new proliferator would take such care.

Arming: Placing the nuclear warhead in a ready operational state, such that it can be initiated under specified firing conditions. Arming generally involves mechanical restoration of the safing interrupts in response to conditions that are unique to the operational environment (launch or deployment) of the system. A further feature is that the environment typically provides the energy source to drive the arming action. If a weapon is safed by inserting mechanical devices into the pit (e.g., chains, coils of wire, bearing balls) to prevent complete implosion, arming involves removal of those devices. It may not always be possible to safe a mechanically armed device once the physical barrier to implosion has been removed.

Fuzing: To ensure optimum weapon effectiveness by detecting that the desired conditions for warhead detonation have been met and to provide an appropriate command signal to the firing set to initiate nuclear detonation. Fuzing generally involves devices to detect the location of the warhead with respect to the target, signal processing and logic, and an output circuit to initiate firing.

Firing: To ensure nuclear detonation by delivering a precise level of precisely timed electrical or pyrotechnic energy to one or more warhead detonating devices. A variety of techniques are used, depending on the warhead design and type of detonation devices.

Depending on the specific military operations to be carried out and the specific delivery system chosen, nuclear weapons pose special technological problems in terms of primary power and power-conditioning, overall weapon integration, and operational control and security.

Not all weapons possessors will face the same problems or opt for the same levels of confidence, particularly in the inherent security of their weapons. The operational objectives will in turn dictate the technological requirements for the SAFF subsystems. Minimal requirements could be met by surface burst (including impact fuzing of a relatively slow moving warhead) or by a crude preset height of burst based on a simple timer, barometric switch, or simple radar altimeter. Modest requirements could be met by more precise height of burst (HOB) based on improved radar triggering or other methods of measuring distance above ground to maximize the radius of selected weapons effects, with point-contact salvage fuzing, and by parachute delivery of bombs to allow deliberate laydown and surface burst. Substantial requirements could be met by variable HOB, including low altitude for ensured destruction of protected strategic targets, along with possible underwater or exoatmospheric capabilities.

Virtually any country or extranational group with the resources to construct a nuclear device has sufficient capability to attain the minimum SAFF capability that would be needed to meet terrorist or minimal national aims. The requirements to achieve a “modest” or “substantial” capability level are much more demanding. Both safety and protection of investment demand very low probability of failure of safing and arming mechanisms, with very high probability of proper initiation of the warhead. All of the recognized nuclear weapons states and many other countries have (or have ready access to) both the design know-how and components required to implement a significant capability.

In terms of sophistication, safety, and reliability of design, past U.S. weapons programs provide a legacy of world leadership in SAFF and related technology. France and the UK follow closely in overall SAFF design and may actually hold slight leads in specific component technologies. SAFF technologies of other nuclear powers, notably Russia and China, do not compare. Japan and Germany have technological capabilities roughly on a par with the United States, UK, and France, and doubtless have the capability to design and build nuclear SAFF subsystems.

Reliable fuzing and firing systems suitable for nuclear use have been built since 1945 and do not need to incorporate any modern technology. Many kinds of mechanical safing systems have been employed, and several of these require nothing more complex than removable wires or chains or the exchanging of arming/safing connector plugs. Safing a gun-assembled system is especially simple. Arming systems range from hand insertion of critical components in flight to extremely sophisticated instruments which detect specific events in the stockpile-to-target sequence (STS). Fuzing and firing systems span an equally great range of technical complexity.

Any country with the electronics capability to build aircraft radar altimeter equipment should have access to the capability for building a reasonably adequate, simple HOB fuze. China, India, Israel, Taiwan, South Korea, Brazil, Singapore, the Russian Federation and the Ukraine, and South Africa all have built conventional weapons with design features that could be adapted to more sophisticated designs, providing variable burst height and rudimentary electronic counter-countermeasure (ECCM) features. With regard to physical security measures and use control, the rapid growth in the availability and performance of low-cost, highly reliable microprocessing equipment has led to a proliferation of electronic lock and security devices suitable for protecting and controlling high-value/at-risk assets. Such technology is likely to meet the needs of most proliferant organizations.

Dualities (String Theory)

Sometimes different mathematical theories describe the same physics. We call this situation a duality. In many cases, calculations which are very complicated in one theory become much easier in the other.

Usefully, string theory is awash with dualities. These variously offer us new perspectives on reality, improve our ability to compute hard sums and unite disparate areas of physics. Much of modern research focuses on using these dualities to better understand a broad spectrum of topics.

T-duality is the simplest to appreciate. Remember that string theory requires six extra dimensions tightly curled up in space. Naively, one would think that these dimensions could be arbitrarily big or arbitrarily small, with different physics holding in each case. However something strange happens when you make these dimensions very small. Of paramount importance is a tiny quantity known as the Planck length, which we denote by a.

How does the radius of a circular dimension affect the physics of string theory? We can appreciate this through a thought experiment.

Set up a circular extra dimension the size of the Planck length. Start contracting the circle and measure the resulting physics. Your readings will vary depending on the size of the dimension. Now repeat the experiment, but with a crucial difference; instead of contracting your circle, expand it.

Observing the physics again, you realise that it’s exactly the same as for a contracting dimension! There is a duality between the two scenarios. Mathematically it can be proven that extra dimensions with radii r and a/r produce the same physics: they are identical theories.
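A toy version of this duality is easy to compute. For a closed string on a circle, the mass-squared of a state with n units of momentum and w units of winding goes like (n/r)^2 + (w*r)^2 in units where the fundamental length is 1; ignoring the oscillator contributions is a simplifying assumption of this sketch, not the full formula:

    def mass_squared(n, w, r):
        # Momentum modes contribute (n/r)^2, winding modes (w*r)^2.
        return (n / r) ** 2 + (w * r) ** 2

    r = 0.5
    states = [(n, w) for n in range(3) for w in range(3)]
    small = sorted(mass_squared(n, w, r) for n, w in states)
    large = sorted(mass_squared(n, w, 1 / r) for n, w in states)
    print(small == large)   # True: radius r and 1/r give identical spectra,
                            # with momentum and winding numbers exchanged.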

An extension of T-duality produces mirror symmetry. In many string theory models, the extra dimensions form a six dimensional shape called a Calabi-Yau manifold. Sadly there are millions of different Calabi-Yau surfaces, each with a slightly different structure. The properties of the Calabi-Yau manifolds affect the expected four-dimensional physics. So we must pin down the correct possibility for the physics we observe.

This requires a lot of calculation. And maths is hard in six dimensions, as you might guess! But here’s where mirror symmetry comes in. In the late 1980s it became clear that Calabi-Yau shapes come in pairs. For any given pair, both lead to the same physics. We have a duality! Physicists could chop and change between mirror pairs, making computations more tractable.

Our third duality is more fundamental: it underpins the success of M-theory. We’ll refer to it as S-duality. All quantum field theories contain a coupling constant, which determines the strength of interactions between particles. String theory is no exception. The value of the coupling constant vastly affects the behaviour predicted.

During the First Superstring Revolution physicists realised that there were five different brands of string theory. At first it seemed like they were all completely separate. But the discovery of various S-dualities sparked a paradigm shift. These dualities related the different flavours of string theory through a framework called M-theory.

More precisely physicists paired up the different types of string model, like so. Take two distinct string theories, A and B. They each have an adjustable coupling constant. If A has a large coupling constant and B a small one, then they predict exactly the same physics. The end result was that the many different string theories were united under a single banner.

Finally we come to the hottest guy in town. The AdS-CFT correspondence is a conjectured duality which has been around for barely a decade. Subtle yet powerful, it has profound implications for string theory as a tool in research. It’s such an important idea that it requires a full explanation.

Einstein's General Relativity (Fully Explained)

Spacetime and Energy

  • relativity unifies space, time, mass and energy
Special relativity and E=mc2 led to the most powerful unification of physical concepts since the time of Newton. The previously separate ideas of space, time, energy and mass were linked by special relativity, although without a clear understanding of how they were linked.

  • explanation provided by general relativity, where a complete theory of gravity is provided by using the geometry of spacetime
The how and why remained the domain of what is called general relativity, a complete theory of gravity using the geometry of spacetime. The origin of general relativity lies in Einstein’s attempt to apply special relativity in accelerated frames of reference. Remember that the conclusions of relativity were founded for inertial frames, i.e. ones that move only at a uniform velocity. Adding acceleration was a complication that took Einstein 10 years to formulate.

Equivalence Principle:

  • equivalence principle equates accelerating and gravity effects
The equivalence principle was Einstein’s “Newton’s apple” insight into gravitation. His thought experiment was the following: imagine two elevators, one at rest on the Earth’s surface and one accelerating in space. To an observer inside the elevator (no windows) there is no physical experiment that he/she could perform to differentiate between the two scenarios. The equivalence principle is a fundamental law of physics that states that gravitational and inertial forces are of a similar nature and often indistinguishable. In the Newtonian form it asserts, in effect, that, within a windowless laboratory freely falling in a uniform gravitational field, experimenters would be unaware that the laboratory is in a state of nonuniform motion. All dynamical experiments yield the same results as obtained in an inertial state of uniform motion unaffected by gravity.

  • although a simple and common-sense assumption, the equivalence principle has strange consequences
  • for example, photons will be affected by gravity, even though they have zero mass
An immediate consequence of the equivalence principle is that gravity bends light. To visualize why this is true, imagine a photon crossing the elevator accelerating into space. As the photon crosses the elevator, the floor is accelerated upward and the photon appears to fall downward. The same must be true in a gravitational field by the equivalence principle. The principle of equivalence renders the gravitational field fundamentally different from all other force fields encountered in nature. The new theory of gravitation, the general theory of relativity, adopts this characteristic of the gravitational field as its foundation.

  • two classical tests of general relativity:
  • the first is the deflection of starlight by the Sun’s gravity as measured by the 1919 solar eclipse experiment
There were two classical tests of general relativity. The first was that light should be deflected by passing close to a massive body. The first opportunity occurred during a total eclipse of the Sun in 1919. Measurements of stellar positions near the darkened solar limb proved Einstein was right. Direct confirmation of gravitational lensing was later obtained by the Hubble Space Telescope.


General Relativity:

  • general relativity combines special relativity with the equivalence principle
  • general relativity first resolves the problem of the instantaneous transfer of gravity under Newton’s theory by stating that gravity propagates at the speed of light
The second part of relativity is the theory of general relativity, which rests on two empirical findings that Einstein elevated to the status of basic postulates. The first postulate is the relativity principle: local physics is governed by the theory of special relativity. The second postulate is the equivalence principle: there is no way for an observer to distinguish locally between gravity and acceleration. The general theory of relativity derives its origin from the need to extend the new space and time concepts of the special theory of relativity from the domain of electric and magnetic phenomena to all of physics and, particularly, to the theory of gravitation. As space and time relations underlie all physical phenomena, it is conceptually intolerable to have to use mutually contradictory notions of space and time in dealing with different kinds of interactions, particularly in view of the fact that the same particles may interact with each other in several different ways: electromagnetically, gravitationally, and by way of so-called nuclear forces.

Newton’s explanation of gravitational interactions must be considered one of the most successful physical theories of all time. It accounts for the motions of all the constituents of the solar system with uncanny accuracy, permitting, for instance, the prediction of eclipses hundreds of years ahead. But Newton’s theory visualizes the gravitational pull that the Sun exerts on the planets and the pull that the planets in turn exert on their moons and on each other as taking place instantaneously over the vast distances of interplanetary space, whereas according to relativistic notions of space and time any and all interactions cannot spread faster than the speed of light. The difference may be unimportant, for practical reasons, as all of the members of the solar system move at relative speeds far less than 1/1,000 of the speed of light; nevertheless, relativistic space-time and Newton’s instantaneous action at a distance are fundamentally incompatible. Hence Einstein set out to develop a theory of gravitation that would be consistent with relativity.

  • remembering that mass changes with motion, and that mass causes gravity, Einstein links mass, gravity and spacetime with the geometry of spacetime
Proceeding on the basis of the experience gained from Maxwell’s theory of the electric field, Einstein postulated the existence of a gravitational field that propagates at the speed of light, c, and that will mediate an attraction as closely as possible equal to the attraction obtained from Newton’s theory. From the outset it was clear that mathematically a field theory of gravitation would be more involved than that of electricity and magnetism. Whereas the sources of the electric field, the electric charges of particles, have values independent of the state of motion of the instruments by which these charges are measured, the source of the gravitational field, the mass of a particle, varies with the speed of the particle relative to the frame of reference in which it is determined and hence will have different values in different frames of reference. This complicating factor introduces into the task of constructing a relativistic theory of the gravitational field a measure of ambiguity, which Einstein resolved eventually by invoking the principle of equivalence. Einstein discovered that there is a relationship between mass, gravity and spacetime. Mass distorts spacetime, causing it to curve. Gravity can be described as motion caused in curved spacetime.

  • gravity as geometry of spacetime returns physics to classic levels of the ancient Greeks
  • however, spacetime is not Euclidean
  • matter tells spacetime how to curve, and spacetime tells matter how to move (orbits)
Thus, the primary result from general relativity is that gravitation is a purely geometric consequence of the properties of spacetime. Special relativity destroyed classical physics’ view of absolute space and time; general relativity dismantles the idea that spacetime is described by Euclidean or plane geometry. In this sense, general relativity is a field theory, relating Newton’s law of gravity to the field nature of spacetime, which can be curved. Gravity in general relativity is described in terms of curved spacetime. The idea that spacetime is distorted by motion, as in special relativity, is extended to gravity by the equivalence principle. Gravity comes from matter, so the presence of matter causes distortions or warps in spacetime. Matter tells spacetime how to curve, and spacetime tells matter how to move (orbits).

  • the 2nd test was the prediction of time dilation in a gravitational field, first shown by atomic clocks in the mid-70’s (note the need of advanced technology to test general relativity)
  • the effects of general relativity require sensitive instruments under the condition of weak fields, i.e. conditions where the escape velocity is much, much less than the speed of light
  • strong fields are found in extreme situations such as near neutron stars or black holes
The second test is that general relativity predicts a time dilation in a gravitational field, so that, relative to someone outside of the field, clocks (or atomic processes) go slowly. This was confirmed with atomic clocks flown on airplanes in the mid-1970’s. The general theory of relativity is constructed so that its results are approximately the same as those of Newton’s theories as long as the velocities of all bodies interacting with each other gravitationally are small compared with the speed of light, i.e., as long as the gravitational fields involved are weak. The latter requirement may be stated roughly in terms of the escape velocity: a gravitational field is considered strong if the escape velocity approaches the speed of light, weak if it is much smaller. All gravitational fields encountered in the solar system are weak in this sense.
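The size of the airplane effect is easy to estimate in the weak-field limit, where a clock higher in the field runs fast by a factor of roughly 1 + gh/c^2. A sketch with an assumed cruising altitude and flight time (both assumptions, not figures from this article):

    g = 9.81               # surface gravity, m/s^2
    c = 299_792_458.0      # speed of light, m/s
    h = 10_000.0           # assumed cruising altitude, m

    rate_gain = g * h / c**2        # weak-field approximation: gh/c^2
    flight = 40 * 3600.0            # assumed 40 hours aloft, seconds

    print(f"fractional rate gain: {rate_gain:.2e}")                   # ~1e-12
    print(f"clock gain aloft    : {rate_gain * flight * 1e9:.0f} ns")
    # Of order 100 nanoseconds: tiny, but well within reach of atomic clocks.
    # (The airplane's velocity time dilation, ignored here, partly offsets this.)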

Notice that at low speeds and weak gravitational fields, general and special relativity reduce to Newtonian physics, i.e. everyday experience.


Black Holes:

  • as gravity increases the escape velocity increases
  • when escape velocity exceeds the speed of light a black hole forms
The fact that light is bent by a gravitational field brings up the following thought experiment. Imagine adding mass to a body. As the mass increases, so does the gravitational pull, and objects require more energy to reach escape velocity. When the mass is sufficiently high that the velocity needed to escape exceeds the speed of light, we say that a black hole has been created.

  • since photons have zero mass, a better definition of a black hole is given by curvature
  • a black hole is an object of infinite curvature, a hole in spacetime
  • the Schwarzschild radius defines the event horizon, the point of no return around the black hole
Another way of defining a black hole is that, for a given mass, there is a radius such that if all the mass is compressed within it, the curvature of spacetime becomes infinite and the object is surrounded by an event horizon. This radius is called the Schwarzschild radius and varies with the mass of the object (large mass objects have large Schwarzschild radii, small mass objects have small Schwarzschild radii). The Schwarzschild radius is the radius below which the gravitational attraction between the particles of a body must cause it to undergo irreversible gravitational collapse. This phenomenon is thought to be the final fate of the more massive stars.

The gravitational radius (R) of an object of mass M is given by the following formula, in which G is the universal gravitational constant and c is the speed of light: R = 2GM/c^2. For a mass as small as a human being, the gravitational radius is of the order of 10^-23 cm, much smaller than the nucleus of an atom; for a typical star such as the Sun, it is about 3 km (2 miles).
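Both of those numbers drop straight out of R = 2GM/c^2:

    G = 6.674e-11       # universal gravitational constant, m^3 kg^-1 s^-2
    c = 299_792_458.0   # speed of light, m/s

    def schwarzschild_radius(mass_kg):
        return 2 * G * mass_kg / c ** 2

    print(f"Sun   : {schwarzschild_radius(1.989e30):,.0f} m")   # about 3 km
    print(f"Person: {schwarzschild_radius(70.0):.1e} m")        # ~1e-25 m, i.e. ~1e-23 cm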

  • a black hole is still visible by its distortion on local spacetime and the deflection of starlight
The Schwarzschild radius marks the point where the event horizon forms; below this radius no light escapes. The visual image of a black hole is one of a dark spot in space with no radiation emitted. Any radiation falling on the black hole is not reflected but rather absorbed, and starlight from behind the black hole is lensed.

  • the structure of a black hole contains only an event horizon and a singularity
Even though a black hole is invisible, it has properties and structure. The boundary surrounding the black hole at the Schwarzschild radius is called the event horizon, events below this limit are not observed. Since the forces of matter can not overcome the force of gravity, all the mass of a black hole compresses to infinity at the very center, called the singularity.

  • the size of a black hole is set by its mass
A black hole can come in any size. Stellar mass black holes are thought to form from supernova events and have radii of about 5 km. Galactic black holes in the cores of some galaxies, with millions of solar masses and radii the size of a solar system, are built up over time by cannibalizing stars. Mini black holes, formed in the early Universe due to tremendous pressures, range down to the masses of asteroids, with radii the size of a grain of sand.

  • spacetime is severely distorted near the event horizon, and extreme effects are seen
Note that a black hole is the ultimate entropy sink since all information or objects that enter a black hole never return. If an observer entered a black hole to look for the missing information, he/she would be unable to communicate their findings outside the event horizon.

Quantum Gravity 

When it was discovered in the early twentieth century that Newtonian physics, although it had stood unchallenged for hundreds of years, failed to answer basic questions about time and space, such as ‘Is the universe infinite?’ or ‘Is time eternal?’, a new basis for physics was needed.

This led to the development of Quantum Theory by Bohr, Schrodinger and Heisenberg and of Relativity Theory by Einstein, the first step in the development of a new basis for physics. Both theories, however, are incomplete and limited in their abilities to answer many questions. Quantum physics deals with the behaviour of very small objects, such as atoms, and explains why they do not disintegrate as Newtonian physics said they should. The theory of relativity, on the other hand, deals with much larger scales: celestial bodies and the like.

Both theories fail when confronted with the other’s ‘domain’, and are therefore limited in their ability to describe the universe. One must unify these theories, making them compatible with one another. The resulting theory would be able to describe the behavior of the universe on every scale, from quarks and atoms to entire galaxies. This is the quantum theory of gravity.

There are two fundamental areas of modern physics, each describing the universe on a different scale. First we have quantum mechanics, which talks about atoms, molecules and fundamental particles. Then we have general relativity, which tells us that gravity is the bending and warping of space-time. There has been much work on finding a theory that combines these two pillars of physics.

There are three main approaches to quantum gravity; all have their problems.

1) Loop quantum gravity.
2) String Theory.
3) Others; Penrose spin networks, Connes non-commutative geometry etc.

1) Loop quantum gravity is a way to quantise space-time while keeping what general relativity taught us. It is independent of a background gravitational field or metric, as it should be if we are dealing with gravity. Also, it is formulated in 4 dimensions. The main problem is that the other forces in nature (electromagnetic, strong and weak) cannot be included in the formulation. Nor is it clear how loop quantum gravity is related to general relativity.

2) Then we have string theory. String theory is a quantum theory where the fundamental objects are one-dimensional strings rather than point-like particles. String theory is “large enough” to include the standard model and includes gravity as a must. The problems are threefold. First, the theory is background dependent: it is formulated with a background metric. Second, no one knows what the physical vacuum in string theory is, so it has no predictive power. Third, string theory must be formulated in 11 dimensions; what happened to the other 7 that we cannot see? (String theory is also supersymmetric and predicts a load of new particles.)

3) Then we have other approaches, such as non-commutative geometry. This assumes that our space-time coordinates no longer commute, i.e. xy - yx is not zero. This formulation relies heavily on operator algebras.

All the theories have several things in common which are accepted as being part of quantum gravity at about Planck scale.

i) Space-time is discrete and non-commutative. ii) Holography and the Bekenstein bound.

i) This is “simply” applying quantum mechanics to space-time. In quantum mechanics all the physical observables are discrete.

ii) The holographic principle was first realised by Hawking, who noticed that the entropy of a black hole was proportional to the surface area of the horizon and not the volume. That is, all the information about a black hole is on the surface of the horizon. It is like a hologram: you only need to look at the 2-d surface to know everything you can about the black hole.

Bekenstein showed that there is a maximum amount of information that can pass through a surface. It is quantised in Planck units.
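Putting the two ideas together, the entropy of a black hole is its horizon area measured in Planck units: S = A / (4 l_p^2) in units of Boltzmann's constant. A sketch for a one-solar-mass black hole:

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 299_792_458.0    # speed of light, m/s
    hbar = 1.0546e-34    # reduced Planck constant, J s

    M = 1.989e30                    # one solar mass, kg
    r_s = 2 * G * M / c**2          # Schwarzschild radius, m
    area = 4 * math.pi * r_s**2     # horizon area, m^2
    l_p_sq = G * hbar / c**3        # Planck length squared, m^2

    S = area / (4 * l_p_sq)         # Bekenstein-Hawking entropy, in units of k_B
    print(f"S ~ {S:.1e} k_B")       # ~1e77: it scales with area, not volume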

Einstein's Special Relativity Fully Explained!

Special Theory of Relativity:

  • experiments with the electromagnetic wave properties of light find contradictions with the Newtonian view of space and time
  • the Michelson-Morley experiment shows the speed of light is constant regardless of the motion of the observer (!)
By the late 1800s, it was becoming obvious that there were some serious problems for Newtonian physics concerning the need for absolute space and time when referring to events or interactions (frames of reference). In particular, the newly formulated theory of electromagnetic waves required that light propagation occur in a medium. In a Newtonian Universe, there should be no difference in space or time regardless of where you are or how fast you are moving. In all places, a meter is a meter and a second is a second. And you should be able to travel as fast as you want, with enough acceleration.

In the 1890’s, two physicists (Michelson and Morley) were attempting to measure the Earth’s velocity around the Sun with respect to Newtonian Absolute space and time. This would also test how light waves propagated since all waves must move through a medium. For light, this medium was called the aether.

The result of the Michelson-Morley experiment was that the velocity of light was constant regardless of how the experiment was tilted with respect to the Earth’s motion. This implied that there was no aether and, thus, no absolute space. Thus, objects, or coordinate systems, moving with constant velocity (called inertial frames) were relative only to themselves.

In Newtonian mechanics, quantities such as speed and distance may be transformed from one frame of reference to another, provided that the frames are in uniform motion (i.e. not accelerating).


  • Einstein makes the constant speed of light the key premise of special relativity
Considering the results of the Michelson-Morley experiment led Einstein to develop the theory of special relativity. The key premise of special relativity is that the speed of light (called c = 186,000 miles per sec) is constant in all frames of reference, regardless of their motion. What this means can best be demonstrated by the following scenario:

  • special relativity interprets light as a particle called a photon
  • photon moves at speed of light and has zero mass
  • speed of light is an absolute limit, objects with mass must move at less than speed of light
This eliminates the paradox, with respect to Newtonian physics and electromagnetism, of what a light ray ‘looks like’ when the observer is moving at the speed of light. The solution is that only massless photons can move at the speed of light, and that matter must remain below the speed of light regardless of how much acceleration is applied. In special relativity, there is a natural upper limit to velocity: the speed of light. And the speed of light is the same in all directions with respect to any frame. A surprising result of the speed-of-light limit is that clocks can run at different rates, simply because they are traveling at different velocities.
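
To see the limit numerically, here is a minimal sketch of my own (using the standard relativistic velocity-addition formula, not something from the original article) comparing Newtonian addition of velocities with the relativistic rule; however fast the pieces move, the combined speed never reaches c:

```python
C = 299_792_458.0  # speed of light in m/s

def newtonian_add(u, v):
    """Galilean velocity addition: speeds simply add."""
    return u + v

def relativistic_add(u, v):
    """Special-relativistic velocity addition: result always stays below c."""
    return (u + v) / (1.0 + u * v / C**2)

# A ship moving at 0.9c fires a probe forward at 0.9c (in the ship's frame).
u, v = 0.9 * C, 0.9 * C
print(newtonian_add(u, v) / C)     # 1.8   -- exceeds c, which is impossible
print(relativistic_add(u, v) / C)  # ~0.9945 -- still below c
```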

  • space and time are variable concepts in relativity
  • time dilation = passage of time slows for objects moving close to the speed of light
This means that time (and space) vary for frames of reference moving at different velocities with respect to each other. The change in time is called time dilation, where frames moving near the speed of light have slow clocks.

  • Likewise, space is shortened in high-velocity frames, an effect called Lorentz contraction (see the numerical sketch below)
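
Both effects are governed by the Lorentz factor γ. As a rough numerical sketch (my own illustration of the standard formulas, not from the original text): time intervals stretch by γ and lengths shrink by 1/γ:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def gamma(v):
    """Lorentz factor for speed v (must be below c)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for frac in (0.1, 0.5, 0.9, 0.99):
    g = gamma(frac * C)
    # A clock ticking 1 s in its own frame appears to take g seconds;
    # a 1 m ruler appears contracted to 1/g meters.
    print(f"v = {frac:4.2f}c  gamma = {g:6.3f}  "
          f"1 s -> {g:6.3f} s   1 m -> {1/g:5.3f} m")
```

At everyday speeds γ is indistinguishable from 1, which is why we never notice either effect.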


  • relativity leads to some strange consequences, such as the twin paradox
  • however, all these predictions have been confirmed numerous times by experiment
Time dilation leads to the famous twin paradox, which is not a paradox at all but rather a simple fact of special relativity. Since clocks run slower in frames of reference at high velocity, one can imagine a scenario where twins age at different rates when separated at birth due to a trip to the stars. It is important to note that all the predictions of special relativity (length contraction, time dilation and the twin paradox) have been confirmed by direct experiments, mostly using sub-atomic particles in high-energy accelerators. The effects of relativity are dramatic, but only when speeds approach the speed of light. At normal velocities, the changes to clocks and rulers are too small to be measured.


Spacetime:

  • relativity links where and when (space and time) into a 4 dimensional continuum called spacetime
  • positions in spacetime are events
  • trajectories through spacetime are called world lines
Special relativity demonstrated that there is a relationship between spatial coordinates and temporal coordinates, and that we can no longer reference where without some reference to when. Although time remains physically distinct from space, time and the three space coordinates are so intimately bound together in their properties that it only makes sense to describe them jointly as a four-dimensional continuum. Einstein introduced a new concept: that there is an inherent connection between the geometry of the Universe and its temporal properties. The result is a four-dimensional (three of space, one of time) continuum called spacetime, which can best be demonstrated through the use of Minkowski diagrams and world lines.
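
The geometric statement behind this is the invariant spacetime interval (standard notation, added here for reference): all inertial observers, whatever their relative velocity, agree on the value of

```latex
\Delta s^{2} = -c^{2}\,\Delta t^{2} + \Delta x^{2} + \Delta y^{2} + \Delta z^{2}
```

Lorentz contraction and time dilation are exactly the trades in Δt and Δx that keep this quantity the same for everyone.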

  • determinism is hardened with the concept of spacetime since time now becomes tied to space
  • just as all space is `out there’, so is all time
Spacetime makes sense from special relativity since it was shown that spatial coordinates (Lorentz contraction) and temporal coordinates (time dilation) vary between frames of reference. Notice that under spacetime, time does not `happen’ as perceived by humans, but rather all time exists, stretched out like space in its entirety. Time is simply `there’.


Mass-Energy Equivalence:

  • if space and time are variable notions, then momentum must also be relative
  • in order to preserve conservation of energy, mass must be connected to momentum (i.e. energy)
Since special relativity demonstrates that space and time are variable concepts, velocity (which is space divided by time) becomes a variable as well. If velocity changes from reference frame to reference frame, then concepts that involve velocity must also be relative. One such concept is momentum, the energy of motion. Momentum as defined by Newton cannot be conserved from frame to frame under special relativity. A new parameter had to be defined, called relativistic momentum, which is conserved, but only if the mass of the object is folded into the momentum equation.
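
Written out (standard formula, included here for reference), the relativistic momentum of a body of rest mass m moving at velocity v is

```latex
p = \gamma m v, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

For v much smaller than c, γ ≈ 1 and this reduces to the familiar Newtonian p = mv.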

This has a big impact on classical physics because it means there is an equivalence between mass and energy, summarized by the famous Einstein equation:

E = mc²

  • mass (as observed from a rest frame) increases as one nears the speed of light, which explains the speed-of-light limit for material objects: you would need infinite energy to accelerate an infinitely increasing mass
The implications of this were not realized for many years. For example, the production of energy in nuclear reactions (i.e. fission and fusion) was shown to be the conversion of a small amount of atomic mass into energy. This led to the development of nuclear power and weapons. As an object is accelerated close to the speed of light, relativistic effects begin to dominate. In particular, adding more energy to an object will not make it go faster, since the speed of light is the limit. The energy has to go somewhere, so it is added to the mass of the object, as observed from the rest frame. Thus, we say that the observed mass of the object goes up with increased velocity. So a spaceship would appear to gain the mass of a city, then a planet, then a star, as its velocity increased.
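
A quick numerical sketch (my own, using the standard E = γmc² relation) shows how the energy bill diverges as a 1,000 kg object approaches c:

```python
import math

C = 299_792_458.0   # speed of light, m/s
M = 1000.0          # rest mass of the object, kg

def total_energy(v, m=M):
    """Relativistic total energy E = gamma * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * m * C**2

rest = total_energy(0.0)
for frac in (0.9, 0.99, 0.999, 0.9999):
    # Energy grows without bound as v -> c: actually reaching c would cost
    # an infinite amount of energy.
    print(f"v = {frac}c  E/E_rest = {total_energy(frac * C) / rest:8.1f}")
```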

  • mass-energy equivalence is perhaps the most fundamental discovery of the 20th century
  • photons have momentum, i.e. pressure = solar sails
Likewise, the equivalence of mass and energy allowed Einstein to predict that the photon has momentum, even though its mass is zero. This allows the development of light sails and photoelectric detectors.
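
The photon’s momentum follows from the energy-momentum relation with zero rest mass (standard result, quoted here for reference):

```latex
E^{2} = (pc)^{2} + (mc^{2})^{2},
\qquad
m = 0 \;\Rightarrow\; p = \frac{E}{c} = \frac{h}{\lambda}
```

This tiny but non-zero momentum is exactly the “pressure” that a solar sail collects from sunlight.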

SuperSymmetry And Extra Dimensions

Supersymmetry (SUSY) was proposed in the early 1970s as a further symmetry in nature. The Standard Model divides particles into two camps, called fermions and bosons. All the usual matter particles we observe (like electrons and quarks) are fermions. Every normal force-carrying particle (like a photon or graviton) is a boson.

Roughly speaking, SUSY claims that there’s a way to replace fermions with bosons such that the laws of physics remain the same. Regardless of whether particles are strings or points, SUSY implies a connection between properties of bosonic and fermionic particles.

Supersymmetry tells us that every particle has a partner which differs in spin by half a unit. All particles have spin; it’s a bit like the rate at which the Earth rotates on its axis. Spin is an intrinsic quantum mechanical property that does not change: if you change the spin of a photon, it is not a photon any more. Fermions have half-integer spins (½, 1½, 2½, etc.). Bosons have integer spins (0, 1, 2, etc.). The force carriers of the strong, weak and electromagnetic forces have spin 1, and the graviton spin 2.

But none of the particles that we ordinarily detect can be partners with each other. Physicists worked out that the new super-partners had to be much heavier than their counterparts, and gave them strange names like squarks, selectrons and photinos. No supersymmetric particles have been discovered so far, but evidence for supersymmetry at particle accelerators like the Large Hadron Collider at CERN would be a landmark for 21st century physics.

Including SUSY makes a big difference to string theory. Supersymmetric string theory (or superstring theory) describes both bosons and fermions, and removes the impossible tachyon (a hypothetical particle that always travels faster than the speed of light). Plus it only requires ten dimensions, compared to twenty-six for bosonic string theory. This is a lot closer to the four dimensions we usually experience. All modern work in string theory is based on the superstring. Originally there appeared to be five consistent and distinct superstring theories; it would take a revolution to realise that these were all smoothly connected. They are all part of M-theory.

Extra dimensions are string theory’s most outlandish prediction. String theory demands that our cosy 4D view of the world is wrong. In fact the universe of strings must have ten dimensions! This is immediately at odds with our perception of reality, but we can resolve the paradox by requiring the six unseen dimensions to be incredibly small.

So what makes a dimension? Intuitively each dimension is an independent direction in which we can move. We live in three dimensions of space, “forward-backward”, “left-right” and “up-down”. There’s also a single time dimension, “past-future”, making 4 dimensions in total. But our perception of dimension is greatly affected by scale.

Imagine watching a faraway ship approaching port. It starts out looking like a zero-dimensional dot on the horizon. Soon you realise it has a mast pointing high into the sky: it now appears to be a one-dimensional line. Next, its sails come into view making it seem two-dimensional. As it nears the dock you finally notice that it has a long deck, the third dimension.

There’s nothing strange here. It’s just that at large distances we can’t resolve dimensions. So perhaps there could be extra dimensions, so small that we don’t perceive them. The process of curling up space to produce these tiny invisible dimensions is known as compactification.

Suppose you’re a squirrel living on an infinitely long tree trunk. The trunk is (more or less) a cylinder. You can move in two independent directions, “along” and “around”. One day you get bored and move to a thinner tree: the circumference of the trunk is greatly reduced.

Now your “around” dimension is much smaller than it used to be. It only takes a few steps to go all the way round the trunk. Any meaningful movement has to be done in the “along” dimension. You jump to a yet finer trunk. Now a single step takes you round the tree a hundred times! The “around” dimension has become far too small for you to detect.

As the tree trunks get narrower, the dimensions of your world reduce from two to one. In string theory this must happen for all six extra dimensions. We wrap them up so they are inconceivably tiny. Every time you move your hand through space you circle the six hidden dimensions a vast number of times.

The size of these compactified dimensions is similar to the length of a string, the Planck scale. This has two important consequences. Firstly it’s unlikely that we’ll be able to detect them by direct experiment. Nevertheless several possible tests have been suggested, though generally they rely on having a healthy slice of luck. Secondly the extra dimensions form a surface which strings can become caught up in.

The shape and size of strings is vital to modelling their vibrations and interactions. Therefore it’s important to understand how they wrap themselves around the six curled-up dimensions. The precise structure of the surface formed by compactification changes the physics arising from the strings.

It turns out there are many different ways of mushing up the extra dimensions into a tiny space. Which method gives rise to conventional physics? Nobody knows! Current research focuses on Calabi-Yau manifolds, a promising group of compactifications. But as yet there is no definitive answer.

Strings And Particles

We’ve learnt that all strings vibrate as a superposition of modes. Each mode is a particular type of vibration and has an associated energy.

In quantum string theory every mode is identified with a fundamental particle. The equations describing the mode correspond exactly with those defining the particle. For example, the mathematical laws governing photons naturally emerge as the equations for a particular string mode.

This is totally unexpected. String dynamics has nothing to do with electromagnetism, yet Maxwell’s equations appear from nowhere. And that’s not all: the same magic happens for the other forces of the Standard Model, and for the graviton too. This is why string theory is a candidate for a fundamental unified theory.

As strings vibrate more and more vigorously, their modes give rise to an infinite number of particles. Einstein’s famous equation E = mc² tells us that there is an equivalence between energy and mass: the more energetic the string, the heavier the corresponding particle is.

Because fundamental strings are so very small, they form incredibly tight loops and therefore require a colossal amount of energy in order to vibrate. You could pack a million billion billion of these strings into the width of a human hair, but each has the same energy as a train roaring down a track at maximum speed!

So if our strings have such enormous energy, how could they ever correspond to the fundamental particles we observe? Indeed, these have truly tiny masses and thus small energies. Luckily, quantum mechanics comes to the rescue.

Remember the uncertainty principle? It implies that seemingly empty space is filled with energy, called vacuum energy. Physicists worked out that this vacuum energy could cancel with the vibrational energy of the strings, lowering their overall energy and mass.
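
The relation being invoked is the energy-time uncertainty principle (standard form, added here for reference):

```latex
\Delta E \,\Delta t \gtrsim \frac{\hbar}{2}
```

Over short enough times, the energy of “empty” space can fluctuate, and it is this vacuum energy that offsets the string’s huge vibrational energy.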

This cancellation allows vibrating strings to appear as massless, or almost massless, particles. One of string theory’s most celebrated results is that the vibrations of closed strings automatically give a massless particle with all the properties of the elusive graviton. Gravity emerges naturally from string theory!

Only a few of the string modes would correspond to the particles we see around us. There are infinitely many more particles predicted by string theory, far too heavy to detect in our current laboratories. But there could be indirect ways to look for them. If they were to be discovered it would be a huge leap for string theory.

The first string theories had a few problems. They accounted for bosons (such as force particles like the photon) but did not contain fermions (matter particles such as the electron) at all. Secondly, the mathematics required twenty-six spacetime dimensions! Lastly, the theory predicted an impossible particle with negative mass-squared, called the tachyon.

Thankfully supersymmetry solved these difficulties. Superstring theory produces fermions as well as bosons and removes the need for an impossible tachyon. It also brings the required dimensions down to ten – nine space and one time.

A World Of Strings

Welcome to the theory where everything is made of strings. They’re much as you might imagine from everyday life: strings can wiggle and contort in myriad ways, and in doing so they give rise to particles and forces. Remarkably, the motion of a single string can encode both.

To understand string theory we must study the physics of strings. It’s good to start with a musical example: you’re playing guitar in a rock band. It’s a huge gig and the crowd is expectant. Once the noise has died down you pluck your first string. It begins to move, producing a characteristic musical note.

As you play, your fingers cause other strings to vibrate. Depending on their shape and size they sound at different pitches. The overall effect is a breathtaking display of harmony and rhythm; it’s no wonder you guys are so successful. And all because of some vibrating strings.

The analogy with string theory is immediate. The guitar strings become the fundamental strings of nature. These vibrate in different ways depending on their length and energy. The different musical pitches correspond to individual particles. The harmonies of the band represent interactions between these particles.

It turns out that these fundamental strings are probably incredibly small. The best estimate for a typical string length is 10^-33 cm. This length is often called the Planck scale. If you were to lay a thousand billion billion strings end to end you would only just cover the width of a single atom. They are too minuscule to be detected by our current accelerators. This means that we must look for experimental evidence indirectly.

We partition strings into two categories. Open strings have two endpoints. These might be fixed like on a guitar or could be free to move as they please. Fixing the endpoints in particular ways gives rise to distinct vibrational patterns. Closed strings have no endpoints and so form a complete loop.

A rubber band is a good model for a string. Vibrations involve the band stretching and compressing. The tension in the rubber provides the energy to drive the motion. The tiny fundamental strings require huge tension to keep them small. Correspondingly they oscillate with very large energies.

By Einstein’s famous equation E = mc² we know that energy is equivalent to mass. Hence a high-energy string is the same as a very heavy string. The typical mass of a string is astronomical compared to that of a proton. But if strings are so heavy, how can they possibly constitute elementary particles? Fortunately, quantum corrections sort out the issue.

So how do strings vibrate? Unsurprisingly they undulate like waves. You can easily see this with a skipping rope. Fix one end of the rope and hold the other. By moving your arm quickly you can send a wave along the material. Different movements create complex patterns of several waves added together. Physicists call this phenomenon a superposition of waves.

So a string sways as a superposition of different oscillations. Each constituent vibration is known as a mode. Adding up all the modes gives you the complex dynamics of the skipping rope. The same is true of our microscopic strings. We can now precisely explain how strings produce particles.
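
Here is a minimal sketch (my own illustration, using the standard mathematics of a vibrating string rather than anything specific to string theory) of how a few modes add up into one complicated motion:

```python
import math

L_ROPE = 1.0   # length of the rope/string
SPEED = 1.0    # wave speed along it

def mode(n, x, t, amplitude):
    """Displacement of the n-th standing-wave mode at position x, time t."""
    omega = n * math.pi * SPEED / L_ROPE   # frequency of mode n
    return amplitude * math.sin(n * math.pi * x / L_ROPE) * math.cos(omega * t)

def displacement(x, t, amplitudes=(1.0, 0.5, 0.25)):
    """Total shape = superposition (sum) of the individual modes."""
    return sum(mode(n, x, t, a) for n, a in enumerate(amplitudes, start=1))

# Sample the shape of the string at t = 0.3 across its length.
for i in range(6):
    x = i * L_ROPE / 5
    print(f"x = {x:4.2f}  y = {displacement(x, 0.3):+.3f}")
```

In string theory each of these modes, with its characteristic energy, is what gets identified with a particle.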

Gravitational Waves, OK. But Where Are Gravitons? (Very Detailed)

Intro.

Present-day physics cannot describe what happened in the Big Bang. Quantum theory and the theory of relativity fail in this almost infinitely dense and hot primal state of the universe. Only an all-encompassing theory of quantum gravity, one which unifies these two fundamental pillars of physics, could provide an insight into how the universe began. Einstein and his successors have been searching for such a theory for almost one hundred years.

In some modern theories of quantum gravity that try to unify General Relativity and Quantum Mechanics, space consists of tiny elementary cells or “atoms of space”. Quantum gravity should make it possible to describe the evolution of the universe from the Big Bang to today within one single theory.

Our world is ruled by four fundamental forces: the gravitational pull of massive objects, the electromagnetic interaction between electric charges, the strong nuclear interaction holding atomic nuclei together and the weak nuclear force causing unstable ones to fall apart. Physicists have quantum theories for the last three of them that allow very precise calculations of phenomena on the smallest, subatomic scales. However, gravity does not fit into this scheme. Despite decades of research, there is no generally accepted quantum theory of gravity, which is needed to better understand fundamental aspects of our universe.

Gravitons.

Gravitons are the hypothetical particles that carry the “force” of gravity. They are what brings you back down to Earth when you jump. So why have we never seen them, and why are they so impossibly complicated that we need string theory to figure them out?

Even without observing gravitons, scientists know a few things about them. They know, because gravity is a force with an infinite reach, that gravitons would have to be massless. This technically makes them “gauge bosons,” and puts them in the company of photons and gluons. Scientists also know that gravitons have a spin of two, which makes them unique among particles. The combined properties mean that, if scientists were able to pin down an event involving a mysterious particle with no mass and a spin of two, they would know they were looking at a graviton.

Where String Theory Jumps In.

In our everyday lives, we experience three spatial dimensions, and a fourth dimension of time. How could there be more? Einstein’s general theory of relativity tells us that space can expand, contract, and bend. Now if one dimension were to contract to a size smaller than an atom, it would be hidden from our view. But if we could look on a small enough scale, that hidden dimension might become visible again. Imagine a person walking on a tightrope. She can only move backward and forward; but not left and right, nor up and down, so she only sees one dimension. Ants living on a much smaller scale could move around the cable, in what would appear like an extra dimension to the tightrope-walker.

How could we test for extra dimensions? One option would be to find evidence of particles that can exist only if extra dimensions are real. Theories that suggest extra dimensions predict that, in the same way as atoms have a low-energy ground state and excited high-energy states, there would be heavier versions of standard particles in other dimensions. These heavier versions of particles – called Kaluza-Klein states – would have exactly the same properties as standard particles (and so be visible to our detectors) but with a greater mass. If CMS or ATLAS were to find a Z- or W-like particle (the Z and W bosons being carriers of the electroweak force) with a mass 100 times larger for instance, this might suggest the presence of extra dimensions. Such heavy particles can only be revealed at the high energies reached by the Large Hadron Collider (LHC).
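
The usual back-of-the-envelope version of this (standard Kaluza-Klein reasoning, added here for reference) is that a particle of rest mass m₀ moving in a circular extra dimension of radius R acquires a tower of heavier copies:

```latex
m_n = \sqrt{m_0^{2} + \frac{n^{2}\hbar^{2}}{R^{2}c^{2}}}, \qquad n = 0, 1, 2, \ldots
```

The smaller the radius R, the heavier the first excited copy, which is why such particles could only show up at very high collision energies.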

Why is gravity so much weaker than the other fundamental forces? A small fridge magnet is enough to create an electromagnetic force greater than the gravitational pull exerted by planet Earth. One possibility is that we don’t feel the full effect of gravity because part of it spreads to extra dimensions. Though it may sound like science fiction, if extra dimensions exist, they could explain why the universe is expanding faster than expected, and why gravity is weaker than the other forces of nature.

There is, however, a major problem. To understand it, let’s go back to photons and electrons. When an electron falls from one energy level to another, out pops a photon. When that photon falls, or otherwise moves, it produces no second photon. Electron movement produces photons; photon movement does not produce more photons. There are occasional times when photons can do odd things: they can split into electron-positron pairs, which can produce more photons, and which then recombine into a photon again. Although this burst of particles may get hectic, it doesn’t produce an endless branching chain of photons. Because of this, photon and electron interactions are said to be renormalizable. They can get weird, but they can’t become endless.

Gravitons are not so tame. While photons are spawned by movement in electrons, gravitons are whelped by energy and mass. Gravitons are massless, but they do carry energy. This means a graviton can create more gravitons.

Like other quantum particles, gravitons can carry a lot of energy, or momentum, when confined to a small space. A graviton is confined to a small space when one graviton is popping out another graviton. At that moment, two gravitons are in a tiny space, one right next to the other. That huge amount of energy causes the newly-created graviton to create yet another graviton. This endless cycle of graviton production makes gravitons nonrenormalizable.

String theory is invoked in these situations in part because nonrenormalizable gravitons are points. Strings are longer than points, and so the creation of a stringy graviton isn’t so confined in time and space. That bit of wiggle room keeps the creation of a graviton from being so energetic that it necessitates the creation of yet another graviton, and makes the theory renormalizable.

A graviton is really, really, really, really (repeat “really” a few dozen more times) hard to detect.

The LIGO Detector and Team

Quantum gravity is a theory that has been the target of decades of study by physicists worldwide. If this idea is proven, it would tie together the General Theory of Relativity (which governs gravitational fields) with quantum mechanics, and the bizarro-world of subatomic particles.

Gravitational waves, produced by accelerating objects, ripple through space-time, according to most interpretations of the General Theory of Relativity penned by famed physicist Albert Einstein. Researchers at the Laser Interferometer Gravitational-Wave Observatory (LIGO) have announced they detected these disturbances in the fabric of time and space for the first time.

Certain aspects of subatomic behavior are quantized: particles can only move or exist in particular whole-number states. This characteristic may be thought of like the steps leading up to an apartment. Many physicists believe gravitational waves are similarly quantized, made up of individual quantum particles of gravity: gravitons.

Although it is not certain, many physicists believe that these particles join together, forming the gravitational waves that travel through space. Like photons of light, these gravitons would have no rest mass and would move at the speed of light.

The effects of quantum gravity are predicted to be quite pronounced in the region immediately surrounding the center of black holes. However, it is impossible to collect data from events near a singularity. The events witnessed by astronomers are LIGO-recorded activity from just outside the event horizons of a pair of black holes as they collided.

The LIGO detector cannot detect single gravitons, and cannot, by itself, test the theory of quantum gravity. However, there is reason to believe that either LIGO, or a future gravitational wave detector, could be used to find evidence of quantum gravity by examining the emission spectrum of energy seen surrounding the event horizons of black holes.

According to some theories, even at the event horizon, the effects of gravitons could cause gravitational waves to be more powerful, and less regular, than they would be without their influence.

“Certain scenarios with strong quantum modifications in a region extending well outside the horizon are expected to modify classical evolution, and distort the near-peak gravitational wave signal, suggesting a search for anomalies such as decreased regularity of the signal and increased power,” Steven Giddings of the University of California, Santa Barbara, said.

Any variation seen between observations and graviton-free theories of gravitational waves, such as Einstein’s, could assist physicists seeking to understand the ultimate units of gravity.

As astronomers use LIGO and other detectors to search for elusive ripples in space-time, they may also come across evidence of other strange features of space, including cosmic strings, theoretical one-dimensional strings of energy, which may have been created long ago, when the universe was young.

Moreover, as additional findings of gravitational waves are recorded, physicists will search the data for behavior of the ripples which might suggest the presence of gravitons. If they are found, the discovery could herald a new age of understanding how gravity works. Such a finding could suggest that other notions of gravity, such as string theory, could prove to be the basis of future work on the nature of gravity.

But until such variations are seen, the existence of gravitons remains strictly theoretical.

A Small Packet Of Gravity?

Some theorists suggest that a particle called the “graviton” is associated with gravity in the same way as the photon is associated with the electromagnetic force. If gravitons exist, it should be possible to create them at the LHC, but they would rapidly disappear into extra dimensions. Collisions in particle accelerators always create balanced events – just like fireworks – with particles flying out in all directions. A graviton might escape our detectors, leaving an empty zone that we notice as an imbalance in momentum and energy in the event. We would need to carefully study the properties of the missing object to work out whether it is a graviton escaping to another dimension or something else. This method of searching for missing energy in events is also used to look for dark matter or supersymmetric particles.
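
As a toy illustration of the missing-momentum method (my own sketch; real analyses at CMS and ATLAS are far more involved): since the colliding beams carry no net transverse momentum, the visible particles’ transverse momenta should sum to zero, and any large imbalance points to something invisible escaping the detector.

```python
import math

# Toy "event": transverse momentum components (px, py) in GeV
# of the visible particles reconstructed by the detector.
visible_particles = [
    (55.0, 12.0),
    (-20.0, 30.0),
    (-5.0, -18.0),
]

# Momentum conservation: the visible momenta should sum to zero.
# The "missing transverse momentum" is minus the visible sum.
met_x = -sum(px for px, _ in visible_particles)
met_y = -sum(py for _, py in visible_particles)
met = math.hypot(met_x, met_y)

print(f"missing transverse momentum = {met:.1f} GeV")
# A large value hints at an invisible particle: a neutrino, dark matter,
# a supersymmetric particle, or (speculatively) a graviton escaping
# into extra dimensions.
```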

Quantum (Microscopic) Black Holes

Another way of revealing extra dimensions would be through the production of “microscopic black holes”. What exactly we would detect would depend on the number of extra dimensions, the mass of the black hole, the size of the dimensions and the energy at which the black hole is produced. If micro black holes do appear in the collisions created by the LHC, they would disintegrate rapidly, in around 10^-27 seconds. They would decay into Standard Model or supersymmetric particles, creating events containing an exceptional number of tracks in our detectors, which we would easily spot. Finding more on any of these subjects would open the door to as yet unknown possibilities.

Photons are easy to detect… So why not gravitons?

You have to appreciate just how weak gravity really is. Gravity is about 10^36 times weaker than the electromagnetic force. Trying to detect individual gravitons will be impossible with the tools we have at hand; that is why scientists are trying to detect gravity waves generated during the early stages of the universe. The effect of gravity on a mass is independent of the size of that mass unless the masses involved are of roughly equivalent size; if one mass is very much smaller than the other, then its influence is negligible. The tides, whilst involving a substantial amount of mass, are a very bad choice. Firstly, they are composed of liquids, which react not only to the gravitational force but also to the kinetic energy of the molecules that constitute the oceans. Secondly, overall they compose only a fraction of the mass of the whole Earth.
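
The oft-quoted factor of 10^36 can be checked with a back-of-the-envelope calculation (my own sketch, using textbook constants): compare the gravitational and electrostatic forces between two protons; since both forces fall off as 1/r², the separation cancels out of the ratio.

```python
# Ratio of gravitational to electrostatic force between two protons.
# The separation r cancels, since both forces scale as 1/r^2.
G   = 6.674e-11    # gravitational constant, N m^2 / kg^2
K   = 8.988e9      # Coulomb constant, N m^2 / C^2
M_P = 1.673e-27    # proton mass, kg
E_C = 1.602e-19    # elementary charge, C

ratio = (G * M_P**2) / (K * E_C**2)
print(f"F_gravity / F_electric = {ratio:.1e}")
# ~8e-37, i.e. gravity is roughly 10^36 times weaker
```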

Detecting a photon, by contrast, is extremely easy. There are many types of devices able to detect single photons, such as the photomultipliers used in labs around the world. In fact, you don’t even need any fancy technology; the human eye can, in principle, detect a single photon.

Possible Way To Detect A Graviton

However, detecting gravitons is much (much much etc…) harder. A famous thought experiment considers an ideal detector with the mass of the planet Jupiter, around 10^27 kilograms, placed in close orbit around a neutron star, which is a very strong source of gravitons. A back-of-the-envelope calculation reveals that even in this extremely unrealistic scenario, it would take 100 years to detect a single graviton!

Okay, you say, so let’s just make that detector (sometime in the far future when we have the technology to do so) and wait 100 years. There’s a crucial detail that I forgot to mention, however. The star also emits neutrinos in addition to gravitons; in fact, many more neutrinos than gravitons. And neutrinos are much easier to detect than gravitons. In fact, we can calculate that for every graviton detected in this scenario, around 10^33 neutrinos will be detected. So we will never be able to find the one graviton among the 10^33 neutrinos.

Ah, you say, but we can build a neutrino shield and block the neutrinos! But such a shield would need to have a thickness of several light years, and if you try to make it more dense in order to fit between the star and the detector, it would collapse into a black hole…

In conclusion, even with insanely advanced futuristic technology, it would simply be impossible to detect a graviton.

What we have been able to detect, though, are gravitational waves. This amazing discovery by the LIGO experiment was announced on February 11, 2016. Gravitational waves are made of lots and lots of gravitons, just like electromagnetic waves are made of lots and lots of photons. A typical gravitational wave is composed of roughly 1,000,000,000,000,000 gravitons per cubic centimeter, so it is obviously much easier to detect than a single graviton.

On the other hand, we definitely do not have the technology to detect individual gravitons, and unless some new ingenious way to detect them is found, we will never be able to do so even with much more advanced technology.

What are the consequences of this technological impossibility to detect gravitons? As it turns out, it doesn’t really matter! Let me explain.

First, where exactly do gravitons appear in physics? Theoretical physicists are trying to combine general relativity and quantum mechanics into a single theory, called quantum gravity. We do not have a final theory of quantum gravity yet, but we are working very hard on it, and we already understand many aspects of what such a theory should be.

In a theory of quantum gravity, gravitons are the quanta of the gravitational field. Therefore, quantum gravity will use gravitons as part of its formulation, just like the theory of quantum electrodynamics uses photons, which are the quanta of the electromagnetic field.

However, quantum electrodynamics was not confirmed experimentally by simply detecting photons. Quantum electrodynamics produces predictions that are different from those of classical electrodynamics, and by experimentally testing these predictions we have been able to confirm that the electromagnetic field is indeed quantized.

In a similar way, when we finally have a good candidate for a theory of quantum gravity, it will produce predictions that are different from those of classical gravity. By experimentally testing these predictions, we will be able to confirm that the gravitational field is quantized.

In other words, what we need to do is not detect gravitons; we need to test the predictions of a theory of quantum gravity, as soon as we have such a theory. This will indirectly confirm the existence of gravitons.

Credits to:

Me (Mainly)

LHC website

Gizmodo website for some of the photon info.

🙂 Hope you enjoyed