Tuesday, December 7, 2010

Speculative Technology 



TECH LEVELS

Tech Levels are a convenient abstraction used to categorize technology according to relative complexity and sophistication. They are based on the idea that, looking back, history can be divided into distinct technological eras (stone age, bronze age, iron age, middle ages, renaissance, etc.). Each era can be considered a rung on a ladder, a distinct level that must be reached before the next one becomes possible.

On this site, we take that basic concept and project it into the future, anticipating distinct technological eras that have yet to come, based on the cumulative best guesses of scientists, engineers, and various writers. Technological innovations are placed on a numeric scale that represents when they first enter widespread use (i.e., out of the experimental and testing phases). The number represents a historical era, past or future, and its accompanying overall level of development.
The tech level scale used on this site ranges from 1 to 25+, where 1 through 10 represent real historical eras and 11+ represents future ages. Categorizing innovations by tech level is NOT a precise science and represents only best guesses as to when an innovation will come online.
Also, this scale is by no means a smooth curve. Tech Level 2 represents a period of roughly 4500 years, while tech level 9 represents barely 1/100th that. Tech level 11 represents a mere 10 years, and the higher numbers (20+) represent leaps of many millennia. Each level represents a broad number of various innovations, and the scale only shoots up to the next level when a large number of breakthroughs significantly changes both society and the technological landscape as a whole.
For most innovations, especially near-future ones (Tech Levels 11 through 15), I try to be as conservative as reasonable in guessing when they may emerge, factoring in prevailing cultural and economic factors as well as technological ones. For example, even though it may be possible that someone will discover the means for an FTL drive within the next ten years, it will much more likely be many centuries (if ever) before we beat a beam of light to another star. Thus FTL travel has a tech level of 16+, an indication of many centuries of advancement, as opposed to the ten-plus years in the future that Tech Level 11 represents.
A brief breakdown of Tech Levels by group and what they mean:
Tech Level 0
This represents no technology.
Tech Level 1
This represents the slow climb of technology in Prehistory, from the time our pre-homo-sapiens ancestors first picked up a stick to dig for grubs until advancements in agriculture allowed for the formation of cities and recorded history.
Tech Levels 2-9
These represent actual historical eras (see chart)
Tech Level 10
This represents modern day Earth, today.
Tech Levels 11-15
These levels represent the near future, from tomorrow until about 100 years from now. Usual estimates for this period include steady refinements in electronic and computer technology, ever-increasing global connectedness and travel, the advent of "base" cybernetics and genetic engineering, and a slow but steadily increasing human presence in space.
Tech Levels 16-20
This is the "far" future, representing up to 1000 or more years beyond Tech Level 15. These are the tech levels of most mainstream science fiction stories and tropes, such as starships, interstellar colonies, intelligent robots, death rays, alien contact, etc, etc.
Tech Levels 21-25
These levels represent the frontiers of the easily imaginable future, a great many thousands of years beyond today. Humankind opens up the secrets of the universe and creates ultra-sophisticated technology whose feats would seem to border on the magical to us today. Wormholes may connect distant parts of the galaxy. Black holes may be tapped for energy. Involuntary death may be unknown.
Tech Levels 26 and Beyond
Levels beyond 25 represent god-like technologies in power and reach. Cosmos-spanning networks, reality-altering machines, pocket universes, and more mind-boggling concepts inhabit these levels.

TECH LEVEL TABLE

0 No Tech
1 Prehistory, aka Stone Age (5000 BC and Before)
2 Early City-States (5000 BC to 500 BC)
3 Iron Age (500 BC to AD 500)
4 Middle Ages (AD 500 to AD 1450)
5 Renaissance (AD 1450 to AD 1700)
6 Age of Reason (1700 to 1850)
7 Victorian Era (1850 to 1900)
8 The World Wars (1901 to 1945)
9 Cold War Era (1946 to 1991)
10 Modern-Day Earth (1992-present)
11 circa +10 years
12 circa +25 years
13 circa +50 years
14 circa +75 years
15 circa +100 years
16 Low Far Future Technology
20 High Far Future Technology
21 Low "Ultra" Technology
25 High "Ultra" Technology
26+ God-Like Technologies




AIRCRAFT TECHNOLOGY


WING-IN-GROUND EFFECT CRAFT


The L-325, an innovative Surface-Effect Vehicle prototype created by Flarecraft, Inc.

Wing-In-Ground Effect Craft
Tech Level: 10
Wing-In-Ground (WIG) Effect Craft are also called Ground Effect Vehicles or Surface Effect Vehicles.
WIG effect vehicles are an old idea going back decades. Though prototypes have been built by various nations and companies, they never achieved enough widespread interest to warrant manufacture in any significant numbers. However, new hybrid designs are coming online, and with an ever-increasing eye toward fuel efficiency, this technology may yet find its proper niche.
As an aircraft moves through the air, its wing creates wingtip vortices, which generate both downwash over the wing and induced drag on the aircraft overall. However, when flying close to the ground, the formation of this vortex effect is disrupted, allowing the wing to function much more efficiently. At a height of about one-tenth of the wingspan above the ground, about half of the drag caused by the vortex phenomenon is eliminated.
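For readers who like to check the numbers, below is a quick Python sketch of that drag-reduction figure, using a classical empirical fit for induced drag in ground effect. The exact coefficients vary between sources, and the 100-meter wingspan is simply an assumed value for a large WIG craft, so treat the output as indicative only.

# Rough sketch: how much induced drag is eliminated in ground effect, using a
# classical empirical fit. Coefficients vary by source; the wingspan below is
# an assumed value, not a real aircraft's.

def induced_drag_reduction(height, wingspan):
    """Fraction of induced drag eliminated at a given height above the surface.

    Empirical fit: (1 - 1.32*(h/b)) / (1.05 + 7.4*(h/b)),
    roughly valid for h/b between about 0.03 and 0.25.
    """
    ratio = height / wingspan
    return (1.0 - 1.32 * ratio) / (1.05 + 7.4 * ratio)

wingspan = 100.0  # meters, a hypothetical large WIG craft
for height in (5.0, 10.0, 25.0):
    cut = induced_drag_reduction(height, wingspan)
    print(f"h/b = {height / wingspan:.2f}: ~{cut:.0%} of induced drag eliminated")

# At one-tenth of the wingspan (h/b = 0.10) this works out to roughly half the
# induced drag being eliminated, in line with the figure quoted above.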
Also when traveling near the ground, the air under the vehicle is more compressed and contributes more to lift. The faster the vehicle goes, the more of this air is "rammed" under the wing, adding to the effect.
With both of these phenomena working, an aircraft operating very close to the surface can potentially fly much more efficiently than one at high altitude. More significantly, the ground-effect phenomenon becomes greater the larger the wingspan is, allowing for larger aircraft with cargo capacities far in excess of what today's heavy air lifters are capable of. For example, the most optimistic projection for Boeing's experimental Pelican WIG vehicle would put its maximum cargo capacity at some 3000 tons. Compare this to the "mere" 250 tons or so for modern military air transports. As a demonstration of how much surface effect adds to the efficiency of a vehicle, the Pelican would be able to transport 750 tons of cargo over 10,000 miles using Wing-In-Ground flight, but its range with such a mass would be limited to about 6500 miles if it used conventional high-altitude flight.
A number of different craft of this kind have been built over the years, from the small, sporty-looking L-325 built by the American aircraft company Flarecraft, to the large, 350-foot missile-carrying Lun Ekranoplan prototype built by the USSR in 1987. However, they have never really caught on commercially because the benefit from the surface effect was not significant enough to convince backers, from either the military or private sector, to invest in full production. In order to gain major improvements from ground effect, vessels much more massive than even the Lun Ekranoplan would be needed, and the people holding the purse strings have traditionally been reluctant to sink so many resources into what is as yet an unproven venture.
The Lun Ekranoplan, a Soviet Surface-Effect prototype designed as a fast-attack missile carrier.
Boeing's aforementioned Pelican project hopes to change that. The vehicle is meant as a hybrid between a normal aircraft and a WIG effect vehicle. It can fly at high altitudes like a conventional aircraft, and can take off from most commercial and military runways. The Pelican would have hinged wings, held in a standard configuration for normal airplane operations but angled down in order to maximize compressed airflow during surface-effect flight.
One main disadvantage of surface effect craft is that because they fly at such low altitudes, air density and drag induced by air friction are much greater. WIG vehicles therefore fly more slowly, with much longer flight times than conventional aircraft, but would still be much faster than a water-borne ship.
Another major disadvantage is that they cannot operate very well over very uneven surfaces. WIG effect craft cannot take off or land safely when waves on the water are over a certain height. The craft would be engineered to float as well as any boat and could ride out rough weather on the surface, but in order to operate efficiently they would have to avoid rough waters altogether if possible. Accurate, up-to-date weather prediction techniques and technology would have to be available to allow a surface-effect vehicle to make a transoceanic flight or other voyages of similar length. Otherwise they may need to be confined to relatively calm bodies of water, such as the Mediterranean Sea, the Caspian Sea, or the Great Lakes.
Though designed to operate primarily over water, these kinds of craft could also be pressed into service over flat ground surfaces, such as deserts, tundra, marshlands and so on. Smaller WIG vehicles could also operate over flat lengths of highway with little problem, though that might upset some motorists.



JETCOPTERS


The Sikorsky S-72 Jet-Assisted X-Wing Helicopter.

Tip-Jet Helicopters
Tech Level: 9
X-Wing Helicopters
Tech Level: 9
Canard Rotor-Wing Helicopters
Tech Level: 12
A jetcopter is an informal term applied to any vehicle that mounts both jet engines and helicopter rotors. The idea is to create a vertical take off and landing vehicle that is also capable of high air speeds. Attempts through the years have met with mixed results. Advanced forms of jetcopters are occasionally seen in science fiction, such as the classic tabletop RPGs Star Frontiers and 2300 AD.

TIP-JET HELICOPTERS
Tech Level: 9
A proposed commercial version of Swisscopter America's Tip-Jet Dragonfly Helicopter. Image copyright Swisscopter America.
One of the main hurdles in designing a conventional helicopter is the phenomenon of torque. Most helicopter blades are spun by a central shaft revved up by an engine in the main fuselage of the helicopter. The force of the rapidly spinning shaft makes the body of the helicopter want to spin in the opposite direction. Thus, most such craft have a tail rotor angled to the side to counteract this.
But it's possible to move the engine from the helicopter body to the blades themselves, and eliminate this problem completely. One well-proven method for doing so is to mount small jet engines on the tip of each rotor blade, with fuel fed to them through lines in the blades.
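To make the torque problem concrete, here is a back-of-envelope sketch in Python. The torque figure and tail boom length are invented for illustration and are not taken from any real helicopter.

# Back-of-envelope sketch of the torque problem that tip-jet drive avoids.
# All numbers are illustrative assumptions, not data for a real aircraft.

main_rotor_torque = 15_000.0  # N*m of reaction torque on the fuselage (assumed)
tail_boom_length = 6.0        # meters from the main rotor shaft to the tail rotor (assumed)

# Shaft-driven rotor: the fuselage feels an equal and opposite torque, so the
# tail rotor must push sideways hard enough to cancel it.
tail_rotor_thrust = main_rotor_torque / tail_boom_length
print(f"Shaft drive: tail rotor must supply ~{tail_rotor_thrust:.0f} N of sideways thrust")

# Tip-jet rotor: the driving force is applied at the blade tips by the jets
# themselves, so no reaction torque passes through the shaft to the fuselage
# and no anti-torque thrust is needed (a small tail rotor remains useful
# purely for yaw control, as with the Dragonfly).
print("Tip-jet drive: fuselage reaction torque ~0")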
These tip-jet helicopters were experimented with in the 1950s and 1960s. The design proved workable and practical, and showed impressive potential lifting capacity. However, they were never able to exceed the efficiency and range of conventional helicopters.
They also had a problem with autorotation. If a conventional helicopter’s engine fails, its blades are designed to keep spinning from their motion through the air. This creates enough lift so that a skilled pilot may still land the craft safely. However, with tip-jet helicopters, the jet pods at the rotor tips created too much drag and slowed the unpowered blades down too much for autorotation to be very effective.
The concept, however, has been revived and upgraded recently with Swisscopter America Inc.'s new Dragonfly helicopter. The tip jets, actually high-efficiency H2O2 rockets, are more efficient and powerful than the old-style jet pods, and their smaller profile means much less impact on autorotation. A tail rotor is still present, but only to help with maneuverability instead of counteracting torque. Without torque, the flight controls can be simplified, and a tip-jet helicopter proves easier to fly and handle. The design is also very environment-friendly; the exhaust of Swisscopter America's tip jets is simply steam.
The demonstrator model carries enough fuel for 50 minutes of flight, twice that if an extra tank is carried as cargo. A more streamlined, two-seat model is being tested, and may serve as the basis for future commercially available models.

X-WING HELICOPTERS
Tech Level: 13
X-Wing helicopters are a straightforward attempt at combining jet aircraft with helicopters. The project's prototype, dubbed the S-72, was built by Sikorsky Aircraft with assistance from NASA and DARPA. The project ran from 1983 through 1988.
Functioning as a helicopter, the X-Wing aircraft, with rotor blades literally arranged in the shape of the capital letter X, could take off, land, and hover normally. Because its blades needed to be held rigid for jet-assisted flight, they weren't designed to twist to control lift the way conventional helicopter blades do. Instead, compressed air fed from the engine was expelled along the length of the blades to create a 'virtual' wing surface that could act in the same way as a twisted rotor blade. This compressed air was controlled by computer, to make sure it flowed from the proper edge of the rotors as needed.
When it transitioned to horizontal jet flight, powered by two auxiliary jet engines mounted on either side of the fuselage, its rotors would lock in place and function as additional lifting surfaces, functioning similarly to X-configured wings. Combined with its more conventional small outrigger wings, the S-72 could have obtained speeds in excess of 200 knots. The goal was to push the design so that an X-Wing’s maximum speed could eventually exceed 300 knots, but the program was cancelled due to budgetary reasons before modifications toward this end could be made.
The aircraft encountered difficulties in transitioning from vertical to horizontal flight, especially with the aerodynamic stresses put on the rotor blades as they spun down to lock into place. Sikorsky also ran into a bit of difficulty with the tail design used, which seemed to produce some longitudinal instability when combined with the airflow from the X-wing blades.
The design may be revived in the future, when more advanced materials technology and computer controls could make the concept more viable.

CANARD ROTOR-WING HELICOPTER
Tech Level: 12
An artist's conception of a two-passenger Canard Rotor-Wing Helicopter. Original artist unknown.
The Canard Rotor-Wing (CRW) design uses concepts from both Tip-Jet and X-Wing helicopters to produce a comprehensive VTOL aircraft, theoretically capable of both the agility of a helicopter and the speed of a jet aircraft.
The vehicle’s main airfoil/wing is designed to spin like a helicopter’s rotor. Exhaust from the jet engine is directed through nozzles at the rotor’s wingtips in order to make the airfoil spin. Like with tip-jet helicopters, there is no torque effect and the need for a tail rotor is eliminated.
When the aircraft is ready for horizontal flight, the airfoil locks into place to act as conventional wings, and the jet thrust is redirected rearward through a conventional engine nozzle. Ideally, aircraft using this design may exceed airspeeds of 375 knots.
Working together, Boeing and DARPA (Defense Advanced Research Projects Agency), developed the technology under the X-50 program that ran from 1998 through 2006. The program intended to develop Canard Rotor-Wing UAVs first that could be deployed from small naval ships, then to eventually expand the design up to manned versions that could escort larger VTOL craft such as the V-22 Osprey.
Two prototypes were created, but unfortunately both crashed, one in 2004 and the other in 2006. The second crash was determined to be due to aerodynamic complications in the changeover from vertical to horizontal flight, something that has bedeviled many VTOL aircraft concepts. The X-50 program was cancelled after the loss of the second prototype. However, the technology may be revived in the future if the design and flight transition issues can be resolved.



FORWARD-SWEPT WING AIRCRAFT


The Grumman X-29 banking in flight.

Forward-Swept Wing Aircraft
Tech Level: 9
Forward-swept wing (FSW) glider designs are in use, mostly as training vehicles. However, this article focuses on FSW jet aircraft.
This radical design was first proposed in 1936 by German designers and was developed by Germany toward the end of World War II, with the Junkers Ju 287 experimental aircraft built in 1944. Decades later, it served as a test bed for a number of advanced flight concepts during the 1980s and 1990s, under the official designation of the X-29 program. Two flight demonstrators were built over the course of the decade-long program by the Grumman Aerospace Corporation for the US military. The Russian Air Force also developed a FSW demonstrator aircraft, the Sukhoi Su-47, in the late 1990s.
Though both the US and Russian versions were considered impressive design successes, the forward-swept wing jet remains a curiosity, with no mainstream jet craft, either civilian or military, using the design. However, it has proven itself a very solid and potentially useful concept, and in the decades to come it may be revived.
Forward-swept wing jets are occasionally seen in fiction, including being the transportation mode of choice for the X-Men for many years.
The main advantage of FSW jet aircraft lies in their superior maneuverability. In simple terms, the airflow over the wing travels from the wingtips toward the wing roots along the central body of the aircraft. This is in contrast to conventional aircraft, where the airflow travels from the central axis of the aircraft out toward the wingtips. When flying at severe angles, conventional jets experience stalls along their wingtip airflow that can result in decreased lift and a loss of control. In an FSW jet, these stalls are instead angled toward the central body, ensuring proper airflow over the ailerons at angles of attack of up to 67 degrees. Thus, FSW aircraft can prove much more controllable when performing extreme maneuvers than conventional aircraft.
A secondary advantage is that the FSW configuration allows for up to 15% better lift to drag ratio at transonic speeds.
However, the design does have a number of potential drawbacks. A number of FSW aircraft based on the Junkers Ju 287 were researched by the US and USSR during the Cold War. However, they encountered a problem with wingtip twisting. The metals and materials used at the time were simply not strong enough to handle the stresses of the configuration in transonic flight, especially on the leading edge of the wings. The wingtips would tend to bend up, and if the stresses were high enough, could even rip the wings off the fuselage. FSW jets would have to wait until the development of much stronger and lighter composite materials decades later.
Computer controls are also essential. FSW jets are inherently unstable aerodynamically. In a way, this is good, as it's part of what allows the vehicle its high potential maneuverability. However, the various aerodynamic forces acting on its unusual shape can easily pull it out of a pilot's control. Computers are used to make many minute corrections every second to ensure the aircraft remains stable, similarly to the flight computers used in the B-2 bomber, another inherently unstable aircraft. The X-29 had six computers, all redundantly checking the same conditions, and it would make a correction only if the majority of them 'agreed' on the course of action. The system could typically make up to 40 such corrections per second as needed.
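The voting logic described above is simple to picture in code. The sketch below is a generic majority-vote loop written for illustration; it is not the X-29's actual flight software, and the six channels and 40-per-second rate are just the figures quoted above.

from collections import Counter

# Minimal sketch of redundant majority voting, in the spirit of the X-29's
# flight control computers described above. Not actual flight software.

def voted_command(channel_outputs):
    """Return the correction agreed on by a majority of channels, else None."""
    command, votes = Counter(channel_outputs).most_common(1)[0]
    if votes > len(channel_outputs) // 2:
        return command
    return None  # no majority: hold the last known-good command instead

# Six redundant channels computing the same control correction (in degrees).
# One channel disagrees (perhaps a sensor glitch); the majority still wins.
channels = [0.4, 0.4, 0.4, 0.4, -2.1, 0.4]
print(voted_command(channels))  # -> 0.4

# In flight, a loop like this would run continuously, on the order of the
# 40 corrections per second noted above, applying only majority-approved output.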
The Sukhoi Su-47 also employed a thrust vectoring system that helped to address the instability problem with greater efficiency, and allowed even greater maneuverability and agility overall.
Forward-swept wing jet aircraft would be extremely effective fighters at transonic speeds, able to out-maneuver most other types of jets. Their greater agility would also help them survive potential missile attacks. However, in the post-Cold War era, the need for dogfighting jets has greatly diminished, and most parties saw little reason to invest in this new technology. In the future, however, needs may change as the world situation evolves, and the Forward-Swept Wing jet may someday come into its own.

Airflow over forward-swept wings vs. standard swept wings.



COMPOUND COAXIAL HELICOPTER


An artist's impression of a compound coaxial attack helicopter, patterned after Sikorsky's X2 experimental rotorcraft. Image copyright Sikorsky Aircraft.

Compound Coaxial Helicopter
Tech Level: 11
A compound helicopter is a vehicle that combines standard vertical rotors with one or more horizontal-facing propellers, much like an airplane’s. They are also called gyroplanes or heliplanes. Compound helicopters are capable of greater forward speed than conventional helicopters but offer less stability overall for hovering.
A coaxial helicopter has two sets of counter-rotating vertical blades. The blades counter-rotate, as opposed to both rotating in the same direction, so that they cancel each other's torque, which would otherwise make the aircraft unstable in flight. The counter-rotating blades eliminate the need for a stabilizing tail rotor. This configuration offers superior hovering power and stability.
A compound coaxial helicopter combines both features in an attempt to provide both a dependable stable platform for hovering as well as high forward speed. Since coaxial helicopters do not need a conventional tail rotor, it is replaced with one or more larger, more powerful, airplane-like propellers for forward propulsion. A rear-mounted ‘pusher’ propeller is usually used, though more standard propellers may be mounted on outrigger wings.
The Sikorsky X2 is an experimental testbed for the technology that has gone through a number of critical test flights starting in 2008. Besides the advanced configuration, the X2 also uses advanced computer controls ("fly by wire" controls, in aviation jargon), high lift-to-drag rigid blades, low drag hub fairings, and active vibration controls.
The X2's big promise is in combining speed and hovering power in the same aircraft. Usually, a helicopter has to trade off one for the other. Sikorsky hopes to alter that equation with the compound coaxial configuration. The X2 is expected to be able to obtain top speeds of over 280 miles per hour. Compare that to a current 'fast' helicopter such as the Apache attack helicopter, which usually tops out at about 180 mph.
The computer controls slow the rotation of the rigid, carbon-fiber blades at high speeds, keeping the blade tips from going supersonic and creating stress damage. They also help with lift and minimize drag by adjusting the pitch of the blades.
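The reason the rotor has to be slowed comes down to simple arithmetic: the advancing blade tip sees the rotor's own tip speed plus the aircraft's forward speed. The rotor speeds below are assumed round numbers for illustration, not Sikorsky X2 specifications.

# Why a fast compound helicopter slows its rotor: the advancing blade tip sees
# rotor tip speed plus forward speed. Illustrative numbers only, not Sikorsky
# X2 specifications.

SPEED_OF_SOUND = 340.0  # m/s, roughly, near sea level

def advancing_tip_mach(tip_speed, forward_speed):
    return (tip_speed + forward_speed) / SPEED_OF_SOUND

forward_speed = 125.0     # m/s, roughly 280 mph
normal_tip_speed = 220.0  # m/s, a typical conventional rotor tip speed (assumed)
slowed_tip_speed = 170.0  # m/s, the rotor deliberately slowed at high speed (assumed)

print(f"Normal rotor: advancing tip Mach {advancing_tip_mach(normal_tip_speed, forward_speed):.2f}")
print(f"Slowed rotor: advancing tip Mach {advancing_tip_mach(slowed_tip_speed, forward_speed):.2f}")

# Slowing the rotor keeps the advancing tip comfortably below Mach 1, avoiding
# the compressibility drag and vibration mentioned above.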
Also, unlike another helicopter-airplane hybrid, the V-22 Osprey, the X2 does not go through a potentially hazardous ‘transition’ phase as it changes over from vertical to horizontal flight. A compound coaxial helicopter can transition between vertical and horizontal flight relatively seamlessly.
The concept is said to be scalable, meaning that larger and more robust versions of this configuration are possible. Besides the standard civilian and military versions, Sikorsky envisions big heavy-lift descendants of the X2, which will be able to lift payloads of up to 40 tons and still fly at speeds of 250 knots or better.

Different types of X2-derived compound coaxial helicopters, as envisioned by Sikorsky Aircraft.


BLENDED WING BODY AIRCRAFT



Image courtesy NASA.

Blended Wing Body Aircraft
Tech Level: 12
Blended Wing Body (BWB) Aircraft are also more simply called Blended Wing Aircraft, or Hybrid Wing/Body Aircraft. NASA is currently studying the design under its Fundamental Aeronautics research program in cooperation with Boeing. Both hope to have a working blended wing jetliner commercially available by 2020.
Blended Wing aircraft are a type of hybrid flying wing aircraft that merges the fuselage and wing into one composite lifting body. In conventional tube-and-wing layouts in modern jetliners, only the wings provide lift. In a flying wing configuration, the entire body of the aircraft provides lift instead. Pure flying wings maximize lift and reduce drag significantly, but can have problems with stability. The B-2 bomber, perhaps the most famous flying wing aircraft, compensates by using highly sophisticated computer controls to keep the airframe stable in flight.
However, such a solution is not always practical or economical, especially for commercial jets. A BWB aircraft makes a good compromise, optimizing the blend of flying wing and conventional aircraft. The result is a sleek, angular wedge shape that some have compared to an airborne manta ray.
The greater lift and lessened drag of the design could result in up to 20% greater fuel efficiency than current jetliners, with ranges of up to 7000 miles. The configuration has two general options for engine placement: external rear-mounted pods, or engines mounted internally with no external structure. Internal engines would have the advantage of reducing drag and external engine noise even further, but may prove more difficult and expensive to repair and maintain.
The joint Boeing-NASA scaled-down prototype, dubbed the X-48B, is built from composite materials and has a wingspan of 21 feet. It has been undergoing test flights since 2007 in order to build up a base of performance data for BWB aircraft. Two more sophisticated prototypes are scheduled to begin test flights in 2011.
If ever put into service, a BWB jetliner could carry up to 800 passengers, though a smaller, 450-passenger version is likely to be built first. It would cruise at about Mach 0.85. The larger version would have a wingspan of about 300 feet, and would be composed of tough, lightweight composites, making it lighter than conventional aircraft of comparable size.
Its interior would be laid out differently from modern passenger jets; passenger seating would be spread out horizontally across the midsection of the craft, as much as 50 seats across at the vehicle's widest point, and seating may be arranged on two decks to take full advantage of its internal volume. Boarding of both passengers and cargo would be performed from the vehicle's aft section.

The interior cabin configuration of a proposed BWB airliner. Image courtesy NASA.


ADVANCED AIRSHIPS


The Aeroscraft ML866, created by Aeros Aeronautical Systems, an advanced airship design that uses a Dynamic Buoyancy technology called the Control of Static Heaviness system.

Hybrid Airships
Tech Level: 10
Advanced Airships
Tech Level: 12
Vacuum Airships
Tech Level: 14
Airships, also known as blimps, zeppelins, and lighter-than-air vessels, were mankind's first true aircraft, and have been in use for well over a century now. But despite their initial promise, the advent of much faster powered heavier-than-air vehicles, as well as the infamous Hindenburg disaster, consigned the technology to the backwaters of the aviation world for many decades. However, advances in technology and a growing interest in vehicles that offer better endurance and fuel economy than modern jets and planes may see airships undergo a popular renaissance in the near future.
BASICS
Airships use the same principle as balloons for lift, in that most of their volume is filled with a lighter-than-air medium. However, whereas balloons are unguided, an airship will have engines attached for maneuvering and forward motion, and will be elongated in shape to facilitate forward movement through the air.
Airships come in three major types: non-rigid, semi-rigid, and rigid.
Non-rigid airships have no internal framework beyond compartmentalized gas bladders, and depend on internal pressure to maintain their shape. Most modern blimps, such as the kind one would see over a sports stadium, are this type of airship.
Semi-rigid airships also require internal pressure to maintain their shape, but usually have a rigid spine or keel running along their underside in order to better distribute the suspension load.
Rigid airships have a rigid internal framework (also sometimes called a skeleton) that keeps their shape. Internal gas bladders are inflated to provide lift. The external fabric skin is there primarily to maintain the ship’s aerodynamic shape.
The lifting medium can be any lighter-than-air gas, usually hydrogen or helium.
Hydrogen was used extensively on earlier airships, but the Hindenburg disaster caused a serious re-assessment of the gas's use. Even though a modern hydrogen-filled airship could be made much safer, so much so that the possibility of another Hindenburg would be very remote, the stigma of that disaster persists to this day. As a result, hydrogen is rarely used.
Helium is the current lifting gas of choice. Though it doesn't provide quite the same amount of lifting power as hydrogen (about 92% as much), it is not flammable and is considered much safer to work with. However, some predict that the world's supply of readily available helium may run dangerously low by the end of the century, forcing a possible return to hydrogen or the use of some other alternative lifting method.
Sometimes heated gases are used as the lifting agent, making the airship essentially an oversized hot-air balloon.
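The relative merits of the different lifting media fall straight out of their densities. Here is a quick sketch of the buoyancy arithmetic, using rounded sea-level densities; it also covers the vacuum case discussed further down the page.

# Net lift per cubic meter for different lifting media at roughly sea-level
# conditions. Densities are rounded textbook values in kg/m^3.

AIR = 1.225
media = {
    "hydrogen":        0.090,
    "helium":          0.179,
    "hot air (~100C)": 0.946,
    "vacuum":          0.0,
}

hydrogen_lift = AIR - media["hydrogen"]
for name, density in media.items():
    lift = AIR - density  # kilograms of lift per cubic meter of envelope
    print(f"{name:16s} {lift:5.3f} kg/m^3  ({lift / hydrogen_lift:.0%} of hydrogen's lift)")

# Helium works out to roughly 92% of hydrogen's lift, as noted above, while a
# vacuum buys only about 8% more than hydrogen (roughly 17% more than helium).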
The usual configuration of an airship is with a large, cylindrical gas envelope as its main body, with the pilot’s compartment, engines, and any cargo aligned along its lower center beam. If the craft is large enough, all of these may be arranged into a single consolidated deck underneath the keel. Zeppelins, with their rigid skeletons and internalized gas bladders, may have spare interior spaces dedicated to various purposes, such as passenger or cargo compartments.
Airships have been used for reconnaissance, exploration, advertising, research, and during their golden years in the 1920s and 30s, for long-range passenger travel and cargo transportation.

HYBRID AIRSHIPS
Tech Level: 10
Lockheed-Martin's P-791 Hybrid Airship.
A hybrid airship gets most of its lifting power (anywhere from 50% to over 80%, depending on exact design) from its gas envelope, but requires additional lift from engines to become airborne. Thus they are ‘hybrids’ of dirigibles and heavier-than-air craft like airplanes and helicopters.
A number of hybrid airship designs have been attempted over the decades, including the helistat and the cyclocrane projects by the US Navy in the 1980s and DARPA’s WALRUS program, which ran from 2005 to 2007. Lockheed-Martin is continuing to pursue the technology with its P-791 project.
Because these craft are heavier than air even when fully inflated, they can manage actual full landings instead of having to be moored. This makes maintenance as well as the loading and offloading of cargo and passengers much simpler.
The craft’s engines are usually mounted on pairs of outrigger wings or struts along the vehicle’s horizontal axis. These engines are used both for lift and for forward motion, and are usually gimballed so they can rotate to direct the airflow as needed. Because of the slow speeds at which airships usually operate, propellers are the preferred mode of propulsion as they’re the most efficient at those velocities.
Some hybrids were designed for straight vertical lift, like the helistat project from the 1980s, which had four helicopter engines attached to the gas envelope. Others are designed to take off and land similarly to airplanes, and require runways, such as the P-791. However, because of the low speeds, they require much shorter runways for both take-offs and landings than conventional airplanes. Some designers, like those behind the proposed 1000-foot-long, 250-ton-capacity Dynalift project, envision hybrid airships using conventional airport runways and flying the same routes, but using only 30% of the fuel of a modern jetliner.
However, hybrids are derided in some aircraft circles as being the worst of both worlds, at least for the designs so far proposed. They tend to have poor aerodynamic characteristics, can be difficult to control, and are vulnerable to extreme weather and winds.
Many of the airships seen in steampunk and alternate-world science fiction seem to be advanced hybrid designs. Examples include the airships seen in the anime film Castle in the Sky and the webcomic Girl Genius.

ADVANCED AIRSHIPS
Tech Level: 12
The proposed Strato Cruiser airship, designed by Tino Schaedler and Michael J Brown.
These are full airships, which get 100% of their lifting power from their gas envelope.
A number of societal trends and emerging technologies seem to be converging toward the eventual revival of true airships as a widely available means of transport. The future of hybrids still seems dubious, and further development of true airships to take advantage of their characteristic strengths may follow instead. New materials, technologies, and approaches may create a number of radically new designs.
The potential fuel economy and endurance of airships make them an increasingly attractive mode of transportation. Though slower than many modern heavier-than-air craft, they could ultimately move cargo and passengers cheaper, especially over very long distances.
Some important innovations being actively researched for advanced airships are listed below.
-- Optimized Aerodynamic Shape: The traditional cigar shape of older airships worked very well for the slow speeds they were designed to fly at. Newer airships, however, will be designed for higher speeds, and to use some of their forward motion to help generate lift. Their cross section may more closely resemble an airplane’s wing, with a flattened bottom and a rounded, tapering top. Seen from above or below, they will also have a more pronounced teardrop shape.
-- Vectored Thrust: Aside from the usual stabilizing rear rudders, future airships may also have one or more pairs of outrigger wings. These would help not only with stability, but may be used to mount gimbaled propeller engines for far greater maneuverability and lift capacity. The motors would be able to rotate with a large amount of freedom, perhaps even being able to rotate a full 360 degrees or angle themselves outward, depending on the sophistication of the exact design.
-- Advanced Materials: Considerably stronger, more lightweight materials will be used in the construction of future zeppelins, such as advanced composite laminates, carbon nanotubes, and graphene. These will allow the airships to be able to handle greater aerodynamic forces and loads, while at the same time allowing much lighter vehicle weights.
-- Solar Cells: Many dirigibles have a lot of upper surface area exposed to the sun. The topside of the airship may be covered with lightweight, high-efficiency solar cells, with the power generated being fed into batteries. This extra power will not only provide for the electrical systems aboard, but will also help to run the craft’s engines, lending the craft a great deal more fuel efficiency. Some airships may also use the solar cells to heat the lifting gas, affording them greater buoyancy.
-- Vertical Configurations: Though the traditional vision for airships is to have them laid out horizontally like most other aircraft, some designs have been proposed with a more vertically oriented configuration. These tend to be slower than their longitudinal cousins, but are more stable and better able to handle inclement weather. Some proposed vertical-configuration designs envision them as heavy lifters, basically airborne cranes for military and construction use. Others see them used for luxury travel, such as the proponents of the Aircruise design, who see them as mobile, floating hotels and penthouses.
The Aircruise proposed design, created by Seymourpowell to be a 'luxury hotel in the sky,' is an example of a vertical-configuration airship.
-- Dynamic Buoyancy System: Of all the possible future innovations, it is this one which may allow airships to eventually become the useful workhorses of the air many of its boosters envision. These systems will allow airships the same type of versatility heavier-than-air vessels enjoy, while giving up none of an airship’s advantages.
Dynamic buoyancy systems have been tried before, as maintaining the same buoyancy throughout a voyage was essential for the craft to operate nominally. However, changing weather and the consumption of fuel would alter the buoyancy throughout the journey. These older systems usually worked by heating or cooling the lifting gas, expelling gas as fuel was consumed, or using water condensation for additional ballast. Gas expulsion was the only approach that worked well, but it added to expense, as the gas had to be replaced at the end of every voyage or leg thereof.
The futuristic Dynamic Buoyancy System uses a much different principle. The exact design envisioned varies, but the basic idea is that the internal gas bladders have two-way pumps: one to inflate the bladder with gas, and the other to evacuate it and compress the gas back into holding tanks. Air from outside the craft may be pumped into the outer layer of the bladder cell, using its pressure to help compress the bladder and the gas within. To re-inflate the bladder, gas is pumped back in, and its expanding pressure is used to push the air in the outer cell back outside the craft again through one or more specialized pumps.
The shape of the internal bladders in this scheme will be much less like inflatable balloons, and more like cylindrical or box-like cells for greater efficiency. On old style zeppelins, such internal structures would have been too heavy to make the airship much use. However, the much lighter, much stronger materials on the technological horizon means such a system could be adopted without adding too significantly to the airship’s weight or performance.
This system would allow the craft to easily control its relative buoyancy throughout its voyage, by reducing or expanding the volume of lifting gas as needed (a simple sketch of this bookkeeping follows below). Its use goes far beyond this, however. It could also increase its buoyancy to maximum for heavy cargo lifting, and decrease it in the face of high winds, allowing it much greater control and stability during inclement weather. Moreover, since the buoyancy can be reduced to zero, the ship could actually fully land instead of having to be moored. The latter would greatly expedite the loading and offloading of passengers and cargo, and allow the craft to use already-existing commercial airports (albeit with their own special landing zones).
The dynamic buoyancy system would by necessity have to be computer controlled, allowing the pilot to alter the buoyancy as needed as the vessel moved along. However, expanding or deflating the inner bladders would be a slow process no matter how sophisticated the system, and though the airship would be able to handle a much greater variety of situations than its static-buoyancy cousins, response time would still be sluggish at best.
The disadvantage of dynamic buoyancy systems is that even with the newer uber-materials, they will still add significant weight and cost to a vehicle, and may be eschewed where price or lifting performance is preferred over precision maneuvering.
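Here is the sketch of that buoyancy bookkeeping promised above. It assumes a rigid hull of fixed volume whose helium can be pumped into tanks, with the freed space back-filled by outside air; every figure is invented for illustration, and the code is not based on any actual proposal.

# Minimal sketch of dynamic-buoyancy bookkeeping: compressing lifting gas into
# tanks and back-filling the freed volume with outside air reduces net lift;
# re-inflating restores it. All figures are invented for illustration.

AIR_DENSITY = 1.225     # kg/m^3 at low altitude
HELIUM_DENSITY = 0.179  # kg/m^3

def net_lift_kg(hull_volume, inflated_fraction, structure_mass):
    """Net lift (in kg) for a rigid hull of fixed volume.

    inflated_fraction is the share of the hull filled with helium; the rest is
    assumed back-filled with outside air. The compressed helium stays on board
    in tanks, so its mass never changes.
    """
    buoyancy = AIR_DENSITY * hull_volume                                 # displaced air
    onboard_helium = HELIUM_DENSITY * hull_volume                        # inflated or tanked
    air_ballast = AIR_DENSITY * hull_volume * (1.0 - inflated_fraction)  # back-filled air
    return buoyancy - onboard_helium - air_ballast - structure_mass

HULL_VOLUME = 100_000.0    # m^3, a large rigid airship (assumed)
STRUCTURE_MASS = 60_000.0  # kg of hull, engines, and payload (assumed)

for fraction in (1.0, 0.8, 0.6):
    lift = net_lift_kg(HULL_VOLUME, fraction, STRUCTURE_MASS)
    print(f"{fraction:.0%} inflated: net lift ~{lift:+,.0f} kg")

# Pumping gas into the tanks (and letting air in) trims the ship from strongly
# buoyant down to neutral or slightly negative, letting it land without mooring.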

VACUUM AIRSHIPS
Tech Level: 14
In the future, airships may go one better than gases like hydrogen and helium, by literally using nothing: a vacuum. Using cavities 'filled' with vacuum, a vessel could obtain the maximum possible static lift airship technology would be capable of.
Using the same materials and configuration, a vacuum airship would only get about 18% more lift than an identical vehicle using helium. However, with helium relatively expensive and worldwide supplies of it thought to be limited, vacuum airships would offer a more efficient alternative.
A number of technical hurdles remain before vacuum airships could be made practical, such as coming up with materials and a design with a high enough strength-to-weight ratio to keep the lifting chambers from collapsing under external atmospheric pressure. With the near-future development of stronger advanced composites and carbon nanotube materials, however, a practical vacuum airship design should be possible.
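To see why this is such a hard problem, one can compare two constraints on a simple spherical vacuum cell: its shell must weigh less than the air it displaces, yet resist a full atmosphere of external pressure without buckling. The sketch below uses the classical small-deflection buckling estimate for a thin sphere with rough material values and no safety factor (real shells buckle well below the classical value, so the comparison is generous).

import math

# Can a plain, homogeneous spherical shell hold a vacuum and still float?
# Classical thin-sphere buckling estimate with rough material values and no
# safety factor; a sketch of the design hurdle, not a design tool.

AIR_DENSITY = 1.225     # kg/m^3
ATMOSPHERE = 101_325.0  # Pa
POISSON = 0.3

def max_thickness_ratio(shell_density):
    """Largest t/R at which the shell still weighs less than the air it displaces."""
    return AIR_DENSITY / (3.0 * shell_density)

def classical_buckling_pressure(youngs_modulus, thickness_ratio):
    """Classical critical external pressure for a thin, complete sphere."""
    return 2.0 * youngs_modulus * thickness_ratio**2 / math.sqrt(3.0 * (1.0 - POISSON**2))

materials = {  # rough values: (Young's modulus in Pa, density in kg/m^3)
    "aluminum":            (70e9, 2700.0),
    "carbon-fiber layup":  (70e9, 1600.0),
    "idealized CNT sheet": (500e9, 1400.0),
}

for name, (modulus, density) in materials.items():
    t_over_r = max_thickness_ratio(density)
    p_crit = classical_buckling_pressure(modulus, t_over_r)
    verdict = "holds vacuum" if p_crit >= ATMOSPHERE else "buckles"
    print(f"{name:20s} t/R <= {t_over_r:.1e}, buckling pressure ~{p_crit / 1000:.1f} kPa -> {verdict}")

# Even optimistic materials fall far short for a plain shell, which is why
# practical proposals rely on the braced, geodesic, or cellular lifting
# chambers described below rather than simple balloon-like skins.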
Vacuum airships would probably look and handle very similarly to modern and near-future advanced airships already discussed; an outer fabric skin would still be necessary for a practical aerodynamic shape, solar cells over the top would still prove useful for power generation, and so on.
The inner lifting chambers would have a different character, either reinforced cylinders or tetrahedral or geodesic spheres for optimized strength. These may be anchored at multiple points to the craft’s main skeleton, for extra structural reinforcement. Whereas gas bladders on earlier types of airships were designed to keep the gas pressure in, here the emphasis would be on keeping the atmospheric pressure out.
Mobile airtight partitions on their interiors may allow outside air to be filled in or be pushed back out as needed, altering the lifting volume and allowing a dynamic buoyancy system very similar to that discussed for other airships. Since there’s no lifting gas to compress or re-inflate, the operations of a vacuum-based buoyancy system may be considerably faster.
It should also be noted that if a major mishap were to occur to one of the lifting chambers, instead of bursting into flames like the Hindenburg, a vacuum airship would instead implode violently. Also, where a minor piercing in the gas envelope of an earlier airship might result in a slow but easily repairable leak, a minor hole in a vacuum chamber could implode the entire thing. Even if the chamber survives without collapsing, the violent influx of air could skew the airship's maneuvering with the unexpected thrust, and would of course negate a significant portion of the vessel's buoyancy very quickly.
Vacuum airships would likely be significantly more expensive than gas-using airships to construct, but since the vacuum in their lifting chambers would never have to be replenished, their operating costs over a long period may be considerably less.



STRATELLITES


Sanswire's proposed Stratellite would hover at 20,000 meters and fill many of the same functions as an orbital satellite. Image copyright Sanswire.

Stratellites
Tech Level: 12
Stratellites (the term comes from combining the words stratosphere and satellite) are high-altitude, unmanned airships designed to operate continuously for weeks, months, or even years at a time. Their main purpose is to hover over one geographical location and function as a relay and/or transmission source for radio signals, including broadband, mobile phone, and digital television transmissions. The idea is currently being developed by the company Sanswire and others.
A stratellite would hover about 20 kilometers up, above the jet stream and most weather, and could service an area about 320 kilometers in diameter, or roughly 80,000 square kilometers. Lag times would be many times less than those for orbiting satellites carrying the same types of signals, while a single stratellite would cover a far larger area than a traditional transmission tower.
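The coverage numbers follow from simple geometry. The short sketch below computes the area of the quoted footprint and, for comparison, the distance to the geometric horizon from 20 kilometers up, which shows that line of sight is not the limiting factor.

import math

# Stratellite coverage arithmetic: the area of the quoted service footprint
# and, for comparison, the geometric horizon distance from 20 km altitude.

EARTH_RADIUS_KM = 6371.0
altitude_km = 20.0
service_diameter_km = 320.0

service_area = math.pi * (service_diameter_km / 2.0) ** 2
horizon_km = math.sqrt(2.0 * EARTH_RADIUS_KM * altitude_km)  # distance to the geometric horizon

print(f"Service area for a {service_diameter_km:.0f} km footprint: ~{service_area:,.0f} km^2")
print(f"Geometric horizon from {altitude_km:.0f} km up: ~{horizon_km:.0f} km")

# The horizon sits roughly 500 km away, so practical coverage is limited by
# signal strength, antenna design, and elevation angle rather than line of sight.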
The frame and components would be made of very lightweight but strong modern composites, and the topside would be covered over with high-efficiency solar cells. These cells would feed power into onboard batteries, which in turn would run the onboard systems and the electric propeller engines it would use for station keeping. Though designed to land at frequent intervals for maintenance, a well-designed stratellite could stay in the air for many months or even years without tending.
A system of overlapping stratellites is thought to be cheaper to create and maintain than a similar system of transmission towers, as fewer stratellites would be needed to cover any particular region. Stratellites would also suffer less total wear as they fly above most damaging weather, and so despite their more advanced design may end up needing less total maintenance and repair over their operational lifetimes.
Stratellites can also be used for long-term surveillance of a particular area, for military, research, or other purposes. Unlike spy and surveillance satellites, their coverage of an area would be constant, whereas satellites are able to gather data only during certain portions of their orbits.



MONO TILTROTOR AIRCRAFT


Image copyright AeroCopter.

Mono Tiltrotor Aircraft
Tech Level: 11
This unique type of aircraft is being pioneered by the AeroCopter company. Its proper name is the MTR VTOL PAV (Mono TiltRotor, Vertical Take-Off and Landing, Personal Air Vehicle), but it's known more commonly by its project name, Sarus. AeroCopter is also simultaneously developing an unmanned version called Kestrel. The company has created workable test models of the concept and is at work scaling the aircraft up for full flight tests.
The most prominent feature of the Sarus craft is its large, 21-foot encompassing ring that surrounds the main fuselage. This mono-ring houses twin counter-rotating rotors on its outer perimeter that provide lift. The blades use ‘frictionless’ electromagnetic suspension and can be tilted at different angles independently to meet different flight needs and air conditions.
For take-offs and landings, the ring is canted parallel to the ground, and the spinning rotors provide lift very much like a helicopter. At altitudes of 1000 feet or higher, the ring tilts up at an angle of 87 degrees and locks in place. Power is then shifted to the rear engines on the main fuselage to propel the aircraft forward.
The Sarus aircraft will be capable of carrying two to eight passengers and their baggage, while the Kestrel UAV will be capable of carrying up to 400 lbs of payload, which can include supplies, emergency equipment, weapon systems, or whatever its mission may require. It should have a cruise speed of approximately 220 knots at an optimal operating altitude of 8,000 feet, and a range of 300 nautical miles on one tank of fuel.



PULSE DETONATION ENGINE


DARPA's Long E-Z aircraft, powered by a pulse detonation engine, undergoing test flights in Mojave, California in 2008.

Pulse Detonation Engine
Tech Level: 12
A Pulse Detonation Engine (PDE) is a type of jet engine currently being researched by various interests. The US’s DARPA agency recently (in 2008) test flew an experimental craft using this technology, but the project was cancelled shortly after the initial flights due to budgetary issues.
A conventional jet engine mixes fuel and air and burns it continuously in a steady stream, the expansion of the heated gases providing thrust. A PDE instead ignites the fuel-air mixture under high pressure, using powerful supersonic shockwaves funneled down specially designed metal tubes. Repeated many times per second, these detonations can provide smooth but potent thrust for an aircraft. A PDE-powered aircraft could go from low speed to hypersonic velocities very quickly, with very good handling capabilities at the latter.
The engine itself consists of one or more long tubes sectioned off into two separate combustion chambers. A pellet of mixed air and fuel is released down the tube. Quickly following on its heels, in the first combustion chamber, a primary explosion is induced with fuel, oxidizer, and a potent electrical charge. This first explosion is very much like the way a conventional jet burns fuel, called deflagration. However, the explosion is channeled into the second chamber in such a way that its supersonic shockwaves and intense heat detonate the much more potent pellet of fuel and oxidizer, creating the powerful detonation that drives the aircraft forward.
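As a very rough sense of how fast a single tube can cycle, the sketch below uses invented but plausible figures for tube length, fill and purge speeds, and detonation wave speed; none of these numbers come from the DARPA demonstrator or any production design.

# Very rough estimate of how fast a single pulse-detonation tube can cycle.
# Every number is an illustrative assumption, not data from any real engine.

tube_length = 1.0          # meters (assumed)
fill_velocity = 60.0       # m/s, speed at which the fresh fuel-air charge fills the tube (assumed)
purge_velocity = 60.0      # m/s, speed of the purge air between shots (assumed)
detonation_speed = 1800.0  # m/s, a typical detonation wave speed (approximate)

fill_time = tube_length / fill_velocity
purge_time = tube_length / purge_velocity
detonation_time = tube_length / detonation_speed

cycle_time = fill_time + detonation_time + purge_time
print(f"Cycle time ~{cycle_time * 1000:.1f} ms -> ~{1.0 / cycle_time:.0f} detonations per second per tube")

# The detonation itself is nearly instantaneous; refilling and purging the tube
# dominate the cycle, which is one reason designs often use several tubes
# firing in sequence to smooth out the thrust.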
Theoretically, an aircraft powered by a PDE exclusively can achieve speeds of Mach 4 or better. A PDE can be combined with other engine cycles, such as turbines for better low-speed performance, or with rockets or scramjets for greater hypersonic speeds or even possibly insertions into space.
PDEs are very fuel efficient, from 5% to 20% over current conventional engines, as they are able to maintain near-constant-volume combustion. They can also use a variety of different fuels without extensive modification.
However, there are some technical hurdles to overcome before the technology can be adapted for widespread use. The detonation chambers and exhaust system have to be made much more durable than those of regular jet engines, upping the potential initial cost of any aircraft using a PDE, even after the technology matures. Especially troubling is the issue of valving. Conventional engine valves would take quite a beating from the detonations and would wear out very quickly. Some designs opt to eliminate the valves altogether, relying on the coordinated timing of the shockwaves within the engine to regulate gas flow.
There is also some concern about the airframe being able to withstand the constant acoustic pressure, which can create severe vibrations. It is also feared that the intense, jackhammer-like noise from the engine may be too much for either the pilot or those nearby on the ground. However, the DARPA program and its successful test flights went a long way toward allaying some of these concerns.



ORNITHOPTER


A futuristic hybrid ornithopter, designed by Douglas Ollivier.

Ornithopter
Tech Level: 13
Hybrid Ornithopters
Tech Level: 14
This is an aircraft that flies by flapping its wings, much like a bird or insect. Flapping wing flight is one of the most tried and true methods for becoming airborne; nature has after all been doing it for many millions of years. Trying to reproduce the method artificially has proven difficult at best.
This article mostly concerns large, advanced ornithopters that can more or less match the performance of modern-day subsonic aircraft. Though ornithopter technology may eventually offer somewhat greater efficiency and maneuverability than most types of airplanes or helicopters, the alternatives to it are easier to engineer and will likely remain cheaper for quite some time. Ornithopters therefore have an uphill battle if they are ever to be developed for widespread use; technologies already exist which can do what they can do, and the advantages they offer might not be enough to overcome both the economics and the cultural momentum of using other types of aircraft. However, in the more distant future (or on alternate science-fiction worlds) where advanced materials and engineering techniques are more readily available than today, ornithopters may come into their own.
When humans first started trying to create flying machines, ornithopters were their most natural early attempts. Leonardo Da Vinci himself drew up plans for one, and many early pioneers of heavier-than-air flight in the nineteenth and twentieth centuries tried their hand at the technology, usually with less than spectacular results. Gustave Trouve flew the first true successful ornithopter, a small model one, in the 1870s. Adelbert Schmidt flew the first confirmed manned, engine-powered ornithopter in 1942. A human-powered ornithopter was flown by a team from the University of Toronto Institute for Aerospace Studies in 2010.
A number of small, unmanned, remote-controlled ornithopters already exist, and many flight enthusiasts consider them a worthwhile hobby. A few modern manned ornithopters have been built and flown. Ornithopters are also occasionally seen in science fiction. They were featured in the Dune series of novels by Frank Herbert and in a number of anime films by director Hayao Miyazaki, particularly Castle in the Sky.

ORNITHOPTER
Tech Level: 13
An ornithopter flies by imitating the motion of the wings of birds. Some small ornithopters are based on the workings of insect wings, but for the most part insect wing flight does not scale up very well and is considered suitable only for small robot toys and drones.
Bird flight depends on both the shape of the wing and forward motion, much as in normal airplanes. The wing is shaped so that the bird's forward motion through the air provides lift to the underside of the wing. Near the bird's body, the wing does not move very much overall, and most of the lift is provided there. Near the outer edges of the wing, the wingtip curves or warps, so it cuts neatly through the air on the upstroke but catches the air and pushes it backward on the downstroke.
An ornithopter attempts to duplicate this complex motion. Artificial wings are either moved up and down, or rotated in a tight pattern, with the wing near the plane’s body moving only slightly with each wingstroke and the wingtips curling or turning as needed to push the air backward for forward motion. Unlike with birds, the entire wing need not twist, but rather certain outer gimballed or jointed sections may do so.
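The flap-and-twist motion just described is easy to sketch as pure kinematics, showing what the wing does rather than the forces it produces. The amplitudes and frequency below are invented for illustration.

import math

# Kinematic sketch of the flap-and-twist cycle described above: the root flaps
# up and down while the outer wing section twists so it slices through the air
# on the upstroke and pushes air backward on the downstroke. Angles and
# frequency are invented for illustration; no aerodynamics is modeled here.

FLAP_AMPLITUDE = 25.0   # degrees of root flapping (assumed)
TWIST_AMPLITUDE = 30.0  # degrees of wingtip twist (assumed)
FREQUENCY = 3.0         # flap cycles per second (assumed)

def wing_state(t):
    phase = 2.0 * math.pi * FREQUENCY * t
    flap = FLAP_AMPLITUDE * math.sin(phase)
    # The tip twist lags the flap by a quarter cycle, so it peaks mid-downstroke
    # (pushing air backward) and reverses mid-upstroke (feathering the tip).
    twist = TWIST_AMPLITUDE * math.sin(phase - math.pi / 2.0)
    return flap, twist

for step in range(5):  # five snapshots over one complete flap cycle
    t = step / (4.0 * FREQUENCY)
    flap, twist = wing_state(t)
    print(f"t = {t:.3f} s   root flap {flap:+6.1f} deg   tip twist {twist:+6.1f} deg")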
Two major problems have always bedeviled the development of practical manned ornithopters: wing designs that can efficiently handle both lift and the twisting of the wingtip sections for forward thrust, and materials that can handle the stresses on the wings and wing joints at the flapping speeds needed to sustain flight. The latter is very important; in order to lift heavy loads, the wings would have to flap fairly fast and handle a huge amount of stress, especially on takeoffs and landings.
Modern computer-optimized designs combined with sophisticated materials such as advanced composites, carbon fibers, and graphene have made workable manned ornithopters a reality. However, the performance of these vehicles still lags considerably behind other types of modern aircraft, and they will likely be many decades catching up. For now they’re mostly just aeronautical curiosities.
Once perfected, manned ornithopters are theorized to be both somewhat more maneuverable and more energy-efficient than airplanes. Because their lift and propulsion are combined in the same structures (the wings), they have less overall drag. Their flapping wings are also able to manipulate a larger volume of air around the vehicle, compared to the relatively thin streams of air created by an airplane's propellers, allowing for more overall control. Very advanced ornithopters may have very versatile wings that even allow the craft to hover, using wing motions similar to those of hummingbirds.
One area where ornithopters may see widespread practical use is in unmanned drones, used by the military, research foundations, and other organizations. Their greater efficiency would mean being able to get more flight time out of a given power supply, and their greater maneuverability could serve the operators well in scouting out dangerous and complicated terrain at relatively low altitude, such as an unknown jungle or an enemy urban zone.

HYBRID ORNITHOPTERS
Tech Level: 14
Once large-scale ornithopters are a perfected technology, it may be possible to combine them with features of other aircraft to increase performance in certain areas. For example, an advanced ornithopter would be prized for high maneuverability, but there are times when speed may be more important. In this case, a hybrid may mount auxiliary means of propulsion, such as a rear-mounted pusher propeller or even jet engines. The wings would lock in place and the backup engines would engage, converting it into a more conventional, higher-speed aircraft for the duration.



OBLIQUE FLYING WING


Northrop Grumman's proposed Oblique Flying Wing X-plane demonstrator. Image copyright Northrop Grumman.

Oblique Flying Wing
Tech Level: 13
One of the more unusual aircraft designs ever proposed, an Oblique Flying Wing (OFW) is composed almost entirely of a single wing and one or more jet engines. That in itself is not so odd, as there have been many flying wing vehicles in both practical use and as test models throughout the last seventy years or so. However, in supersonic flight, one tip of the Oblique Flying Wing is designed to sweep back while the other is angled forward. In other words, the wing flies forward with its body angled asymmetrically into its direction of flight.
Experiments with oblique wing flight go back to the 1940s, and in the last 30 years NASA has created a number of experimental aircraft of varying sizes and designs to test the soundness of the concept. Northrop-Grumman in cooperation with DARPA was building an experimental OFW X-plane in order to more fully test the concept, but the project was canceled in 2008. However, the concept seems sound, and work on it may one day be revived.
The principle is the same as with swept-wing jets. In subsonic flight, traditional straight wings work best at providing efficient lift and stability. However, as one approaches the speed of sound and beyond, shockwaves generated by the supersonic airflow over the wings produce ever-increasing drag and stability problems. For these velocities, you need a different kind of wing shape that can minimize these detrimental shockwaves, namely the more severe angles of swept-wing jets such as the Concorde or the F-18.
There have been many compromise wing designs over the years, trying to incorporate efficient lift and stability at both low and high speed. Some vehicles, such as the F-14 Tomcat, use ‘variable geometry’ wings, which can physically pivot further back as the aircraft’s speed increases, allowing for better flight characteristics at all speeds.
The OFW uses the same principles as the variable geometry wing, only as its entire body is its lifting surface, it sweeps its full-vehicle wing back at supersonic velocities so that one tip is angled forward and the other back, allowing it to fly asymmetrically. At subsonic speeds it would fly with its body-wing perpendicular to its direction of flight like a traditional airplane, and then sweep its body-wing back at an angle the faster it goes.
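A rough way to see why sweeping the wing helps (a simplified rule of thumb, not an analysis of any particular OFW design) is that only the component of airflow perpendicular to the leading edge drives shockwave formation. The short Python sketch below uses that cosine relation with purely illustrative numbers:

```python
import math

def effective_normal_mach(flight_mach, sweep_deg):
    """Simple-sweep rule of thumb: only the airflow component perpendicular
    to the leading edge drives shock formation, so a wing swept by an angle
    theta 'sees' roughly M * cos(theta) instead of the full flight Mach."""
    return flight_mach * math.cos(math.radians(sweep_deg))

# Illustrative numbers only, not taken from any actual OFW design:
for sweep in (0, 30, 45, 60):
    print(f"Mach 1.4 flight, {sweep:2d} deg sweep -> "
          f"effective Mach {effective_normal_mach(1.4, sweep):.2f}")
```

At around 60 degrees of sweep, the wing section in this toy example is effectively seeing subsonic flow even though the aircraft itself is supersonic, which is the whole point of pivoting the body-wing further back as speed increases.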
The engines the OFW mounts are gimballed to allow them to stay oriented in the proper direction of flight no matter how much the wing sweeps forward or back. They are also placed strategically on the body to act somewhat as rudders to help with stability. Some early OFW test models had one or more vertical fins on the trailing wing edge to help with stability; the tailless Northrop-Grumman version eschewed these for sophisticated computer controls similar to those used in the B-2 bomber, also a tailless flying wing.
Ideally, an OFW would be a very efficient vehicle from low up to hypersonic speeds, allowing it much greater fuel economy, range, and endurance at all stages of flight than most other transonic aircraft. These capabilities made it very attractive as an Air Force project. It would have been capable of a high-speed supersonic dash to its target area, and could then loiter over it for a potentially long time. This would make OFWs ideal for use as recon UAVs (Unmanned Aerial Vehicles), similar to the Global Hawk, but able to reach the target area and begin surveillance or execute a tactical strike much faster. They could also be used as manned combat aircraft, though that capability is probably considerably farther off than their potential use as drones.



HELICARRIERS


A proposed design for the helicarrier in the upcoming Avengers movie. Image copyright Paramount Pictures and Marvel Entertainment.

Hybrid Airship Helicarrier
Tech Level: 14
Gravitic Helicarrier
Tech Level: 16
A helicarrier is nothing less than the airborne equivalent of a modern sea-going aircraft carrier. It is one of the signature iconic features of the Marvel Comics universe, where it is the mobile headquarters of the UN peacekeeping force S.H.I.E.L.D. The idea has been adopted in other fictional sources, such as the film Sky Captain and the World of Tomorrow and in the GI Joe cartoon series.
The original concept of an airborne aircraft carrier is actually based in reality, from efforts to convert dirigibles into such vehicles in the 1930s, complete with working prototypes such as the USS Akron. These massive dirigibles (the Akron was 239 meters in length and could handle up to 90 tons of payload) would carry a handful of airplanes and act mostly as high-altitude recon bases.
However, in reality the concept was plagued with a number of logistic and operational problems and the program was discontinued, despite a brief revival of the idea during WWII. Launching and capturing the old-style biplanes from an airship was very tricky even for experienced pilots, and resulted in a number of mishaps.
Because of their historical failure and the vehicles’ popularity as comic book fodder, helicarriers are very much an out-of-the-mainstream idea. A search for any serious discussion of their feasibility online, pro or con, has come up empty. What follows is mostly speculation on the part of the author.

HYBRID AIRSHIP HELICARRIER
Tech Level: 14
Unlike the Marvel Comics version, which is built around anti-gravity technology, this version uses a somewhat more ‘realistic’ approach, converting a massive hybrid airship into a specialized airborne UAV/missile carrier.
A hybrid airship is an advanced dirigible that gets most of its lift from its gas envelope, but needs the assistance of engines to actually become airborne. More details on hybrid airships are linked to at the end of this article.
If helicarriers become a reality at all, it will probably only be after a general revival in airship technology, after the vehicles prove themselves reliable and practical once again. Given a variety of economic, cultural, and technical factors, a full-scale revival of airships probably could not happen until at least the middle of this century, so an airship helicarrier is placed one Tech Level beyond that, at Tech Level 14.
At that Tech Level, the helicarrier can be built on a scale considerably larger than even the Akron and its sister ships, using either helium or vacuum cells for lift. With advanced materials such as graphene, the frame could be made much stronger and lighter than those earlier airships as well. Assisted by four or more gimbaled outrigger engines, the vehicle could support up to several hundred tons of payload, including crew accommodations, weapon systems, subsidiary aircraft, and fuel.
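To get a feel for the scale involved (a back-of-the-envelope buoyancy estimate only, using standard sea-level gas densities rather than any actual helicarrier design figures), static lift is simply the mass of the displaced air minus the mass of the lifting gas:

```python
# Back-of-the-envelope static lift for a helium-filled envelope. The
# densities are approximate sea-level values; a hybrid airship would
# also get a share of its lift aerodynamically from its engines.
AIR_DENSITY_KG_M3 = 1.225     # standard sea-level air
HELIUM_DENSITY_KG_M3 = 0.17   # helium at roughly the same conditions

def gross_lift_tonnes(envelope_volume_m3):
    """Static lift = displaced-air mass minus lifting-gas mass."""
    lift_kg = envelope_volume_m3 * (AIR_DENSITY_KG_M3 - HELIUM_DENSITY_KG_M3)
    return lift_kg / 1000.0

# The Akron's envelope held on the order of 180,000 cubic meters of helium:
print(f"~180,000 m^3: {gross_lift_tonnes(180_000):.0f} tonnes gross lift")
# A helicarrier hauling several hundred tonnes of payload plus its own
# structure would need a considerably larger envelope, for example:
print(f"~500,000 m^3: {gross_lift_tonnes(500_000):.0f} tonnes gross lift")
```

Every cubic meter of helium buys only about a kilogram of lift, which is why even a graphene-framed helicarrier would have to be enormous and why its payload budget stays so tight.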
Helicarriers in the comics and other sources are usually depicted as serving as a mobile base for manned fighters and bombers. Though an airship helicarrier would carry some manned aircraft to accommodate crew and resupply, carrying a contingent of manned fighter aircraft, their fuel, and their necessary crews would simply prove too heavy to be practical, given even a large airship’s weight restrictions. Rather, the enabling technology that may make helicarriers practical in a future world may be one that’s just emerging today: UAVs.
UAV stands for Unmanned Aerial Vehicle. They are basically robotic drone aircraft, usually operated remotely, but at higher Tech levels they may be semi- or even fully autonomous. A number of these vehicles are already in use by the military, the most famous being the Global Hawk.
Because of its range and endurance, the Global Hawk is a giant among UAVs. Most UAVs, especially those meant to be used for medium and short range strike missions, are envisioned to be considerably smaller, the size of small cars or motorcycles. UAVs used for medium-range reconnaissance would be even smaller, on the scale of briefcases or lunchboxes or even tinier. All with sizes and weights much more accommodating for carrying aboard a hybrid airship than manned craft.
These drones would preferably all be battery-powered, to avoid the necessity of carrying fuel for them. Given the Tech Level cited for this innovation, 14, batteries (which could take the form of ultracapacitors, flywheel generators, or other advanced technology) should be sufficiently potent and long-lived to allow the vehicles to carry out long-range or extended-time missions easily. The drones could be recharged off the helicarrier’s engines or from the array of solar cells the airship would have lining its topside.
If the task of teleoperating individual UAVs could be ‘outsourced’ off the vehicle via satellite uplink to other facilities, it could eliminate the need for many such operators aboard and in turn could save on weight. Alternately, advanced computer technology may make the UAVs autonomous enough not to need human input or direction to carry out their missions.
The drones could be launched out the back of the helicarrier, or literally dropped out from below it; once sufficiently away from the carrier, the UAV’s engines would engage and the vehicle would fly toward its mission point. UAVs could be recovered by having them ‘hook’ onto a hanging trapeze-like structure lowered from the helicarrier’s undercarriage (this in fact was the method used for retrieving aircraft on the Akron). Alternately, there could be an actual landing and take-off strip on the top of the vehicle. However, because of potentially high winds at a helicarrier’s operational altitudes, using such a strip could be even trickier than hooking onto a trapeze bar.
The helicarrier could also carry a large number of cruise missiles. These combined with their contingent of attack drones could make hybrid airship helicarriers formidable medium-range weapon platforms. Helicarriers, just like their ocean-going cousins, are not meant to be front line vehicles; they would hang back anywhere from a few dozen to a few hundred kilometers from the target zone, and use their drones and missiles to strike at the enemy.
The main advantage of a helicarrier is that even though it is relatively slow-moving for an aircraft, it could still go almost anywhere in the world within a few days to a week. The helicarrier would be able to deliver a fairly large amount of versatile tactical power far ahead of what can usually be mustered by sea or land on short notice. And it would be able to stay in an area and continue the mission much longer than rapid-response vehicles such as long-range bombers.
The main disadvantage of a helicarrier is that it’s basically a big, fat, slow-moving target. Even though it would hang back from the main hotspots of a conflict, it could still be targeted by long-range missiles and aircraft. Shipboard defenses against such attacks would be an absolute necessity, as weight issues mean the helicarrier can’t be too heavily armored, even with miraculous future ultra-light materials.
The helicarrier would probably always have a squadron of fighter drones on stand-by to handle incoming enemy aircraft. At Tech Level 14, a number of anti-missile options would also be available: fast defensive missiles, tactical lasers, or flak-shooting coilguns.
Because of this potential vulnerability, helicarriers may not be deployed singly but in squadrons. At least one vehicle would play a solely defensive role, protecting the others as they carried out sorties against the enemy. If available, other aircraft may also fly alongside to lend support.
Power may also be an issue. In the comics, helicarriers were nuclear-powered. In the real world, this would lead to two complications; one of weight and the other of politics. Nuclear reactors are heavy, and would add considerable weight to a vehicle already under severe weight restrictions.
Politics would be even more of a limiting factor. Even if the reactor could be heavily armored and secured against leaks if the vehicle is shot down, many people and allies would still be extremely leery of having a nuclear reactor hovering over their heads. In today’s political climate, the mere presence of a nuclear helicarrier may prove to be more of a hot-button issue with certain nations and factions than with any military engagement the vehicle may be part of. This may prevent nuclear power from being used for these vehicles, though it may be the best mobile energy solution for it. In the more distant future, the political climate might change, and this may become a non-issue.
The topside of the helicarrier would be covered over in solar panels where possible, but even at high efficiency these likely wouldn’t be able to provide enough power for full operations. High-efficiency diesel or hydrogen motors may be used instead, as well as large arrays of fuel cells. The helicarrier would therefore have to be refueled frequently, likely by dedicated heavy-lift helicopters, airships, or VTOL aircraft.

GRAVITIC HELICARRIER
Tech Level: 16
Gravitics is the as-yet fictional science of artificially manipulating gravity. It’s unknown whether gravitics could ever actually be made to work in real life, so its placement on the Tech Level scale is somewhat arbitrary. Its beginning is placed at Tech Level 16, to coincide with its ubiquitous use in space opera science fiction.
The helicarriers in Marvel Comics use gravitic technology to stay aloft. The propellers and jets seen in their outrigger engines are there for maneuvering only, and are not meant to keep the massive craft aloft on their own.
The same outrigger pods which contain the maneuvering engines also seem to contain the anti-gravity generators. These seem to be gravitic repulsors, actively pushing against Earth’s mass to hold the aircraft carrier aloft. For more details on gravitic repulsors, see the article on Gravitic Drives, linked to at the bottom of this page.
The repulsors used here are very powerful, each able to hold up many thousands of tons individually. They are probably placed on pods away from the main body of the ship to make sure their anti-gravity fields do not interfere with the normal gravity in the main body of the aircraft.
Compared to a hybrid airship helicarrier, the Marvel Universe helicarriers can be much larger and heavier, rivaling the mass and size of their waterborne cousins. They can field much larger crews, aircraft, and armaments, and can be much more heavily armored as well. Besides sporting a large contingent of manned jet fighters and bombers, the S.H.I.E.L.D. carrier boasts a number of advanced ship’s weapons and missiles, allowing it to engage targets in a full frontal assault if needed. While this may seem excessive from a real-world standpoint, given the truly monstrous threats on the Marvel Universe version of Earth, it actually seems like a fairly prudent development.



AQUATIC TECHNOLOGY


ADVANCED MONOFINS


The Lunocet Monofin.

Lunocet Monofin
Tech Level: 10
PowerSwim Monofin
Tech Level: 10
A monofin is a swim fin designed to fit both feet at once, allowing the swimmer to move himself by working both legs in tandem.

LUNOCET MONOFIN
Tech Level: 10
Humans are very wasteful creatures in the water. Even the most efficient swimmers can convert less than five percent of their energy into forward motion. Swim fins help, but not dramatically.
This may change with the development of the Lunocet monofin, created by engineering inventor Ted Ciamillo after extensive studies of dolphin morphology. While the idea of a monofin isn’t new, Ciamillo’s carbon fiber, semi-flexible Lunocet allows a much more efficient motion than previous models.
The shape and angle of the fin is similar to an airplane’s wing, but instead of creating a lifting effect, the monofin directs the force forward, propelling the swimmer with high efficiency on both the upstroke and downstroke. Users have been able to obtain underwater speeds with it of up to 13 kilometers per hour, almost twice that of Olympic swimming champions. That is fast enough for some to jump completely out of the water, just the way acrobatic dolphins do.
This increased speed and efficiency will be of great help to divers, who will be able to get to where they need to go by using less oxygen and effort. In fact, the inventor foresees the device enabling a new era of ‘hydrotouring,’ long-distance swimming expeditions using Lunocets to cover dozens of miles a day, with participants carrying streamlined, waterproof packs containing only a global positioning system (GPS), satellite phone, and enough food and water for a few nights on shore.

POWERSWIM MONOFIN
Tech Level: 10
There are times when efficiency and endurance are more important than speed, and it’s this need that the PowerSwim monofin addresses. This device allows a swimmer to obtain up to 80% efficiency in converting their body motions into forward propulsion. It still allows faster swim speeds than normal fins, but does not make divers quite as speedy as the Lunocet monofin.
The main feature of the PowerSwim is a design that takes the strain of propulsion off of the smaller leg muscles of the calves and ankles and redistributes it to the much stronger glutes and quads. When used properly, the PowerSwim’s foil undulates up and down, creating rolling currents that act as propulsive force. Because the foil is held outside of the swimmer’s potentially chaotic wake, it can work much more efficiently in producing forward motion.
Using the Powerswim does take getting used to, as it requires the swimmer to semi-crouch in the water instead of stretching out as most do instinctively. Still, it took experienced divers only a few hours to get used to the unusual swimming motion.

The Powerswim monofin. Image courtesy DARPA.
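To put the efficiency figures above in perspective (an illustrative calculation only, with an assumed thrust power rather than any measured value), the power a swimmer must generate is simply the useful propulsive power divided by the propulsion efficiency:

```python
# Illustrative only: the power a swimmer must generate to deliver a given
# amount of useful thrust at different propulsion efficiencies. The 50 W
# thrust figure is an assumption for the sake of example, not a measurement.
def required_power_watts(thrust_power_w, efficiency):
    """Power the swimmer must produce = useful propulsive power / efficiency."""
    return thrust_power_w / efficiency

thrust = 50.0  # watts of useful propulsion (assumed)
print(f"Unaided swimmer (~5% efficient):  {required_power_watts(thrust, 0.05):.0f} W")
print(f"PowerSwim user  (~80% efficient): {required_power_watts(thrust, 0.80):.0f} W")
```

The order-of-magnitude gap is the whole appeal: the same forward push costs the PowerSwim user a small fraction of the effort.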


AIR CAVITY HULLS


A diagram of how an Air Cavity Hull works, using DK Group's Air Cavity System (ACS). Image copyright DK Group.

Air Cavity Hulls
Tech Level: 11
An Air Cavity Hull (ACH) uses broad, shallow recesses built into the underside of the ship's hull in order to reduce water drag and improve fuel efficiency. The innovation is being developed primarily by the DK Group, a marine engineering firm located in Rotterdam in the Netherlands.
ACHs use the same basic principle as supercavitating torpedoes, in that an object or hull encased in a bubble of air can slip through water much more easily than a hull slicing through the water directly. Most drag in water is caused by friction between the hull and the water. Air has less than 1% the viscosity of water, so it basically "lubricates" the ship as it moves through the thicker medium, allowing much easier passage.
But where supercavitating torpedoes completely encase the torpedo, an ACH uses a broad cavity filled with compressed air to allow a large percentage of a ship’s subsurface hull to glide through the water with less resistance. This produces less drag and increases fuel efficiency.
A compressor near the bow would draw in air from the deck and pump it into the subsurface cavity. Air would build up in the cavity under pressure, giving the vessel a large swath of its ‘hull’ that produces far less drag than the rest. Air would also slowly seep away along the sides and the stern of the cavity, but under most circumstances can be readily replaced by the compressor to maintain optimal pressure. Tests have shown that the compressor would use about one percent of the ship’s available power, which is more than made up for in the increased engine efficiency the air cavity hull provides.
The DK Group’s test vessel, which used its patented Air Cavity System, showed a 7% increase in fuel efficiency. Larger vessels, which would be more stable in the water and would lose less air from pitching and rolling at the surface, would see an even larger increase. Other research programs in both the Netherlands and Russia indicate that a 20% increase in efficiency is theoretically possible, but the best that tests have so far achieved is about 10%.
These figures are dependent on the ship moving in more or less calm waters. In choppy or stormy waters, much more of the air in the subsurface cavity will escape, making it more difficult to maintain optimum pressure. But then, ships in rough waters rarely operate at full efficiency anyway, no matter what systems they use.
The ACH was originally designed for newly built ships, but DK Group has recently developed a retrofit process that allows them to modify existing vessels in about 14 days. The average ship can expect to recoup the cost of the retrofit with fuel savings after about 2 years at current fuel prices.
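As a rough sanity check on that payback claim (the 7% saving is the test-vessel figure quoted above, but the annual fuel bill and retrofit prices below are assumed placeholder values, not DK Group numbers), the payback period is just the retrofit cost divided by the yearly fuel savings:

```python
# Rough payback illustration for an air-cavity retrofit. The 7% saving is
# the test-vessel figure quoted above; the fuel bill and retrofit prices
# are assumed round numbers for illustration only.
def payback_years(retrofit_cost, annual_fuel_bill, fuel_saving_fraction):
    """Years of fuel savings needed to recoup the retrofit cost."""
    return retrofit_cost / (annual_fuel_bill * fuel_saving_fraction)

annual_fuel_bill = 8_000_000  # USD per year for a mid-size cargo ship (assumed)
saving = 0.07                 # 7% efficiency gain from the air cavity
for cost in (500_000, 1_000_000, 2_000_000):  # assumed retrofit prices
    years = payback_years(cost, annual_fuel_bill, saving)
    print(f"Retrofit at ${cost:,}: pays back in about {years:.1f} years")
```

Under those assumptions, a retrofit in the rough vicinity of a million dollars lands near the two-year payback cited above, and the figure is obviously very sensitive to fuel prices.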



UNDERWATER HABITATS


Owned by NOAA and managed by the University of North Carolina at Wilmington, the AQUARIUS habitat is an 82-ton double-lock pressure vessel approximately 14 meters long by 4 meters in diameter.

Underwater Outposts
Tech Level: 9
Underwater Hotels
Tech Level: 10
Underwater Base
Tech Level: 12
Underwater City
Tech Level: 14
Underwater Colony
Tech Level: 15
Extreme Depth Habitats
Tech Level: 17
The human race only occupies about 30% of Earth’s surface, namely its land masses. While it makes use of the remaining oceans and seas for transport and fishing, their potential as living space for human beings has only been hesitantly explored.
Underwater habitats have been a viable technology since the 1960s. They have helped to facilitate scientific and technological research, functioned as training grounds for submariners and divers, and helped open up the world under the waves for the common man to see.
The first such habitats were constructed in the early 1960s by Jacques Cousteau and his research team with the backing of the French petrochemical industry. Other facilities followed, built by various countries and interests, spurred on by scientific concerns and numerous Cold War projects. When that conflict ended, many were shut down in the wake of decreased need and ever-shrinking budgets. Today, only a handful of underwater habitats stay in operation, including NOAA’s Aquarius Reef Base in the Florida Keys National Marine Sanctuary.
In the near future this may change. Underwater habitats are being seriously considered as major tourist attractions, with at least two underwater hotels under construction. If these facilities become a success, more would likely follow. As environmental concerns with the oceans increase, new facilities may be set up to study the intricate underwater ecosystems in numerous locales. Ocean-based farming is also becoming increasingly popular, and permanent or semi-permanent sub-surface facilities may be constructed to allow full-time tending by operators.
There has also been talk through the years about true underwater communities, envisioned by corporations as showcases, by scientists as learning centers, by military men as covert bases, by isolationist groups as refuges, and by adventurers as a new frontier. A number of political, economic, and technological barriers have kept that from happening as yet, but that may change as techniques advance in the coming decades.
Underwater bases are beloved motifs of science fiction. The Abyss, Sphere, Deep Blue Sea, and a host of other movies have taken place in underwater facilities. Sub-oceanic habitats are also seen in TV series such as Seaquest DSV, Ocean Girl, and the cartoon Sealab 2020 (and its tongue-in-cheek Adult Swim parody Sealab 2021.) The novel Oceanspace by Allen Steele also takes place in an advanced open-pressure underwater research base. The novel Saturn’s Race by Larry Niven and Steven Barnes also features a large underwater habitat as part of an artificial island.

UNDERWATER HABITAT BASICS


Subaquatic environments are just as complicated and just as potentially hazardous to human life as deep space. In other words, they are one of the most hazardous environments that humankind currently has access to. And unlike space, homesteaders on the subaquatic frontier will also have to deal with a host of ecological issues as well.
Like space stations, underwater bases have to deal with a number of factors in order to make humans comfortable enough to live and work productively.
Types: There are three basic types of underwater habitats.
The first, open pressure habitats, exactly counter the pressure of the surrounding water throughout their volume, usually by means of an easily-accessible moon pool, though some also take advantage of airlocks as well. In these habitats, the air pressure is equal to the outside water pressure, and special gas mixtures may be necessary for facilities located below a certain depth. The main advantage of this set-up is that it allows divers easy access in and out of habitat without decompression procedures. However, access to the surface requires decompression. An example would be the US’s old SEALAB facility, as well as the underwater mobile mining station in the movie The Abyss and the sub-oceanic base in the novel Oceanspace.
Most open-pressure habitats operate near the surface, usually within 30 meters of it, where pressure acclimation is usually not a major issue. However, some will occasionally be located deeper. Open pressure bases are usually cheaper and easier to construct and maintain than closed pressure ones.
Closed pressure habitats usually maintain an internal air pressure similar to that of the surface, and access in and out of the facility is controlled through airlocks. The main advantage of this set-up is that it allows easy access between the habitat, like-pressured submersibles, and the surface without having to go through pressure acclimation. The main disadvantage is that it requires much more extensive and robust pressure hulls and life-support systems, and any dive would require decompression acclimation. Underwater hotels such as those proposed for Fiji and Dubai, which are designed with casual tourists in mind, would be closed-pressure habitats.
A third type combines both closed and open pressure schemes, with one part of the facility, usually that associated with diving operations, built around an open-pressure scheme, while the rest is enclosed in a closed-pressure design. Airlocks with adjacent decompression chambers would separate the two. Such a facility would be more expensive than the other kinds, having to incorporate two types of life support systems, but would offer the advantages of both kinds of habitats in one facility. For example, the closed-pressure section would greatly facilitate the sending and receiving of supplies and personnel from surface submersibles, while open-pressure work areas could tend to the needs of divers as they work outside the habitat. Most large underwater habitats depicted in science fiction are combined-system habitats.
Pressure Hulls: Though underwater habitats are often compared to space stations, there is one major critical design difference: space stations need to keep the atmospheric pressure within them from getting out, whereas aquatic habitats need to keep the outside water pressure from getting in. Though on the surface these seem to be similar concerns, they represent dramatically different design philosophies, especially when dealing with extreme conditions.
Pressure hulls underwater need to have rigid, reinforced structural skeletons and need to be able to brace themselves against any possible buckling pressures from outside. They are usually designed to disperse the structural stress evenly over their surface area, hence many underwater vehicles and habitats are designed to be rounded and symmetrical, like tubes, ovoids, and spheres.
Currently pressure hulls used for underwater habitats are reinforced steel, but more advanced versions may use lighter and tougher composite alloys or plastics, and graphene or carbon nanotubes may also end up being used in the decades to come.
Air: Most underwater habitats have been supplied with air through bottled tanks or by umbilicals to the surface. Advanced facilities in the future may take advantage of artificial gill technology in order to draw breathable air right from the surrounding water. Most modern facilities use atmospheric recycling technology, similar to that on rebreather rigs, allowing them to keep the occupants in breathable air for nearly as long as the habitat’s power and filters can hold out.
Gas Mixtures: The normal sea-level atmospheric gas mixture of nitrogen-oxygen becomes hazardous beyond a certain depth, and could lead to potentially devastating consequences such as the bends. While closed-pressure habitats don’t generally have to worry about this, open-pressure facilities need to take every nuance of pressure effects into account in order to keep their human occupants healthy and active.
Open pressure habitats use many of the techniques pioneered for saturation diving, but geared up for a large, multi-person facility. Much more detailed articles on saturation diving and pressure effects on divers can be found in the links at the end of this article.
Most commonly, helium-oxygen or hydrogen-helium-oxygen mixtures are used at extreme diving depths (usually 50 meters or more below the surface), the ratio depending on the exact pressure the diver experiences and how long he stays at depth. Once acclimated to the pressure at a certain depth, a diver shows no ill physical effects, though long-term health hazards may come into play if he is there for too long, such as aseptic bone necrosis. Also, the helium-oxygen mix does have some inherent disadvantages, such as heat retention issues and distorting voices.
The deepest an open-pressure habitat has operated is 183 meters. Experimental tests show that dives down to 600 meters or beyond may be possible, but the divers in those experiments experienced trembling and memory loss. It’s safe to assume that beyond 200 meters or so of depth, closed pressure facilities are necessary for the purposes of human habitation.
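The basic arithmetic behind those gas mixtures is straightforward, even if real saturation schedules are far more carefully tuned. Ambient pressure rises roughly one atmosphere for every 10 meters of seawater, and the oxygen fraction of the heliox mix is chosen so that the oxygen partial pressure stays near a safe value. The sketch below uses an assumed 0.4 atm oxygen target purely for illustration, not as an operational specification:

```python
# Illustrative pressure and gas-mix arithmetic for an open-pressure habitat.
def ambient_pressure_atm(depth_m):
    """Total pressure at depth: 1 atm of atmosphere plus ~1 atm per 10 m of seawater."""
    return 1.0 + depth_m / 10.0

def oxygen_fraction_for(depth_m, target_ppo2_atm=0.4):
    """Oxygen fraction such that fraction * ambient pressure = target partial pressure."""
    return target_ppo2_atm / ambient_pressure_atm(depth_m)

for depth in (30, 100, 183, 600):
    p = ambient_pressure_atm(depth)
    o2 = oxygen_fraction_for(depth)
    print(f"{depth:3d} m: ~{p:.0f} atm ambient, "
          f"oxygen ~{o2 * 100:.1f}% of the mix (balance mostly helium)")
```

Note how quickly the required oxygen fraction drops with depth; at deeper saturation depths, breathing plain air would deliver a toxic oxygen partial pressure (on top of nitrogen narcosis), which is why these exotic mixtures are unavoidable.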
Power: Most subaquatic habitats have so far used either batteries, or umbilicals from the surface, or both, to provide energy needs. A number of such facilities from science fiction have also used nuclear reactors. In the future, more advanced bases may use tidal turbines, OTEC generators, or surface wave generators for electricity, or may rely on geothermal taps in certain advantageous locations.
Temperature: Maintaining a comfortable work environment would be essential for inhabitants. Water is hundreds of times denser than air and conducts heat away from any source far more effectively, especially in the deeper parts of the ocean. Besides just being pressure-resistant, hulls of underwater habitats would also have to be heavily insulated to help keep the interior comfortable for its inhabitants.
Construction: All underwater facilities have to date been manufactured in whole on the surface and then lowered into place with cranes and cables. Very large facilities in the near future may be constructed similarly to the ISS, with many smaller modules interconnecting to form a larger base. No one has yet tried manufacturing or assembling a facility from raw parts completely underwater, though for some of the larger habitats discussed later that may become a necessity.
Access: Some habitats can only be reached by scuba diving. Some bases can be accessed by using a vehicle that docks with one or more of the habitat’s airlocks. Some closed pressure facilities may utilize elevators or submerged walkways that go right up to the surface, especially if near shore.
Ecological Considerations: Undersea habitats have to worry about something that their conceptual cousins, space stations, do not: the ecological health of their surrounding environment.
Ocean ecologies are already under threat from a variety of sources; over-fishing, pollution, invasive species, and global warming among them. Designers and operators of any underwater facility have to proceed from a much more ‘green’ perspective now than their Cold War predecessors. Most agree that healthy ecosystems and biodiversity represent the true worth of the oceans, and trying to minimize widespread destruction of them should always be one of the top considerations in the design of an underwater facility.
Waste management is a big issue, especially with larger bases and colonies. Smaller habitats can just store their waste and send it to the surface for disposal. True underwater cities will have to find other ways of dealing with this, either through stringent recycling and/or using organic waste to help fuel aquaculture (ocean-going farming) projects.
But even so, just the presence of a lot of humans and their machines is bound to have an impact on surrounding marine life, and the more people who take up residence there, the bigger that impact will be. This could take many forms. For example, humans may scare off many of the big predators, allowing certain kinds of harmful creatures like jellyfish to flourish that were previously kept in check. Waste heat from the habitat and human activity may encourage the runaway growth of certain types of microorganisms in the water, making it toxic to certain species of fish. Industrial accidents could cause devastation to the surrounding ecology, as in the case of oil spills or mid-ocean plastic garbage accumulations.
Any underwater human habitat, especially the larger kind, would have to weigh its potential benefits against the potential damage these conditions could cause. After all, there is more than just moral and environmental concern at stake; damaging the surrounding ecology could also jeopardize a habitat’s economic viability.

UNDERWATER OUTPOSTS
Tech Level: 9

Outposts are designed for a handful of human inhabitants, usually not more than a half dozen or so, and usually for short-term stays. Durations for crews are typically only a week or less, though some have experimented with crews staying for up to two months. Almost all underwater habitats constructed in the real world so far fall into this category. Almost all were open-pressure habitats.
The purposes of an outpost can be many and varied. These may include:
-- Development of advanced diving techniques and subaquatic habitat technology
-- Research into underwater construction and salvage
-- Military research
-- Studying sea life and ecology
-- Dolphin training and cetacean research (Sealab used trained dolphins for various tasks)
-- Construction, maintenance, and repair of off-shore drilling platforms
-- Construction, maintenance, and repair of underwater industrial or military assets
-- Espionage
-- Laying, and maintenance of, underwater cables
-- Climatological research
-- Tourism
-- Subsurface and seabed aquaculture
-- Submersible maintenance and resupply
-- Training facilities for astronauts
Most outposts are constructed whole on the surface and lowered into place once finished, with divers and automated submersibles usually fine-tuning their final placement and anchoring. The seabeds where they’re placed are often scouted and prepped beforehand to accommodate the new structure with minimal possible complications.
Most outposts are also dependent on surface umbilicals for power, communication, and sometimes for air cycling and replenishment, depending on the exact systems used. This helps to reduce cost for both construction and maintenance, but does lessen the outpost’s ability to handle unexpected or emergency situations.

UNDERWATER HOTEL
Tech Level: 10
The proposed Poseidon underwater hotel under construction off of Fiji.
A number of projects are underway to construct underwater hotels. A more extensive article detailing several of these is linked to at the bottom of this page.
Large underwater hotels are envisioned to be mostly closed pressure structures, with easy access to the surface, usually with elevators going directly between the surface and the seafloor portion of the hotel. They will likely be constructed within easy reach of the shore, and will probably not be placed so deep that pressure issues would greatly increase manufacturing and engineering costs.
Clients would be able to rent various rooms, often with spectacular panoramic views opening onto the surrounding environment. Many hotels would artificially cultivate the surrounding lifeforms to give guests as spectacular a variety of sea life to view as possible.

UNDERWATER BASE
Tech Level: 12
A base is a larger facility than an outpost, designed for continual occupation by crews which may stay for many weeks or months at a time. To extend the space station analogy a bit, an underwater outpost would be similar to single-launch stations from early in the space age, like Salyut or Skylab. An underwater base would be the equivalent of the multiple-module Mir or ISS.
Various enterprises may require more than just a handful of personnel on site underwater. Such projects may include larger habitat construction, military bases, large oceanographic laboratories, underwater mining facilities, and extensive subsurface and seabed aquaculture. Bases may use open or closed pressure systems, but it seems a combined systems scheme would be most advantageous, so the crew could live and work in comfort when not out on a dive.
As bases would usually be meant to hold more personnel for longer, they would also be constructed considerably larger. In another analogy to current space stations, they would likely be constructed in modular form on the surface, then lowered into the water and assembled together by divers or Autonomous Underwater Vehicles (AUVs). The exact design and number of modules would depend on the base’s purpose, but individual components would likely be no larger than shipboard cranes and transport facilities could handle.
Bases could also be constructed on-site from raw building components, but this would take considerably longer and would require many more man-hours in the water to accomplish. Using AUVs for the same purpose would likely take even longer, at least with current or near-future technology. The modular approach is likely in the end to be much cheaper until underwater working and construction techniques improve significantly.
Like with outposts, the base would still have extensive umbilical connections to the surface for power, air, communications, and so on. However, because they have to provide for many more people, bases may be designed to be more self-sufficient in some areas, such as power or air production through artificial gills, in order to help mitigate problems should the umbilicals fail.

UNDERWATER CITIES
Tech Level: 14
An underwater city leaves behind the purely utilitarian notion of underwater habitats, and is a full-blown effort at long-term residency under the waves. The definition of a ‘city’ is used pretty loosely here as well, meaning any fairly large undersea community, from a few hundred to a few thousand inhabitants. It may be a single unified construct, like the classic domed underwater cities of science fantasy, or it may just be a centralized loose conglomeration of bases, outposts, and other structures.
Though today there seems to be little call for large scale oceanic habitation beyond a few isolated dreamers, in the future needs and attitudes may change, and advancing technology may make such projects much cheaper.
For example, artificial island projects may proliferate, either as adjuncts to deep ocean oil drilling, OTEC generator platforms, tourist destinations, military bases, or the like. Sub-oceanic communities may spring up in or around these, as large underwater structures would already be in place, and may be built to accommodate human inhabitants as part of their design.
Underwater cities may also become a natural outgrowth of seabed aquaculture, as subsurface farms and fisheries grow ever larger in size.
Underwater cities may also be built with military advantages in mind. Hidden under hundreds of meters of water, a military base or population center may also be much harder to find or attack.
Coastal cities may also end up expanding into the water of their harbors just from various population and economic pressures. At first it may start with just tourist attractions, expansions of docks, and residences for the rich, but as the technology advances and the costs come down, they may build more affordable sub-aquan neighborhoods if expansion in other areas is blocked.
If the worst predictions of global warming come true, many coastal communities and cities may end up permanently flooded or even completely underwater. Subsurface communities may spring up in and around these ‘zombie’ cities, intent on salvage and recovery, perhaps recycling old buildings and materials for their underwater habitats.
Underwater cities will have to be far more self-sufficient than their smaller cousins. Umbilicals to the surface could supply enough power to a large community, but consumables such as air and drinking water may be another matter entirely. And even so, the city having its own dedicated power sources would be a necessity in case of any kind of emergency.
Underwater cities would most likely be almost entirely closed pressure affairs, with only divers’ work areas having an open-pressure scheme. The large domed cities of golden age scifi seem impractical, as the domes would be too vulnerable to wear and damage over time. Instead, for large open spaces a subaquan city may dig into the seabed and have a number of levels beyond just the visible constructs on the ocean floor.

UNDERWATER COLONIES
Tech Level: 15
Undersea colonies are collections of underwater habitats of various sizes, which may or may not be physically interconnected, whose population generally runs into the thousands. The difference between a city and a colony is twofold: suboceanic colonies are completely independent of the surface, and are designed to expand on their own.
By being independent, this is not to say that the colony would have no contact with the surface. Indeed, the colony would still likely trade and receive visitors and tourists and the like from surface interests. But it will be able to tend to all its vital needs and functions itself. Food will be cultivated through various aquaculture techniques and drinking water would be desalinated on-site. The colony would likely have several different local power sources available, and be able to extract breathable air from the surrounding water or seabed. It would also have a number of mining and manufacturing industries to fabricate the parts and machines it would need to keep itself going. Recycling would be as efficient as possible.
In other words, it would be a miniature world unto itself, in some ways not unlike the grand space colonies visualized by Gerard O’Neill and others.
The majority of the colony would likely use closed-pressure systems, very similar to surface conditions, in order to bring up families with a minimum of medical complications.
At Tech Level 15, when truly independent undersea colonies become possible, pressure hull and aquatic life support technology should allow for long-term establishment of large closed pressure habitats down to 250 meters or so. This would open up much of the continental shelves to potential human occupation, and entirely new territories and nations may arise under the oceans.

EXTREME DEPTH HABITATS
Tech Level: 17
Eventually, materials and habitat technology may advance to the point that outposts could be set up just about anywhere in the ocean, even at its lowest and most crushing depths. Chances are these wouldn’t be very large affairs, as at this Tech Level, automatons would be able to do just about anything a human could do, if not better. Humans may venture into these depths only to supervise and coordinate these super-advanced AUVs. Purposes for such extreme depth habitats could include mining, exploration, and scientific research.



ADVANCED SUBSEA CRAWLERS


SMD's Rock Trencher 1 is currently the world's most powerful seabed tractor. Image copyright SMD.

Advanced Subsea Crawlers
Tech Level: 11
Manned Subsea Crawlers
Tech Level: 12
Mobile Underwater Habitat
Tech Level: 13
Mobile Drilling Station
Tech Level: 13
Subsea crawlers (also called seabed/subsea/underwater/ocean tractors or tanks) are remotely-operated vehicles designed to traverse the ocean floor. They use much of the same technology pioneered for AUVs (Autonomous Underwater Vehicles.) The main difference is that they usually use tracked transmission for propulsion on the seabed instead of propellers in the open water, and are not designed to float on their own. Crawlers may be built to be partially buoyant in order to offset some of their weight, and the smaller of them can be rigged with variable buoyancy systems to allow for greater mobility around obstacles and to surface for retrieval by tending ships.
Undersea crawlers have been employed in the real world for a number of years now, used for exploration, excavation, and the laying of underwater cables and piping. Most are designed to handle the wide variety of uneven surfaces encountered on the ocean floor. They are usually controlled through long cables, either connected to a surface ship or to a facility on land, if relatively near the shore. These cables not only handle communication, but can also supply power, and in the case of crawlers controlled by surface ships, serve as hauling cables for retrieval.
Subsea crawlers are not meant to be speedy vehicles. They are designed with toughness and stamina in mind, as their applications usually require long-term stays underwater, sometimes beyond comfortable depths for divers or submersibles. Without the need for buoyancy, a crawler can typically pack much heavier loads and battery packs than propeller-driven AUVs of comparable size. This endurance and ability to carry heavy machinery is what makes them very useful for industrial-sized tasks, like seabed excavation or laying undersea cable.

ADVANCED SUBSEA CRAWLERS
Tech Level: 11
South Korea's proposed Search and Rescue crawler.
As manned presence under the waves expands, undersea crawlers will likely find many new niches that may need their specific capabilities. Different types of subsea crawler present and future may include:
-- Cable Layer: Lays cable or flexible pipe over long expanses of the sea floor. May or may not be able to function as a cable maintenance crawler.
-- Cable Maintenance: Runs along already-laid cable or pipe to help spot or patch damaged sections.
-- Excavation Tractor: used to clear rocks and other debris for construction, such as for habitats or bridge pylons.
-- Exploration Probe: These are usually fairly small crawlers used for exploration and scouting.
-- Search and Rescue Drone: Also usually small, these crawlers would be used to search suspected areas of shipwrecks for debris and ship remains.
-- Aquaculture Tractor: Used primarily for tending crops in seabed aquaculture.
-- Survey Crawler: Designed to do in depth, on-site mineralogical surveys of the ocean floor.
-- Mine Layer/Remover: A crawler designed to anchor cable for floating mines. A crawler would be advantageous over ships for increased stealth. Also can be used to remove said mines.
-- Mining Drone: Designed to actually drill or plant explosives for excavation.
-- Maintenance Drone: Designed for repairing and maintaining undersea structures. This may include underwater portions of mostly above-ground structures, like bridges.
-- Salvager: Used to help salvage and recover sunken vessels.
-- Construction Crawler: Used in the construction of subsea structures. Will have a number of analogs to land-going construction vehicles. I.e., bulldozer, backhoe, steamroller, etc.
-- Subsea Truck: Used to haul heavy loads that may be too big or unwieldy for available submersibles.
-- Weapons Platform: Would be slower than submersibles, but would also be stealthier (no propellers) and harder to pick up on sonar. They may carry conventional or supercavitating torpedoes to use against other sea-borne targets, or long-range strategic missiles for use against land targets.
Currently, all crawlers use tracked or wheeled transmissions. However, in the near future, crawlers using robotic legs could be feasible as well.
South Korea is currently researching small legged crawlers meant for search and rescue operations. Officials there believe that such units could have sped up search efforts after the Cheonan sinking in 2010, which took place in the West Sea amid strong tidal currents and poor visibility.
The South Korean initiative envisions small six-legged crawlers armed with advanced sensors and variable buoyancy systems, allowing them to float and swim as needed with their legs acting as paddles or fins. Designs call for them to walk at speeds of up to 98 feet (about 30 meters) per minute and swim at up to 59 feet (about 18 meters) per minute. They could also operate at extreme depths, far below what divers or even conventional submarines could manage.
Legs on the ocean floor may be more advantageous than wheels or tracks in handling mud or silt covered surfaces. However, robotic legs are more complicated as well, and for operating at deep depths, this increases the chances for a malfunction.
Control of crawlers is usually by long communication cables. Power cables are still an option, but because crawlers can carry much heavier battery packs, they are not as much a necessity as on some other AUVs.
As computer systems advance, future crawlers may become semi- or fully autonomous. Semi-autonomous means they would be able to perform their assigned tasks with a minimum of human supervision (similarly to the Mars rovers Spirit and Opportunity), whereas fully autonomous means they may be able to actually make decisions on their own as to how best to proceed when encountering obstacles (like the small robots in the movie Silent Running.)
Cables may still be attached both for safety reasons and to allow a human to control the crawler via teleoperation if it runs across a truly formidable setback. Or the crawler may be fully on its own, and communicate with its human operators via Ultra Low Frequency radio or by occasionally sending a small, retractable antenna buoy up to the surface on a long tether.



MANNED SUBSEA CRAWLER
Tech Level: 12

Information on possible manned versions of this technology is very sparse; what follows is mostly speculation on the part of the author.
This is the straight-forward marriage of subsea crawler technology with existing submersible life support systems. Manned crawlers are preferable in situations where on-site human expertise is needed, such as in underwater construction and excavation, seabed aquaculture, rescue operations, and so on.
Manned subsea crawlers are often visualized as being underwater versions of land-based heavy duty vehicles, such as bulldozers, trucks, and so on. Buoyant craft such as submersibles will likely remain the primary means of transporting people, while crawlers will bear the burdens of the heavy work. However, because of their potential expense, manned crawlers may be made modular, with a central chassis containing the engine and life support module, designed to be fitted and modified with different tools and attachments as the job requires.
Open-pressure crawlers are possible, where the air pressure within the vehicle is equalized with the water pressure without. This would make exiting and entering the crawler while underwater much easier to manage. They would also be easier to engineer, build, and maintain than closed-pressure vehicles. However, most open-pressure vehicles could operate optimally only within about 30 meters of the surface; any lower and the operator may have to start dealing with various pressure-related complications. Even with compensating techniques such as specialized gas mixtures, the absolute deepest any such vehicle could safely operate would be around 200 meters or so, and even then the crew will likely start developing pressure sickness symptoms.
Closed pressure systems are another matter. They maintain sea-level atmospheric pressure for the operator and crew no matter the actual depth. They are able to operate at much more extreme depths, but in general are more expensive and harder to engineer and maintain than open-pressure systems. Using contemporary technology, a closed-pressure manned crawler could possibly operate at depths down to approximately 1000 meters. In general, however, the deeper it is designed to operate, the more difficult it will be to engineer and the more expensive it will be to build. More advanced Tech Levels may allow for deeper operating depths and/or cheaper construction costs for the same performance.
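To see why depth rating drives up weight and cost so quickly (a simplified thin-wall estimate only, ignoring buckling, fittings, and viewports, with an assumed steel strength and safety factor), the classic spherical pressure-vessel relation stress = P·r / (2·t) can be turned around to give a minimum wall thickness:

```python
# Rough sizing sketch for a spherical closed-pressure crew compartment,
# using the thin-wall relation stress = P * r / (2 * t). The yield strength
# and safety factor are assumed round numbers, and real hulls must also be
# checked against buckling, so treat this only as an illustration.
SEAWATER_PRESSURE_PA_PER_M = 10_000  # roughly 10 kPa of added pressure per meter of depth
STEEL_YIELD_PA = 350e6               # assumed structural steel yield strength
SAFETY_FACTOR = 2.0                  # assumed design margin

def min_wall_thickness_m(depth_m, radius_m):
    """Wall thickness t so that P * r / (2 * t) stays below yield / safety factor."""
    pressure = depth_m * SEAWATER_PRESSURE_PA_PER_M
    allowable = STEEL_YIELD_PA / SAFETY_FACTOR
    return pressure * radius_m / (2.0 * allowable)

for depth in (200, 1000, 3000):
    t = min_wall_thickness_m(depth, radius_m=1.5)
    print(f"{depth:4d} m depth, 1.5 m radius sphere: wall at least ~{t * 100:.1f} cm thick")
```

The required wall thickness, and therefore the hull's weight and cost, grows roughly linearly with rated depth, which is the basic reason a 1000-meter crawler is a far more serious engineering project than a 200-meter one.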
Manned crawlers using robotic legs instead of treads or wheels are possible. In fact, legged vehicles may see their first real practical uses underwater instead of on land. Under the sea, the vessel can be made partially buoyant, taking a great deal of weight stress off the legs and allowing them to operate much more efficiently, at least as far as forward locomotion is concerned. As in the proposed South Korean search and rescue crawlers mentioned in the last section, it’s also possible to equip the crawler with a variable buoyancy system, allowing it to fully float and to use its legs as flippers or paddles for swimming.

MOBILE UNDERWATER HABITAT
Tech Level: 13
Taking a cue from a proposal for creating a Moonbase, these would be similar to the manned, multi-person underwater outposts used today, such as the AQUARIUS habitat run by NOAA and the University of North Carolina, but outfitted with heavy-duty treads, wheels, or even legs to allow them to slowly move across the ocean floor.
The purposes of such a habitat may be twofold. First and foremost, it could operate as a mobile laboratory, exploring and observing the environment of the seabed. It may mostly operate along the shallows of the continental shelves, so that the crew can use an open pressure system and easily exit and enter the habitat on dives to collect and observe flora and fauna.
Secondly, mobile habitats could make it easier to create a larger underwater habitat. Two or more such habitats could be maneuvered into place and linked together through adaptive universal airlocks into a single base. They could also be reconfigured as needed if conditions change. Though mobile habitats may be more expensive than normal habitat modules, their use may make building a larger consolidated habitat easier and quicker, which may ultimately make up for the added building cost.

MOBILE DRILLING STATION
Tech Level: 13
An Ancient mobile geothermal drilling platform, from Stargate: Atlantis. Image copyright MGM.
The featured setting of the movie The Abyss, this was essentially an oil drilling platform designed not only to operate completely underwater, but also to move itself to optimize its drilling location. A more advanced version of the idea was also seen in the ‘Submersion’ episode of Stargate: Atlantis.
The station in both sources was approximately as large as a modern seagoing drilling platform but with a sealed environment designed for long-term habitation under the ocean.
The Abyss version was an open-pressure system habitat, with large diving moon pool and a number of small excursion submersibles. Though the main habitat was open-pressure, the submersibles were closed-pressure, able to dive deeper than the station, but set at the pressure of the main habitat to allow easy access in and out.
Presumably the drilling station in the movie did not actually have to transport itself over hundreds of miles to its drilling site, but was rather transported via ship like a normal drilling platform and lowered into place. Once on the seabed, it could adjust its own position for maximum efficiency. The station seemed to move itself on a series of massive treads, with anchoring stilts keeping it in place once it settled into a location. Moving speed was implied to be very slow, less than a mile or so an hour, but given its mass that was to be expected.
The mobile drilling station seen in Stargate: Atlantis, built by the uber-tech Ancients, was far more advanced. Its exact means of propulsion wasn’t disclosed, but it was implied that it could move itself at a fairly good clip and could operate at depths of many thousands of meters. It was a closed pressure system that maintained the same atmospheric pressure as sea level.
Whereas The Abyss station was meant to drill for oil, the Stargate: Atlantis platform was meant to drill into various crustal hotspots and use them to generate geothermal energy.



SEMI-SUBMERGED YACHTS


The Trilobis 65, a radical new boat design.

Semi-Submerged Yachts
Tech Level: 11
The Trilobis 65 is a radical new design of semi-submerged yacht, which could point to a new trend in civilian boat design for a number of purposes.
Semi-submerged vessels, which carry a majority of their mass under the waterline, are hardly a new innovation. The Civil War warship Monitor was built on this principle, as are a number of modern vessels, including large ocean-going barges that transport fully-assembled oil drilling platforms out to sea.
However, semi-submerged vessels have traditionally been used in purely utilitarian roles. The Trilobis design aims to make this type of ship available to the public at large, in roles as private yachts, leased or rented luxury living space, and as charterable tourist boats.
The vessel measures 20 meters long, 13 meters wide, and is crafted in a sleek rounded teardrop shape. But the Trilobis' most striking and commercially attractive feature is its submerged observation bay which allows a stunning 360-degree view as well as comfortable seating and accommodations. The yacht is designed to hold up to six people while at sea, and has a maximum speed of 7 knots.
The yacht is powered by a combination of fuel cells, solar power, and clean-burning diesel engines, making it fairly environmentally friendly as such vessels go. Its hull is composed of lightweight but extremely resilient composite laminate materials.

Image copyright Popular Mechanics
The Trilobis is currently envisioned as a luxury vessel and much of its interior design and accommodations reflect this. However, if proven successful, the design may be incorporated into mass-produced future vessels, allowing semi-submerged craft decades hence to enter the mainstream of civilian boating.
The ship does have several downsides aside from its steep $5 million initial price tag. The first is that, because of its many new design and power technologies, as well as the need to frequently clean the outside of the observation deck's windows, it would need much more frequent and expensive maintenance than other vessels of its size and purpose. Also, the vessel as currently designed does not work well with existing docks, especially as it needs much more bottom clearance because of its submerged observation deck. It requires a custom-designed mooring wharf that projects out over the water and "mates" with the rear open notch in the yacht's deck.

Image copyright Popular Mechanics






UNDERWATER HOTELS


The view from the main lobby of the Poseidon Underwater Hotel, currently under construction off Fiji. Art by Peter Bollinger.

Underwater Hotels
Tech Level: 10
Colonizing the ocean depths has long been a dream of many visionaries, but very little has ever actually been done to make it a reality. Many see tourist resorts incorporating underwater dwellings as the first step in a very long process of developing practical ocean-based habitats. The techniques and technologies developed by these undersea hoteliers may well be incorporated into sub-oceanic communities decades hence.
Underwater hotels currently available or being built come in all shapes and sizes, from the five-room Jules Underwater Lodge in the Florida Keys to the proposed 1.1 million square foot Poseidon resort in Fiji to the enormous luxury hotel Hydropolis currently under construction off the shores of Dubai.
The main selling point for undersea hotels is their current novelty. The Jules Underwater Lodge, the only venture of its kind fully in business, is basically a refit of the old La Chalupa research laboratory, an underwater habitat used to explore the continental shelf off the coast of Puerto Rico. Its systems were updated and it was retrofitted with a number of luxuries, including hot showers, a fully stocked kitchen, the latest in entertainment systems, and even a food delivery service.
The lodge is only five rooms large, and is not designed for an occupancy of more than half a dozen or so people at once. Perhaps its greatest selling point to privacy-minded tourists, and stumbling block to popular use, is that guests must scuba-dive down to the Lodge in order to use it.
The Poseidon Hotel in Fiji is a substantial step up in both size and ambition from the Jules Lodge, a luxury resort scheduled to be opened for business in mid-2008. Just offshore from a more traditional shoreline resort, the Poseidon features broad well-lit tunnels leading down under the waterline to twenty luxuriously-furnished suites. Each suite will offer views of surrounding coral, a jacuzzi tub, bathrooms with double sinks, exterior underwater lights, and fish feeders to enhance sealife viewing. Also featured is an underwater fine-dining restaurant, which rotates slowly to provide patrons with a stunning panoramic subaquatic dining experience.
The entire hotel is being manufactured on land and will be lowered section by section into its home lagoon. This method will allow the creators of the Poseidon to save considerable money by eliminating the need for extensive underwater construction.
Since the hotel and its main accessways are attached to the surface and maintained at surface air pressure, there is no need for any kind of unusual environmental support features. Air, power, clean water, etc, would all be able to circulate easily in and out with standard machinery. The one exception to this may be heating. Water is hundreds of times denser than air, and as such is a far more efficient sponge for soaking heat away from a structure. And given the hotel's great many transparent surfaces, heat loss can easily become a major expense issue. Though design details on the hotel itself are sparse, it's certain that the structure will be as well-insulated as the design allows, and that the transparent surfaces located in the suites, restaurant, and observation ports will not only be thick and shatter-resistant, but will have a number of advanced thermal insulating properties as well.
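For a rough sense of scale, here is a minimal conductive heat-loss sketch for a single suite's viewing windows. The window area, acrylic thickness, and temperature difference are purely illustrative assumptions, not figures from the Poseidon design:

# Illustrative conductive heat loss through one suite's acrylic windows.
# Every number here is an assumption chosen for illustration only.
k_acrylic = 0.19     # W/(m*K), typical thermal conductivity of acrylic
thickness = 0.10     # m, assumed window thickness
window_area = 20.0   # m^2 of glazing per suite (assumed)
delta_t = 8.0        # K between the suite interior and the lagoon (assumed)

# Treat the outer surface as pinned at water temperature: convection in
# water is so effective that conduction through the acrylic dominates.
heat_loss = k_acrylic * window_area * delta_t / thickness
print(f"Heat loss through windows: ~{heat_loss:.0f} W per suite")
# ~300 W per suite, around the clock, just through the glazing -- one
# reason heavy insulation everywhere else in the design matters.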
The Hydropolis underwater hotel is a project of truly stunning scope taking shape off the shores of Dubai. The largest underwater dwelling yet devised, the 220-suite structure will consist of three main sections: a "land station", basically a large surface platform, built directly above the structure to welcome visitors; connecting tunnels to take guests via tramway to the main structure; and the main hotel itself, which is described as the world's foremost submarine leisure complex. The hotel will feature a grand ballroom, theme suites, restaurants, bars, conference rooms, and even a fully-functioning marine biology research facility.
As with the Poseidon hotel, since the facility is connected to the surface, there is no need for any special environmental support facilities. It's larger and built a bit more compactly than the Poseidon, so it may be somewhat more efficient at heat retention. Still, keeping such a large facility at a comfortable temperature for its guests will always be an ongoing concern for its operators.
Though underwater hotels currently and in the near future are likely to remain in the realm of the affluent and well-to-do, they do point the way to how practical, easy-to-access subaquatic habitats could be produced on a larger scale. As the techniques used in the construction of these hotels are proven and become more commonplace, overall costs and construction time should diminish as well, at least to the point that other communities and corporations might be willing to invest in similar projects. Underwater hotels of every stripe and type may begin popping up all over the globe, their number and competition driving down cost. In a few decades, they could be handling not only the leisure time and convenience of the wealthy, but that of the more common folk as well.



UNDERWATER GLIDERS


Underwater Gliders
Tech Level: 10
Underwater gliders are a type of autonomous underwater vehicle (AUV) designed for long-endurance oceanic missions that may last many weeks or months. Compare this to more common AUVs, which usually have operational durations measured in hours if not tethered to a surface ship.
The concept of oceanic gliders can be traced to DARPA and to oceanographer Henry Stommel in the late 1980s. In 2003, a working prototype was created by the Webb Research company, and the technology has been used for research by a number of institutions since. In 2009, an underwater glider created by a team from Rutgers University completed a trans-Atlantic crossing, the first AUV ever to do so, after a 221-day voyage.
Underwater gliders, like their sky-borne cousins, mostly follow the ebb and flow of their medium, flowing with oceanic currents. However, they are not complete dead weights, like buoys. They can use their wings and alterations in their buoyancy to slowly maneuver themselves along at the dizzying speed of about half a knot.
These craft take advantage of thermal stratification, the phenomenon of water being warmer at the surface than at depth. The glider's engine contains tubes of wax that melt and expand in warm surface water and freeze and contract in the cold found at depth. This volume change is used to pump oil between an internal reservoir and an external bladder. Pushing oil into the external bladder increases the glider's displacement and buoyancy, causing it to climb; drawing the oil back inside reduces its buoyancy, causing it to dive. These slow dives and climbs, translated into forward motion by the wings, allow the glider to very gradually move through the water independently of whatever current it may be following.
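A minimal sketch of that buoyancy trade, using invented numbers (a roughly 52 kg glider and a half-liter external bladder) rather than the specifications of any real vehicle:

# How a small buoyancy-engine volume change flips a glider between
# sinking and rising. All figures are illustrative assumptions.
RHO_SEAWATER = 1025.0   # kg/m^3, typical near-surface density
G = 9.81                # m/s^2

glider_mass = 52.0      # kg (assumed)
hull_volume = 0.0505    # m^3 displaced with the external bladder empty (assumed)

def net_force(bladder_liters):
    """Net vertical force in newtons; positive means the glider climbs."""
    displaced = hull_volume + bladder_liters / 1000.0
    return RHO_SEAWATER * displaced * G - glider_mass * G

for v in (0.0, 0.25, 0.5):   # liters of oil pumped into the external bladder
    f = net_force(v)
    state = "climbs" if f > 0 else "dives"
    print(f"{v:.2f} L in external bladder -> net force {f:+.1f} N, glider {state}")
# Half a liter of oil is enough to swing the glider from gently sinking
# to gently rising, which is all the thermal engine has to accomplish.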
At the surface portions of its cycle, one of the glider’s wings, each fitted with an antenna, is canted up fully out of the water to transmit data and to receive navigational and GPS signals.
How deep a glider can dive depends on the model, but ranges between 200 and 1500 meters. However, deep-diving gliders are under development which will be able to achieve depths down to 3300 meters.
The craft uses arrays of onboard batteries to control its pitch and orientation, as well as to power its sensors and radios. When the idea of an underwater glider was first being bandied about in the 1980s, it was thought that thermal differentials in the water could help recharge the batteries, allowing the gliders to potentially remain at sea indefinitely. Unfortunately, no one has yet developed this capability, though research is ongoing.
Underwater gliders typically carry sensors such as sonars, hydrophones, thermal sensors, and others in order to monitor not only the ocean environment but wildlife as well. Because the gliders make almost no operational noise, they are less likely to drive away animals such as whales or dolphins. Scientists use the data collected by gliders for a variety of purposes, which include better understanding of the interactions between the ocean and atmosphere, and of potential climate change impacts on marine ecosystems.
While underwater gliders will no doubt provide a great deal of valuable data on Earth's oceans in the decades to come, in the far future they may also be used to explore bodies of water on other worlds, such as Europa, where their low power requirements and long endurance would prove invaluable.

Cross section of the SPRAY underwater glider.


The dive cycle of an underwater glider. Times and depths vary with different models.


TIDAL POWER


An array of offshore tidal turbine generators. Image (c) MCT ltd.

Tidal Turbine
Tech Level: 11
Circular Tidal Barrage
Tech Level: 11

The ocean's tides contain a tremendous amount of energy that goes virtually untapped. It is estimated that the US alone has about 2,100 terawatt-hours of energy a year tied up along its coasts. About one-eighth of that, some 260 terawatt-hours, has been cited as recoverable with minimal environmental impact. This is equivalent to the output of all of the US's hydroelectric dams.
Tidal generators have been in limited use in some locations for decades. For example, the installation at the La Rance Estuary in northern France is basically an advanced, specialized dam, called a "barrage", built across a river mouth. When the tide comes in or goes out, water flows through tunnels in the barrage, driving electrical turbines. The La Rance barrage is able to generate 240 megawatts of power.
The main disadvantage of this kind of tidal generator is that it can generate power only when the tide is coming in or out, or for only about 10 hours a day. Environmental concerns also exist in placing a large dam at the mouth of any river. However, the huge advantage of tidal power is that once the initial facility is built, the power is free, dependable, and renewable.

TIDAL TURBINES
Tech Level: 11
A more efficient way of harnessing tidal power is with tidal turbines. Resembling upside-down wind turbines, these are buoyed well offshore and submerged in at least several meters of water. Under the water, currents constantly flow from tidal forces, providing the turbines with a near-inexhaustible source of power twenty-four hours a day.
The constant flow of currents causes propeller-like turbines to turn, generating power. Because water is much denser than air, tidal turbines can be made smaller than wind turbines and still deliver as much energy. A single turbine in a "proof of concept" demonstration offshore of Britain produced 300 kilowatts worth of power. Turbines would be deployed in large arrays called "farms" to generate many megawatts of energy for onshore consumers.
Most tidal devices work best in a 4-5 knot current; anything less would be uneconomical in energy-production terms, while anything much faster would stress the turbine blades and shorten the turbine's working life.
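The underlying scaling is the same as for wind power: extractable power is about one-half times the fluid density times the rotor's swept area times the cube of the flow speed, times an efficiency factor. A short illustrative calculation (current speed, wind speed, and efficiency are all assumed values) shows why a tidal rotor can be so much smaller than a wind rotor of the same rating:

import math

def rotor_area_for(power_w, density, speed_ms, cp=0.4):
    """Swept area needed to extract power_w from a flow, assuming efficiency cp."""
    return power_w / (0.5 * density * speed_ms ** 3 * cp)

target = 300e3   # W, matching the demonstration turbine's quoted output

water_area = rotor_area_for(target, density=1025.0, speed_ms=2.5)   # ~5-knot current
air_area = rotor_area_for(target, density=1.225, speed_ms=12.0)     # a fresh breeze

for name, area in (("tidal", water_area), ("wind", air_area)):
    d = 2 * math.sqrt(area / math.pi)
    print(f"{name}: swept area ~{area:.0f} m^2, rotor diameter ~{d:.0f} m")
# Roughly an 11 m rotor in a 5-knot current versus a ~30 m rotor in a
# 12 m/s wind for the same 300 kW -- the density of water does the work.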
There are also vertical-axis tidal turbine designs, resembling oversized eggbeaters or enclosed turnstiles.

CIRCULAR TIDAL BARRAGE
Tech Level: 11
In order to further utilize the surface-barrage concept while greatly minimizing the impact of blocking the mouths of rivers, it has been proposed to build a circular tidal barrage "reef" just offshore of river deltas. These reefs would not interfere with the flow of the river or with the migration of wildlife the way the La Rance facility does, but would still take advantage of the more intense surface flow caused by tidal currents. During high tide the water would flow into the large circular area cordoned off by the barrage reef, and during low tide the water would flow out.



SUPERCAVITATING TORPEDO


A supercavitating Shkval torpedo fired from the bow of a Russian submarine.

Supercavitating Torpedo
Tech Level: 10
Supercavitating Bullets
Tech Level: 11
Supercavitating Mooring Missile
Tech Level: 11
Supercavitating Anti-Torpedo Dart
Tech Level: 12
Transonic Torpedo
Tech Level: 13
The possibility of supercavitating manned vessels will be discussed in a future article.
The greatest limiting factor to the effectiveness of conventional torpedoes is the very medium through which they travel. The density of water induces a great deal of drag on even the sleekest aquatic projectile. Moreover, push a torpedo too fast and it will inevitably veer off course as water drag, which grows with roughly the square of its speed, builds up on it.
Thus, torpedoes are typically designed for maneuverability and endurance rather than speed, zooming through the water and homing in on their targets. Most modern torpedoes have ranges limited to a few thousand meters, though some can reach over 30 miles, and are limited to top speeds of about 60 mph.
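A rough drag calculation makes the problem concrete. The drag coefficient below is an assumed value for a streamlined body; the 21-inch diameter is simply the common torpedo standard:

RHO_SEAWATER = 1025.0        # kg/m^3
diameter = 0.533             # m, the standard 21-inch torpedo diameter
area = 3.14159 * (diameter / 2) ** 2
cd = 0.1                     # assumed drag coefficient for a streamlined hull

def drag_and_power(speed_mph):
    v = speed_mph * 0.44704                         # mph to m/s
    drag = 0.5 * RHO_SEAWATER * cd * area * v ** 2  # newtons
    return drag, drag * v                           # drag force, propulsion power

for mph in (60, 230):
    drag, power = drag_and_power(mph)
    print(f"{mph} mph: drag ~{drag/1e3:.0f} kN, power ~{power/1e3:.0f} kW")
# Going from 60 to 230 mph multiplies drag by about 15 and the required
# power by nearly 60 -- unless, as with supercavitation, the hull stops
# touching the water at all.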

SUPERCAVITATING TORPEDO
Tech Level: 10
The nose of a supercavitating torpedo uses gas nozzles that continually expel an envelope of water vapor around the torpedo as it speeds through the ocean. This bubble of gas--a 'super cavity'--prevents the skin of the torpedo from contacting the water, eliminating almost all drag and friction and allowing the projectile to slide seamlessly through the water at great velocity. Some people have described supercavitating torpedoes as the first true underwater missiles.
The first such weapon in this class, the Shkval ("Squall"), was in development by the Soviet Union throughout the latter half of the Cold War but was not recognized in the West until the 1990s. Using powerful solid rocket motors, the Shkval is capable of speeds exceeding 230 mph, over four times the velocity of most conventional torpedoes. The Shkval also has a reported 80% kill rate at ranges of up to 7000 meters.
The US Navy is seeking to build its own version of the Shkval, but one with a much higher velocity. This is mostly in response to Russia selling stripped-down versions of the Shkval on the open international weapons market. However, a US combat-ready version is not expected for at least another ten years.
The technology does have one great weakness--maneuverability. The bubble of water vapor generated by the gas nozzles tends to become asymmetrical and breaks up along the outer side of the turn if the torpedo alters its course significantly. At the speeds such a torpedo would typically be travelling, the sudden re-assertion of water pressure and drag on it could not only severely knock it off course, but may even rip the projectile apart.
A new, improved version of the Shkval has been reported in use by the Russian Navy, one that can maneuver and track its intended target. However, it was also reported that in order to do so, this improved Shkval had to slow down significantly once in the general area of the target so it could scan and home in on its prey like a normal torpedo. While a genuine improvement, the true goal of current research is to have the torpedo maneuver and home in on a target without the need to decrease its velocity. Both Russian and US Navy researchers are striving toward this end.
One means of making sure the gas bubble does not break down during a turn would be to have the gas-ejection nozzles pump more water vapor into the side of the bubble on the outside of the turn, providing the torpedo with a thick enough "buffer" for the turn without any part of it exiting the cavity. Another option might be to ionize the vapor used in the torpedo's bubble and use a magnetic field to hold the bubble cohesive while it turns.
Another weakness of the technology is that the Shkval is both very noisy and shows up very readily on sonar. Whereas some long-range conventional torpedoes might be able to sneak relatively close to their targets before going active, the target of a supercavitating torpedo will know right away that it's in the bullseye. However, the supercavitating torpedo may also be travelling fast enough to give its intended victim much less time to take effective countermeasures.
A drawback that has been pointed out in several articles is that the Shkval and its peers have ranges of only several kilometers, whereas a number of modern torpedoes, like the US Mark 48, have ranges of over 30 nautical miles. It's possible that a US submarine could simply sit outside a Shkval-equipped submarine's range and pound on such an enemy with impunity.
The downside to that strategy is, of course, that most subs are unlikely to be equipped only with supercavitating projectiles. Like most modern combat subs, they will likely carry a variety of different weapons for different purposes, and the Shkval will just be one of the weapons in the arsenal. One can assume that at long ranges they will employ conventional torpedoes, but once within the effective kill-range of a Shkval, they will use their supercavitating weapons to the fullest possible effect. Also, it is almost a certainty that all parties engaging in research are striving to increase the weapon's range as much as possible.
Submarines, even with minimal warning, can evade a supercavitating torpedo by blowing some ballast and quickly ascending. However, an enemy submarine captain may anticipate this, and may launch a second or even a third Shkval simultaneously, aimed above the target submarine, in order to keep the enemy vessel from attempting this maneuver.

SUPERCAVITATING BULLETS
Tech Level: 11

The first US weapon system that is likely to employ supercavitating technology is the RAMICS (for Rapid Airborne Mine Clearance System), designed to be outfitted on naval helicopters for mine-clearing duties. The projectiles are designed to fly true both in air and water, and are fired from a specially-modified, 30 mm rapid-fire gun with advanced targeting assistance. Unlike torpedoes, the rounds have no independent propulsion system but rely on the velocity imparted to them by the gun. This would allow the aircraft to take out marine mines both on the surface and underwater, as the bullets would be able to enter the water without deflection or a significant loss in velocity. Surface ships may also deploy gun turrets using supercavitating bullets on their stern, to help defend against wake-tracking torpedoes.

SUPERCAVITATING MOORING MISSILE
Tech Level: 11
One non-military application for this technology is to use it for anchor mooring lines for deep-water structures such as oil rigs, aquafarms, buoys, anchored ships, and artificial islands.
Shooting a mooring line onto a seabed with a conventional projectile tends to be ineffective. A normal projectile simply can't retain enough momentum to penetrate far enough into the seabed to make it more effective than just dropping the anchor and letting it drag until it catches. Sending divers or teleoperated remote vehicles down to drill the anchor in properly can be both expensive and time-consuming.
The answer is to use a smaller version of a supercavitating torpedo, as the missile would not only be travelling at high speed but would not be slowed appreciably by water drag. It would be able to hit the ocean floor at hundreds of miles an hour, with a "warhead" designed for deep kinetic penetration of rock in order to embed the anchor line it would be trailing. Using this technology, even large deep-ocean platforms would be able to securely anchor themselves in a very short amount of time, perhaps even within a few minutes.
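A crude kinetic-energy comparison, with the mass and speeds invented purely for illustration, shows why the supercavitating approach penetrates where a dropped anchor merely drags:

def kinetic_energy(mass_kg, speed_ms):
    return 0.5 * mass_kg * speed_ms ** 2

penetrator = kinetic_energy(50.0, 100.0)   # assumed 50 kg dart at ~100 m/s (~220 mph)
dropped = kinetic_energy(50.0, 3.0)        # the same mass free-sinking at ~3 m/s

print(f"Supercavitating dart: ~{penetrator/1e3:.0f} kJ")
print(f"Free-sinking anchor:  ~{dropped:.0f} J")
# Roughly a thousandfold difference in impact energy, which is what lets
# the dart bury itself deep into the seabed instead of skittering along
# the bottom until it snags.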

SUPERCAVITATING ANTI-TORPEDO DART
Tech Level: 12
A proposed defensive system for submarines, these small mini-torpedoes are launched, either individually or in a small swarm, when an incoming enemy torpedo is detected. Equipped with high-efficiency, high-specific-impulse rocket motors, their main advantage is raw speed. They zip toward the incoming bogey, and either knock it out through direct kinetic impact, or explode in the incoming torpedo's path, creating a cloud of dense shrapnel designed to tear the enemy torpedo apart as it passes. Surface vessels may also use versions of these darts to take out mines.

TRANSONIC TORPEDOES
Tech Level: 13
In 1997, the Navy tested a supercavitating projectile that reached 5,082 feet per second, becoming the first manmade vehicle to exceed Mach 1 underwater. As supercavitating technology matures and the problems with maneuverability are solved, research will turn toward making the projectiles faster and more lethal, up to the point that torpedoes can routinely exceed Mach 1. Greatly extended ranges, as the projectiles use more sophisticated engines, are also likely.
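For context, a quick unit check on that 1997 figure; the speed of sound in seawater varies with temperature, salinity, and depth, so the value used here is only a typical mid-range one:

test_speed_ms = 5082 * 0.3048    # convert ft/s to m/s
sound_in_seawater = 1500.0       # m/s, typical mid-range value

print(f"Projectile speed: {test_speed_ms:.0f} m/s")
print(f"Mach number underwater: ~{test_speed_ms / sound_in_seawater:.2f}")
# About 1,549 m/s, or just over Mach 1 in seawater -- a far higher bar
# than Mach 1 in air, where sound travels at only ~340 m/s.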
Supercavitation technology promises to greatly change the nature of modern naval warfare, for submarines especially. Instead of the patient cat-and-mouse game that marked Cold War engagements, we would instead have heated, quick exchanges of transonic torpedoes underwater, making the confrontation much more akin to aerial combat.



MEGASHIPS



The proposed Freedom ship, which would be three times the size of the world's current largest ship and be composed of half a dozen supertanker-sized barge-like modules.

Megaships
Tech Level: 12
Mobile Modular Seabase
Tech Level: 12

MEGASHIPS
Tech Level: 12
Megaships are an outgrowth of ever-larger ship designs, exemplified by super-carriers like the USS Enterprise and luxury liners like the Queen Elizabeth II, each designed to hold many thousands of passengers and crew. Ultra-large cargo vessels are also coming online that will easily be able to match these ships in sheer size. A new class of megaship, built on a scale dwarfing even these vessels, is being planned that could become the sea-borne equivalent of arcologies.
The Freedom Project, the most well-publicized civilian endeavor along those lines, envisions a massive vessel 4,500 feet from bow to stern, making it nearly a mile long and more than four times the length of the Queen Mary. It is designed to hold no fewer than 100,000 people at any one time, including 40,000 full-time residents and 20,000 permanent crew, with the rest as passengers and visitors. The builders plan on having, among other facilities, first-class hotel accommodations, a casino, a fully-equipped hospital, a school system, and the world's largest duty-free shopping mall.
One would think a ship that size would crack in two just from its sheer bulk. However, the Freedom ship would not really be one solid unbroken hull, but over a dozen separate aircraft-carrier-sized modules. Each module floats independently, much like a ten-story-tall barge, joined to the others at many decks via seamless gimbaled joints but maintaining enough flexibility along its dimensions to absorb and dampen wave effects.
The ship, the groundwork for construction of which is already being put into place, would slowly circumnavigate the globe every three years, stopping not only at major ports but also at many islands accessible only by sea. It would be powered by 100 individual diesel engines each capable of 3700 horsepower.
If megaships such as the Freedom Project become a reality, the oceans could become dotted with such vessels, providing potentially stable platforms and ports of call even thousands of kilometers from shore. Between megaships and artificial islands, another Tech Level 12 innovation, colonization of the oceans may become a reality as never before in history, with a network of megaships and floating ports of call independent of land-bound nations.
One of the major drawbacks of a megaship is that there are very few, if any, ports or harbors constructed to be able to accommodate such behemoths. This could make maintenance and repairs problematic and indeed may be one of the major limiting factors in whether megaships become a widespread technology. However, the economic allure of these megaships could also prove a powerful incentive to modify ports to cater to them.
Such megaships would also be able to move only very slowly, even under the best conditions.

MOBILE MODULAR SEABASE
Tech Level: 12
The US military is also looking into developing a vessel of similar size to the Freedom Project ship, but one whose individual modules would sail separately and then assemble into a mega-sized ocean-going air base. Its assembled size and configuration would be similar to that of the Freedom Project ship. This would allow the individual modular vessels to move much more quickly to a destination, and assemble into a long mega-carrier on the spot as needed.
One advantage of this idea over the standard megaship is that the individual module vessels can be repaired and undergo routine maintenance at many already-existing ports.



CONCRETE SUBMARINES


A diagram of a proposed Russian concrete submarine. Image copyright (c) Popular Mechanics

Concrete Submarines
Tech Level: 12
The very name elicits snickers, as if it were a badly contrived joke. Yet recent advances in submarine and torpedo technology have combined to make concrete submarines, or c-subs for short, a viable and potentially deadly weapon system.
Concrete has been used in ship building for decades now, in barges, houseboats, and sailboats. During World War II, the US government commissioned the building of experimental concrete naval vessels. In the 1990s, the Russian navy initiated serious design studies on c-subs.
The main advantage of concrete-hulled submarines is that they are much cheaper and easier to manufacture than steel-hulled vessels. Concrete, even in its advanced forms, is dirt-cheap compared to reinforced steel. Another plus is that concrete hulls are somewhat more resistant to crushing depths--they theoretically can dive deeper than the 1,800-foot depth limit of most steel-hulled submarines. Whether in practice a concrete-hulled vessel would actually be able to withstand greater depth pressure than a steel-hulled vessel remains to be seen.
Another significant factor in building c-subs is that concrete hulls are difficult for sonar systems to pick out from the natural rock formations and sandy material on the ocean floor, where they will most likely be deployed (see below.)
Of course, the main disadvantage here is that c-subs are not naturally buoyant. They are, in essence, large hollow rocks with motors attached. A c-sub would be equipped with powerful electric turbine pumps that would propel water downward, allowing it to ascend and maneuver at the surface.
These characteristics combined allow c-subs a unique method of engaging enemies. A c-sub would maneuver into a harbor or well-trafficked sealane, sink to the bottom, and lie stealthily in wait until an enemy ship passes overhead. It would then launch a vertical rocket-propelled torpedo--the Russian-developed Shkval ("squall") described above, which can travel at over 230 miles per hour, over four times as fast as conventional torpedoes.
The Russian design for a c-sub called for a small vessel about one-tenth the size of a modern attack sub that could be crewed by no more than six men. Because of their much lower price tag compared to steel-hulled subs and their sneak-attack damage potential, c-subs are often seen by strategic analysts as the means by which smaller, poorer countries could successfully counter the more powerful navies of the more dominant nations. Squadrons of c-subs could wreak havoc on major ports and major shipping routes if deployed effectively.
A potential flaw with these vessels is that concrete tends to degrade chemically when exposed to the salt water of the ocean environment, especially if it has to undergo many significant changes in pressure. The submarines would have to be inspected at every opportunity to monitor potential wear problems with the hull. However, different mixes and formulas of concrete can be developed to better resist these changes, as can numerous protective polymer coatings.



ARTIFICIAL GILLS


A mask-only artificial gill unit from the Traveller RPG universe. Art copyright Mike Jackson.

Dissolved Oxygen Artificial Gill
Tech Level: 12
Oxygen-Cracking Artificial Gill
Tech Level: 13
The aqualung and SCUBA gear created a revolution in undersea exploration, opening up vast areas of the planet for human exploitation that had been difficult to get to previously. The technology has gone through a tremendous amount of refinement and advancement since it was first introduced decades ago, and it seems poised to undergo another major change with the advent of artificial machine gills.
Basic diving systems depend on air tank capacity for their endurance; more advanced rebreather units recycle used air through a CO2 scrubber and other filters. For both systems, the upper limit usually hovers around four to six hours of life support under the surface. Artificial gills try to bypass these limits by taking oxygen directly from the water. Ideally, they would eliminate the need for heavy compressed air tanks, since a fresh supply of breathable air would always be available.
Water of course already contains a great deal of oxygen. Not only is it the primary component of water molecules, but most seawater contains a small amount, about 1.5% by volume, of dissolved air. Simple in concept, but difficult to engineer, an artificial gill would extract the oxygen from the water by one method or another, and convert it into a breathable air mixture for the diver.
Both types of artificial gill described here would only be able to work at standard diving depths, down to a maximum of fifty meters at most if used with common gas mixtures. Beyond that, pressure issues make the gas mixtures provided by the basic gill units potentially dangerous for the diver to breathe.
Though this article generally refers to the use of this technology by divers, the systems described here can also be scaled up to provide breathable air to sub-surface watercraft and habitats.

DISSOLVED OXYGEN ARTIFICIAL GILL
Tech Level: 12
Currently being developed by independent Israeli inventor Alon Bodner under the project name LikeAFish, this type of gill uses a centrifuge system to lower the pressure of seawater and release the free oxygen dissolved in it. The process is somewhat similar in general concept to opening a soda can; the act of popping the top causes the pressure within to drop, letting the dissolved carbon dioxide bubble up out of the soda.
Water is drawn in through one or more intakes, sent through the centrifuge system to bubble the dissolved air out of it, then expelled. The liberated oxygen in the gill is diverted to an airbag for use by the diver. The exhaled air is also expelled into the water unless part of a rebreather system.
The main advantage of this system is that it can potentially never run out of air as long as power is available. Current portable batteries could probably only provide about an hour or so of energy, but the diver could be connected to a surface boat or to a submerged generator by a long flexible umbilical, which could supply power for as long as needed. However, this would greatly restrict the freedom of movement that modern divers enjoy.
The main disadvantage of this system is that about 200 liters of water have to be processed per minute to provide enough oxygen for a diver under normal usage conditions. For peak physical activity, the unit may be equipped with a heart monitor and other biosensors, which would increase the flow of water through the system in order to keep up with the demands of the diver's body. This would mean an increase in water flow of three times or more, which in turn would require faster spin speeds and drain the battery quicker.
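The 200-liter figure is easy to sanity-check against the dissolved-air content quoted earlier. The oxygen fraction and the diver's consumption rate below are rough assumptions, not measured values:

water_flow_lpm = 200.0            # liters of seawater processed per minute
dissolved_air_fraction = 0.015    # ~1.5% dissolved air by volume (from the article)
oxygen_fraction = 0.21            # assume roughly the atmospheric proportion
extraction_efficiency = 1.0       # optimistic: every liberated bubble is captured

o2_lpm = water_flow_lpm * dissolved_air_fraction * oxygen_fraction * extraction_efficiency
print(f"Oxygen recovered: {o2_lpm:.2f} L/min")
# ~0.6 L/min, roughly what a resting diver consumes -- so 200 L/min is
# just barely enough, and heavy exertion forces the pump to run several
# times faster, exactly as described above.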
Another potential problem would be bulk. With current and near-future technology, a centrifuge system that can successfully process hundreds of liters of water per minute will be neither lightweight nor compact, especially if it needs to carry along a mobile power source as well. It might prove somewhat lighter to carry in a backpack unit than modern rebreather units, but the system's weight is not the only consideration in the unit's comfort.
Because the heart of the unit is a rapidly spinning centrifuge, a lone diver will also run into gyroscopic stability issues. A spinning rotor resists being tilted and responds to any attempt to turn it by twisting about a different axis--the same effect that keeps a bicycle wheel from wobbling while in motion. On the back of a diver, the centrifuge would constantly be tugging gently at the diver, nudging him away from where he wants to be. Even though the force at any one moment would be very slight, it can easily build up over time if the diver is inattentive. One solution would be to add a second centrifuge, counter-rotating relative to the first, with both handling equal loads.
Another issue raised by experts is the existence of dead or depleted zones in the water, which may have reduced dissolved air content. The unit may have onboard sensors that monitor oxygen production and warn the diver if it enters such a zone, as well as a small back-up tank of compressed air for such an emergency.
Because of its very large power supply requirements, this type of artificial gill may be best deployed on a large scale in underwater habitats as opposed to personal breathing rigs.

OXYGEN CRACKING ARTIFICIAL GILL
Tech Level: 13
A tremendous amount of oxygen already exists in water. The problem is, it's usually bound up with hydrogen atoms in the water molecules themselves. An oxygen-cracking gill is a much more complex device than a dissolved-oxygen gill, as it involves 'cracking' water molecules apart with electrical current to get at the oxygen within. This type of gill is seen occasionally in science fiction, usually as small, compact face-mask units that may or may not be connected to small supplementary tanks and a power source.
These devices make judicious use of advanced semi-permeable membranes, which allow molecules of one type to pass through, but not others. They are for all intents and purposes nanoscale sieves.
These units would use a portable power supply to separate the hydrogen and oxygen atoms of water molecules via electric current. A semi-permeable membrane would allow the hydrogen to bubble out of the system but retain the oxygen. The freed oxygen is then fed into a holding tank and combined with a diluent gas (typically air or nitrox, from a carried tank) which the diver uses for breathing. Another type of semi-permeable membrane allows exhaled carbon dioxide to escape while retaining the other gasses in the system. The gasses are fed back into the system through scrubbers and reused, much like in modern rebreather systems.
The oxygen-cracking apparatus itself is used mostly to supplement the recycled oxygen, and thus eliminates the need for heavy oxygen tanks holding hours' worth of the gas. Instead, the gill-fed oxygen bottle would contain only enough reserve for an emergency, typically fifteen minutes' worth of breathing or so. This would allow rebreather systems to be built much lighter and more compact. However, bubble-free systems, such as special-ops military rigs, will still need waste gas holding tanks and more extensive scrubbers, so the savings in bulk and mass there may be smaller.
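A back-of-envelope energy estimate suggests why the cracking unit works as a supplement rather than the sole oxygen source. The electrolyzer efficiency and the diver's oxygen demand are assumptions; the energy needed to split water is a textbook value:

DELTA_H_PER_MOL_H2O = 286e3   # J, enthalpy to split one mole of liquid water
MOLAR_VOLUME = 24.0           # liters of gas per mole near room temperature

o2_demand_lpm = 1.0           # assumed oxygen consumption of a working diver
efficiency = 0.7              # assumed overall electrolyzer efficiency

mol_o2_per_min = o2_demand_lpm / MOLAR_VOLUME
# Splitting 2 H2O yields 1 O2, so each mole of O2 costs about 2 x 286 kJ
energy_per_min = mol_o2_per_min * 2 * DELTA_H_PER_MOL_H2O / efficiency
power_watts = energy_per_min / 60.0

print(f"Continuous electrical power needed: ~{power_watts:.0f} W")
# Roughly 570 W with these assumptions -- on the order of 3 kg of modern
# lithium cells (~200 Wh/kg) per hour of diving if the gill supplied all
# of the diver's oxygen, which is why pairing it with a rebreather loop
# makes sense.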
Mask-only systems would be used for short-term casual dives, most likely for tourism or recreational purposes, but perhaps for emergency uses as well. Without supplemental air or nitrox tanks, the gill would produce only pure oxygen for breathing, meaning it could only be used safely to a maximum depth of nine meters, beyond which the pressure makes pure oxygen toxic to breathe.
An oxygen-cracking artificial gill will continue to work as long as its battery holds out and its intakes remain in contact with water.



MEGASTRUCTURES


ARCOLOGIES



Paolo Soleri's Babel arcology concept.

Urban Arcology
Tech Level: 12
Coastal Arcology
Tech Level: 13
Dam Arcology
Tech Level: 13
Some arcology-like structures, such as orbital colonies, floating cities, and undersea colonies, will be addressed in their own sections.
The word "Arcology" comes from a fusion of "architecture" and "ecology," a term first coined and discussed at length in Paolo Soleri’s 1969 book, Arcology: City in the Image of Man. Arcologies have since entered into many serious discussions on the future of cities, as a possible inevitable evolution of today’s ever-larger skyscrapers. They have also been seen in many science-fiction sources, such as the movie Blade Runner, the cyberpunk worlds of William Gibson and Masamune Shirow, the novel Oath of Fealty by Larry Niven and Jerry Pournelle, and even in video games such as SimCity 2000, among others. The largest and most detailed description of an arcology (complete with level-by-level diagrams) is Chicago Arcology, put out in 1991 by Iron Crown Enterprises for their Cyberspace RPG.
Arcologies are envisioned as optimal human urban environments: large, self-enclosed mega-skyscrapers that contain everything a human being could ever need. They would be miniature cities unto themselves, engineered for maximum comfort and efficiency. Some see arcologies as the remedy for the massive urban and suburban sprawls that are overwhelming the industrialized world; others see them as urban monstrosities that would rob cities of their culture and individuality.
Whatever their effectiveness in solving the physical problems of ever-burgeoning populations, exactly how they would affect the social problems of cities remains unknown. Proponents see arcologies not only as architectural triumphs but as grand social experiments as well, re-engineering human social interaction to a more dynamic and egalitarian level. Detractors see them as widening the already huge gap between the haves and have-nots, as the wealthy and corporate elite would retreat into the high-tech arcologies and leave the poor to fend for themselves in ever-decaying sprawls. This is in fact a very common motif in Cyberpunk science fiction.
Chances are human beings, being what they are, would bring many of both their virtues and ills into the arcologies with them. Arcologies would have sections for the rich and not-so-rich, have "good" neighborhoods and "bad", be run more often than not by leaders influenced by one special interest group or another, be at least occasionally riddled by crime, disaster, and scandal, etc, etc. But there would also be a very strong sense of community, security, neighbor interaction, and even optimism that is absent in many cities today. Designers and social engineers will no doubt try their best to emphasize the positive, but chances are the unpleasant parts of human social interaction will never be eliminated completely.
Though arcologies today are seen primarily as an urban development, arcologies can be built for nearly any environment. Sea-floating arcologies, cliffside arcologies, farmland arcologies, and even dam arcologies have all been proposed. Arcologies are also seen as a necessary precursor to building orbiting space colonies, deep sea colonies, or colonies on other planets such as Mars.
BASICS
First, a break-down of what an arcology would be required to provide for its population:
Living Space: Because space within an arcology is used much more efficiently, an arcology would require about 2% as much land as a modern city of equivalent population. Actual living space per person would be about the same, but the support structure can be built on a much more efficient three-dimensional scale, and the need for road infrastructure (which can consume as much as 60% of the space of modern urban sprawls) would be eliminated altogether.
The interior of arcology residential areas would probably be somewhat modular, with reconfigurable walls and fixtures, in order to compensate for dynamic changes in the needs of the population. The interior design will also be built to be as human-friendly as possible, with wide corridors, friendly colors, and many plants, murals, and artworks to please human aesthetic sense. Many mirrors and windows would also be worked into the design in order to create an illusion of space. With so many people living in such a compact space, the designers will do everything they can to offset psychological problems that could arise from such conditions, such as claustrophobia, malaise, and anxiety.
As a general rule, single, unmarried individuals would be given efficiencies with one to three rooms, whereas families would have between three and ten rooms, depending on their size. In both instances the space allotted would depend greatly on the Arcology’s exact design, the inhabitant’s social status, and prevailing population density.
The one big disadvantage of arcologies is that they are not readily expandable as modern cities are. Arcologies would be optimized for a certain population; it may be a few thousand or many millions, but once this number is exceeded people have to leave or the artificial ecology of the structure will start to suffer.
Open Areas: Arcologies are often designed with many open areas, referred to in general terms as parks, where the population can gather and/or spend recreation time. In fact, some arcologies are built around a single enormous open area that may be up to a quarter or a half mile square. Rooftop gardens could also serve as parks in some designs.
These open areas are extremely important to the psychological well-being of the inhabitants; human beings do not thrive well dwelling constantly in constricted environments such as apartments and hallways, no matter how user-friendly designers try to make them.
A number of arcologies are designed to make optimum use of sunlight, with windows, mirrors, and open areas designed in such a way as to bring as much natural light into the arcology as possible.
Power: Like modern cities, arcologies will almost certainly require their own dedicated power sources. Fossil fuels, with their accompanying pollution, would be extremely disadvantageous for such a compact community. Nuclear and solar power plants are often cited as solutions, but alternatives such as wind and tidal power are mentioned as well, especially for coastal and floating arcologies. Arcologies built as dams could also easily solve this problem. More exotic solutions, such as geothermal taps and fusion reactors, may be available for advanced arcologies.
Water: An arcology would by necessity have to provide an enormous amount of water to its residents every day. An arcology of 400,000 residents would require nearly 1 million gallons of water per day just to meet drinking requirements. For things such as showers, janitorial needs, hydroponics and such, multiply that by 20.
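Those figures check out with simple arithmetic; the per-person allowances below are assumptions chosen to match the numbers in the text:

residents = 400_000
drinking_gal_per_person = 2.5    # assumed drinking/cooking allowance per day
total_multiplier = 20            # the article's factor for showers, cleaning, farming, etc.

drinking_total = residents * drinking_gal_per_person
overall_total = drinking_total * total_multiplier

print(f"Drinking water: ~{drinking_total / 1e6:.1f} million gallons/day")
print(f"All uses:       ~{overall_total / 1e6:.0f} million gallons/day")
# ~1 million gallons/day for drinking and ~20 million overall -- about
# 50 gallons per resident per day, in line with typical municipal use,
# which is why reservoirs and aggressive recycling loom so large in
# arcology designs.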
Unless the structure is built right next to a body of water, an in-arcology reservoir would be an unavoidable necessity. The reservoir may be one centralized tank, or it may take the form of an open artificial lake around which the arcology is built. It may also be decentralized between many tanks and/or "ponds" in many of the arcology's parks.
Water recycling would also be a smart investment for arcology builders, to reduce the amount that has to be pumped in on a daily basis. Waste water from the arcology’s human residents can be filtered and used for the arcology’s many parks and farms, for example.
Food: Food importation would be a necessary evil for many urban arcologies. However, they can supplement their food supply with rooftop farms, hydroponics, micro-organic cultivation (yeasts and such), greenhouses, and/or recycling. The residents would buy their foods pretty much like any other city resident, in in-arcology supermarkets and restaurants.
Waste Management: Efficient sewer and garbage management systems would be absolute necessities in an arcology, and would in fact have to be developed to a much higher degree than almost any other system in order to make an arcology work. It would also have to employ a veritable army of janitors. With so many people living so close together, communicable diseases are a very real danger, and keeping on top of waste management and cleaning would be the first line of defense against any potentially harmful outbreaks.
Employment: While it is true many living in the arcology will have to commute out of the building for employment, many arcologies will be set up to provide jobs directly inside the arcology. Many arcologies will have their own industrial and commercial sections right in the structure, making commuting to work just a brisk walk and an elevator ride away.
Manufacturing industries inside an arcology will by necessity have to be mostly non-polluting and, given the arcology’s nature, will most likely be high-tech.
One of the main features of an arcology would be an enormous central retail district, or mall, to cater not only to its residents but visitors and tourists as well in order to generate outside revenue. The New Edison Arcology in the aforementioned Chicago Arcology supplement had an enormous ten-story mall that became the defacto commercial and retail center of the immediate Chicago sprawl.

URBAN ARCOLOGY
Tech Level: 12

So-called "first generation" arcologies would essentially be large urban mega-skyscrapers, designed to ease burgeoning city population problems. They may or may not evolve naturally from the current trend toward ever-larger buildings. They would no doubt at first be corporate showcases and considered engineering marvels, and be located in the downtowns and commercial districts of many cities.
The urban arcology would be very much a product of the city it is a part of, and would take advantage of the city's infrastructure for many of its utilities, such as power, water, and so on. The first arcologies would therefore not be as self-sufficient as many later-generation arcologies are envisioned to be.

COASTAL ARCOLOGY
Tech Level: 13
The next step in arcology evolution would be to move the arcology out of its urban settings and into more practical environments. Situating the megastructure close to or even overlapping a large body of water would provide a great many advantages, such as a reliable water supply, more efficient temperature control, coastal access for water vehicles and trade, and even power by tapping things such as tidal motion or river flow.

DAM ARCOLOGY
Tech Level: 13

Soleri's vision of a dam arcology.
As the name implies, this is an arcology integrated into a dam structure. Envision the Hoover Dam with most of its non-support structure given over to living and arcology space. The biggest advantage of this scheme is that the arcology would not only have its own artificial body of water to draw upon, but would also have a ready-made power supply provided by the dam itself.



MEGACITIES


The megacity of near-future Tokyo, from the movie Ghost in the Shell.

Megacity
Tech Level: 10
Cyberpunk Sprawl
Tech Level: 12
Continent City
Tech Level: 14
World City
Tech Level: 16
By the year 2030, it is estimated that two-thirds of the world's population will live in an urban environment. This will in fact be the end product of a process that has been ongoing for more than the last half-century--the explosive growth of the size and scope of the world's cities.
Even though modern cities are not megastructures by the strict definition of the word, the need for ever-more careful planning, fully coordinated large-scale construction, and long-range integration of new technologies as these urban centers grow will slowly transform them from haphazard collections of commercial and residential zones into complex, unified structures on a grand scale.
This article assumes cities on the surface of a life-bearing planet, mostly a future version of Earth. However, megacities can also be located in artificial environments, such as underground on the Moon or Mars, or on megastructures such as O'Neill colonies. Much of what is discussed for planet-bound cities can be applied to these situations as well.

MEGACITIES
Tech Level: 10
The current twenty largest megacities. Source: National Geographic
A Megacity is currently defined as any urban center that has a population exceeding ten million inhabitants. This includes not only what's within the city's borders proper, but all its closely associated suburbs and urban development zones. These are often described as the city's 'greater' area in popular usage, such as the Greater New York Area or Greater Tokyo Metropolitan Area or similar. As of this writing, there are between 22 and 25 megacities worldwide, depending on the source, and this number is sure to grow substantially in the decades ahead.
As urban centers become ever larger, the challenge of the upkeep of their infrastructure, as well as maintaining a decent standard of living for their many millions of inhabitants, presents a number of technological and logistic hurdles. Prudent long-range planning and a recognition of projected growth would seem to be the major key to both, but in many of these megacities such efforts often fall short. As a result, megacities are often hodge-podges not only of different cultural and economic neighborhoods, but also of uneven zones of decay and urban renewal.
These zones are not always the same from megacity to megacity. One megacity may have a decaying downtown but a thriving rim of suburbs; another may be seeing a large investment in its central commercial sections while the outlying townships are gripped by overpopulation and decay. Still another may invest most of its building budget in its ports or highway offramps, and let the rest rot.
Waste disposal is often a major issue in these modern megalopolises. The larger the city, the more sewers and sewage treatment plants and landfills are needed to handle the mountains of waste and garbage the city will produce everyday. Today, this is a serious issue with no easy answer; some megacities, especially in the developing world, are beginning to be choked by their own waste. As urban centers keep getting larger, new methods for dealing with this problem will have to be developed.
Also, even though it struck a merely "large" city, the Hurricane Katrina disaster in New Orleans in 2005 shows just how vulnerable modern metropolises can be to unexpected disasters. Megacities, being so much larger and with much higher populations, would be far more vulnerable to an unexpected disaster such as Katrina, especially if they have large percentages of poor inhabitants without the means to evacuate easily. Imagine if the Indian Ocean tsunami of 2004 had hit a megacity area such as New York or Hong Kong with no warning.

CYBERPUNK SPRAWLS
Tech Level: 12
In the movie Akira, the new corporate mega-towers of Neo-Tokyo loom over the half-ruined Sprawl of the old city.
The next step up from today's megacities are the urban 'Sprawls' so beloved of Cyberpunk fiction. The term was first coined by William Gibson in his seminal novel Neuromancer, but similar super-megacities have been seen in a variety of Cyberpunk fiction, including films like Blade Runner and even comics such as Judge Dredd.
Sprawls assume that the trend toward ever larger megacities continues and even accelerates in the next few decades. Urban growth reaches a point where megacities relatively nearby to each other begin to merge, so that you could drive from one city center to the other without ever leaving a developed urban environment. For example, Los Angeles, San Francisco, and San Diego would all expand to the point where they become one single megalopolis sprawling for hundreds of miles along the southern California coast. On the east coast of the US, another megalopolis would stretch from Boston to Washington DC and subsume the existing megacities of New York and Philadelphia, resulting in a Sprawl that would be home to over a hundred million inhabitants.
One of the major features of the Cyberpunk version of such a super-city is the prominence of new urban technologies such as arcologies, mega-skyscrapers, and three-dimensional transportation systems such as elevated roadways or ubiquitous use of flying vehicles. But the Cyberpunk Sprawl also envisions the very unequal distribution of these new enabling technologies, where they are reserved only for the very rich and their allies. The majority of the Sprawl is inhabited by the disenfranchised and the poor, while those with means basically hide behind the gleaming walls of the corporate-showcase arcologies and gated communities.
However, it is also assumed that some necessities, such as food, power, clothing and the like, undergo a dramatic downturn in price and an uptick in general availability, making life in the Sprawl at least tolerable for the average citizen. These benefits are generally attributed to new technologies creating consumer goods in ever greater quantity, driving prices down through sheer volume. In a cyberpunk city, for example, most high-performance computers are cheaper than lunch at a diner. There is some precedent for this trend, as all one has to do is look at how Wal-Mart and its contemporaries have made a number of consumer products dirt-cheap just through broad distribution and ready availability. Whether this trend will continue into the future remains to be seen.
The production and distribution of food to the masses of a Sprawl would also utilize many new technologies in order to keep up with the demand of the megacity's tens of millions of inhabitants. Fully-automated factory farms, undersea cultivation, industrial plankton vats, and an extremely high prevalence of artificial, pre-processed foods have all been mentioned in various sources. With merely contemporary technology and techniques, the Sprawl by all rights should be stripping the surrounding farmland bare for hundreds of miles in all directions just through its sheer ravenous demand.
Neighborhoods in any city go through cycles where they decay, revitalize, decay, and revitalize again from decade to decade. Super-cities such as a Sprawl will also feel this effect, but on a much larger scale. Entire townships may go through the cycle from decade to decade, affecting tens of millions of citizens. The arcologies that had once been gleaming corporate showcases may become slums twenty years later. Slum areas may be revitalized by corporate and government investment, and undergo renovation that soon turns them into commercial or industrial centers.
Waste management continues to be a major issue. Many cyberpunk sources that address the problem refer to major recycling businesses--run by greedy and corrupt corporations, of course--as well as freelance economies that sometimes spring up around old landfills. Downtrodden members of the Sprawl would mine the deep, compacted refuse piles like old time prospectors, looking for valuable materials that they could sell to the recycler centers. This is done on a limited scale today in certain cities, but one can imagine that in the massive landfills a Sprawl would generate that this could actually make some garbage "prospectors" rich.


CONTINENT CITIES
Tech Level: 14

Technically, there is no limit to how large cities can grow, as long as the basic needs of their populations can be met and their infrastructure can be maintained. The next step up from a Cyberpunk Sprawl would be urban centers that stretch across entire continents, from one coast to another. There may still be large swaths of wilderness or farmland, but compared to the size of the city itself they could at best be considered parkland.
Cities on such a scale would need many new technologies to keep them viable. They would, for example, generate enormous amounts of waste and garbage, to the point that it could bury the remaining arable land and even the surrounding oceans, dooming the city if not managed prudently. Recycling of every material capable of it would be a necessity, as would renewable and non-polluting sources of energy, such as solar, geothermal, and wind. A city of such a size would have to take every opportunity to conserve its resources without generating potentially hazardous mountains of waste that could choke it and the surrounding ecosystem to death.
Feeding such a huge megalopolis would also be very problematic. It's possible only one such continent city could exist, and the rest of the planet's landmass could be converted into farmland to support it. For multiple continent cities, however, alternative means of food production would be necessary, many of which were already outlined above for Sprawls.
Continent-sized cities need not necessarily fill up an entire continent. They may have other configurations and geometries that would add up to a continent-sized urban area. For example, in the computer game Septerra Core, a city stretched entirely around the equator of a world-shell (a hollow artificial planet), making it the equivalent of a continent-city.

WORLD CITY
Tech Level: 16
The planet Coruscant, home to a world-city in the Star Wars universe. Image (c) Lucasfilm.
This is a city that has completely taken over the surface of a planet, covering over its entire landmass, and in some cases large swaths of its undersea regions and polar icecaps as well. The two best known examples of world-cities in science fiction are Trantor from Isaac Asimov's Foundation series and Coruscant from the Star Wars film series, both capitals of galaxy-wide empires. Another fantastically-advanced example can be found in Larry Niven's novel Ringworld in the form of the Puppeteer Homeworld. Asimov's Trantor held 45 billion inhabitants, Coruscant some 176 billion. The Puppeteer's Homeworld held an estimated one trillion alien citizens.
While there would be room for housing so many inhabitants in a world-city, depending on how much total volume the city structures occupy, the true limiting factors on the population of any such structure would be logistical and environmental concerns.
Food and fresh water would be among the major necessities, and providing them on a daily basis for tens of billions of inhabitants would be quite a daunting challenge. Trantor had a fleet of thousands of starships whose sole purpose was to import daily foodstuffs to the massive world-city. Even so, Trantor still supplemented these with a thriving industry dedicated to vat-growing yeast and algae on a massive scale. Waste and water recycling had to be draconian in its efficiency, or even Trantor's open seas would become clogged by the mountains of human waste that could accumulate. How Coruscant or the Puppeteer Homeworld dealt with these problems is left vague, though the Puppeteers possessed both matter conversion and cheap teleportation technology, which could greatly facilitate recycling even for such a massive population.
Dealing with the environmental issues of a planet-sized city would be mind-boggling to say the least. Coruscant, for instance, does not have the open seas of Trantor, so one has to wonder how the oxygen-nitrogen atmosphere remains breathable without algae and other plant life to recycle the air. The planet's industry would have to engage in active atmospheric filtering and recycling on a massive scale just to keep the billions of inhabitants from choking on their own exhaled carbon dioxide. Weather control may be used more for global circulation to prevent asphyxiation of entire regions than anything else.
Which brings us to a major new environmental issue all such world-cities will eventually have to deal with--heat pollution.
Billions of inhabitants and their attendant high-technology activities and industries create a lot of heat. As their numbers soar into the tens of billions and the hundreds of billions and beyond, this amount of heat rises substantially as well. This is NOT the same type of global warming problem we face today, which is caused by the accumulation of greenhouse gases, though some of the same effects are present. Rather, this is a direct accumulation of heat energy itself through the activity of billions upon billions of inhabitants.
Trantor had open seas and intact polar ice regions to act as heat-sinks, and they built heat-radiator vanes at the poles reaching to the edge of space to dissipate some of the accumulated heat. The Puppeteers dealt with the problem by moving their world into interstellar space, away from any parent star, to let the excess heat bleed off into the void.
The general character and look of world-cities vary from source to source as well. Trantor's world-city was almost entirely domed or built underground, with weather-control technology in place to mitigate the worst of the storms that could still arise. Coruscant was a larger, open-air affair that covered the entire surface of the planet with many gleaming towers and skyscrapers, and was built many layers deep over thousands of years. The Puppeteer Homeworld was rife with arcologies and large tracts of parkland, saturated by teleporters called "stepping discs" that made the whole planet easily accessible from any one point. This combined with the Puppeteers' alien herd mentality helped to mitigate the crushing claustrophobia that could come with such a high population density.



ORBITAL TETHERS


Electrodynamic Tethers
Tech Level: 12
Simple Momentum-Exchange Orbital Tether
Tech Level: 12
Spinning Momentum-Exchange Orbital Tether
Tech Level: 13
Multiple-Tether "Stepladder" Space Launch System
Tech Level: 13
Rotovator
Tech Level: 14
Tethers are one of the least romantic ideas ever proposed for space travel: basically they're long, orbiting pieces of wire. Yet, one cannot deny how eminently practical the whole concept is. To my knowledge, tethers in one form or another have been used on at least 19 space missions, proving their practical place in the future of space exploration. Tethers as long as 20 kilometers have already been flown on various missions.



TETHER BASICS

Besides the obvious uses for towing, tethers in space can also be used for two fundamental purposes: momentum transfer and power generation.
Simple momentum transfers have already been demonstrated on various missions, most dramatically in 1996 when an experimental satellite was extended from the Space Shuttle on a 20-kilometer wire. A mishap caused the tether to snap, sending the satellite zooming out into an orbit 140 kilometers higher while the Shuttle lost a few hundred meters from its own orbit.
The reason for this can be envisioned by understanding how orbital mechanics work. We can consider orbits to be gigantic circles (in reality, almost all are ellipses, but the same principle applies). The closer in, or lower, the orbit is, the faster the object must travel in order to maintain orbit. The farther out, the slower the orbital speed needs to be.
When two objects are connected by a tether, one in a lower orbit and the other farther out, the lower one "drags" the higher one around at its own required orbital speed, meaning the higher object is travelling faster than it needs to in order to maintain orbit at that altitude. In other words, it "picks up" momentum from its lower, tethered companion. When the connection is severed, conservation of momentum is observed; the higher object zooms higher while the lower object, having given up some of its speed to its companion, slows down and slips into a lower orbit.
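To put rough numbers on that relationship, here is a minimal Python sketch (the 400 km and 1,000 km altitudes are purely illustrative values, not figures from any particular mission):

import math
MU_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean radius of the Earth, m
def circular_orbital_speed(altitude_m):
    # Speed needed to maintain a circular orbit at the given altitude
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))
print(f"400 km orbit:  {circular_orbital_speed(400e3)/1000:.2f} km/s")
print(f"1000 km orbit: {circular_orbital_speed(1000e3)/1000:.2f} km/s")

The lower orbit comes out roughly 300 m/s faster. A tether forces both ends to sweep around at the same angular rate, so the upper end is dragged along faster than a free object at its altitude would move, which is exactly the "borrowed" momentum described above.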
A good analogy for this process would be an Olympic hammer thrower. The "hammer" used in the sport is a crossbar connected to a heavy metal ball by a length of chain. The hammer thrower spins around, holding onto the crossbar, imparting momentum to the ball. When he releases the hammer, it sails downfield, while the thrower is forced back a step or two by the momentum transfer of the throw. This is basically what happens to two tethered satellites in orbit, but of course it is carried out on a much vaster scale.
Also, while in orbit the tether is passing through Earth’s magnetic field. If the tether contains or is made up of conductive material, this motion generates electric current along the tether. Thus, tethers have ready-made power sources to help them maintain their systems and to power attached spacecraft.
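As a rough illustration of the voltages involved, here is a back-of-the-envelope sketch using the simple motional-EMF relation (voltage = field strength x tether length x velocity); the field strength is a typical low-orbit value, and the sketch assumes the tether lies squarely across the field lines:

B = 3e-5     # Earth's magnetic field strength in low orbit, tesla (approximate)
L = 20e3     # tether length in meters (20 km, like the shuttle experiments)
v = 7.7e3    # orbital velocity, m/s
emf = B * L * v   # motional EMF for a conductor sweeping across field lines
print(f"Induced voltage: roughly {emf:.0f} volts")

The answer comes out in the range of a few thousand volts, which is consistent with the surge figure quoted for the 1996 shuttle tether experiment below.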
Space tethers aren’t made of just any kind of wire, of course. They need to be made of very strong yet flexible material. Kevlar, Spectra (used in fishing lines), and metal alloy fiber wires have all been used. In the near future, tethers may be made of materials like spider silk or carbon nanotube composite fibers.
Just having one strand of wire in a tether is impractical. In 1994, a payload was left hanging on the end of a 20-kilometer, single-strand tether to see how long it would stand up to collisions with micrometeoroids and space debris. At the orbital speeds involved, the strand could be cut by a particle as small as a grain of sand. It was expected to last at least 12 days. It didn’t even last four.
In order to prevent debris and meteoroids from endangering future tether-based missions, tethers with multiple strands are being designed. One scheme involves a "tape"-like configuration of interwoven fibers connected side by side. Another option is the so-called Hoytether(TM), developed by Robert P. Hoyt of Tethers Unlimited, which uses a tubular interwoven lattice much like a fishing net to minimize localized damage to any one strand.
The Hoytether(TM) concept

ELECTRODYNAMIC TETHERS
Tech Level: 12
Tethers Unlimited's Terminator Tether(TM) Concept
As stated above, tethers that contain conductive materials can be used to generate electricity as they pass through Earth’s magnetic field. This was dramatically demonstrated in the 1996 space shuttle/tether mishap described above, as the tether was severed by an unforeseen 3000-volt electrical surge caused by the tether’s motion through the magnetic field. Longer tethers, with much better power regulation equipment, should be easily capable of generating kilowatts worth of power, allowing them to supplement or even supplant solar cells on certain space missions.
According to the Tethers Unlimited website, electrodynamic tethers can also provide modest "propellantless" propulsion for micro satellites (a micro satellite weighs 100 kg or less.) The exact details of this have proven sketchy to find, but it appears to use the electric current generated by the tether to trap electrons from Earth’s magnetic field and then propel them out of the satellite proper (perhaps using a souped-up electron gun like on a TV?), providing a small amount of thrust that can be used to (very) slowly alter the satellite’s orbit.
An interesting application being developed using this principle is Tethers Unlimited's "Terminator Tether" Satellite Deorbiter(TM). The Terminator Tether is actually a small device attached to a satellite prior to launch. After the satellite reaches the end of its operational lifetime, the device is activated, unspooling a 5-km long electrodynamic tether. The tether produces current by interacting with Earth's magnetic field, which in turn creates an electromagnetic field radiating out from the tether. This field interacts with ionospheric plasma (charged particles in the outermost layer of the atmosphere, reaching far out into space), inducing drag forces that slow the satellite down. The satellite gradually loses altitude until it burns up in the atmosphere after a few weeks or months.

SIMPLE MOMENTUM-EXCHANGE ORBITAL TETHER
Tech Level: 12

These are orbiting tethers that impart added momentum to a satellite as described above. A satellite in orbit deploys a tether attached to a counterweight into a lower orbit. Actually, the satellite and the counterweight will "push" off each other as the tether is deployed, meaning the original satellite gains a bit of altitude from this motion alone. When fully deployed, the satellite will orbit until it hits a desired trajectory window (perhaps at the apogee of its orbital ellipse) to detach, gaining a substantial momentum "push" from the tether and counterweight. It zooms up to a higher orbit, while the counterweight can rewind the tether and deorbit for pick-up and re-use.
Tethers can also be used in deep space missions, where the upper satellite can drag the lower one through a planet's atmosphere for samples, or even land the lower satellite directly onto the surface of an airless body such as an asteroid or a moon.

SPINNING MOMENTUM-EXCHANGE
ORBITAL TETHER
Tech Level: 13
A spinning orbital tether is also sometimes called a bolo.
Spinning tethers can act as orbiting momentum-energy "banks." Like a Simple Orbital Tether, they exchange momentum by giving up some of their orbital speed to a satellite at the "high orbit" end of the tether. However, the Spinning Tether also adds the momentum of its rotation to the departing satellite, allowing it to impart much greater speed (perhaps even escape velocity) than by static momentum transfer alone.
The primary scheme for this is to have a long, vertically spinning (ie, always perpendicular to Earth's surface) tether already in Low Earth Orbit. A satellite or space ship launched in a conventional way rendezvouses with the end of the cable at the low point of its spin, where electromagnetic "grapples" (perhaps powered by the tether's electrodynamic properties) latch onto it as it passes by. The satellite is then swung up by the tether's centrifugal force and released at the apex of its rotation. It then shoots into a higher orbit, much like a stone released from a sling.
The rotating tether loses altitude and rotational speed from both the pick-up and the release of the satellite. On-board thrusters, again perhaps powered by the tether's electrodynamic properties, would then have to correct these losses before it's ready for its next pick-up. A scheme where the tether could correct its orbit by modifying the length of the spinning tether at apogee and perigee was mentioned in one article, but unfortunately it did not go into details.
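A minimal sketch of why the spinning tether gives a bigger boost than a static one; the tether size, orbit, and tip speed here are illustrative assumptions, not figures from any published design:

import math
MU_EARTH = 3.986e14   # m^3/s^2
R_EARTH = 6.371e6     # m
center_alt = 500e3    # altitude of the tether's center of mass, m (assumed)
half_length = 100e3   # tether reaches 100 km above and below its center (assumed)
tip_speed = 1.5e3     # speed of the tip relative to the center, m/s (assumed)
v_center = math.sqrt(MU_EARTH / (R_EARTH + center_alt))
v_release = v_center + tip_speed   # at the top of the swing the velocities add
print(f"Center of mass:     {v_center/1000:.2f} km/s")
print(f"Payload at release: {v_release/1000:.2f} km/s at {(center_alt + half_length)/1000:.0f} km altitude")

A payload released at the top of the swing starts out both higher and faster than a static tether could manage, so it coasts into a far higher orbit, or escapes Earth entirely if the tip speed is large enough.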


MULTIPLE TETHER "STEPLADDER" SPACE LAUNCH SYSTEM
Tech Level: 13
A multiple tether Earth-Moon launch system
Very simply, this is a series of spinning tethers that "hand off" payloads from one to the other, providing transport from one point in space to another with very little need for on-board propellant. For example, one tether (perhaps a rotovator; see below) takes a payload from Earth and flings it into Low Earth Orbit; another "catches" the payload and throws it into geosynchronous orbit; still another "catches" it again and this time launches it at escape velocity into deep space. The process can of course be reversed, to deliver an incoming payload to the surface of the Earth with almost no expenditure of fuel.
One such scheme has been proposed by the scientists at Tethers Unlimited to create a steadily-travelled "highway" to and from the Moon.
This type of multiple-tether ‘stepladder’ will most likely be created to steadily exchange payloads between two well-established points, such as Earth and a moonbase, or the Moon and a Lagrange-point station, or Earth and Mars, and so on. In this way, space outposts can be provided a steady stream of needed supplies in a relatively cheap and reliable way.


ROTOVATOR
Tech Level: 14

A Rotovator is a spinning orbital tether built on a truly gigantic scale, designed to reach down from space into the lower atmosphere, or perhaps even to the surface of the Earth, to pick up and drop off payloads directly. The orbital altitude of the cable's center of spin is equal to half the length of the cable.
The Rotovator would orbit above the equator, spinning in a plane perpendicular to Earth's surface. The rotational velocity of its tips can be matched to the rotational velocity of Earth's surface spinning under it. Both the forward motion of the tether in its orbit and its carefully timed rotation rate can result in its lower tip "hovering" over a certain fixed point on Earth for a few minutes, allowing smooth transfer of cargo.
It is important to understand that even though the word "hover" is used above, the tether of course never stops spinning, just as the surface of the Earth under it never stops rotating. But the forward orbital motion of the rotovator is synchronized in such a way with its spin that the lower tip "glides" over a fixed spot on the rotating Earth, making it seem stationary for a few moments to observers on the ground. In fact, because of the scale and choreographed motion involved, people on the ground could never tell the rotovator was in fact rotating by eyeball alone; all they’d see is a gigantic column of material reach vertically down from the sky like God’s own arm, pick up its cargo, and retreat back up in exactly the same way.
Robert L. Forward in his novel Timemaster gave extensive details about a rotovator 8000 miles long that "touched down" into the lower atmosphere to pick up cargo and passengers flown up to it on specially-modified jets. This rotovator’s orbit and spin were designed in such a way that it set down three times per 24-hour period. Rotovators need not always be quite on this scale, but a length of several hundred miles is probably minimum.
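A rough sketch of the timing involved, using the 8000-mile (roughly 12,900 km) cable length from Forward's novel and ignoring orbital eccentricity, the atmosphere, and the cable's own mass:

import math
MU_EARTH = 3.986e14
R_EARTH = 6.371e6
V_EQUATOR = 465.0              # eastward speed of Earth's equator, m/s
cable_length = 12.9e6          # tip-to-tip cable length, m (Forward's figure)
center_alt = cable_length / 2  # the hub must orbit half a cable-length up
v_hub = math.sqrt(MU_EARTH / (R_EARTH + center_alt))   # orbital speed of the hub
# For the lower tip to hang momentarily motionless over the ground, its backward
# swing has to cancel the hub's forward speed, less the speed at which the
# equator itself is moving eastward.
tip_speed = v_hub - V_EQUATOR
period = 2 * math.pi * center_alt / tip_speed
print(f"Hub orbital speed:  {v_hub/1000:.2f} km/s")
print(f"Required tip speed: {tip_speed/1000:.2f} km/s relative to the hub")
print(f"One full rotation:  about {period/60:.0f} minutes")

The rotation period that falls out of this sketch is a little over two hours; how often the lower tip actually lines up over a given ground site depends on how that spin is synchronized with the hub's roughly four-hour orbit.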



SPACE ELEVATOR


Photocell Climber Space Elevator
Tech Level: 13
Asteroid-Anchored Space Elevator
Tech Level: 15

"The space elevator will be built about 50 years after everyone stops laughing".
--Arthur C. Clarke


The Space Elevator concept, also occasionally called a Beanstalk, has recently been given a boost thanks to the development of materials that exhibit the tensile strengths needed to make the concept work.
Originally proposed in a popular science article in the Soviet periodical Komsomolskaya Pravda by Yuri Artsutanov in 1960, the concept was taken up by many science fiction writers over the years, most significantly Arthur C. Clarke in his 1977 novel The Fountains of Paradise. Since then, it has seen many incarnations, such as in Kim Stanley Robinson's Red Mars novel, the game universe of 2300 AD, the Dirty Pair and Maps anime series, and countless science article speculations.
It is currently under serious preliminary study at NASA's Institute for Advanced Concepts (NIAC). While proponents say that an elevator could be built within 15 years or so, fundamental research into various component technologies still needs to be done, and political and economic factors will probably delay actual attempted construction of a space elevator for many decades yet.

SPACE ELEVATOR BASICS

The basic principle of a Space Elevator is fairly simple to envision. Tie a string to a baseball and twirl the string above your head. The string will remain taut and straight as long as the twirling motion is in effect. The Earth is spinning far faster than your hand could ever manage, about 1000 miles per hour. If you anchored an incredibly strong wire to Earth’s surface at the equator, then attached the other end to a large enough mass (say, a small asteroid) to keep it taut, you end up with a perfectly-straight railroad track right into space.
The Space Elevator’s center of mass would be at geosynchronous orbit, approximately 22,300 miles above the equator, helping to keep the entire construct fixed over a stable position on Earth. The geosynchronous point is also where the cable would be under the most stress (see below), so it would have to be thickest there and taper down exponentially as one moves away from it in either direction.
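The 22,300-mile figure itself falls straight out of Kepler's third law; a quick sketch:

import math
MU_EARTH = 3.986e14       # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.0    # Earth's rotation period, seconds
# A geosynchronous orbit has a period of one sidereal day:
# r = (mu * T^2 / (4 * pi^2))^(1/3)
r = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude = r - 6.371e6
print(f"Geosynchronous orbital radius: {r/1000:,.0f} km")
print(f"Altitude above the equator:    {altitude/1000:,.0f} km (about 22,200 miles)")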
Once the cable is set up, elevators can ride it up and down via magnetic rails, delivering cargo straight into orbit. The Earth-end of the Elevator cable is usually envisioned as being attached to the top of a mountain or a super-high artificial tower. However, though both of these options could simplify setting up the Elevator, they are not strictly necessary. One scheme, primarily involving the photocell climber elevator, details anchoring the cable to a specially-built but standard-height off-shore platform.
The Space Elevator is a simple, straightforward idea with one very important complication: the structural stresses put on the elevator cable would be truly enormous, far beyond what normal materials, even advanced alloys and composite laminates, are capable of dealing with. Tensile strength here is measured in gigapascals (GPa). Quartz fiber has a tensile strength of about 20 GPa, while diamond filaments would only be slightly higher. The tensile strength required of Space Elevator cable material is around 62 GPa, roughly three times that of diamond filaments.
Also, the heavier you make the cable, the more weight it has to support and the greater stress it has to endure. So, ideally, the builders of a Space Elevator would not only need an incredibly strong, flexible material, but said material would also have to be incredibly lightweight.
Fortunately, one material meeting these requirements has recently been synthesized, albeit only in microscopic quantities: carbon nanotube composites, which would have a theoretical upper tensile strength of 200 GPa, over 100 times that of steel cable at only a small fraction of the weight. Small fibers of this material can be set down side by side, then interconnected to form a growing ribbon.
One interesting feature of the system is that the Space Elevator cable would constantly be moving through Earth's magnetic field. If conductive material (or even superconductive material, if such is available when the Elevator is constructed) is run through the center of the cable, it would constantly generate electricity through this movement. This could provide not only ample power to operate all the Space Elevator systems and stations, but also the magnetic fields the elevator railcars would ride up the cable.
Using an average speed of Mach 1 for the elevators, a trip from the ground to the geosynchronous point would take about two days.
The advantages of a Space Elevator are enormous. Once the technology matures, orbital interface travel from surface to space could eventually be reduced to pennies a ton for cargo, or a rate equivalent to a passenger train ticket for human riders. Today it costs about $22,000 per kilogram to put cargo into Low Earth Orbit because of the enormous energies standard rockets must generate in order to reach orbital velocity. Using today's energy costs, a Space Elevator could do the same thing for about 75 cents per kilogram.
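The 75-cent figure is easy to sanity-check. Here is a sketch assuming perfectly efficient delivery of energy (real systems would do worse) and an illustrative electricity price of five cents per kilowatt-hour:

MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # m
R_GEO = 4.216e7      # geosynchronous orbital radius, m
PRICE_PER_KWH = 0.05 # assumed electricity price, dollars
# Ideal (lossless) energy to move 1 kg from the rotating equator to GEO:
delta_pe = MU * (1 / R_EARTH - 1 / R_GEO)   # climbing out of the gravity well
ke_geo = 0.5 * (MU / R_GEO)                 # kinetic energy of a geosynchronous orbit
ke_surface = 0.5 * 465.0**2                 # kinetic energy we start with on the equator
kwh_per_kg = (delta_pe + ke_geo - ke_surface) / 3.6e6
print(f"Ideal energy to GEO: {kwh_per_kg:.1f} kWh per kg")
print(f"Electricity cost:    about ${kwh_per_kg * PRICE_PER_KWH:.2f} per kg")

The ideal figure works out to roughly 16 kilowatt-hours per kilogram, or well under a dollar of electricity, so even with generous losses the cost sits orders of magnitude below rocket launch prices.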
A space elevator would pay for its initial set-up costs within a few decades. The builders could even make money by selling the delta-V of the space elevator to outbound ships, flinging them into space from the far end of the cable much like an oversized Rotovator. A ship "flung" from the end of the cable would be travelling at about 6.79 miles per second, fast enough to reach Mars in a few weeks if pointed in the right direction.
In most visions for a passenger-carrying Elevator, a way station is usually built at the geosynchronous point, where the local net acceleration forces along the cable cancel each other out, resulting in zero gravity conditions.

PHOTOCELL CLIMBER SPACE ELEVATOR
Tech Level: 13


This concept uses photocells "pushed" by a ground-based laser to slowly build a Space Elevator "from the ground up." The Elevator cable anchored to the ground is counter-balanced by an equal length of cable beyond the geosynchronous point, built up by photocell-pushed "climbers." These climbers would also be used to launch payloads up the elevator.
The following is taken from the article "The Space Elevator Comes Closer to Reality," by Leonard David, from the Space.Com website. It can be found in its entirety here:
http://www.space.com/businesstechnology/technology/space_elevator_020327-1.html
"Getting the first space elevator off the ground, factually, would use two space shuttle flights. Twenty tons of cable and reel would be kicked up to geosynchronous altitude by an upper stage motor. The cable is then snaked to Earth and attached to an ocean-based anchor station, situated within the equatorial Pacific. That platform would be similar to the structure used for the Sea Launch expendable rocket program.
"Once secure, a platform-based free-electron laser system is used to beam energy to photocell-laden ‘climbers’. These are automated devices that ride the initial ribbon skyward. Each climber adds more and more ribbon to the first, thereby increasing the cable's overall strength. Some two-and-a-half years later, and using nearly 300 climbers, a first space elevator capable of supporting over 20-tons (20,000-kilograms) is ready for service.
"For a space elevator to function, a cable with one end attached to the Earth's surface stretches upwards, reaching beyond geosynchronous orbit, at 21,700 miles (35,000-kilometer altitude).
"Twenty tons of cable and reel would be kicked up to geosynchronous altitude by spacecraft to get the project started. "If budget estimates are correct, we could do it for under $10 billion. The first cable could launch multi-ton payloads every 3 days. Cargo hoisted by laser-powered climbers, be it fragile payloads such as radio dishes, complex planetary probes, solar power satellites, or human-carrying modules could be dropped off in geosynchronous orbit in a week's travel time," (Bradley Edwards of Eureka Scientific in Berkeley, California) said."

ASTEROID-ANCHORED SPACE ELEVATOR
Tech Level: 15


The more "traditional" vision of a space elevator.
A small asteroid is diverted from deep space and locked into high orbit above Earth. The end of the elevator cable beyond geosynchronous orbit is anchored to it as a counterweight (think of the baseball in the above example). The mass of the asteroid moving in a higher orbit keeps the cable under tension and straight. This way, the overall length of the cable can be greatly shortened.
A shorter cable may be desirable for economic reasons; today, carbon nanotubes cost about $500 per gram of mass, or roughly $500 million per ton. A Space Elevator cable will, of course, weigh many thousands of tons. If this price does not come down significantly by the time a Space Elevator is ready for construction, diverting an asteroid may actually be a far cheaper deal than building an equal length of cable beyond geosynchronous orbit.
The asteroid could also have the added advantage of being used as a source of raw materials to build space facilities for the Elevator, such as the geosynchronous station, or complete additional cables for more "tracks" along the Elevator.



SPACE COLONIES



The Interior of a Bernal Sphere Space Colony.
All images in this article courtesy NASA.

Stanford Torus 
Tech Level: 14
Bernal Sphere 
Tech Level: 14
O’Neill Colony 
Tech Level: 15
Space Colonies take up where Space Stations leave off. More than just outposts or way stations, colonies are full-fledged residences in the Great Dark. Many people may be born and live the balance of their lives on these large artificial cities in space.

STANFORD TORUS
Tech Level: 14
A much larger version of von Braun's wheel station was conceptualized in 1975 by NASA and Stanford University, as a means of housing both orbital factories and the personnel needed to run them. What resulted was the concept for the Stanford Torus, a wheel station nearly two kilometers in diameter, 200 meters wide, and capable of holding up to 10,000 permanent residents.
The innermost wall of the torus would be transparent, in order to allow sunlight to enter by means of giant louvered mirrors anchored to the wheel hub. These mirrors can be opened and closed, in order to allow an Earth-approximate day/night cycle. Below this inner-rim skylight would be large open areas holding very normal-looking houses, buildings, and even soil, parks, and hydroponic farms, all designed to provide its inhabitants with as familiar an environment as possible. Rotating approximately once per minute would provide Earth-normal gravity on the interior surfaces, with no great difference in gravity gradients between levels until one enters one of the spokes on the way to the hub. The outermost levels below the habitation level would be dedicated to maintenance, manufacturing, and storage.
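The once-per-minute figure is easy to check with the centripetal acceleration formula; a quick sketch assuming a rim radius of roughly 900 meters (about half the stated diameter):

import math
radius = 900.0    # distance from the hub to the habitat rim, meters (assumed)
period = 60.0     # one rotation per minute, in seconds
omega = 2 * math.pi / period
accel = omega**2 * radius   # centripetal acceleration felt standing on the rim
print(f"Spin gravity at the rim: {accel:.2f} m/s^2 ({accel/9.81:.2f} g)")

The result lands almost exactly at one Earth gravity, which is why the one-rotation-per-minute, two-kilometer-diameter combination shows up so often in these designs.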

BERNAL SPHERES
Tech Level: 14
A step up in sophistication and livable surface area from the Stanford Torus is the Bernal Sphere, conceived in its current form in the same 1975 study that produced its wheel-shaped cousin. Like the Stanford Torus, it crystallized from earlier science and science fiction sources.
Like the Stanford Torus, the Bernal Sphere is two kilometers in diameter and rotates about once per minute to provide Earth-like gravity. However, as its interior is a sphere rather than a simple ring, far greater surface area can be used by potential colonists. Large circular transparent sections near the rotational hub would allow sunlight to be directed into the interior by means of gimbaled mirrors. Like the Torus, the Bernal Sphere would have most of its internal volume completely open with very Earth-like communities, structures, and parks sculpted into its innermost surface. However, as one moves up the sphere from the rotational equator to the hub axis, perceived gravity would steadily decline. Thus this kind of colony would have varying high-gravity and low-gravity neighborhoods.
One of the more interesting concepts that arose for Bernal Sphere interior design is to have an open "river" ringing the entire length of the Sphere's equator, which would double as the construct's central water reservoir.

O’NEILL COLONIES
Tech Level: 15
In the early 1970s, Dr. Gerard K. O'Neill, through college courses and his book The High Frontier, began promoting the idea of large-scale construction in space, and of a particular kind of gigantic space station that has since become known as an O'Neill Colony. O'Neill Colonies were included in the NASA/Stanford 1975 study on space colonization. O'Neill Colonies have also become one of the great enduring motifs of modern science fiction, having been seen in dozens of science fiction sources. The current generation probably knows them best from the various incarnations of the Gundam anime series, and from the title station in the TV series Babylon 5.
O'Neill's vision was of large rotating cylinders, from hundreds to thousands of meters across and many kilometers long. The interior would be open and pressurized, with the inner surface holding not only living and working quarters, but soil, forests, waterways, and so on, in essence becoming a large self-enclosed Earth-like ecology. Large gimbaled mirrors would direct sunlight into the interior along transparent strips running the length of the cylinder, closing for eight hours at a time to create an artificial night. Even more than the Stanford Torus or the Bernal Sphere, an O'Neill Colony would have the interior volume to become a miniature version of the homeworld, allowing people aboard to live, work, and even raise families in much the same manner as people on the ground. O'Neill colonies, once up and fully running, could hold hundreds of thousands of residents. The open park-like spaces could be turned into farms, and with strict recycling in place, the station could become virtually independent from Earth.
Inside an O'Neill Colony. The suspension bridge pictured is the same size as the Golden Gate bridge in San Francisco.
O'Neill Colonies are more than just outposts or way stations; they are true residences in the Great Dark. While other stations are usually mentioned as being in near-Earth orbit, O'Neill Colonies are often visualized as inhabiting locations much farther out—geosynchronous orbit, the Lagrange points, even orbiting the moon or other planets. In the centuries to come, they could well become the equivalent of the small towns of the solar system—the modest, sometimes isolated stopovers between the major population centers that would spring up on colonized worlds.



HOLLOWED ASTEROID/COMET



Interior of a small hollowed asteroid. Artist unknown.

Hollowed Asteroid/Comet
Tech Level: 14
Take an asteroid or a comet and excavate a large percentage of its interior while leaving its outer shell mostly intact. This interior excavation can take the form of interconnecting tunnels, concentric rings, a large central hollow sphere, or a large central hollow cylinder. The comet or asteroid is then spun up and rotated to produce artificial gravity along its inner hollowed surfaces, which are pressurized and terraformed to make them more accommodating to human inhabitants.
The main advantage of a hollowed asteroid/comet compared to a more traditional space colony such as a Bernal Sphere or an O’Neill Colony is that the outer shell of the body can provide a great deal more protection from environmental hazards. Even with a large hollow tube kilometers wide in their centers, some of the bigger bodies would still have many kilometers of rock/ice to insulate the colony from potential harm. Many of the same techniques and issues associated with normal space colonies can also be applied to hollowed asteroids and comets.
Because of the way they are naturally distributed in the solar system, asteroid colonies would probably predominate in the inner system, and comet colonies would be more common in the realm beyond Saturn’s orbit. Because of outgassing issues, comet colonies would be a dubious proposition at best in the inner system, but asteroid colonies can be placed anywhere.
The megatons of material left over in the shell of a hollowed asteroid or comet can also serve as a source of raw materials for a growing colony. An asteroid colony would have a readily-available supply of thousands of tons of minerals, and a comet colony would have a very long-term supply of consumables such as oxygen and hydrogen for life support and fuel needs.



PLANETARY HALO



Art (c) Brian Kissinger

Satellite Halo 
Tech Level: 16
Planetary Halo 
Tech Level: 18
A Satellite Halo and a Planetary Halo are megastructures built in orbit around terrestrial planets or gas giants. They are called Halos here to distinguish them from naturally-occurring planetary rings, and from star-centric Ringworld Megastructures.

SATELLITE HALO
Tech Level: 16
A Satellite Halo is a ring of loosely orbiting objects around a planet that becomes so dense with satellites that it forms a visible ring around the planet. The Halo will be thick and dense enough to cast a twilight-like shadow on the planet's surface, if in a proper orbit to intercept sunlight.
A Satellite Halo may be an unplanned development, as simple as the planet's geosynchronous orbit (for Earth, 22,300 miles above the equator) becoming choked with communications satellites and accompanying debris, slowly built up century after century.
More than likely, however, a Halo will be the result of meticulous planning with a definitive purpose in mind. Instead of a hodge-podge of large independent communications satellites, for example, a planet's civilization might instead opt for hundreds of thousands of cheap, carefully-herded microsatellites in geosynchronous orbit to accomplish the same task. In some science fiction works, Halos are created by thousands of large space colonies and attending structures in orbit, all in a tight orbital band to facilitate traffic, trade, and maintenance among them. Around gas giants and heavily-mined terrestrial objects, a Satellite Halo might be constructed to facilitate on-the-spot industrial exploitation of the encircled object, attending to multiple Space Elevators or similar high-volume interface traffic devices bringing up raw materials.
Some softer science fiction works have displayed Satellite Halos acting as a literal "defensive line" around a planet. While admittedly that can make for a dramatic visual, why the attackers simply couldn't approach the planet from another vector that avoided the heavily-armed Halo usually isn't addressed. However, it would be possible to create multiple Halos, each in a different altitude and orbit, to cover most of the entire planet from any approach.

PLANETARY HALO
Tech Level: 18
While a Satellite Halo is composed of many thousands of independent objects arrayed tightly in the same orbit, a Planetary Halo is one single, unified ring-like artifact completely surrounding the planet.
A Planetary Halo can accomplish many of the same tasks as outlined for a Satellite Halo, but of course offers a far greater volume for potential habitation and industrial machinery. A Planetary Halo in geosynchronous orbit (22,300 miles up) with a one square kilometer cross section would have an internal volume of some 265,000 cubic kilometers. If spinning at the same rotational rate as the planet below (which would be very practical for it to support one or more Space Elevators to the surface), objects along it would be essentially weightless, since the entire ring would in effect be in geosynchronous orbit. Untethered planetary rings can of course be spun faster than the planet's rotation to provide pseudo-gravity along the inside of their outermost surface.
One of the main problems with engineering a Planetary Halo (beyond its immense scale, of course) is progressional instability. The planet in its center will bind the Halo gravitationally, so it won't drift away from it, but the Halo will become unstable in the plane of its orbit. Like a bicycle wheel spinning on a carefully-balanced pole, it will eventually start to wobble and pull itself in one direction or another. Attitude thrusters would have to be placed at intervals all along its circumference to correct for any instability that might develop.
In addition to the most obvious source of power (solar) the ring could also help to power itself simply by deploying trailing tethers toward the planet's surface and generating electricity as they pass through the planet's magnetic field. If the ring is attached to the planet's surface by one or more Space Elevators, they can also serve this function.
A Planetary Halo could also fill some roles that a more dispersed Satellite Halo could not. For instance, it could function as a gigantic linear accelerator for launching spacecraft or even interstellar-bound cargoes at tremendous speeds, or as a particle accelerator to plumb the depths of the quantum world on a scale no planet-bound accelerator ever could. The latter use was illustrated in the novels The Ring of Charon by Roger MacBride Allen and Forever Peace by Joe Haldeman. An artificial Planetary Halo was also used by alien machines in Alastair Reynolds' novel Redemption Ark to manipulate a gas giant's magnetic field.



ROSETTES


The Puppeteer Fleet of Worlds, a Rosette of five planets,
from the novel Ringworld by Larry Niven.

Asteroid Rosette
Tech Level: 15
Moon Rosette
Tech Level: 18
Planet Rosette
Tech Level: 22
Stellar Rosette
Tech Level: 25
Megastructure Rosette
Tech Level: 25
A Rosette is three or more objects of comparable mass spaced equidistantly about the same circular orbit. The objects will continue in orbit about the system’s center of gravity, even if no object is occupying the center of the system. In other words, the objects in an ideal rosette will circle endlessly in the same circular path, bound together but balanced equidistantly by each other’s gravity.
This arrangement of bodies was first discussed by W.B. Klemperer in The Astronomical Journal, vol. 67, number 3 (April, 1962), on pages 162-7, "Some Properties of Rosette Configurations of Gravitating Bodies in Homographic Equilibrium". Hence, they are sometimes known as Klemperer Rosettes. The most famous example in science fiction of such a rosette would be the Puppeteers’ Fleet of Worlds from the Ringworld novels by Larry Niven (who mis-spelled Klemperer as "Kemplerer.")
Rosettes are prone to instability through gravitational perturbation over time. Depending on how well the rosette is set up, these instabilities may not become apparent for centuries or millennia, but they will eventually have to be dealt with. Thus, rosettes, like most megastructures, need active maintenance and periodic adjustment.

ASTEROID ROSETTE
Tech Level: 15
The earliest and perhaps most practical application of the Rosette phenomenon would be by setting up a Rosette of redirected asteroids around a space-industrialized world, most likely in a geosynchronous orbit. The asteroids would serve as sources for both raw materials and as bases to set up colonies in orbit. Placing the asteroids in a rosette configuration would not only contribute to their orbital stability but would also facilitate travel and trade between the hypothetical colonies placed on or in the planetoids.
Advanced factories set up on the asteroids could mine and then process the raw metals within them to produce space elevator cables, allowing a world to quickly and practically set up multiple space elevators for large-scale orbital cargo hauling. If the civilization has the intention of constructing a special-purpose Satellite Halo or Planetary Halo (q.v.), placing multiple asteroids in a Rosette may be a necessary first step. The asteroids would then be disassembled and the Halo constructed, the raw materials needed for its manufacture already in the required orbit.

MOON ROSETTE
Tech Level: 18
A Moon Rosette uses the same idea as an Asteroid Rosette, kicked up an order or two of magnitude. Moon rosettes seem much more likely to be constructed around gas giants than around terrestrial worlds, if for no other reason than the preponderance of moons found around such bodies.
Like an Asteroid Rosette, a Moon Rosette could be used to facilitate manufacturing and resource exploitation (such as mining Helium-3 and isotopes of hydrogen from the gas giant's atmosphere) as well as trade and travel among any colonies set up on the moons. Setting up a rosette could also be a necessary first step in constructing an orbital megastructure around the gas giant.

PLANET ROSETTE
Tech Level: 22
A rosette involving planetary bodies might have a very different purpose than the previous two rosette concepts. A very advanced civilization may redirect most of the terrestrial planets in its solar system into a rosette configuration around its central star, in order to maximize the amount of real estate they can have in the life-supporting zone around the sun. The means to move such immense masses without rendering them completely uninhabitable would require incredibly advanced gravity manipulation or similar technology.
A rosette of life-bearing planets with no central star could also function much as described in the novel Ringworld, serving as a very advanced (and admittedly very comfortable) type of generation ship for a race looking to cross interstellar distances. If a race is advanced enough to create and move a planet rosette, it's also assumed they could very easily create and maintain artificial orbital light sources in place of a sun.

STELLAR ROSETTE
Tech Level: 25
The purpose of creating an artificial rosette involving entire stars is open for speculation, but is a logical progression of the previous rosette ideas. Stellar rosettes might function as beacons for intergalactic travel, or as centralized power sources for immense trans-Dyson sphere structures.
Moving a star could be accomplished by constructing one or more immense rings around it, which could be used to manipulate or constrict the sun’s magnetic field to produce jets of high-speed plasma around one of its poles. If done with precision (which in itself would be a mind-boggling technical feat), the star can be very slowly maneuvered into position over centuries or millennia.

MEGASTRUCTURE ROSETTE
Tech Level: 25
It is also possible to create rosettes of megastructures, both smaller ones and star-centric ones. Dyson Spheres, Ringworlds, and other immense structures could very slowly be maneuvered into a rosette arrangement by super-advanced technologies. The purpose of such an act would be open to speculation. Perhaps the rosette might exist for something as simple as facilitating trade and travel among the cultures of the structures; perhaps the megastructures were created to be vast fortresses, or to power incomprehensible weapons of some kind against some intergalactic threat. Perhaps the building civilization needs the immense power the megastructures can tap for some unknowable experiment.



MEGASTATIONS


Unicron in his 'metal planet' megastation form. Art by Jason Cardy. Image copyright Hasbro.

Megastations
Tech Level: 18
A megastation is a free-floating space structure, usually manned, whose dimensions are so colossal that it approaches the scale of natural moons and planetary bodies. They are also sometimes called Megabases, Metal Planets, Mech Planets, Mech Worlds, Artificial Planets, and Artificial Worlds. Related structures of this scale, such as Balloon Worlds and Planetary Halos, are discussed in their own articles.
Many space opera settings have at least one Megastation as part of their background. The Death Star from the Star Wars films is probably the most famous example of such a structure, followed closely by Cybertron and Unicron from the Transformers universe. O’Neill Colonies from the Gundam anime series can also be considered megastations.
Construction of a megastation (or any megastructure) is usually a colossal undertaking by the building civilization, and so much manpower and resources won't be wasted for trivial reasons. Megastations could serve a number of important purposes. In stories, megastations usually serve as living space, holding many environments and habitats in their interiors. Because they may be big enough to have their own natural gravity field, one holding a breathable atmosphere on its surface could circle a star and sustain thriving cultures on its open surface. In other stories, they may serve as an ark or reserve to contain threatened species and races.
Megastations can be used as platforms for immense weapons and as military bases, as exemplified by the Death Star. They can also be used for industrial purposes. Large enough megastations could be used to process entire asteroids and perhaps even small moons in their interiors. Unicron ‘eats’ small astronomical bodies in a similar manner, but the raw materials extracted are used to maintain the megastation/robot itself.
Building such a structure is a bit more complicated than just laying down a core of trusses in orbit and building out from there. After a certain point, the station's mass will start exerting a significant amount of gravity that will tug continually on its own internal structure. Though for smaller megastations the gravitational pull of a few gigatons of metal might seem negligible, over long periods it will put significant stress on the station's framework.
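For a sense of scale, here is a rough sketch of the surface gravity of two hypothetical stations (the masses, sizes, and density are assumptions for illustration only):

import math
G = 6.674e-11   # gravitational constant
def surface_gravity(mass_kg, radius_m):
    # Gravitational acceleration at the surface of a roughly spherical station
    return G * mass_kg / radius_m**2
# A "small" megastation: five gigatons of structure, about 20 km across
print(f"5 gigatons, 10 km radius: {surface_gravity(5e12, 10e3):.2e} m/s^2")
# A moon-scale station: a 500 km radius sphere of material at 5000 kg/m^3
mass = 5000 * (4.0 / 3.0) * math.pi * (500e3)**3
print(f"500 km solid sphere:      {surface_gravity(mass, 500e3):.2f} m/s^2 (~7% of Earth gravity)")

The small station's self-gravity is indeed tiny, but the moon-scale version pulls on its own structure with a few percent of Earth gravity, which every beam and bulkhead has to resist for the station's entire service life.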
Also of consideration will be the station’s orbit, if any. Tidal stress may put potential strains on the enormous structure that smaller stations could likely ignore, and will have to be taken into consideration as well during design and construction.
Like most other stations, provisions will have to be made to maneuver the structure for station keeping. The larger the station's mass, the more energy and fuel such maneuverings will consume. And for something as immense and ponderous as a megastation, the energies required for even a simple orbital correction could be vast.
Overall shapes for megastations are usually depicted as spheres, rings, disks, or cylinders, in order for the station to be rotated to provide artificial gravity on its inner surfaces. However, almost any shape can be used if rotational artificial gravity is not an issue. In these cases, depending on actual design, inhabitants will either be in total freefall or be under the influence of the megastation’s natural gravity. Very advanced stations may use gravitic technology to generate earth-normal gravity on all decks.
The megastation's general design will also depend on what the station's major function will be. If its main purpose is living space, it will probably very closely resemble space colonies like a Bernal Sphere or O'Neill colony. A military station, like the Death Star, may be mostly power generators by volume in order to serve its many weapons. An industrial megastation may be a free-frame structure in order to take advantage of pervasive microgravity conditions. One type of proposed megastation, a whole-asteroid processing node, would basically be an enormous hollow cylinder that would allow the disassembly of an entire asteroid herded into its interior.
Megastations can be made completely mobile, and in effect become mega spaceships, but are still usually called 'stations' as people will just assume that they're immobile because of their size. For example, neither the Death Star nor Unicron was ever actually called a ship, despite both being able to travel easily between star systems.



RINGWORLDS


A distant view of Larry Niven's Ringworld. Artist unknown.

Forerunner Halo
Tech Level: 18
Gossamer Ring
Tech Level: 19
Orbital
Tech Level: 21
Ringworld
Tech Level: 23
Ringworlds are enormous artificial worlds constructed in a circular shape thousands or millions of miles in diameter. They share some similarities to Planetary Haloes, but either are independent space structures or encircle stars instead of planets.
The idea was first introduced and popularized in Larry Niven's seminal 1970 science fiction novel Ringworld. His artificial world that circled a sun remains the prototypical definition of a megastructure, and is perhaps the second most famous example of such a structure after the Death Star from the Star Wars film series.

FORERUNNER HALO
Tech Level: 18
A spacecraft approaches the outer shell of a Forerunner Halo. Image (c) Bungie Studios.
This structure comes from the Halo series of computer games by Bungie Studios. It is a circular ribbon of super-strong material some 10,000 kilometers in diameter and roughly 320 kilometers wide, in orbit about a gas giant planet. The structure is set spinning to produce artificial gravity along its inner surface through centripetal acceleration. The inner surface is also sculpted to resemble a life-bearing world, complete with a breathable atmosphere, waterways, seas, land masses, forests, and so on, and stocked with lifeforms from various worlds. The total habitable surface area of a Forerunner Halo is about 10 million square kilometers, or slightly more than the surface area of Canada. Its average thickness is a little over 22 kilometers, with much of the volume not dedicated to the habitat given over to maintenance machinery and other mysterious technology.
In the Halo universe, seven of these constructs are known to exist, built many thousands of years ago by a mysterious alien race called the Forerunners to contain and study a parasitic alien lifeform called the Flood.
Structures such as the Halo are under constant enormous structural stress. Remember that the Halo is basically the size and mass of a continent, spinning so that each section wants to rip away from the others and go careening into space just from pure centrifugal force. No normal kind of rock, crystal, or metal could stand up to those kinds of stresses, no matter how thickly made. And unlike a planet, it does not have a planet's gravitational field to counteract this force. So in order to prevent the ring from flying apart, the outer shell of the Halo is constructed from some unknown Ultra-Tensile-Strength material, on the order of thousands of times the resiliency of carbon steel, just to keep the artifact together. The even larger habitable artifacts in this class, Orbitals and the Ringworld, require materials of even vastly greater strength to hold themselves together.
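A rough sketch of the numbers involved, assuming one full Earth gravity of spin at a 5,000 km radius and an average structural density of 3,000 kg/m^3 (the density is an assumption; the simple hoop-stress formula also ignores the habitat load and the structure's own gravity):

import math
radius = 5.0e6     # ring radius, m (10,000 km diameter / 2)
g = 9.81
density = 3000.0   # assumed average density of the structure, kg/m^3
rim_speed = math.sqrt(g * radius)            # rim speed needed for 1 g of spin gravity
period = 2 * math.pi * radius / rim_speed    # time for one full rotation
hoop_stress = density * rim_speed**2         # tensile stress in a free-spinning hoop
print(f"Rim speed:   {rim_speed/1000:.1f} km/s")
print(f"Rotation:    one turn every {period/3600:.1f} hours")
print(f"Hoop stress: about {hoop_stress/1e9:.0f} GPa (high-strength steel fails around 1-2 GPa)")

The stress comes out in the hundreds of gigapascals, which is why the fiction has to reach for unknown super-materials rather than anything in a present-day catalog.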
Ringworlds of this size may arise from Planetary Halo technology (not to be confused with the name of the computer game here; see the separate article on Planetary Haloes). Planetary Haloes are rings constructed around terrestrial planets, often arising as a consequence of Space Elevator technology. Numerous space elevators are built around the rotational equator of a planet, and the ring is slowly built up in geosynchronous orbit where each elevator system's center of gravity is located.
In this case, the ring is specially reinforced and designed to be mobile, instead of just acting as a static in-orbit habitat. When complete, the ring is detached from the mooring space elevators and very slowly nudged away from the planet along its poles. Once clear, the new independent Halo can be spun up for gravity and maneuvered into a new orbit.
This scheme would allow a steady stream of building material to the new Halo for minimal cost. Unlike other means of building a ringworld, or most megastructures, if built around a life-bearing world it does provide a means by which megatons of organic material--topsoil, breathable atmosphere, plant life, and so on--can easily be transported up to fill the habitat in a short period of time.
The Forerunner Halo may or may not have used this technique; if so, it would have to have been constructed around a much smaller, slower-spinning world than Earth, such as a moon. Since Forerunner Haloes have so far been seen only in orbit about gas giants, this would seem to follow. A Halo built around Earth with this method would have a diameter of about 52,000 miles, more than eight times the size of the Forerunner artifact.
The Tech Level of the Forerunner Halo as a habitat is 18, but there are a number of indications that the Forerunner civilization was actually more advanced than this, and that the Haloes may in fact hold a number of ultra-tech secrets not yet revealed.

GOSSAMER RING
Tech Level: 19
Perhaps the most practical type of ringworld to construct is a so-called gossamer ring--a structure encircling a star made up almost entirely of a sparse framework and micron-thin solar cell material. Its sole purpose would be to gather energy from its parent star and beam it via microwaves or laser to other worlds or structures nearby.
Unlike other artifacts of this type, the gossamer ring does not need fantastically exotic materials for its construction. A thin framework made of carbon nanotube filaments would probably prove workable, and the ultra-thin but strong material used for solar sails, stretched between the framework, would suffice for converting sunlight into power.
Because gossamer rings would be constructed solely for the purpose of energy gathering, they can be constructed much closer to the sun than their larger, life-supporting cousins. The ring could be from several thousand to a million or so miles wide, depending on how much sunlight it is designed to capture, and would most likely have a relatively tight radius around its parent star, probably some 25 to 40 million miles, in order to maximize its energy absorption per square meter. Much farther in, the ring would risk deformation and damage from the likes of intense solar winds, radiation, and solar storms. This of course assumes a sun-sized star. Different stars with different diameters and radiative output would have different minimal radii for gossamer rings.
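A rough sketch of how much sunlight such a ring could intercept, using the dimensions above as illustrative inputs and ignoring conversion efficiency:

L_SUN = 3.8e26                 # total output of a Sun-like star, watts
ring_radius = 30e6 * 1609.0    # 30 million miles, in meters
ring_width = 1e6 * 1609.0      # 1 million miles wide, in meters
# The star radiates over a sphere of area 4*pi*r^2; a band of width w on that
# sphere has an area of roughly 2*pi*r*w, so the ring intercepts a fraction w/(2r).
fraction = ring_width / (2 * ring_radius)
power = L_SUN * fraction
print(f"Fraction of the star's output intercepted: {fraction:.3f}")
print(f"Raw power collected: {power:.2e} watts")

Even this comparatively modest megastructure intercepts more than a percent of the star's entire output, many orders of magnitude beyond present-day civilization's total power consumption.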
Both light pressure from the parent star as well as its own rotational momentum would be enough to keep the structure stable and rigid. The gossamer ring has much in common conceptually with Statites, and may indeed be a natural long-term outgrowth of that technology. Solar-orbit statites may steadily grow in size and number until linking them together in a chain around the sun would seem the next logical step.
Like most rotating structures in space, it would be subject to progressional instability, where it would slowly slide off center in the plane of its orbit if left to its own devices too long. The presence of attitude thrusters would help offset this. However, the ring could also heighten or lower its reflectivity over large sections of its sunward-facing surfaces, using light pressure to very slowly correct slight perturbations in its orientation around its parent star.
Gossamer rings would also require much less building material than the larger ringworlds; maybe only the mass of a small moon would be required for construction. The shadow squares of Larry Niven's Ringworld novels have some aspects of a gossamer ring.

ORBITALS
Tech Level: 21
Orbitals come from the universe of Iain M. Banks' Culture novel series.
Orbitals are rings of ultra-tensile-strength material a few million kilometers in diameter and between one thousand and six thousand kilometers wide. They are spun up to simulate gravity along their inner surfaces, which are lined with soil and water and a breathable atmosphere in order to support life. The edges of the inner surface are lined with walls hundreds of kilometers high in order to keep in the atmosphere.
Orbitals circle their parent star much like a normal planet, but are canted at an angle to their orbital plane, so at any given moment the half of the inner surface facing away from the star receives sunlight while the nearer half remains in shadow. Since an Orbital is spun up to complete one full rotation in approximately 24 hours, this creates a very convenient day-night cycle for its inhabitants.
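A quick sketch of the relationship between the day length, the spin gravity, and the ring's size (assuming exactly one Earth gravity and exactly a 24-hour rotation; quoted figures for Orbitals vary somewhat between sources):

import math
g = 9.81
day = 24 * 3600.0          # one rotation per 24 hours
omega = 2 * math.pi / day
radius = g / omega**2      # the radius at which that spin rate produces 1 g
print(f"Required radius:   {radius/1e9:.2f} million km")
print(f"Required diameter: {2*radius/1e9:.2f} million km")

Getting a comfortable day length and a comfortable gravity out of the same spin is exactly what forces Orbitals up to diameters of millions of kilometers in the first place.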
Orbital rings are separated into individual "plates", which can be thought of as individual continents. Different plates can have wildly varying characteristics from one another. Some plates may closely resemble the natural habitats found on normal life-bearing worlds. Others may have unusual structures or features of a more wild or artistic bent, such as kilometer-high mountain-bridges over large expanses of water or floating colonies on an Orbital-spanning river.
Orbital spin rates need to be high, not only to provide a convenient day-night cycle and the right amount of gravity, but also to keep the entire structure stable and rigid. Because of the immense stresses this places on such a huge mass, ultra-tensile-strength materials are needed for its construction, which in the Culture universe are also supplemented by various force fields and quantum force manipulation. This super-material is thought not to be quite at the same level of toughness as scrith (see below), but it would still be thousands of times more resilient than any material currently known. Even so, if an Orbital were ever to lose power on a large enough scale that its structural-support force fields collapsed, there is a good possibility that the structure would deform and perhaps even tear itself apart from centrifugal force alone.
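The link between spin period, radius, and simulated gravity is just the centripetal-acceleration relation, a = (2*pi/T)^2 * r. The sketch below is only a consistency check; the exact dimensions quoted for Culture Orbitals vary between sources, and Banks also invokes force fields, so treat the output as illustrative rather than canonical.

# Illustrative check of spin period vs. radius vs. simulated gravity for a
# rotating ring habitat.
import math

g = 9.81                      # target surface gravity, m/s^2
T = 24 * 3600                 # spin period of 24 hours, in seconds
omega = 2 * math.pi / T

# Radius at which a 24-hour spin yields exactly 1 g:
r_1g = g / omega**2
print(f"Radius for 1 g at a 24-hour spin: {r_1g/1e9:.2f} million km "
      f"(diameter ~{2*r_1g/1e9:.1f} million km)")

# Conversely, the gravity a 24-hour spin produces at a 7-million-km radius
# (i.e., a 14-million-km-diameter ring, the figure quoted in the text):
r_quoted = 7.0e9
print(f"Gravity at that radius: {omega**2 * r_quoted / g:.1f} g")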
The constructs are each overseen by a highly advanced and powerful AI called a Hub Mind, which handles both active maintenance and the day-to-day needs of the Orbital's inhabitants.



ALDERSON DISKS


A simplified diagram of an Alderson Disk.

Alderson Disk
Tech Level: 23
Alderson Disk Stellar Generator
Tech Level: 23
An Alderson disk is a colossal circular structure built around a sun, with the star at its exact center. It is named after Dan Alderson, who originally proposed the idea. The concept occasionally pops up in science fiction, most notably in the short-lived Malibu superhero line Ultraverse, where an Alderson Disk called the Godwheel played a very central role.
Documentation for this idea is extremely sparse; much of this article is speculation on the part of the author.

ALDERSON DISK
Tech Level: 23
The Disk in its original conception would measure thousands of miles through its cross-section, and would extend from within the orbit of Mercury out to the orbit of Mars or beyond. A wide variation on these dimensions is possible, depending on purpose or design.
Such an immense construct would likely out-mass its central star, and the gravity would pull toward the central plane of the disk. In other words, no matter what side of the disk you stood on, your ‘down’ would always be oriented toward the disk under your feet. Because of this, the disk does not have to rotate to produce artificial gravity, and can avoid some of the structural stresses a Ringworld and other similar rotating megastructures may have to endure to produce Earth-like gravity.
Even so, an Alderson disk would still experience vast structural stresses just from its mass spread across such enormous dimensions, requiring material strength of a magnitude completely unattainable by today's technology. Ultra-tensile-strength materials, such as the scrith described in the novel Ringworld or the force-field-enhanced metals of Iain M. Banks' Culture novels, may be needed to even contemplate building something as mind-bogglingly immense as an Alderson Disk. And as with other megastructures of its size, an entire solar system or more would have to be dismantled for the resources needed for construction, and raw mass might need to be drawn off of entire stars.
How much of the Disk would be habitable would depend on one's exact definition of the term. It could mean habitable solely by humans with minimal technology (in terms of average temperature, roughly 20 to 110 degrees Fahrenheit, depending on environment), or habitable by any kind of life, including extremophiles (which would extend the range from about -70 degrees to more than 140 degrees Fahrenheit).
Taking the latter definition, and figuring for a star like the Sun, the habitable zone would stretch from approximately 88 million miles out to about 127 million miles from the sun. The human-comfortable 'Eden' zone would probably be within a small variance of Earth's natural distance from the sun, say from about 90 million to 100 million miles distant.
Using greenhouse techniques, say by having segmented sections with different gas mixtures, or by just producing millions of large domes around the outer edge of the Disk’s ‘natural’ habitable zone, the builders could extend the habitable zone to perhaps more than 150 million miles from the sun.
Taking into account both sides of the disk, this creates potential living space on the Disk of several hundred million times the surface area of Earth.
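A quick back-of-the-envelope check of that figure, using the zone boundaries given above, purely for illustration:

# Rough estimate of the habitable area of an Alderson Disk, both faces.
import math

MILE = 1609.34
EARTH_AREA = 4 * math.pi * 6.371e6**2          # ~5.1e14 m^2

inner = 88e6 * MILE                            # inner edge of habitable zone
outer = 150e6 * MILE                           # outer edge, with greenhouse tricks

annulus = math.pi * (outer**2 - inner**2)      # one face of the disk
both_sides = 2 * annulus

print(f"Habitable area, both faces: {both_sides:.2e} m^2")
print(f"In Earth surface areas: {both_sides / EARTH_AREA:.2e}")
# -> on the order of a few hundred million Earths; even the narrower
#    88-to-127-million-mile zone alone gives well over a hundred million.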
In order to maximize available sunlight for both habitation and energy gathering, the Disk may be built on a very gentle slope from its inner edge to its outer edge. In order to keep the atmosphere from seeping out into space along its edges, enormous walls would be built along the inner and outer edges of the habitable zone. In fact, in order to maximize the amount of livable space, different concentric sections of the habitable zone may be walled off from each other in order to create individualized gas mixtures, surface pressures, and biospheres optimized for the amount of sunlight received at that point. Inhabitants may have to pass through tunnels or enormous airlocks to get from one section to another. Seen from space, the Disk's habitable zone on both sides may look like dozens or hundreds of different stepped circles, like a vast rounded ziggurat.
A day-night cycle could be created by having the central sun 'bob' up and down along the Disk's vertical axis. The gravity of the disk would keep pulling the star toward the Disk's center of gravity, which lies right in the empty central zone where the star resides. So when the star's momentum carried it upward, gravity would slow it down and eventually pull it back toward the center, and its momentum would then carry it over to the other side of the disk before it was pulled back again. It would be very much like a supermassive pendulum, allowing the sun to rise and set over each surface of the disk over a 24-hour or similar period.
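A rough sketch of that pendulum mechanic: treating the Disk as a thin annulus of uniform surface density and the star as a test mass making small excursions along the Disk's axis, the restoring pull is roughly proportional to displacement, giving simple harmonic motion. Every input below (disk thickness, density, inner and outer radii) is an assumption for illustration only; with these values the natural bob period comes out to days rather than hours, so the builders would have to tune the Disk's mass distribution, or simply accept longer days, to hit a particular cycle.

# Rough model of the bobbing central star as a harmonic oscillator.
# On the axis of a uniform annulus (inner radius a, outer radius b, surface
# density sigma), a small axial displacement z feels a restoring acceleration
#   g_z ~ 2*pi*G*sigma * (1/a - 1/b) * z,
# i.e. approximately simple harmonic motion. All inputs are illustrative guesses.
import math

G = 6.674e-11
MILE = 1609.34

thickness = 3000 * MILE          # assumed disk thickness, ~3000 miles
density   = 5000.0               # assumed bulk density, kg/m^3 (rocky material)
sigma     = thickness * density  # areal density, kg/m^2

a = 30e6 * MILE                  # assumed inner edge, ~30 million miles
b = 150e6 * MILE                 # assumed outer edge, ~150 million miles

omega_sq = 2 * math.pi * G * sigma * (1/a - 1/b)
period   = 2 * math.pi / math.sqrt(omega_sq)

print(f"Natural bob period: {period/3600:.0f} hours (~{period/86400:.1f} days)")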
Another solution would be what was described in Malibu Comics' Godwheel, which involved having a binary star at its center instead of a single sun. The stars would revolve around each other, and the Disk would be built at an angle where the stars would be hidden from view for at least a short time in their cycle, allowing a true night.
Still a third option would be to have a set-up similar to Niven’s Ringworld, where a ring of gigantic connected ‘shadow squares’, made up of solar-sail-like material millions of miles wide, would rotate around the star’s equator within the central open space of the disk, to block out the sun on the disk’s surface for hours or days at a time.
The environment on an Alderson disk would have a profound twilight character. No matter what day-night system, if any, is used, the landscape would look like it is perpetually undergoing sunset or sunrise. Most plants and trees would adapt and invariably grow on slants, bending toward the light.
Because a life-supporting Alderson Disk would not rotate, it would not have the same drift-instability problems as some other megastructures. The central star would also tend to settle at the Disk's center of gravity, avoiding the danger of brushing against the megastructure that existed in the case of the Ringworld. However, a means of attitude adjustment may still be needed, as the star's and the disk's revolution through the galaxy can still lead to imbalances in the system over the very long term. With an Alderson Disk, this can prove very difficult, as unlike a ringworld or Dyson sphere it may actually mass more than its parent star. In fact, it would probably be 'easier' to adjust the position of the central star than the disk, perhaps by using the disk to generate a massive magnetic field that could nudge the star back into its proper position.
This all describes only a ‘simple’ life-supporting Alderson Disk. The super-technology needed for this construct, here estimated at level 23 of the Tech Level scale, could actually make the entire surface area of the Disk potentially habitable, as well as a large portion of its interior. This would involve technologies not easy to describe or comprehend by us in the 21st Century.

ALDERSON DISK STELLAR GENERATOR
Tech Level: 23
An alternate version of an Alderson Disk could be built around stars that have powerful magnetic fields. This may include neutron stars, in which case the disk would likely be much smaller than the solar-system-sized one described above. These generator disks would be uninhabitable, and would spin as fast as their structure could tolerate.
These Disks would have a built-in lattice of conductive (or better, superconductive) material, and their motion through the star's magnetic field would generate enormous amounts of current. The disk could therefore be used as an enormous power generator, like a vastly oversized homopolar generator. Unlike the habitable disk, gravity would not be a major consideration, so the disk could be made hollow and much lighter, allowing it to be spun up or down much more easily as needed.



DYSON SPHERES


A simplified cutaway diagram of a Type II Dyson Sphere

Type I Dyson Sphere
Tech Level: 20
Light Pressure Dyson Sphere
Tech Level: 20
Type II Dyson Sphere
Tech Level: 22
Mobile Dyson Sphere
Tech Level: 23
Solid Dyson Sphere
Tech Level: 25
Sunken Dyson Sphere
Tech Level: 27
Dyson Spheres are enormous physical shells constructed to englobe stars. Dyson Spheres have proven to be one of the most mind-boggling concepts in science fiction, representing an almost incomprehensible scale of engineering that would seem godlike to the people of today.
Highly specialized types of Dyson Sphere objects, like Matrioshka Brains, will be discussed in other articles.
The name comes from Freeman Dyson, the renowned physicist. In 1960, around the same time Project Ozma made the very first radio search for alien civilizations, he wrote a paper called "Search for Artificial Stellar Sources of Infrared Radiation" in the journal Science. He suggested that as a civilization advanced, its energy needs would grow exponentially, so it would eventually turn to the most potent nearby source of power--its sun--and englobe it to trap all of its energy output directly. He suggested looking for large, low-level infrared sources (a Dyson Sphere would block all sunlight but would still have to shed excess heat) as a telltale sign of such structures.
Dyson Spheres have become one of the much-loved concepts in science fiction, and have appeared in a great many stories. The earliest known source of the concept is Olaf Stapledon's The Star Maker from 1937, which helped inspire Dyson's more serious scientific speculation more than two decades later. Other novels with Dyson Spheres include Across A Billion Years by Robert Silverberg, Farthest Star and Wall Around A Star by Jack Williamson and Frederik Pohl, The Ring of Charon and The Shattered Sphere by Roger MacBride Allen, the Orbitsville series by Bob Shaw, and The Time Ships by Stephen Baxter. The Star Trek: The Next Generation episode "Relics" also featured a Dyson Sphere, and is probably the example best known to the public at large.
A Dyson Sphere’s primary purpose would be energy gathering. As Dyson pointed out in his paper, humanity’s energy needs have been growing exponentially all throughout its history, and if this trend continues there will come a time when Earth-based means of creating energy simply won’t be enough. There is, however, an extremely potent and long-lived power source just 93 million miles from us--the Sun. Englobing it to trap all its energy would supply humanity with the power to support a population of trillions and perform technological feats unimaginable today.
A secondary purpose of such a construct would be living space. If a Dyson Sphere were created with a radius equal to Earth's distance from the sun (or whatever the radius of a particular star's life zone happens to be), its inner surface would receive just enough sunlight to support liquid water and organic life. The inner surface would be lined with soil, seeded with flora and fauna, and pressurized. If the entire inner surface of a life-supporting sphere were used, it would create a habitable surface area equal to nearly one billion Earth-sized worlds.
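A quick, purely illustrative check of that "nearly one billion" figure:

# Inner surface area of a life-zone Dyson Sphere at roughly 1 AU.
import math

AU = 1.496e11                                 # meters
EARTH_AREA = 4 * math.pi * 6.371e6**2         # Earth's total surface, ~5.1e14 m^2
EARTH_LAND = 1.49e14                          # Earth's land area only, m^2

sphere = 4 * math.pi * AU**2
print(f"Sphere inner surface: {sphere:.2e} m^2")
print(f"In total Earth surfaces: {sphere / EARTH_AREA:.2e}")   # ~5.5e8
print(f"In Earth land areas:     {sphere / EARTH_LAND:.2e}")   # ~1.9e9
# Depending on which comparison you use, the livable area works out to roughly
# half a billion to two billion Earths -- hence "nearly one billion."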
Dyson Spheres could also be used for other, secondary purposes, such as defense, quarantine, herding stars, and so on.
Advanced artificial gravity technology would be an absolute necessity if one wanted to use any part of the Sphere for an artificial habitat. A Dyson Sphere could be made to rotate, producing spin gravity along the inner surface close to its rotational equator. To use more than just the equatorial regions, however, true artificial gravity technology would have to be developed and employed on a massive scale.
Creating a Dyson Sphere would be an enormous undertaking for any civilization, and would require entire planets to be gutted and disassembled for raw materials. Construction time is estimated to be from several centuries to many millennia for all but the most god-like cultures. Because they would be such enormous investments for the building civilization, Dyson Spheres would be designed to be as long-enduring as possible.
Because the laws of entropy still apply and even the most advanced energy gathering can never be 100% efficient, Dyson Spheres will leak thermal radiation--waste heat. So even though all of the star's radiative output is intercepted by the sphere, these objects will still glow in the infrared spectrum. Astronomers are well aware of this telltale signature, and despite decades of searching, no Dyson objects have ever been found by human telescopes.
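For the curious, the temperature of that waste-heat glow is easy to estimate. The sketch below assumes a 1 AU shell radiating the star's full output from its outer surface as a simple blackbody; both the radius and the blackbody assumption are illustrative simplifications.

# Illustrative estimate of a Dyson Sphere's infrared signature, from
# L = 4*pi*R^2 * sigma * T^4.
import math

L_SUN = 3.83e26          # watts
SIGMA = 5.67e-8          # Stefan-Boltzmann constant
R     = 1.496e11         # assumed shell radius of 1 AU, meters

T = (L_SUN / (4 * math.pi * R**2 * SIGMA)) ** 0.25
print(f"Outer shell temperature: ~{T:.0f} K")
# -> roughly 390 K for a 1 AU shell; a larger shell runs cooler. Either way
#    the object glows in the infrared while emitting no visible starlight,
#    which is exactly the signature Dyson suggested searching for.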

TYPE I DYSON SPHERE
Tech Level: 20
These are also sometimes called Dyson Swarms, and are what Freeman Dyson actually proposed in his original 1960 paper. The more solid versions of a Dyson Sphere came later, mostly from science fiction. An example of a Type I Dyson Sphere, called the Array, is occasionally seen in the comic book series Tom Strong. Olaf Stapledon's seminal novel The Star Maker also featured this kind of sphere.
A Type I Dyson sphere is made up of many billions, if not trillions, of small independently-moving bodies circling the sun. Their orbits are layered and synchronized in such a way that they intercept all of the light given off by the star.
A Dyson swarm may not start out as a planned object; it may evolve over time as the owning civilization expands and adds more and more habitats and solar power collectors as it advances. A Dyson Swarm may be a necessary preliminary step to creating the more advanced versions of these objects, as all the construction materials and assembly machinery would have to be carefully laid out in various orbits before actual assembly began.
The thickness of the swarm would vary depending on its exact architecture and working components. The objects may be held in a tight layer only a few thousand miles thick, in order to facilitate easier transportation between all elements. Or it may be many millions of miles thick, with inner elements gathering the majority of the sunlight and outer elements containing enclosed habitats or manufacturing facilities.
Type I spheres would also be the easiest of this class of object to spot with distant telescopes, as they would leak the greatest amount of thermal radiation.

LIGHT PRESSURE DYSON SPHERE
Tech Level: 20
Also called a Gossamer Sphere or a Dyson Bubble, this is basically a star-englobing solar power array. This type of star envelopment is accomplished with a thin framework and large swaths of micron-thin, solar-sail-like material, used to gather the star's light for power. This energy is then beamed to nearby locations for use or storage.
Light pressure from the star alone is enough to keep the sphere rigid and "inflated," though it may also be rotated to help ensure structural stability. The sphere and its components need not revolve around the star, either; the entire bubble may be nothing more than a collection of super-sized statites, counteracting the star's gravity through photon pressure alone.
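To illustrate why such a bubble can hover at essentially any radius: both sunlight pressure and the star's gravity fall off as the square of distance, so the balance depends only on how lightweight the material is per unit area. A rough sketch, assuming a perfectly absorbing sail around a Sun-like star:

# Areal density at which light pressure exactly cancels the Sun's gravity for
# a statite (perfectly absorbing sail). Both forces scale as 1/r^2, so the
# result is independent of distance from the star.
import math

G     = 6.674e-11
M_SUN = 1.989e30         # kg
L_SUN = 3.83e26          # watts
C     = 2.998e8          # m/s

# Balance: L*A / (4*pi*r^2*c) = G*M*m / r^2  ->  m/A = L / (4*pi*G*M*c)
critical_density = L_SUN / (4 * math.pi * G * M_SUN * C)
print(f"Critical areal density: {critical_density*1000:.2f} g/m^2")
# -> about 0.77 g/m^2; a perfectly reflecting sail doubles the photon thrust
#    and so can support roughly twice that. Sail plus framework must stay
#    under this budget for the bubble to float on light alone.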
Unlike other types of Dyson Sphere, a Gossamer Sphere need not take possible habitation into account, and can be constructed at a radius much closer in to the central star. The sphere may be a loose swarm or one consolidated shell; both configurations would work equally well for energy gathering. A single consolidated sphere would require less overall material, whereas multiple layers of a swarm would allow for more redundancy.
In fact, a Gossamer Sphere would require much less building material than any other object of this class. Where other Dyson Spheres would require the disassembly of most or all of the terrestrial bodies in a solar system, a Dyson Bubble could be built using only the mass of a single large moon.



POCKET UNIVERSES



A tethered pocket universe will usually have one or more permanent openings through which one could fly, drive, or perhaps even walk. Original artist unknown.

Pocket Universe Accessor
Tech Level: 19
Pocket Universe Gateway
Tech Level: 23
Pocket Universe
Tech Level: 25
Pocket Universe Generator
Tech Level: 26
A pocket universe is a portion of our own space/time that has been deliberately truncated off from the rest of the universe, to exist independently either at a tangent to, or completely outside of, our own continuum. They are sometimes also called pocket dimensions or baby universes.
Pocket universes have become a very common motif in soft science fiction, super hero stories, and even fantasy worlds. However, they do occasionally show up in more serious science fiction, especially in stories that deal with more esoteric quantum theories. Examples of artificial Pocket Universes have been seen in the novels Eon and Eternity by Greg Bear, in the Traveller RPG, in various Superman Comics, in the Ben 10 cartoons, and in Dr. Who, where one apparently makes up the interior of the TARDIS.
Some fictional cosmologies assume pocket dimensions were naturally formed during the Big Bang, as a result of extreme fracturing of space/time at the moment of creation. Some also believe that they may be a consequence of certain ongoing natural processes, especially involving multiple universes, brane theory, and the like. However, this article addresses only pocket universes that are created artificially.
Manufactured pocket universes are at the extreme far horizon of possible future technology, one that seems nigh-magical to us today. Though their creation seems unlikely according to our current understanding of physics, if they do prove possible, they would be one of the most powerful expressions of technology imaginable--each would be an artificial, if limited, cosmos unto itself.
Because pocket universes are a very 'soft' science fiction technology, this article is almost pure speculation, based on musings by theorists and ideas explored in science fiction.
Creating pocket universes would require an intimate understanding of dimensions other than the four (three of space and one of time) that we're familiar with. It has been theorized, as in Einstein's General Theory of Relativity, that these familiar dimensions are curved into a higher dimensional space, often called hyperspace, much as a two dimensional piece of paper can be curved through our own three dimensional world. Just as dots on that paper could not easily perceive the curving of their two-dimensional universe, we too are often blissfully unaware of this hyperspace giving greater overall form to our more familiar 3 + 1 dimensional space. But we have seen evidence for this warping of space both directly and indirectly, in the effects of massive objects on planetary orbits and in the gravitational lensing observed on stars in deep space.
A two dimensional metaphor for the nature of pocket dimensions may be to envision the universe as a perfectly flat plane of pliable material with no thickness, such as rubber or plastic film. This very stretchy material, with the proper techniques and application of force, can be bent and bumped and warped in all kinds of ways. You can even force a very deep, curving hole that breaks through it entirely, creating what we might call a black hole or a singularity.
But what you can also do is stretch this material and push on it until it forms a distinct bubble separate from the rest of the plane of pliable material. You can constrict the tether of material connecting the bubble to the plane until only a narrow ribbon remains, or you may truncate the bubble off and have it resting on the surface of the plane. You can also have the bubble float free entirely.
Keep in mind these are metaphors for a very basic understanding only. Remember, we’re talking about our 3+1 space warped into one or more higher physical dimensions, something that’s not necessarily easy for our poor analog meat brains to envision.
So how exactly would one go about creating such a pocket universe? The extreme warping of space may be accomplished through the use of highly advanced and powerful artificial gravity technology, and perhaps combined with the creation and fine control of singularities and wormholes. Extremely sophisticated manipulation of the universe’s quantum scale may also be necessary. All this points to an absurdly high technological level, at the very horizon of what we in the 21st century may be capable of comprehending.

How to create a pocket universe.
TYPES OF POCKET UNIVERSES
Pocket universes in science fiction seem to come in three general types: tethered, tangent, and severed.
A tethered pocket universe is one that still has an easy physical connection to its parent cosmos, through what is for all intents and purposes an unusually stable and robust wormhole connection. Entering the miniature cosmos might be just as simple as walking right into it. Though a ‘tether’ is used as an example in the 2D/3D metaphor, in a 3D/4D reality this connection could look like any kind of three-dimensional volume, perhaps like an actual doorway or opening if built into another artifact, or maybe a visual discontinuity of some kind floating directly in mid-air or space.
These 'tethers' usually come in two varieties: open and directional. An open access can be entered and exited from any direction on either side of the portal; in this way, it acts very much like a conventional wormhole mouth. A directional access can only be entered and exited from one or perhaps a few directions. The portal would be invisible and undetectable except from the designated directions from which it can be accessed. In fact, one could walk or drive a vehicle right through the position where the entrance is supposed to be and never know it was there if approaching from the wrong direction. From the proper direction, however, the entrance to the pocket universe is clearly visible.
Tethered pocket universes may have more than one tether point, and where this portal connects in both the pocket universe and in the main universe may be made movable. The Way, from Greg Bear’s novel Eon, is an example of a tethered pocket universe. Dr. Who’s TARDIS may be another, if one considers the interior of the Doctor’s ship to be a pocket universe that ‘travels’ through time and space by merely shifting its tether point (the timeship’s entryway, aka the blue police box.)
A tangent pocket universe closely coexists side by side with its parent universe, but is physically truncated off from it. The two cosmi may still be attached through different layers of subspace and/or they may still share the same basic levels of quantum foam, but unaided travel from one to the other is not possible.
These pocket realities require specialized devices called accessors (see below) to enter and exit them. However, as the relationship between parent and offspring universe is static, tangent pocket universes can be reliably entered and exited any number of times, though usually from one or more predetermined sets of coordinates in both the main cosmos and the pocket dimension. Usually (though not always) the act of travelling from a parent universe to one of its offspring, even a tangent one, is depicted as being easier to achieve than travelling from one full dimension to another. Though physical travel from the pocket universe to the main one and vice versa may be difficult, signals and other types of information exchange may be easier to obtain, depending on the nature of the subspace/quantum separation between the two. The Phantom Zone from Superman lore could be considered a tangent pocket universe, as could Grandfather's pocket universe from the Traveller RPG.
A severed pocket universe is the same as a tangent one, except that it is no longer attached to its originating cosmos in any way. Neither travel nor the exchange of signals and information can be readily obtained from it. It is at this point truly a universe unto itself, though it may still have other restrictions of a pocket universe. It may hold its 'position' relative to the main universe, meaning access to it can be reliable, or it may drift through inter-universal quantum fluctuations and probabilities, meaning access to it may fluctuate between easy and all but impossible. Travel to a severed pocket universe is often depicted as much harder to achieve than to a tangent one, and may require something as resource-intensive as creating an interdimensional wormhole.
FEATURES OF POCKET UNIVERSES
Pocket universes in science fiction are usually limited in some ways that the mainstream parent universe is not. The most common type of limiting factor is overall volume. While the real universe may measure billions of light years across, pocket dimensions are usually much smaller, running from a few microns across to many thousands of kilometers. Some may even be large enough to contain entire worlds or solar systems, but these aren’t typical. Some very advanced pocket universes may have one or two physical parameters that can be considered the same scale as our universe. For example, the Way from Greg Bear’s Eon is seemingly infinite, but only in one direction.
It's unknown whether the expansion of space/time in our universe carries over into pocket universes. In stories it is implied that the pocket universe remains at a static volume once created, but this might only be in the short term as humans measure time. Over millennia and eons, the pocket universe may keep expanding like the normal universe does, but perhaps at a different pace.
Another common feature of pocket universes is their extreme hyperspatial curvature. The parent universe may be open or saddle-shaped in its overall structure, but there is often little doubt that a pocket universe is definitively a closed cosmos that curves back on itself. For example, if a resident traveled straight in a single direction, the curvature of space in the small universe would eventually bring the traveler back to the very same spot he started from.
People in the pocket dimension may not notice this effect in casual everyday circumstances, but further investigation can reveal the truth. For example, in the old Land of the Lost TV series from the 1970s, the characters one day looked through a telescope and saw their own distant backs, confirming they were in a pocket universe. In Grandfather's pocket universe from the Traveller RPG setting, the background space in the truncated solar system is slowly turning from black to gray and in the far future will one day be a bright white, as the light from the pocket dimension's star has nowhere to go and endlessly cycles through the curved space/time of the pocket dimension.
A pocket universe may also experience the passage of time differently once separated out from its parent cosmos. The flow of time within may be greatly accelerated or greatly slowed down compared to the mainstream universe.
Physical laws may also be altered by the engineers in order to fit a particular need its builders may have. Alternately, something may go wrong in the creation process, and the physical laws within may be skewed in some random way. This may prove very hazardous to potential travelers to the pocket universe, as even minute alterations to the constants and laws of our own world can cause our bodies or our technology to spontaneously stop working.
The creators of the pocket universe may fiddle with physical laws in order to create items or technology that may not be possible within their normal cosmos. For example, in the novel Anathem by Neal Stephenson, the orders of the avout used specially-designed particle accelerators to create microscopic, short-lived pocket universes of slightly skewed physical laws in order to create New Matter, a material with many unusual properties. In the Traveller RPG setting, the Ancients used specially-designed pocket universes with collapsed dimensions to create their robust teleportation devices.
Pocket universes may also be created with inherent, large-scale instabilities that will cause them to collapse after a set amount of time. This may be a deliberate feature put in by its creators, or it may be the result of an imperfect formation. When a pocket universe collapses, it may just evaporate away, fading away into the quantum foam bit by bit like a ghost, or it may spontaneously collapse into a singularity. An example of a collapsing pocket universe can be found in the Star Trek: The Next Generation episode "Remember Me," where an experiment in warp bubble manipulation creates an inherently unstable pocket universe that traps Dr. Crusher.


POCKET UNIVERSE ACCESSOR
Tech Level: 19

If a civilization is advanced enough to cross interdimensional barriers, it can also find ways to access tangent pocket universes that may already exist. The ability to create such miniature universes is still reserved for higher tech levels, however. Detecting the existence of, and determining the hyperspatial coordinates of, a tangent pocket universe may also require separate interdimensional sensor technology.
An accessor would be similar in operational principle to an Interdimensional FTL drive, in that they would use some unknown advanced form of gravity and quantum manipulation to warp the very fabric of space and time to reach parallel dimensions. As the parent universe and its offspring may still be connected via subspace or a shared quantum "foam," travel to and from the pocket universe may be easier and require less energy than a true interdimensional hop. Hence, an accessor may be easier to manufacture and operate, and may require less technical sophistication.
Because pocket universes are "smaller" than naturally occurring universes, they may not be accessible except under certain circumstances, or only from certain predetermined coordinates in real space. For example, in the Traveller RPG universe, Grandfather’s pocket universe is only accessible from certain points around the Regina star system.
An accessor would move a predetermined mass (such as a starship or a group of intrepid explorers) into or back out of the pocket universe when activated. Usually there is no formal movement involved; when activated, the user(s) will see the old universe fade away and the new one fade into existence around them. Other types of technology can sometimes serve as an accessor. For example, both the warp drives and transporters of Star Trek fame can be considered accessors as they have been shown being used to occasionally move ships and people between connected universes.
Accessors can be used to travel to severed pocket universes as well, but only by 'hopping' from adjacent dimensional space to adjacent dimensional space. This may require only a few hops, or many hundreds in an indirect manner. Note that the severed pocket universe may not even lie along the 'chain' of related realities that the accessor can enter, or may drift away in the meantime.

POCKET UNIVERSE GATEWAY
Tech Level: 23
Creating an open, two-way doorway into and out of either tangent or severed pocket universes would require wormhole technology, which becomes available at Tech Level 23. This would also be the only way to directly access a severed pocket universe without the long series of intermediate dimensional hops that would be required using an accessor or interdimensional drive. As with an accessor, determining the existence and hyperspatial coordinates of such pocket universes will usually require separate interdimensional sensor technology.
It would for the most part look and act very much like a ‘standard’ wormhole; the entry way at both ends would be spherical, and would visually ‘reflect’ the environment on the other side of the opening. However, each end would exist in a separate universe.
But also like conventional wormholes, they would require tremendous amounts of energy and/or negative matter to both set up and to maintain. They would also be inherently unstable, and could collapse if too large a mass is moved through them or the wormhole mouth is impacted with sufficient energy.
The wormhole openings into and out of tethered pocket universes are considered far more stable and robust than these gateways, but would have the same vulnerabilities, only to a lesser degree.

POCKET UNIVERSE
Tech Level: 25
Both warp drives and wormholes, technologies discussed in the FTL section, involve the extreme warping of space/time through gravity and quantum manipulation, and the creation of pocket universes may be a natural outgrowth of those technologies (assuming they're possible at all, of course). As with wormholes, a complete mastery of gravity manipulation, quantum engineering, and the custom creation of singularities may be required. However, warp bubbles and wormholes are usually assumed to require constant monitoring and upkeep in order to keep them working. A pocket universe, by contrast, is usually stable and independent, requiring no further intervention after its creation.
Pocket universes may occupy only the volume of a few atoms, or they may be many billions of miles across, or even larger. The nature of a pocket universe (including its type, volume, density, physical laws, flow of time, etc.) is set at the moment of its creation and generally can't be altered afterward.
What exactly pocket universe-creating devices would look like would be anybody’s guess. At this Tech Level machinery is all but indistinguishable from magic from our 21st Century perspective, so it could take on any number of forms. But even so, creation of a pocket universe even at Tech Level 25 would not be an easy feat, so the machine or machines would probably still look impressive or distinctive in some way.
Purposes of a pocket universe may include:

--To conduct experiments investigating the creation, formation, structure, and ultimate fate of our own universe, in miniature.
--For storage and preservation of valuable items and locales, up to and including entire cities, planets, solar systems, and perhaps even entire civilizations. The pocket universe in question may also have a time differential from its parent universe, allowing time to pass much more slowly, helping to preserve said treasures for the long term.
--To isolate dangerous super-science experiments from the rest of the universe. Grandfather's pocket universe in the Traveller setting was created partially for this purpose.
--To isolate undesirable elements from an environment while still keeping access to them nearby, so to speak. For example, in the comic book series DC One Million, all major cities on Earth were contained in their own individual pocket universes, allowing the planet to return to a completely natural state.
--If the time differential allows a much faster flow of time within the pocket dimension, experiments requiring months or years or even centuries could be carried out within hours or even minutes instead.
--To facilitate faster than light travel by setting up pocket dimensions to act as interdimensional 'bypasses' between points light-years apart. This would work very similarly to the Interdimensional Drive, but in this case the dimensions traveled through would be artificially created.
--For easy transportation of a possibly unlimited number of goods. These can be tethered pocket universes with their entry wormhole tied to a literal suitcase or storage trunk or other such object, or they may just need an accessor set to the right hyperspatial coordinates. The number of objects they could store would depend on their internal volume and configuration; the space could be small and knapsack-sized, or it may contain a whole planet to use as storage.
--To act as energy sinks for a teleportation system. The Ancients in the Traveller RPG setting used planets and solar systems tucked away in pocket dimensions for this. They were used to "soak up" the energy created by an object teleporting thousands or millions of miles in one jump, where velocity differentials between the origin and destination points could be measured in tens of thousands of miles per hour or more.
--To act as a potentially unlimited energy storage device. Upon its creation, the pocket universe is saturated through with radiation and heat energy. When the pocket universe is accessed, this energy readily pours through into the mainstream universe and can be used for various purposes. The amount of energy available depends on its volume and the energy density contained within.
--To act as a bomb enhancer. Imagine an atomic bomb detonated in a small pocket universe only three feet wide. The energy would have nowhere to go and the entire force of the bomb would be held in suspension and concentrated, even more potent than the bomb would have been on its own. When released with an accessor or wormhole, the explosion would be more devastating than using the bomb just by itself. (However, there would be an upper limit to how highly you could concentrate the explosion: too much energy concentrated in one spot would create a kugelblitz, a black hole formed purely from energy. The newly formed kugelblitz would likely destabilize and collapse the small pocket universe itself. However, the energy densities required for this to occur are literally astronomical; see the rough calculation after this list.)
--To imprison criminals and entities considered too dangerous to otherwise incarcerate. The Phantom Zone from the Superman comics and the Null Void from the Ben 10 TV series were both used in this capacity.
--To act as a safety buffer to explore interdimensional spaces. The pocket universe in this case is used as an intermediary space to buffer its parent universe should something go wrong with such exploration.
--To create a shelter against a cosmic disaster. A tangent or severed pocket universe can protect a planet or solar system from almost anything occurring in its originating universe, up to and including the death of the parent cosmos.
--As a monument to the building civilization's existence, just to prove that it could build something as amazing and awe-inspiring as its own customized universe.
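As a rough illustration of just how astronomical the kugelblitz threshold mentioned in the bomb-enhancer entry is, here is the energy needed to form a black hole whose event horizon would fill a pocket universe about a meter (roughly three feet) across. Only the pocket-universe size comes from the example above; the rest is standard physics.

# Energy required to form a kugelblitz (a black hole made of pure energy)
# with an event-horizon diameter of about one meter.
G = 6.674e-11
C = 2.998e8

radius = 0.5                                  # Schwarzschild radius, meters
mass   = radius * C**2 / (2 * G)              # M = r*c^2 / (2*G)
energy = mass * C**2                          # E = M*c^2

print(f"Equivalent mass: {mass:.2e} kg")      # ~3e26 kg, about 56 Earths
print(f"Energy required: {energy:.2e} J")     # ~3e43 J
# For comparison, the Sun's total output is about 1.2e34 J per year, so this
# is on the order of a billion years of the Sun's entire luminosity -- far
# beyond anything a mere bomb could supply.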

POCKET UNIVERSE GENERATOR
Tech Level: 26
Up to this point, creation of a pocket universe was a singular, monumental event for the building civilization. One Tech Level later, however, the technology may improve to the point that machines can manufacture pocket universes in quick succession, perhaps taking only a few minutes apiece. The devices may also be small enough to carry around in one's hands, with most of their operational machinery located, appropriately enough, in a tethered pocket universe. They would also be able to function easily as accessors, and could create interdimensional wormholes in order to enter their creations.



XEELEE RING


Xeelee Ring
Tech Level: 30
One of the most mind-boggling megastructures yet proposed, this was the central feature in Stephen Baxter's novel Ring. Whereas previously discussed structures were built on the scale of planets and stars, the Xeelee Ring was built on a scale dwarfing galaxies. This is ironic, as the "material" that makes up the Ring is far too thin to be seen by the naked eye: a cosmic string.
A cosmic string is a flaw in spacetime stretched into a line or loop many light years long. It shares some features with the singularities found at the hearts of black holes, but is a different kind of phenomenon. A single inch of such a string would weigh ten million billion tons on the surface of the Earth. Remnants of the Big Bang, cosmic strings are theorized to have been one of the primary factors responsible not only for the formation of the first galaxies, but for how they came to be distributed throughout space in large "filaments" surrounding gargantuan cosmic voids.
The physics of cosmic strings is very complex, involving conical spacetime curvature and other exotica. Links below lead to sites with much more extensive explanations of these phenomena.
One feature they share with singularities is their ability to severely warp space/time in their immediate vicinity, literally "dragging" the fabric of the universe behind them in a kind of wake as they spin. This has a number of odd effects, including gravitational lensing, and in theory a cosmic string could even be used to travel forward or backward through time.
In the novel, the alien Xeelee were losing a universe-wide war against dark matter beings for control of the cosmos. So they built an "escape hatch" out of the universe: a looping ring of cosmic string some ten million light years across, spinning at near light speed, used to rip open a spinning naked singularity. The Ring's purpose was to strip away the singularity's event horizon to allow easy access in and out of the resulting wormhole-like portal, which had a mass of several galactic clusters and measured some 300 light years across. The Xeelee Ring was so huge that the Xeelee's enemies threw entire galaxies at it to try and disrupt it.
To convey the true scale of the Ring: in the novel, the crew of the refugee human ship investigating it realizes that the Ring is the source of the Great Attractor, one of the great mysteries of modern real-life astronomy. The Great Attractor is an unexplained concentration of mass, lying in the direction of the constellations Norma and Centaurus, that is pulling galaxies toward it from hundreds of millions of light years away in all directions. In the novel, the Xeelee Ring is revealed to be the Great Attractor, so massive (equivalent to some ten thousand galaxies) that it actually counteracts the expansion of the universe across a significant portion of the cosmos.
It's never clear if the Xeelee simply gathered the cosmic string from around the universe or found some way of manufacturing it through some insanely advanced process. Either way, however, this represents resources and technology of a truly god-like level.



POWER

SOLAR POWERED CITIES


Solar Powered Cities
Tech Level: 10
With energy costs soaring and dependence on fossil fuels increasingly called into question, both municipalities and individual property owners are looking for cheap, renewable sources of power. Far from being just a fashionable "green" trend, solar power is increasingly seen as a viable economic alternative to the often volatile costs of traditional power supplies.
Solar cells for buildings have been available for decades, but modern systems are proving cheaper, lighter, more versatile, and, most significantly, far more efficient at converting sunlight into electricity than ever before. The best experimental multi-junction cells have demonstrated conversion efficiencies (the share of the energy in the sunlight hitting the cell that becomes usable electricity) of over 40%, while typical commercial panels now convert roughly 15% to 20%. Also significantly different from earlier solar power set-ups, modern systems can feed power into rechargeable batteries during the day for use during the night. A few municipalities in Great Britain, for example, have adopted the widespread use of solar-powered streetlights.
Some cities in various parts of the world are beginning to mandate or heavily push for the use of solar power in individual homes, either in the form of solar water heaters or as the more traditional electricity-generating panels. Many companies are also showcasing "green" facilities and skyscrapers that are powered in whole or in part by solar energy cells. Individual home owners are discovering that power generated by their cells can not only lower their electricity bills, but excesses during low-use periods can be fed back into the grid, and actually earn them money back from local power companies. In many parts of the US, a number of tax incentives or rebates are available to home owners and contractors who install solar energy cells in new buildings, and more are being proposed at both the local and federal level.
This all points to a trend that can lead to solar-powered cities--urban areas with ubiquitous solar cells on every rooftop or sunward-facing wall. Though it currently seems unlikely that this alone could meet all of a modern city's energy needs, it can help to greatly offset energy costs to individuals as well as to the city as a whole.
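To put rough numbers on how much a single rooftop can contribute (panel area, efficiency, and sun-hours below are ballpark assumptions, not measured figures):

# Ballpark estimate of rooftop solar output for a single home.
panel_area = 30.0     # m^2 of usable, sun-facing roof (assumed)
efficiency = 0.18     # typical commercial panel efficiency (assumed)
peak_sun   = 5.0      # equivalent full-sun hours per day, mid-latitude average
insolation = 1000.0   # W/m^2 under full sun

daily_kwh = panel_area * insolation * efficiency * peak_sun / 1000
print(f"Daily output: ~{daily_kwh:.0f} kWh")
# -> roughly 27 kWh per day, in the neighborhood of an average US household's
#    daily electricity use, though storage or grid feed-in is still needed to
#    match supply to demand hour by hour.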
Also of significance is the fact that solar cell systems are much simpler to repair and maintain than emergency generators, and as many modern designs are incorporated into the integral structure of a building's roof, they are more likely to survive certain extreme conditions such as flooding. During the Hurricane Katrina disaster, for example, many of the city's emergency generators for hospitals and other facilities were knocked out by flooding and storm damage. The presence of a solar cell alternative could have offset at least some of the power loss to these critical installations.
There would be some downsides, of course. Those concerned with the visual aesthetics of historic cities (not a small concern in urban areas that depend on tourist dollars) may bemoan the sudden cropping up of thousands of large rectangular black panels dominating rooftops and walls across the skyline. Also, roof space on many buildings is already at a premium for a number of systems, including vents, air conditioners, water towers, and so on. Solar cells would only exacerbate roof crowding.
Even though solar-powered cities are a technological innovation that could be created today, chances are they will be the end result of the gradual retooling of the industrial world's energy production philosophy and infrastructure in the decades to come.



ULTRACAPACITORS


Ultracapacitors
Tech Level: 11
Ultracapacitors are also known as supercapacitors and both terms are used interchangeably.
Normal capacitors work by physically separating electrical charges across a barrier, allowing electrical potential to build without actually completing a circuit. Chemical batteries separate charge chemically, drawing negative ions to one electrode and positive ions to the other, but a capacitor has an actual physical barrier between its regions of positive and negative charge. The electrons want to jump the barrier to the area of positive attraction, but can't, so they build up on the negatively charged surface. A good way to think of this is like water building up behind a dam. The water wants to rush into the open space beyond the dam, but is held back by the barrier.
And like the water behind a dam, once released the electrons flow in a raging torrent, so to speak, dumping all their built-up electrical potential all at once.
The trick to increasing a capacitor's stored electrical energy is to increase the surface area that separates the areas of negative and positive charge. To extend the analogy a bit, the longer and higher the dam, the more water it can hold behind it, and the more energy it can release once taken down. Advanced capacitors use a number of efficient geometries to maximize their stored potential within the space they have to work with, allowing greater energy to build up in them for various applications.
Ultracapacitors do this one better, by creating the barriers between the areas of charge on a molecular and atomic scale. In other words, the capacitor geometry is nanoengineered on the scale of billionths of a meter in order to ultimately maximize the space available for energy storage.
Ultracapacitors use molecule-thin layers of polarized electrolytic solution separated by a dielectric barrier to store the charged regions electrostatically. Even though they use a chemical solution, no actual chemical reactions are involved, allowing the solution to be charged and discharged hundreds of thousands of times without significant wear. In the most advanced form of ultracapacitor yet engineered, vertically-aligned carbon nanotubes hold individual atoms of the electrolytic solution, allowing an even greater surface area to be achieved between the individual particles themselves. Many engineers believe that nanotube-enhanced ultracapacitors, once fully perfected, will be able to compete head-to-head with chemical batteries in most applications.
Current ultracapacitors can store far more energy than other types of capacitors, but are not yet up to the level of storage a chemical battery enjoys. As new techniques and geometries are employed in their manufacture, this may change, but ultracapacitors now and in the near future will remain a supplementary technology to other energy storage and generation devices. They are also only able to hold their charge for a few hours at a time, though this has been slowly but steadily improving.
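For a sense of the current gap, the energy stored in any capacitor follows E = 1/2 * C * V^2. The cell figures below are representative approximations for a large commercial ultracapacitor, not tied to any specific product:

# Energy stored in a large commercial ultracapacitor cell, E = 1/2 * C * V^2.
capacitance = 3000.0      # farads (representative large cell, assumed)
voltage     = 2.7         # volts
mass        = 0.5         # kilograms

energy_j  = 0.5 * capacitance * voltage**2
energy_wh = energy_j / 3600
print(f"Stored energy: {energy_j/1000:.1f} kJ ({energy_wh:.1f} Wh)")
print(f"Specific energy: ~{energy_wh/mass:.0f} Wh/kg")
# -> about 11 kJ (~3 Wh), or ~6 Wh/kg, versus roughly 150-200 Wh/kg for a
#    good lithium-ion battery -- hence "supplementary" rather than a
#    replacement, at least for now.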
The big advantage of any capacitor over a battery, however, is that it can release some or all of its stored energy all at once, whereas a battery can only offer a comparatively small but steady stream. And as an ultracapacitor can store much more power than its technological predecessors, it can be used in a much wider range of applications than normal capacitors.
Ultracapacitors can also be charged much more quickly than chemical batteries. A typical ultracapacitor can take ten seconds or less to fully charge, compared to several hours for the best commercial rechargeable battery.
Because they are much more energy-dense than conventional capacitors, compact ultracapacitors can be used alongside advanced batteries in electronic devices without adding significant weight or bulk. The presence of a fully-charged ultracapacitor will not only allow the device to extend its potential use by several hours, but allow it to perform more high-end functions longer and more efficiently. For example, laptop computers could use an ultracapacitor to run DVDs or other demanding functions for several hours without significant battery drain. Cellphones could temporarily boost their signal strength for better broadcast power in remote areas, speakers could use them to produce better quality and louder sound without distortion, and so on.
One of the major areas where ultracapacitors are expected to make a significant impact is in motor vehicles. A car will still use an alternate source of energy as its main means of propulsion, whether that be a gasoline engine, a fuel cell array, or a battery. Ultracapacitors will be present in the vehicle to improve its efficiency and motor power. Braking a vehicle can recover a great deal of kinetic energy as electricity, which can be fed into the onboard ultracapacitors. When the vehicle is ready to go again, it draws upon this stored energy to give that extra oomph to get up to speed with minimal overall energy loss, thus greatly increasing the car's potential fuel and battery efficiency.
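A rough sense of the energy involved in a single stop; the vehicle mass and speed below are illustrative assumptions:

# Kinetic energy recoverable from one braking event, 1/2 * m * v^2.
mass  = 1500.0            # kg, a mid-size car (assumed)
speed = 60 / 3.6          # 60 km/h converted to m/s (assumed)

kinetic_j  = 0.5 * mass * speed**2
kinetic_wh = kinetic_j / 3600
print(f"Energy in one stop: ~{kinetic_j/1000:.0f} kJ (~{kinetic_wh:.0f} Wh)")
# -> roughly 200 kJ (about 60 Wh): small change for a battery pack, but a bank
#    of a few dozen ultracapacitor cells can absorb it in seconds and hand it
#    straight back for acceleration.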
And because ultracapacitors can charge quickly and hold that charge for a few hours, buses in China have been experimenting with using them as their main motive power. The bus pulls into a stop, hooks up to a power outlet for thirty seconds while it takes on and unloads passengers, then disengages, fully charged until the next stop. Thus, the buses never need refueling, and the ultracapacitors can by some estimates be recharged over a million times, making them ideal for this kind of system.
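A simple sketch of why the thirty-second top-up is plausible; the route and consumption figures are assumptions for illustration only:

# How much charging power a capacitor bus needs at each stop.
avg_draw_kw     = 20.0     # average power the bus draws while driving (assumed)
minutes_per_leg = 3.0      # driving time between stops (assumed)
charge_seconds  = 30.0     # dwell time at the stop

energy_kwh = avg_draw_kw * minutes_per_leg / 60
charge_kw  = energy_kwh * 3600 / charge_seconds
print(f"Energy per leg: {energy_kwh:.1f} kWh")
print(f"Required charging power: ~{charge_kw:.0f} kW")
# -> about 1 kWh per leg, delivered at roughly 120 kW during the stop: easy
#    for an ultracapacitor bank to accept, punishing for a chemical battery.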
Ultracapacitors may also see a lot of military applications, especially with electrically-powered weapons such as lasers and electrothermal chemical artillery and firearms. Ultracapacitors can quickly deliver the large amounts of current these power-hungry but effective weapon systems demand.



INFLATABLE WIND TURBINES


The basic concept behind the Magenn Air Rotor System (MARS.) Image copyright Magenn.

Inflatable Wind Turbines
Tech Level: 10
One of the major disadvantages of ground-based wind generators is that winds at that level can be notoriously unreliable. However, some 300 meters up, winds blow not only much more steadily but with more overall energy.
Inflatable wind turbine technology, spearheaded by the Magenn Air Rotor System (MARS), seeks to take advantage of this phenomenon. The system consists of an inflatable cylindrical turbine tethered to the ground that spins about its horizontal axis, using angled vanes along its body to capture the wind for rotation. This rotation in turn generates power, which is funneled to the ground via power cables in its tether.
The system also takes advantage of the Magnus effect, where a spinning object creates a vortex of air around itself, to help create additional lift and to stabilize it in place.
One of the great advantages of the MARS system is that it can be placed almost anywhere--oceans, coasts, lakes, farms, deserts, suburbs, mountain areas, and so on--and still prove an excellent source of power. The designers hope that the MARS system can be used not only to provide power to population centers, but also to bring power to remote areas where it might otherwise be too difficult or too expensive to provide. MARS can also be used in disaster-relief situations to provide ready power.
The MARS, when fully developed, is expected by its designers to achieve a capacity factor of up to 60% (compared to the 25% or so typical of ground-based wind generators) and to cost less per unit of energy than most other wind power solutions. It also produces no carbon emissions and is very environmentally friendly, posing less hazard to local flying fauna. The tether and the turbine itself are made of Vectran, a material stronger and lighter than steel.
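For context on what any airborne rotor can harvest, the power in moving air scales with the cube of wind speed (P = 1/2 * rho * A * v^3), which is why steadier, faster winds aloft matter so much. The rotor size, wind speeds, and capture fraction below are illustrative assumptions:

# Power available to an airborne rotor, times the fraction it actually
# captures (bounded by the Betz limit of ~59%). Figures are illustrative.
import math

rho     = 1.15              # air density a few hundred meters up, kg/m^3
area    = math.pi * 15**2   # assumed capture cross-section, ~15 m radius
capture = 0.35              # assumed realistic capture fraction

for v in (5.0, 10.0):       # light surface-level wind vs. steadier wind aloft
    p_avail = 0.5 * rho * area * v**3
    print(f"{v:>4.0f} m/s wind: {p_avail*capture/1000:>6.1f} kW captured")
# Doubling the wind speed yields eight times the power, which is the whole
# point of lofting the generator to where the wind is stronger and steadier.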
If it does escape its tether, it has an automated deflation system which would bring the unit down to the ground slowly and safely. During times of turbulent weather, it can be more quickly deflated and reeled in by its tether.
It's thought that the MARS system could also be attached to ships, to be used to help drive the vessel's propellers.



FLYING WIND TURBINES


Image copyright Sky Windpower.

Flying Wind Turbines
Tech Level: 11
Wind power has often been touted as one of the most promising energy sources we can easily access. Clean, efficient, and eminently renewable, wind power could provide a large percentage of the US's current energy needs if a proper infrastructure were built up.
However, one of the biggest disadvantages with ground-based wind power stations is that winds do not always flow at a constant rate, and at times can die out altogether. Not a good thing for an industry trying to keep up with the demands of an energy-hungry society.
However, engineers at the San Diego startup company Sky Windpower have developed an ingenious solution: Put the power generating windmills 15,000 to 45,000 feet up in the air, where jet stream winds are strong and constant no matter the weather on the surface.
Sky Windpower plans on creating an array of these large "rotorcraft" tethered to the ground like enormous kites to catch and utilize such winds. Each rotorcraft would have four 130-foot long rotors, with one pair counter-rotating against the other to provide stability.
They would be tethered to the ground by three-inch-thick cables, made up of a core of super-strong Vectran fibers housing twin insulated aluminum filaments to carry the power loads. Each ground station would contain a large winch contained in a housing about the size of a semi truck.
Each rotorcraft would start on the ground, then power would be fed to its rotors so it could take off and maneuver like a helicopter. Once at the desired altitude, the motors would shut off and the rotors would autogyro in the wind, keeping the craft aloft. The motors could be activated again if powered maneuvering were needed, but the rotorcraft would mostly rely on its stabilizing rudders and tether to maintain its position. Once in place, the rotors would be tilted up at an angle to create more torque for spinning the turbines, with on-board controls monitoring wind speed and direction and adjusting automatically.
Each individual rotorcraft is expected to generate up to 20 megawatts of power, and Sky Windpower envisions creating "farms" containing up to 600 of them on a single plot of land two hundred square miles (roughly 10 miles by 20 miles) in area. That would provide enough juice to power two Chicago-sized cities.
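A quick arithmetic check on that claim; the per-household figure is a rough US average used only for scale:

# Nameplate output of the proposed rotorcraft farm and a rough sense of scale.
craft_count  = 600
mw_per_craft = 20.0
farm_mw      = craft_count * mw_per_craft
print(f"Farm nameplate capacity: {farm_mw:,.0f} MW ({farm_mw/1000:.0f} GW)")

avg_household_kw = 1.2   # rough average continuous draw of a US household
households = farm_mw * 1000 / avg_household_kw
print(f"Roughly {households/1e6:.0f} million households at average draw")
# Actual delivered power would be lower than nameplate, but the jet stream's
# consistency means the capacity factor should be far better than for
# ground-based wind farms.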
The task of coordinating so many rotorcraft in such a relatively tight airspace could be tricky. Automated GPS-guided computer controls will go a long way toward solving such a problem, but the system would have to be constantly monitored live nonetheless. There's also the problem of what happens if one of the rotorcraft's tethers actually snaps. Though the autogyroing effect means there is little chance it would fall like a brick, a rotorcraft simply would not be that maneuverable. It's entirely possible that the rotorcraft could drift several dozen to several hundred miles along the jet stream and settle onto a heavily-inhabited area. Even in the best circumstances, having a 20-ton, 300-foot-wide rotorcraft settle onto your house would not be a good thing. Still, if the rotorcraft and their tethers are properly maintained and monitored, the chance of such a mishap would seem remote.
Air traffic would of course have to be rerouted away from the rotorcraft array, and its extremely high altitude means far fewer birds would become fouled in the spinning blades than with ground-based windmills.



COMPULSATORS


Compulsators
Tech Level: 11
The word compulsator is an amalgam of the term Compensated Pulsed Alternator. They were originally conceived in the 1970s at the University of Texas Center for Electromechanics (UT-CEM) to power laser flash-lamps for nuclear fusion research. Prototypes were built at UT-CEM in the early 1980s, and they have since found applications in a number of other fields, most significantly at the US Department of Defense and NASA in powering experimental electromagnetic launchers.
A closely related technology to flywheel batteries, compulsators are an advanced form of alternator that stores potential energy in the form of rapidly spinning rotors. (Technically these rotors can also be called flywheels, but they are referred to here as "rotors" to avoid confusion with flywheel batteries.) The main difference between the two technologies seems to be that compulsators are designed for short-term use as an alternative to high-voltage capacitors, while flywheel batteries are designed for long-term energy storage. The rotors in a compulsator tend to be lightweight to allow for super-high spin, while the flywheels in flywheel batteries tend to be heavy and dense to store more energy with less extreme spin rates.
An external power source typically spins up the rotors in a compulsator, allowing it to store the energy potential in the form of inertia from its spin. A compensating shield or winding in the alternator is used to lower the rotors’ internal impedance, i.e., it allows it to spin up to speed with only minimal interference from the surrounding electromagnetic fields.
Like with flywheel batteries, the amount of energy a compulsator can store depends on its size and its maximum rate of spin. One rate of spin on test rotors in military compulsators was quoted at over 18,000 rpm. Unlike flywheels in flywheel batteries, however, compulsator rotors depend much more on high-velocity spin than on mass and density to retain their imparted energy.
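The relationship between spin rate and stored energy is simply rotational kinetic energy, E = 1/2 I w^2. Below is a minimal sketch using the 18,000 rpm figure quoted above and purely illustrative rotor dimensions; the mass and radius are hypothetical placeholders, not figures from any actual compulsator design:

import math

# Stored energy of a spinning rotor: E = 0.5 * I * omega^2.
# For a uniform solid disk, the moment of inertia is I = 0.5 * m * r^2.
# The mass and radius below are illustrative placeholders only.
mass_kg = 100.0          # hypothetical rotor mass
radius_m = 0.3           # hypothetical rotor radius
rpm = 18000              # spin rate quoted for a military test rotor

omega = rpm * 2 * math.pi / 60          # convert rpm to radians per second
inertia = 0.5 * mass_kg * radius_m**2   # solid-disk moment of inertia
energy_joules = 0.5 * inertia * omega**2

print(f"Stored energy: {energy_joules / 1e6:.1f} MJ")   # roughly 8 MJ for these inputs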
Compulsators store energy from large but relatively low-current generators and motors for use in the short term as quick-access high-current sources. In this way, they act like capacitors, but are capable of storing and releasing greater amounts of energy per unit of weight and volume.
Similarly to their flywheel battery cousins, material technology is one of the great limiting factors in exactly how much energy the unit can store. If the rotors in a compulsator are spun too fast, centrifugal force will literally rip them apart. Typically being lighter and spun much faster than a flywheel disc, compulsator rotors are much more vulnerable to this kind of damage.
Also, even though their rotors are designed to be relatively lightweight, compulsator units themselves tend to be massive and bulky, needing to handle heavy-duty torque and magnetic forces. Efforts to make them more lightweight and compact are ongoing.
Because of the anticipated need for frequent large pulses of power on a moment's notice in a number of applications, including high-performance electric vehicles and high-energy weapons like railguns, compulsators are a technology that is likely to enjoy more widespread use in the near term. Much research is going into constructing them of much tougher composite materials that can better withstand ultra-high rates of spin, thus allowing them to store and impart more energy.
Currently the most advanced compulsator, built by UT-CEM for the US Marine Corps' Cannon Caliber Electromagnetic Gun System (CCEMG) program, can store up to 40 megajoules of energy and is capable of delivering the energy for 15 shots from the prototype weapon before needing to be recharged. Compulsators are also to be an integral component of the US Army Research Laboratory's Future Main Battle Tank program, scheduled for deployment by 2015, which would feature an armored fighting vehicle with an electrically-powered main armament.



ADVANCED FLYWHEEL BATTERIES


A diagram of a contemporary flywheel battery. Image copyright Active Powers, Inc.

Advanced Flywheel Batteries
Tech Level: 12
Flywheel batteries are also known as flywheel generators. They share some characteristics with homopolar generators, and some systems integrate features of both.
Flywheel batteries use a disk composed of dense materials rapidly spinning in an enclosed, near-vacuum compartment to store and generate electricity. When electricity from an outside source is applied to the battery, this interior disk is spun faster and faster. After the electricity is cut off, the disk continues to spin, "storing" the energy potential of the electricity in its rotational motion. When one wants to draw energy out of the battery again, the spinning of the disk is used to drive an electrical motor, or alternately it may be used as the motor itself. This places a load on the spinning wheel, slowing it back down.
Flywheel batteries now being developed for vehicle applications can produce peak outputs of 150 kilowatts or more, and one being researched by University of Texas' Center for Electromechanics for use in railroad engines envisions a massive 3 megawatt flywheel battery system. NASA is also developing flywheel batteries for use on the International Space Station and other future space ventures. As the technology progresses, compact and lightweight flywheel batteries for use in personal applications may emerge.
One of the great advantages of using a flywheel battery is that it can store large amounts of potential electrical power for a very long time, exceeding modern batteries both in terms of capacity and longevity. Modern flywheel batteries can store kilowatt-hours worth of electricity, and the more advanced models available today are projected to be able to store their energy for twenty years or more. As the flywheel operates in a near-frictionless vacuum environment, there is very little to slow it down, and it can keep spinning for years on end.
Compare this to the best chemical batteries, which invariably store electricity for a year or two at most, and have to be carefully disposed of at the end of their operational lifetimes (typically 3 to 5 years) because they contain a number of caustic chemicals.
The amount of energy a flywheel battery can ultimately hold depends on both the mass of the flywheel and its maximum rate of spin. The more massive the wheel, the more kinetic energy it will contain for any given rotational rate. To ensure the fastest spin possible, a flywheel is suspended using magnetic bearings within a vacuum or near-vacuum chamber.
Of course, there are inevitable complications in developing more advanced versions of this kind of technology. The more massive the flywheel and the faster it spins, the more likely centrifugal force will tear it apart. Research into making the flywheel out of advanced composite materials and alloys that can withstand these kinds of forces is ongoing, with candidate materials including wheels made out of diamond filament fibers and carbon nanotube fibers. The more resistant to break-up from centrifugal force the wheel is, the faster it can be spun, and the more energy it can store. The current flywheel champ, being developed by NASA, is capable of 60,000+ rpm. Future versions of flywheel batteries envision spin rates in the hundreds of thousands of rpm.
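That material limit can be made concrete. For a thin spinning rim, hoop stress grows as density times rim speed squared, so the energy a flywheel can store per kilogram is capped at roughly half the material's strength-to-density ratio. A back-of-the-envelope sketch follows, using only ballpark material figures rather than values from any cited design:

# Rough ceiling on flywheel specific energy set by material strength.
# For a thin rotating rim, hoop stress is sigma = rho * v^2 (v = rim speed),
# while kinetic energy per kilogram is 0.5 * v^2, giving a ceiling of
# roughly E/m ~ 0.5 * sigma / rho.  Material values are ballpark only.

materials = {
    # name: (tensile strength in Pa, density in kg/m^3), approximate values
    "high-strength steel": (1.0e9, 7800),
    "carbon-fiber composite": (3.5e9, 1600),
    "carbon nanotube fiber (theoretical)": (50e9, 1300),
}

for name, (strength, density) in materials.items():
    specific_energy = 0.5 * strength / density       # joules per kilogram
    print(f"{name:38s} ~{specific_energy / 3600:7.0f} Wh/kg ceiling")

The steep jump from steel to exotic fibers is why diamond filament and carbon nanotube materials are singled out above as candidates.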
NASA's Flywheel Battery prototype being developed at the Center for Space Power in College Station, Texas.
Another potential problem is that these dense, rapidly spinning wheels contain a lot of kinetic energy, and should a mishap occur or the battery be badly damaged, the flywheel could be knocked loose and tear up anything in its path. For this reason, flywheel batteries have to be heavily shielded and would often be run at below peak capacity to avoid this kind of potential problem.
Also, a pair of counter-rotating flywheels in the same battery may be necessary to avoid rotational precession problems in applications where the battery may not be well-anchored. These include mobile weapon and space flight applications. Some speculation has been put forth that flywheel batteries placed on satellites and spacecraft could double as gyroscopes as well as energy storage devices.
Besides applications in transportation and space, advanced flywheel batteries would also be useful in providing power back-ups to installations, buildings, and private homes; allowing communication and power distribution systems to better handle large surges in use; providing the high current needed by all-electric or electric-enhanced construction equipment; and providing power for high-energy-consuming weapon systems like railguns, coilguns, lasers, and plasma guns.



EXPLOSIVE POWER GENERATORS


Cutaway View Of A Flux Compression Generator. Image courtesy of the US Dept. Of Energy.

Flux Compression Generators
Tech Level: 10
Explosive Power Generator (EPG) Weapon Cartridges
Tech Level: 14
Explosive Power Generators (EPGs) are a type of pulsed power generator. However, not all classifications of pulsed power generators use explosives, hence the distinction drawn here.
This article deals exclusively with EPGs as energy sources. Their potential as electromagnetic pulse weapons will be discussed in another article.
EPGs use explosive charges to induce large spikes of current in specially configured equipment. Their great advantage is that they can be made much more compact than most other types of generators and can produce their power almost instantaneously, making them ideal for a number of applications, particularly portable weapons technology.

FLUX COMPRESSION GENERATOR
Tech Level: 10
A technology that has been researched since the Cold War, Flux Compression Generators (FCGs) are used in a number of laboratory and research applications where large amounts of power need to be generated almost instantly. They have also been of long-standing interest to the military, as they can be used to create an electromagnetic pulse that renders all electronics and electrical systems in the affected area inert.
An FCG basically consists of an explosive charge inserted into or around a coil of copper or other conductive wire. The wires are charged from an auxiliary power source and the charge is detonated. The explosion causes the coil to generate a brief but intense compressed magnetic field. This magnetic field is used to produce current in the coil, which can be fed to a device or capacitors in the split second before the wires are destroyed.
The physics of Magnetic Flux Compression is a fairly complex subject and is more thoroughly explained in the links at the end of this article. In essence, the explosive ‘compresses’ the magnetic flux of the field generated by the wires, creating an additional current in the wire. Because it happens near-instantly, electrical resistance factors don’t have time to kick in. This unfettered current multiplies the strength of the existing field immensely for a brief moment. The process destroys the generator, but is capable of producing currents as great as millions of amperes in a fraction of a second, up to over 60 times that of the starting current. For truly colossal power spikes, Flux generators can be rigged in series, where the current produced by one is used as the starting current in the next.
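One way to get a feel for that current multiplication: in an idealized, lossless generator the magnetic flux (roughly inductance times current) is conserved while the explosion collapses the coil, so the output current scales with the ratio of starting to final inductance. A minimal sketch with purely hypothetical numbers:

# Idealized flux-compression estimate.  Ignoring resistive and flux losses,
# flux ~ L * I is conserved as the explosion collapses the coil, so the
# current multiplies as the circuit inductance drops:
#     I_final ~ I_start * (L_start / L_final)
# All values below are illustrative, not measurements from a real device.

start_current = 50_000        # amperes, hypothetical seed current
start_inductance = 6.0e-6     # henries, hypothetical initial coil inductance
final_inductance = 1.0e-7     # henries, hypothetical residual inductance

gain = start_inductance / final_inductance
final_current = start_current * gain

print(f"Current multiplication: x{gain:.0f}")
print(f"Output current: {final_current / 1e6:.1f} million amperes")
# A 60:1 inductance collapse reproduces the ~60x gain and multi-megampere
# outputs mentioned above; real devices fall short of this lossless ideal.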
Flux generators have a number of design and engineering challenges. For maximum effectiveness, the detonation has to occur just as the initiating current in the coil is peaking, and getting the timing on those two systems exactly right can be tricky. The devices can also generate an intense electromagnetic pulse, so they usually have to be used with heavy shielding to protect nearby electronic devices. This is in addition to the usual precautions and protections that must be taken when dealing with explosive equipment. If being used to power equipment or an experiment, both the debris and waste gasses of the explosion have to be cleared before another generator can be put in place.
Because of issues with electromagnetic pulse, developing FCGs for widespread commercial use would have a number of security and legal complications that would have to be resolved first. It is therefore unlikely that we will see them in commonplace use anytime soon.

EXPLOSIVE POWER GENERATOR (EPG) WEAPON CARTRIDGES
Tech Level: 14
These are mentioned in the Traveller tabletop RPG, and would seem to be a natural outgrowth of Flux Compression Generators. Basically, FCGs would become compact and efficient enough to be integrated into portable weaponry and other equipment, from artillery and vehicle weapons to rifles and handguns.
Functionally, they would operate in many ways similar to weapon magazines. Individual cartridges would be relatively small and cylindrical. Each different weapon system would likely have its own specially-designed cartridges, depending on its power requirements, though some manufacturers may deliberately design different weapons to take the same EPG cartridges in order to simplify manufacturing and reduce costs. The cartridges would be arrayed in magazines, and would be loaded and unloaded into weapons in a similar manner.
Some weapons, such as railguns and coilguns, use projectiles as well as large amounts of current. With these technologies, the explosive charge would serve a dual purpose, both to power the weapon and to give the projectile an initial kick in velocity before it is accelerated electromagnetically. These weapons may end up having mechanical ammunition feed systems similar to modern day firearms, in order to load the EPG/projectile bundle into the barrel and then eject the spent cartridge and waste gasses after.
Other high-energy-use weapons, such as plasma guns, particle beams, and so on, would not necessarily need such mechanical feed systems. The magazine could be designed so that each cartridge could be used directly in place without the need to be fed into the main gun mechanism. This would depend on being able to manufacture cartridge magazines durable enough to withstand numerous internal explosions without damaging the unused cartridges. An added advantage to such a system would be that the magazine could be designed to use two or more cartridges in series, with one EPG's output serving as the starter current for the next, allowing the user to dial up the weapon's potential power substantially depending on how many cartridges are used at once.
There are some downsides to this system. For one, the internally-contained explosion will add a great deal of heat to weapon systems that will likely already generate quite a bit. Advanced coolant systems will have to be made even more robust and efficient to handle this, and weapons may need a cooling-down period between shots.
Second, the internal explosion will add vibrations and kickback to the weapon, though probably not as much as modern guns. Still, the user may have to steady the weapon after each shot to compensate for the recoil. Just as in some modern guns, waste gasses from the EPG could be vented at strategic points in order to help stabilize the weapon.
Also, the internal explosion will likely not be completely muffled, meaning the ideal of a completely silent beam weapon may never be realized. This will be especially true if the very energetic waste gasses are vented, resulting in a very gun-like bang with each pull of the trigger.
Because the EPGs’ detonations would be contained entirely within a properly shielded weapon and/or magazine, its environmental EMP effects would be neutralized.



SPACE-BASED SOLAR THERMAL POWER


A moonbase concept with a solar thermal generator. Art by Kevin Hand. Image (c) Popular Science.

Space-Based Solar Thermal Power
Tech Level: 12
These generators are sometimes informally called solar boilers. Using solar thermal power for propulsion is discussed in the article on Solar Boilers, which is linked to at the end of this article.
Solar thermal power is just entering the mainstream on Earth, in the form of large power stations comprised of dozens to hundreds of gimbaled mirrors which focus sunlight on a container or pipes containing a liquid medium. The liquid in these pipes is quickly superheated (in the case of water, even turned into steam) and then used to drive electrical turbines. In space, water is usually assumed to be the medium of choice, because it is easily transported in the form of ice and can be readily found in quantity on comets and in deposits on asteroids and the Moon. However, other liquids have been used in real-life solar thermal applications, including oil and molten salts, which tend to have superior heat retention qualities.
Space-based versions of this scheme occasionally see the light of day in various proposals for space stations and moonbases, as well as the occasional mention in science fiction. With the success of solar thermal power stations on the ground a reality for over 20 years now, the use of these systems in space now seems much more plausible. Different configurations for such power plants can be found in the links at the end of this article.
In the case of space-based energy, traditional photovoltaic solar cells can provide a steady supply of wattage over the long term, but are not so good at supplying power in large spikes, as may occasionally be needed for some applications. Nuclear reactors can do this, but they are heavy and expensive, making their lift to stations and bases problematic. They can also become major safety hazards in the case of a breakdown. High-performance batteries and capacitors can provide large amounts of power at once, but may require long recharge times.
Solar thermal generators can fill this niche comfortably, however, by providing large bursts of power steadily with minimal long-term safety concerns, and relatively cheaply. Mirrors, pipes, and water pumps will likely remain more economically viable than plutonium or advanced composite flywheels for the foreseeable future.
The major change between ground-based solar thermal and its space-based cousin is that vacuum-environment solar boilers have to be a completely closed system. On Earth, water can be readily resupplied from outside sources. In space, the need for the water recycling to be as close to 100% as possible is paramount.
This means that after the solar-heated steam is used to drive the turbines, it must be re-condensed into water and cycled back into the pipes to be returned to steam once again. Vacuum environments do have a resource that can be of great help here: shadows.
Without an atmosphere to evenly distribute the heat, the temperatures of objects in shadow can plunge to hundreds of degrees below zero. Steam fed into pipes that pass into shadow in space will quickly recondense back into water. Specific shade units can be built around the pipes, or engineers can make judicious use of station or base design to provide the shadows instead.
The solar boiler scheme does have a number of disadvantages, such as its many complicated moving parts and the high-pressure cycling system that would need constant monitoring and maintenance.



ENERGY ISLANDS


Energy Islands
Tech Level: 12
As the world grows ever more energy-hungry, and the need for alternate power sources becomes paramount, a new proposal has come along that combines a number of emerging technologies into a single structure.
At the heart of this new system is a process called Ocean Thermal Energy Conversion, or OTEC. The original idea is credited to 19th century French physicist Jacques-Arsene d'Arsonval, who envisioned using variations in ocean temperatures to create electricity. Today, a modern, expanded version also includes supplementary solar-electric, wind, tide, and wave generators at the same site to produce electricity.
The difference in water temperature between the surface and a kilometer down can reach up to 24 degrees Celsius in tropical waters. Ammonia, or another working fluid with a low boiling point, is used. The ammonia is vaporized by warm surface water, and the expanding vapor is used to drive electrical turbines. Cooler water from the depths, typically around 5 degrees, is then used to recondense the ammonia and send it back into the system to be reused. This constant cycling of ammonia vapor drives the electrical generator.
Even though its energy conversion efficiency is relatively low, about 3% at best, a full-scale OTEC plant could generate about 250 megawatts, or the equivalent of a quarter of an average fossil fuel power plant.
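That low efficiency follows directly from thermodynamics: the Carnot limit for a roughly 24-degree temperature difference is only a few percent, and real machinery captures only part of that. A quick sketch, where the 29 degree C surface temperature is an assumption chosen to match the quoted 24-degree difference:

# Why OTEC efficiency is so low: the Carnot ceiling for a ~24 degree C
# temperature difference is only a few percent.  The surface temperature
# here is an assumed value consistent with the figures quoted above.

t_surface_k = 273.15 + 29    # assumed warm tropical surface water
t_deep_k = 273.15 + 5        # cold deep water, as quoted above

carnot_limit = 1 - t_deep_k / t_surface_k
print(f"Carnot ceiling: {carnot_limit * 100:.1f}%")   # roughly 8%
# Real OTEC plants reach about 3%, i.e. less than half the ideal limit.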
The OTEC generator would only be part of the energy island, however. Solar cells, multi-tiered wind turbines, wave actuators around the rim of the island, and tidal generators below the surface would all combine to provide an additional 73 MW on top of the OTEC plant's 250 MW. Estimates state that it may take between 4 and 8 energy islands to replace one nuclear power plant, or half that to replace a conventional fossil-fuel plant. Some 53,000 would be needed to supply the world's current energy needs. But as they would have to be widely spread over the tropics of three oceans to be maximally effective, even this number would have only minimal effect on navigation and the environment.
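Those counts check out roughly as follows; this is a sketch only, and the 1.3 GW nuclear plant size is an assumption added here for comparison:

# Quick check on the energy-island figures quoted above.
otec_mw = 250
supplementary_mw = 73             # solar, wind, wave, and tidal add-ons
island_mw = otec_mw + supplementary_mw

islands_for_world = 53_000
world_supply_tw = island_mw * islands_for_world / 1e6
print(f"Per island: {island_mw} MW")
print(f"53,000 islands: ~{world_supply_tw:.0f} TW")                     # on the order of world demand
print(f"Islands per ~1.3 GW nuclear plant: ~{1300 / island_mw:.0f}")    # assumed plant size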
One advantageous side-product of the OTEC process is desalinated water from the evaporation and condensation of the water pumping through the system. The island would produce several tons of fresh water per day, which could be exported to agricultural concerns on land or even used to cultivate hydroponic farms on the lower levels of the island. Using electrolysis on the steam from the evaporator could also yield a cheap and plentiful supply of hydrogen for various energy and commercial uses.
But the biggest advantage of energy islands, of course, is that all the energy they produce comes from 100% renewable, non-polluting sources. They would utilize deep-ocean artificial island techniques such as those used for oil drilling platforms, but expanded to accommodate a much larger surface area.
The biggest obstacle to creating these energy islands is mostly economic. Most agree that OTEC technology could become commercially viable if given the proper funding and time for research, something that most organizations until recently have been very unwilling to do. However, with oil prices rising alongside environmental concerns, OTEC and energy islands are currently being actively researched by the governments of Japan, Taiwan, India, South Africa, the Philippines and the United States.



SOLAR POWER SATELLITES


Image courtesy NASA.

Solar Power Satellites
Tech Level: 12
One of the great enduring ideas of the near-future in space is that of enormous, orbiting arrays of solar cells that collect sunlight, convert it to energy, and beam that energy to Earth for use. The idea can be traced back to Peter Glaser of the Arthur D. Little Company, who originally suggested the concept in its modern form in 1968. NASA and the US Department of Energy did an extensive conceptual study of solar power satellites (SPS) in the 1970s, and the idea has popped up again and again in both science fiction and in space applications studies.
These satellites are usually envisioned as large planar affairs composed of many square kilometers of high-efficiency solar cells. The satellites could be placed in geosynchronous orbits that would never pass through the Earth’s shadow, ensuring a non-stop flow of energy. The sunlight on the cells is converted into electricity, which is gathered and beamed back to Earth via microwave emitters.
Because they are in geosynchronous orbits, their microwave emitters can always be trained on one specific spot on the ground. Here, arrays of receiving antennas, also called rectennas, intercept the microwaves and convert the energy into electricity usable by average consumers. This grouping of receiving antennas is sometimes called a rectenna farm.
The original NASA/DOE study called for a rectangular satellite with a collecting array that measured 10 kilometers by 14 kilometers. It would have used a transmitting antenna roughly a kilometer across (the larger the better, to limit beam spreading), which would beam the power to Earth at a frequency of 2.45 GHz, the same frequency used by microwave ovens, and one that has the advantage of allowing the beam to pass unimpeded through clouds and rain. The rectenna farm would cover an oval area roughly 13 kilometers long and 10 kilometers wide.
The peak intensity of the microwave beam would be 23 milliwatts per square centimeter; the maximum allowable leakage from a consumer microwave oven is 5 milliwatts per square centimeter. While this would not be healthy in terms of long term exposure, it would certainly be possible to walk through the entire multi-kilometer width of the naked beam without experiencing any ill effects. Since the receiving area is expected to be covered over with large, raised rectennas, anyone on the ground underneath them would receive only negligible microwave exposure. Still, rectenna farms would likely be located in remote areas such as deserts in order to allay concerns from residents about possible ill effects of the microwave exposure.
At the distance of Earth’s orbit, sunlight delivers about 1400 watts worth of power per square meter. Using the types of solar cell technology available at the time of the NASA/DOE study, this would result in a net power gain on the ground of about 5 billion watts, or about ten times the output of a typical ground-based power plant.
These estimates, however, were made with the assumption of solar cell efficiency (how much of the 1400 watts per square meter of sunlight they can convert into usable energy) of around 5%, typical for 1970s technology. Today’s space-based solar cell arrays, such as those used on the International Space Station, have an energy-conversion efficiency of about 14%. The most modern systems have efficiencies ranging between 42% and 56%. The amount of power that can be delivered to the ground would be increased proportionally as well.
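As a rough check on how these numbers hang together, here is a sketch of the power budget; the 50% end-to-end microwave transmission and rectenna conversion figure is an assumption added here for illustration, not a number from the study:

# Back-of-the-envelope power budget for the NASA/DOE reference design,
# using the figures quoted above.  The 50% transmission/rectenna efficiency
# is an assumption added for illustration.

solar_flux_w_m2 = 1400                  # sunlight at Earth's orbit
array_area_m2 = 10_000 * 14_000         # 10 km x 14 km collecting array

for cell_efficiency in (0.05, 0.14, 0.45):   # 1970s cells, ISS-era cells, advanced cells
    collected_w = solar_flux_w_m2 * array_area_m2 * cell_efficiency
    delivered_w = collected_w * 0.5          # assumed transmission + rectenna losses
    print(f"{cell_efficiency * 100:4.0f}% cells -> ~{delivered_w / 1e9:5.1f} GW delivered")

# The 5% row lands near the ~5 GW figure from the original study; modern cells
# would scale the delivered power roughly proportionally, as noted above.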
The mass of the SPS in the NASA/DOE study was estimated to be between 30,000 and 50,000 metric tons. With modern composite materials and far more lightweight solar cell designs, this mass could be cut to about one half to one third that. But even so, this represents a tremendous amount of material one would have to boost into space. At a current cost of at least several thousand dollars per pound to put an object in orbit, SPSs, despite their other advantages, would remain economically unfeasible in the near future.
A recent study conducted by the Space Studies Institute (SSI) showed that 98% of the material needed to construct a SPS could be mined from materials on the Moon. This would greatly reduce the cost of construction, but it would also mean that at least the seed of a lunar manufacturing infrastructure would have to exist first before the SPS scheme became feasible.
Though the microwave beam from an SPS cannot do much harm to any individual person, it is feasible the beam could be used as an environmental damage weapon, especially if the beam intensity were increased. If trained on an area for an appreciable length of time, it could use heat damage to kill cropland, forests, swamps, and perhaps even be used to oppress the residents of a large modern city that could otherwise be under siege. This effect need not always be used to detriment, however. In the novel Fallen Angels, by Larry Niven, Jerry Pournelle, and Michael Flynn, the heat from SPS transmitters was used to keep the last Canadian city, Ottawa, ice-free and livable after the rest of the country was buried under the glaciers of a new Ice Age.
At least one nation, the perpetually power-starved Japan, has committed itself to constructing a working solar power satellite by 2040. A smaller, cheaper, but less efficient alternative design by Japanese engineers suggests an SPS with the solar cells arranged in an equilateral triangle 300 meters to a side. The satellite would sweep along the equator at an altitude of 1100 kilometers and beam its power to a long array of rectenna stations below its flight path.
SPS technology also has a secondary application, that of providing beam power to launch craft and space-borne vessels, such as Myrabo’s Lightcraft and various incarnations of solar and magnetic sails. For the latter applications, however, the energy from the satellite might be converted to laser light or frequencies other than microwaves, depending on the type of spaceship used.



FUSION


Simplified diagram of a tokamak fusion reactor. Image from fusionscience.org


Deuterium/Tritium Fusion
Tech Level: 12
Deuterium/Helium-3 Fusion
Tech Level: 13
Helium-3/Helium-3 Fusion
Tech Level: 13
Deuterium-Deuterium Fusion
Tech Level: 14
Portable Fusion Generator
Tech Level: 14
Bubble Fusion
Tech Level: 15
Deep Plasma Focus Fusion Reactor
Tech Level: 15
Hydrogen/Boron Fusion
Tech Level: 15
Proton Chain Fusion
Tech Level: 16
Muon-Catalyzed Fusion
Tech Level: 16
Tabletop Fusor
Tech Level: 17
Though many of the above reactions have been achieved in laboratory fusion experiments, the Tech Level number indicates when each type of fusion reaction will likely become a commercially-viable source of power.
There are few current research efforts more synonymous with the potential of future technology than fusion power. The coming era of fusion promises potent, clean nuclear energy to meet all of our needs for centuries to come.
Or so proponents say. Researchers have been promising the "fusion breakthrough" for over half a century now. The reality of fusion power may not be as rosy as some would like to paint it. While still providing abundant energy on a level current technologies simply cannot match, it is also rife with a number of potential missteps and hazards.

BASICS


Fusion is the process by which two atoms combine—"fuse"—to become a heavier element. In the process, some of the mass of the fusing elements is converted into energy. It is the fundamental process that makes the stars shine, so we know enormous amounts of energy can be unlocked with nuclear fusion. In fact, per gram of fuel consumed, fusion can produce ten million times as much energy as burning petrochemicals like oil or gasoline.
One of the greatest stumbling blocks to creating a sustainable fusion reaction is the enormous amounts of pressure and heat needed to make atomic nuclei fuse. The fusion reaction with the lowest temperature needed, deuterium-tritium fusion, requires an environment of over 40 million degrees Kelvin. Such great temperatures are required to overcome the Coulomb barrier, the field of electric repulsion surrounding the protons of the fusing nuclei. The particles must basically be slammed together with enough force to get them close enough for the attractive nuclear strong force to take over. In stars, the immense pressures created at their cores by their own mass helps to overcome the coulomb barrier with lesser temperatures of only a few million degrees Kelvin, but as star-core pressures are a long way from ever being duplicated on Earth, fusion researchers must rely much more on high temperatures to propagate their reactions.
A heavily-filtered image of fusion plasma

HISTORY


Nuclear fusion was first proposed as a theory back in the 1920s. In 1939, the German-American physicist Hans Bethe worked out the mathematics of the energy generation of fusion reactions. Bethe's results closely matched astronomical observations, proving that fusion powered all the stars in the universe. The idea of harnessing fusion energy was bandied about by the scientists of the Manhattan Project during World War Two, and research along those lines led to the creation of the first thermonuclear bombs in the 1950s.
Fusion energy research began in earnest in 1951, when the Atomic Energy Commission established a secret program called Project Sherwood to investigate the feasibility of using a controlled fusion reaction to generate electricity. In 1958, much of that initial research not tied to military applications was declassified in the West at the Atoms for Peace conference in Geneva. Fusion energy research projects sprung up worldwide in the decades that followed.
The first major breakthrough came in the 1960s from the USSR, where researchers created a toroidal magnetic confinement system called a tokamak, based on a design by physicists Andrey D. Sakharov and Igor Y. Tamm, to sustain plasma temperatures in the millions of degrees. In the 1970s, the energy crisis prompted renewed interest in fusion energy in the West, leading among other things to the creation of the US's Tokamak Fusion Test Reactor, which spearheaded US efforts in that direction for years to come. Efforts into Inertial Confinement Fusion were also started in the 1960s and declassified at about the same time as the Tokamak Fusion Test Reactor was being built. In the 1980s the US's Strategic Defense Initiative began another solid push for fusion generators that could function as power sources for space-based missile defenses.
In 1989, a public furor over fusion was sparked with the report of a successful "cold fusion" experiment. Chemists Martin Fleischmann and Stanley Pons at the University of Utah reported that electrolysis experiments with heavy water produced both an excess of heat energy and other byproducts consistent with fusion reactions. However, efforts to reproduce their experiment met with both mixed results and heated controversy. While cold fusion at this time seems to be a dead end, it did help to spark another surge of interest and funding in mainstream fusion research.
Since then, research has continued steadily, whittling away at the barriers of the fusion "break even" point, where a reaction will yield more overall energy than what was used to create and sustain it. Better methods of plasma containment and heating have been developed, reactions have been sustained longer, and newer and better equipment is continually being developed. Today, physicists have a much clearer idea of the plasma dynamics needed to control a reaction, and many are already sketching out a detailed map of the developments needed to move from today's world to the fusion-powered future.

CONFINEMENT


Fusion requires conditions that would instantly vaporize any material substance that tried to contain it. Instead, scientists had to develop specialized means of propagating reactions without destroying the machines they used to create them. So far two techniques, magnetic confinement and inertial confinement, have proven the most promising.
Inertial Confinement is simple in concept but very hard to achieve in reality. It quite simply is squeezing the fusion fuel from all sides equally, until the fuel reaches the critical temperature and pressure needed for fusion to occur. However, under these conditions the isotopes of hydrogen and helium used for fusion fuels quickly turn into superheated, very chaotic plasmas, making uniform compression incredibly difficult.
One form of inertial confinement fusion is found in hydrogen bombs, where radiation pressure from a surrounding nuclear fission chain reaction (an A-bomb) is used to compress the deuterium in the bomb's core to fusion conditions. While a proven and very effective technique, it is not a very practical method for creating anything except vast amounts of destruction.
A recently developed, more sophisticated, and far less destructive method of inertial confinement uses an array of many lasers or particle beams, focused on a single small pellet of fuel. The beams are aligned in such a way that the energy from their crossed beams compresses the fuel pellet as well as superheats it, allowing it to achieve fusion conditions. By cycling through fuel pellets rapidly, an inertial confinement fusion reactor might be made into a practical source of electrical power.
The other main means of producing fusion reactions is with magnetic confinement. Magnetic confinement fusion typically uses a tokamak, though a small minority of other designs exist. Similar in configuration to experimental particle accelerators, a tokamak holds a ring of plasma in the doughnut-shaped cradle of powerful, carefully maintained magnetic fields. The constantly looping plasma is superheated to fusion conditions by various techniques, such as high-speed collisions, compressing magnetic fields, and ignition via particle beam.
The experimental fusion tokamak at the Oak Ridge National Laboratory
Fusion of elements heavier than isotopes of hydrogen and helium is of course possible, and is going on all the time in the heart of supermassive stars and in stars leaving the main sequence. They are usually not considered practical as a means of power production.

DEUTERIUM/TRITIUM FUSION
Tech Level: 12
Deuterium and tritium are both isotopes of hydrogen. Normal hydrogen has a single proton for its nucleus. Deuterium has a proton-neutron pair in its center, and tritium has a proton and two neutrons in its nucleus. Slamming an atom of deuterium and an atom of tritium together in nuclear fusion produces an atom of Helium-4 and a neutron along with 17.6 million electron volts of energy. This gives us an energy yield of about 3.38 x 10^14 joules per kilogram of fuel used, compared to the 8.8 x 10^13 joules per kilogram of nuclear fission, or nearly four times as much.
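That per-kilogram figure can be checked directly from the 17.6 MeV released per reaction and the roughly five atomic mass units of fuel consumed:

# Verifying the deuterium-tritium yield quoted above: 17.6 MeV per reaction,
# with one deuterium (~2 amu) plus one tritium (~3 amu) nucleus consumed.

EV_TO_J = 1.602e-19
AMU_TO_KG = 1.661e-27

energy_per_reaction = 17.6e6 * EV_TO_J              # joules per D-T reaction
fuel_mass_per_reaction = (2 + 3) * AMU_TO_KG        # kilograms of fuel per reaction

energy_per_kg = energy_per_reaction / fuel_mass_per_reaction
print(f"D-T yield: {energy_per_kg:.2e} J per kg of fuel")            # ~3.4e14 J/kg
print(f"vs. fission at ~8.8e13 J/kg: about {energy_per_kg / 8.8e13:.1f}x")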
Deuterium-tritium (DT) fusion is the easiest fusion reaction to obtain, requiring the lowest temperature (a "mere" 40 million degrees K) and pressure, but it unfortunately produces a great deal of high-speed neutrons as a byproduct of the reaction. Neutrons are electrically neutral and therefore are not easily contained in magnetic fields. This presents a serious radiation hazard, requiring heavy physical shielding. Worse yet, the shielding itself becomes radioactive after extended use and has to be disposed of. The need to control and eventually dispose of hazardous radioactive shielding could well prove to be the greatest stumbling block in selling DT fusion to the public as a safe and viable energy source.
The sources of the fuel for DT fusion may also prove problematic. Deuterium is relatively easy to obtain; about 1 in 5000 water molecules on Earth contains a deuterium atom as part of its hydrogen component. Sophisticated sifting of ocean water for deuterium gives modern civilization a potential supply of billions of tons of deuterium. And if ocean water can be considered as a fusion fuel, one gallon of ocean water has a potential energy yield equal to 300 gallons of gasoline.
Tritium is another matter. There are no readily-available natural sources for tritium on or near Earth, mainly because tritium has a half-life of only about 12 years. However, tritium can be "bred" by bombarding an isotope of lithium, lithium-6, with high-speed neutrons. Lithium-6 makes up about 7.4% of naturally occurring lithium, giving potential DT fusion reactors an ample supply, but one still very limited and costly to produce compared to deuterium. Because one of the main byproducts of DT fusion is high-speed neutrons, it has been suggested that the inner layer of shielding in a DT fusion reactor be lined with lithium-6, so that it can in effect help to produce its own fuel. There has also been speculation that tritium could be mined from the hydrogen-heavy atmospheres of gas giants. However, along with the problem of the heavy shielding required, the relative scarcity of tritium may be the other limiting factor in the commercial viability of DT fusion reactors.
DT fusion would create power mostly by using the high-speed neutrons it generates to create heat.

DEUTERIUM/HELIUM-3 FUSION
Tech Level: 13
Helium-3 is an isotope of helium that has one fewer neutron than the common form. Fusion reactions using Helium-3 have a number of advantages over DT or Deuterium/Deuterium (DD) fusion, the most significant being that they produce far less radiation. However, Helium-3 fusion processes require up to ten times the temperatures of DT fusion, and produce less energy overall.
Helium-3 is rare on Earth, but exists in abundance on the Moon, deposited on the surface rocks and soil over hundreds of millions of years by the solar wind. Some estimates put the total available supply of Helium-3 on the Moon at over 1.1 million metric tons, enough to supply the world's current energy needs for thousands of years. Deuterium-Helium-3 (DH3) fusion and Helium-3/Helium-3 (2H3) fusion will most likely not become commercially viable sources of power until a moonbase is established and harvesting operations are underway. However, with the recent push by a number of national space agencies to return to the Moon and establish a permanent manned presence there, it is also likely that Helium-3 technologies will be fast-tracked in part to help justify the cost of these initiatives. Indeed, it has been speculated that the whole reason the US, China, and other powers are now looking at the Moon anew is specifically to acquire its vast stores of Helium-3.
Deuterium-Helium-3 (DH3) fusion has the benefit of producing only about one percent of the neutronic radiation of DT fusion, making DH3 reactors safe enough to build right alongside, or even in the midst of, cities. Unfortunately, the reaction produces significantly less energy (about 1/80th that of DT fusion) and requires about three times the operational temperature, so there is a trade-off. However, the energy produced is still millions of times that of petrochemical fuels.
Slamming a deuterium atom and a Helium-3 atom, or two Helium-3 atoms, together in a fusion reaction produces an atom of Helium-4, the more common form of helium, along with one or two free protons, respectively. As the proton is electrically charged, it is easily manipulated by electromagnetic fields, and a means of electrostatic, as opposed to electromagnetic, containment can be used to propagate the fusion reaction.
The fusion point is surrounded and contained by a powerful positively-charged electrical field. When a high-speed proton is given off by the reaction, it is repulsed by the electrical field. However, this act of repulsion transfers its energy potential from the proton to the surrounding field. Running a current through the field converts this potential into electrical energy available for use. Unlike other forms of fusion, in which the fusion process is used to create heat which is then used to generate electricity, DH3 and 2H3 fusion can be used to produce electrical current directly with much less energy loss. In fact, some proponents contend that up to seventy-five percent of the energy released by the fusion process could be harnessed. Thus while DH3 fusion produces less overall energy than DT fusion, DH3’s much higher percentage of energy conversion makes up for this.


HELIUM-3/HELIUM-3 FUSION
Tech Level: 13

As fusion is the holy grail of nuclear energy research, Helium-3/Helium-3 (2H3) fusion is the holy grail of fusion research. The reaction produces very little harmful radiation or radioactive byproducts, even less than DH3 fusion, and it produces current directly, meaning up to 75% of its total energy potential can be harnessed.
Because Helium-3 is rare on Earth, it's likely that 2H3 reactors will first be built in space, especially on future moon bases and settlements, where Helium-3 saturates the dust and surface rocks. On Earth, because deuterium will likely remain much cheaper and easier to acquire, DH3 reactors will probably always predominate.

DEUTERIUM/DEUTERIUM FUSION
Tech Level: 14
Deuterium-deuterium (DD) fusion, like 2H3 fusion, requires much higher temperatures (400+ million degrees K) than DT fusion. However, it yields more overall energy, and deuterium is far more plentiful, and easier and cheaper to obtain, than Helium-3, making it a very economically desirable form of power, especially on Earth. However, like DT fusion, DD fusion produces neutronic radiation, requiring heavier shielding than the Helium-3 reactions, though not quite as much as the DT reaction.
A very unusual form of DD fusion currently being researched is using the collapse of bubbles in deuterium-rich water. The process is known as Bubble Fusion or sonofusion, and is discussed below.

PORTABLE FUSION GENERATOR
Tech Level: 14
Gearing up for the higher temperatures and pressures needed for the more advantageous reactions like DH3 and 2H3 fusion will likely teach engineers a great deal about safely generating, containing, and taming fusion plasma, allowing them to scale down a great deal of the technology previously developed for it. Even though some laboratory fusion devices today are fairly small and can be said to be nominally table-top, these are still experimental and a long way from producing the types of energy needed for commercial applications.
Most likely the first portable fusion generators will be the fusion rockets discussed in that section, particularly the gas dynamic mirror fusion rocket. Helium-3 reactions would by far be preferred for interplanetary craft, as they would require far less shielding to protect the crew and therefore could be made much lighter.
Fusion generators will also likely see early deployment on large sea vessels, just as fission generators did in the 20th century. Modular fusion generators will also likely be developed to act as portable power sources for military and disaster relief operations, carried by either truck or cargo plane.

BUBBLE FUSION
Tech Level: 15
Bubble fusion is also called sonofusion. Unlike the cold fusion claims of the late '80s and early '90s, bubble fusion actually does hold the promise of creating tabletop fusion generators sometime in the coming century.
In March 2002, in the journal Science, researchers reported that they had created fusion in a canister of deuterated acetone, in which the normal hydrogen is replaced by deuterium. Every five milliseconds, researchers bombarded the canister with neutrons, causing tiny, microscopic cavities to form in the liquid. At the same time, they bombarded the acetone solution with selected frequencies of ultrasound, which caused the cavities to expand to 100,000 times their original size in microseconds, just barely large enough to be spotted with the naked eye. Rusi Taleyarkhan, the principal investigator of this phenomenon and a professor of nuclear engineering at Purdue University, was quoted in an article as comparing the potential energy buildup within the expanding bubbles to stretching a slingshot from Earth to the Sun.
When these bubbles spontaneously collapsed a fraction of a second later, they generated heat and pressure within them equivalent to that found in stars. Temperatures as high as 10 million degrees Kelvin and pressures of thousands of atmospheres exist briefly at the heart of the imploding bubbles. This is enough to overcome the coulomb barrier in the deuterium within it, causing the atoms to undergo fusion.
The main advantage of bubble fusion is that while it still generates the extremes needed to create fusion, the bubbles in which they're created are so tiny as to pose no real risk to the outside environment.
At the moment, bubble fusion is seen less as a potential means of energy propagation and more as a means of producing large amounts of localized neutrons. Its first practical applications will be to act as part of portable neutronic sensors, to help synthesize certain substances like tritium, and for some medical radiation therapies. Though it's possible we may see bubble fusion generators someday powering our homes, most agree that decades of research and development have to take place first.

DEEP PLASMA FOCUS FUSION REACTOR
Tech Level: 15
Deep Plasma Focus (DPF) fusion is also called dense plasma focus, z-pinch, or micropinch fusion, depending on the variation of the idea used. It is designed to create temperatures and pressures for fusion fuel plasmas that cannot be obtained with other confinement techniques.
In simple terms, a DPF reactor uses powerful electrostatic and electromagnetic forces to swirl superheated, super-accelerated plasma into a thin, compressed column--a "pinch"-- where the pressure and heat escalate to unheard of levels at the column’s thinnest point. The byproduct of a DPF reactor mostly comes out in the form of a beam or jet on the other end of the pinch, one of the reasons DPF is usually discussed more as a form of fusion rocket propulsion than as a source of power. Still, DPF reactors can be made to harness this exhaust to produce electrical power, either through direct current induction or indirectly through heating.
A commercially-viable DPF reactor would require enormously powerful electrical and magnetic fields molded and handled with extreme precision in order to work. With other types of fusion reactions requiring less extreme methods of propagation and containment, DPF fusion will probably not be fully pursued until other types of fusion technology are already proven viable and already on the market. A more technical and detailed examination of this technology is included in the links below.
DPF reactors will most likely be needed to create the more intense types of fusion reactions listed below, both of which rely on temperatures approaching a billion degrees Kelvin or more.

HYDROGEN/BORON FUSION
Tech Level: 15
A Hydrogen-Boron (HB) fusion reaction creates three helium nuclei. Like DH3 fusion, HB fusion is a clean reaction with no radioactive waste, and its energy is carried by charged particles that can be converted directly into electricity via electrostatic confinement, making it a very clean and efficient source of power.
The isotope of boron used is Boron-11, which has one extra neutron, combined with ordinary hydrogen. Boron is a fairly common element found in both the oceans and in Earth's crust; it is most commonly known as a cleaning agent.
This type of fusion does have some disadvantages, the first being that it needs temperatures of nearly one billion degrees Kelvin to sustain. Also, such energetic reactions produce X-rays and low-energy neutrons as a byproduct, resulting in some radiation hazard in operating an HB reactor.

PROTON CHAIN FUSION
Tech Level: 16
Hydrogen-Hydrogen fusion, or more properly Proton-Chain fusion, uses plain old atomic hydrogen, requiring the insane temperatures and pressures usually only found in the hearts of stars. It releases the most energy of all the fusion reactions discussed here and is the basic fusion process that brings light to the universe.
Proton Chain fusion actually consists of several steps. Two atomic hydrogen nuclei (basically naked protons, as the voracious heat of the fusion environment has long since stripped the atoms of their electrons) collide and fuse, forming a deuterium nucleus. At this stage, energy is given off as a positron and a neutrino. The deuterium nucleus then fuses with another proton, forming Helium-3 and a burst of gamma rays. Finally, two Helium-3 nuclei fuse, creating the stable and fusion-resistant Helium-4 nucleus along with two free protons. Like Hydrogen-Boron fusion, this reaction can produce electrical current directly, but will also need heavy shielding because of the gamma radiation produced. Of course, this excess radiation could be converted to heat, which in turn could help drive electrical turbines.

MUON-CATALYZED FUSION
Tech Level: 16
Fusion reactors may take advantage of Muon-Catalyzed Fusion, where the electron of the hydrogen fuel is replaced with a muon. A muon is 207 times more massive than an electron, and therefore reduces the classical Bohr radius of the atom by a like amount. Thus, atomic nuclei are able to approach each other more closely, and this enhances the likelihood of overlapping wave functions, increasing the probability of fusion.
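A quick sketch of how dramatically the muon shrinks the atom, using standard textbook constants; the reduced-mass correction is included because the muon is not negligibly light compared to the proton:

# The Bohr radius scales inversely with the (reduced) mass of the orbiting
# particle, so swapping the electron for a muon shrinks the atom by roughly
# two orders of magnitude, letting nuclei approach far more closely.

BOHR_RADIUS_M = 5.29e-11      # ordinary hydrogen
MUON_TO_ELECTRON_MASS = 207   # approximate mass ratio
PROTON_TO_ELECTRON_MASS = 1836

# Reduced mass of the muon-proton system, in electron masses.
reduced_mass = (MUON_TO_ELECTRON_MASS * PROTON_TO_ELECTRON_MASS) / (
    MUON_TO_ELECTRON_MASS + PROTON_TO_ELECTRON_MASS)

muonic_radius = BOHR_RADIUS_M / reduced_mass
print(f"Muonic hydrogen radius: ~{muonic_radius * 1e15:.0f} femtometers")
print(f"Ordinary hydrogen radius: ~{BOHR_RADIUS_M * 1e15:.0f} femtometers")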
Muon-Catalyzed Fusion is a well-proven technique in the laboratory, and is notable in that it can create fusion reactions at lower temperatures than ordinary fusion reactions. However, the energy needed to create the muons for the reaction offsets any energy gained. If this hurdle can be overcome by finding a way to produce muons en masse cheaply, Muon-Catalyzed Fusion could lead to a revolution in the way fusion reactions are created.

TABLETOP FUSOR
Tech Level: 17
One of the long-held dreams of science fiction is the cheap, light, easy-to-use fusion generator, sometimes called a fusor or fusion battery. Perhaps the best known example was the "Mr. Fusion" device that powered Dr. Brown's time-travelling DeLorean in the Back to the Future movies.
There are of course a lot of technical obstacles to overcome before fusion reactors become as light, cheap, and easy to use as Mr. Fusion. Rather than downsizing enormous, billion-degree-temperature reactors, both bubble fusion and muon-catalyzed fusion point the way to how small, portable fusion generators may someday be made to work without vast amounts of shielding or magnetic containment.
With the bubble fusion generator, the problem would be creating and manipulating enough bubbles to generate the power needed to run usable devices such as a computer, phone, or car. In large open containers such as those used in the initial experiments, that many bubbles forming, expanding, and collapsing in rapid succession may end up interfering with each other.
An alternative is to create a liquid chamber honeycombed with microtubes, each just wide enough to accommodate a single bubble. These microtubes would also contain the vibrational equipment. Neutrons would still bombard the entire chamber from an external source. Each bubble could thus be insulated from the rest, and the power of each bubble could be tapped directly via heat conductors through these tubes. Micro-engineering capable of reliably handling the continuous expansion and contraction of such high-energy, high-temperature microbubbles over long periods of time edges into science fiction territory.
It is also possible to create muon-catalyzed tabletop fusors, but in this case a miniature reactor to create the muons would be needed. This may or may not be separate from the main fusor unit itself. An alternative would be for muons to be created in a large centralized facility, then stored and sold in portable "traps". These modular traps would snap onto the fusor unit along with the fusion fuel, and like the fuel would have to be periodically replaced or refilled.



ANTIMATTER POWER


The antimatter-powered warp core at the heart of the starship Enterprise, from Star Trek: The Next Generation. Image copyright Paramount Pictures.

Advanced Antimatter Traps
Tech Level: 14
Antimatter Injection Generator
Tech Level: 14
Antimatter Induced Fusion
Tech Level: 15
Matter/Antimatter Annihilation Reactor
Tech Level: 18
Antimatter Factories
Tech Level: 19
To the popular imagination, antimatter is a nigh-magical substance. In various fictional sources, it has been used to warp time and space, destroy whole worlds, and almost kill the Pope. But it is also a very fundamental physical phenomenon in the real world, one responsible for much of our understanding of the subatomic world. Links to pages with much more detailed explanations into the nature and workings of antimatter are provided at the end of this article.
This article addresses the use of antimatter in power generation only. Use of antimatter in other applications, such as space propulsion or weaponry, will be covered in other articles.
Antimatter particles have the same mass as normal matter particles, but opposite electrical charges. Matter and antimatter mutually annihilate each other on contact and are converted to pure, 100% energy. This energy usually takes the form of a combination of gamma rays, neutrinos, antineutrinos, and pions. This total energy conversion makes forms of antimatter very attractive as a fuel. One gram of antimatter, annihilated with one gram of normal matter, can generate as much energy as 23 Space Shuttle external fuel tanks. A kilogram of matter and antimatter smashed together would produce an explosion equal to approximately 43 million tons of TNT.
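Both of those figures follow from E = mc^2; the sketch below reads the second figure as one kilogram of antimatter annihilating with one kilogram of matter:

# Matter-antimatter annihilation energy, E = m * c^2, for the masses quoted above.

C = 3.0e8                       # speed of light, meters per second
TNT_J_PER_MEGATON = 4.184e15

energy_two_grams = 0.002 * C**2      # 1 g antimatter + 1 g matter annihilated
energy_two_kilos = 2.0 * C**2        # 1 kg antimatter + 1 kg matter annihilated

print(f"1 g + 1 g  : {energy_two_grams:.1e} J")
print(f"1 kg + 1 kg: ~{energy_two_kilos / TNT_J_PER_MEGATON:.0f} megatons of TNT")   # ~43 Mt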
Antimatter was first postulated by physicist Paul Dirac in 1928, and in 1932 the first antiparticle, the positron, was detected. Other antiparticles were discovered in the decades after as nuclear accelerators became more powerful and sophisticated. In 1995, the first antihydrogen atoms (composed of an antiproton nucleus and an orbiting positron) were created at the CERN facility in Europe. Antimatter traps--devices that use magnetic and electrical fields to hold antiparticles and antiatoms for extended periods of time--have also steadily improved over the decades. Today, the nature and use of antimatter is studied at many facilities around the world.
Though antimatter is much rarer than its twin, it is readily found in certain circumstances in nature (for example, it is a natural byproduct of proton chain fusion which powers most stars, and the annihilation of these antiparticles actually accounts for about 11% of the Sun’s energy output.) However, it has proven extremely difficult to manufacture antimatter in quantity on Earth. It currently costs far more in terms of energy and resources to create an antiparticle than the amount of energy one could ever hope to get back from it.
This, combined with extremely low production rates even using the most advanced equipment, keeps antimatter from being considered seriously as a means of power production, at least for the foreseeable future. At CERN’s current rate of antiproton production, it would take about 2 billion years to create 1 gram of antihydrogen. Even if production time could be accelerated to within a few years, it would still cost over $62 trillion in resources. Prospects for antimatter power any time within this century look grim, especially when compared to other cheaper, more readily available sources of power that will be available as alternatives.
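As a rough sanity check on that production figure, the implied antiproton rate can be back-calculated. The sketch below is only illustrative; the rate it prints is an assumption derived from the 2-billion-year estimate above, not an official CERN number.

# What antiproton production rate is implied by "2 billion years per gram"?
ATOMS_PER_GRAM = 6.022e23      # antihydrogen is ~1 gram per mole, one antiproton per atom
SECONDS_PER_YEAR = 3.156e7

implied_rate = ATOMS_PER_GRAM / (2e9 * SECONDS_PER_YEAR)
print(round(implied_rate / 1e6, 1), "million antiprotons per second")   # roughly 10 million per second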
However, antimatter may eventually come into its own if production techniques can be ramped up considerably. How this could be done is usually left vague, even by experts in the field. The assumption seems to be a steady progression of ever more powerful, compact, and advanced accelerators in the coming decades and centuries. Whether there is a practical upper limit to how much antimatter can be artificially produced, or whether enough could ever be produced to make antimatter an economically practical power source, is a complete unknown. One can be heartened somewhat, however, that antimatter production has grown at near-exponential rates since the first artificially produced antiparticles in the 1950s.
Even if the creation and storage of antimatter fuel remains difficult, the availability of such a potent, compact energy source may outweigh the production cost in certain circumstances. These may include deep space propulsion and power, weapons, and high-end scientific applications.
Problems with antimatter go beyond just production limits. It has also proven particularly difficult to store for significant amounts of time. For obvious reasons, it cannot be allowed to come into contact with normal matter, so antimatter particles must be stored in Penning traps, which suspend them in vacuum by a combination of electrical and magnetic fields. Charged particles respond readily to these fields, but add the complication of repelling each other, which in turn makes storing them even more difficult, especially in large numbers.
Uncharged antiparticles such as antihydrogen require even more complicated radiative atomic traps that use the dipole moment of the atoms to suspend them, or lasers to hold a very small number of particles in place by balancing them between intersecting beams. Larger antiatoms such as antihelium could be made into ions (antiions?), which would respond better to conventional Penning traps but would run into the same repulsion problems.
Cooling the antimatter to cryogenic temperatures by various techniques could also prove useful, especially if antihydrogen could be made into a somewhat more manageable solid like antihydrogen ice.
Another hitch in using antimatter as a power source is that not all energy released by matter/antimatter annihilation is readily usable. Depending on the type of reaction used (positron/electron, proton/antiproton, hydrogen/antihydrogen, etc), up to half the energy released may be neutrinos, with the rest being gamma rays and pions.
Pions, being charged particles, can be readily used to generate power by surrounding the reaction chamber with a like-charged electrical field. The pions and field repel each other, and the expansion and contraction of the field provided by the ‘push’ of the pion flux against the field can be used to create electrical current. Gamma rays would be more difficult to harness, but can either be focused with advanced mirrors, or at least used to heat a working fluid.
The neutrinos, however, are small, nearly massless ghost particles that can flash through a light year’s worth of solid lead and never impact anything. Any neutrinos created by matter/antimatter annihilation are pretty much considered lost. Even at a loss of 25 to 50 percent of its energy output, however, antimatter still produces far more power gram for gram than any other type of fuel.
Antimatter reactors would produce a great deal of radiation from pion and gamma ray leakage, and would require very heavy shielding and safety measures, much more so than modern day nuclear reactors.

ADVANCED PORTABLE ANTIMATTER TRAPS
Tech Level: 14
Diagram of a modern day Penning trap.
Advanced techniques in manipulating powerful electromagnetic fields to trap highly energetic plasma, such as those being researched for use in plasma rockets and fusion generators, can also be adapted to build better, more compact, and more efficient antimatter traps. Also useful would be more advanced particle cooling techniques and more efficient portable energy sources that keep the unit working longer, another technology being actively pursued.
Though advanced portable traps will not make antimatter economically practical, they will go a long way in making antimatter actually useful for a number of applications where profits may not be the primary concern, such as deep space propulsion and weaponry.

ANTIMATTER INJECTION GENERATOR
Tech Level: 14
This is the simplest and most easily achievable type of antimatter reactor. Antimatter particles such as antiprotons are injected into a working medium like water or liquid hydrogen. Just enough antiparticles are used so that the energy released by their annihilations superheats the liquid, turning it to gas or steam; this in turn is used to turn turbines to produce electricity.
If enough antimatter is used, the liquid fuel may also be superheated into plasma and can be used for space propulsion and other applications.
The antimatter may either come from a reserve in a Penning trap or may be manufactured on the spot as needed in an attached accelerator. The potency of the generator can be adjusted by releasing more or less antimatter into the fluid. More antimatter means more annihilations and much more heat added to the fluid, which results in much more energetic steam.
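To get a feel for the quantities involved, here is a rough, hedged estimate of how much antiproton mass it would take to flash one kilogram of water into steam. The 50% usable-energy fraction is an assumption based on the neutrino losses discussed earlier, and the sketch ignores all engineering losses.

# Rough estimate: antiprotons needed to turn one kilogram of water into steam.
EV = 1.602e-19                        # joules per electron-volt
E_ANNIHILATION = 2 * 938.3e6 * EV     # one proton-antiproton annihilation, ~3.0e-10 J
USABLE_FRACTION = 0.5                 # assumed; much of the rest escapes as neutrinos

# Heat 1 kg of water from 20 C to 100 C, then boil it off.
heat_needed = 4186 * 80 + 2.26e6      # sensible heat + latent heat of vaporization, ~2.6e6 J

annihilations = heat_needed / (E_ANNIHILATION * USABLE_FRACTION)
antiproton_mass_kg = annihilations * 1.67e-27

print(round(annihilations / 1e16, 1), "x 10^16 annihilations")       # ~1.7e16
print(round(antiproton_mass_kg * 1e12), "nanograms of antiprotons")  # a few tens of nanograms per kg of steam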
This technology assumes that antimatter production is considerably more successful at Tech Level 14 than it is today, and that the generator will have access to adequate amounts of antimatter fuel for its needs. The cost of antimatter is still expected to be prohibitively high for creating a widely-available, economical energy source, but in certain specialized circumstances (such as being in deep space with no other fuel available) these types of generators may prove invaluable.
At higher Tech Levels, if antimatter ever becomes relatively cheap to manufacture, these types of devices may become commonplace.

ANTIMATTER INDUCED FUSION
Tech Level: 15
This is a modification of a propulsion scheme considered for the AIMSTAR project, detailed in the article for Antimatter Rockets linked to at the bottom of this page.
A Penning trap is set up to hold a large population of antiprotons. Pellets of fission/fusion fuel are "shot" through the trap, brushing against the outer layer of the antiparticle mass held there. The energy of the antimatter annihilations initiates a fission reaction, which in turn sparks a fusion burn in the compressed fuel mix. This creates superheated plasma which can then be used to generate power.
After each such "burn" the antiprotons in the penning trap are allowed to reset back to their original configuration, minus about 0.5% of their original mass, which was used up in the burn cycle annihilations. After every 50 burns, new antiprotons are injected into the magnetic bottle to reload the trap. The reactor would fire at about 200 burns per second.
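Reading those figures together gives a feel for the duty cycle. This is a hedged sketch only; it assumes each burn consumes about 0.5% of the antiprotons then in the trap, which is one possible reading of the description above.

# Rough bookkeeping for the burn/reload cycle described above.
BURN_RATE = 200          # burns per second
BURNS_PER_RELOAD = 50
LOSS_PER_BURN = 0.005    # assumed: ~0.5% of the trapped antiprotons consumed per burn

reload_interval = BURNS_PER_RELOAD / BURN_RATE           # seconds between antiproton reloads
remaining = (1 - LOSS_PER_BURN) ** BURNS_PER_RELOAD      # fraction left after 50 burns

print("reload every", reload_interval, "seconds")         # 0.25 s, i.e. four reloads per second
print(round(remaining * 100), "% of the antiprotons remain before topping off")   # roughly 78%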
Fuels being considered include a deuterium-tritium (DT) mix and a deuterium-helium-3 (DHe3) mix. The DT fuel provides a hotter burn and more energy, but the tritium for the DT mix is much harder to obtain (at least at projected Tech Level 15 capabilities) than helium-3, and the reaction produces far more radiation than the DHe3 fuel. Helium-3 can be readily mined from the surface of the Moon and (presumably) other airless space rocks near the Sun, which would include Mercury, Phobos, Deimos, and perhaps countless asteroids. Tritium would either have to be manufactured or skimmed from the atmospheres of gas giants.
The main advantage of this type of reactor would be its potential low mass and compact size for its theorized enormous energy output, especially given the capabilities of Tech Level 15 technology. This would make it valuable for space-based applications beyond simply propulsion. Again, though, the main limiting factor to creating this type of reactor may be the economics of antimatter production and storage.

MATTER/ANTIMATTER ANNIHILATION REACTOR
Tech Level: 18
These are the types of antimatter power sources usually depicted in science fiction sources, most famously to power the starships in Star Trek. They generally use direct annihilation of equal parts of matter and antimatter, and harvest the full power of the pion flux and gamma rays generated. They are also sometimes called MAM or M/AM (Matter/AntiMatter) reactors.
The warp cores in Star Trek are probably the best detailed type of MAM reactor in science fiction. They are a bit misnamed; the warp cores in and of themselves do not warp space, but instead provide the power for the engines to do so.
A warp core MAM reactor uses streams of tightly-contained deuterium and antideuterium annihilations to superheat a working medium surrounding the reaction, creating a highly energetic plasma which in turn is used to power the ship’s needs (phasers, sensors, warp drive, etc.) They produce an extreme amount of power; one source quoted the maximum power output of the Enterprise D at 4.77 million terawatts over a 12 hour period. Compare this to the total power output of modern day Earth, which is typically about 15 terawatts. Of course, the Enterprise D would have to consume thousands of tons of antimatter to produce this, but given the ship’s size and very high Tech Level (about 21) it doesn’t seem unreasonable for it to have that much stored on board or to have the means to produce it.
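Just to see whether those numbers hang together, here is a hedged back-of-envelope check. It assumes the reactor turns annihilated mass into usable power with near-perfect efficiency, which is certainly generous.

# Does "thousands of tons" of antimatter roughly match 4.77 million terawatts for 12 hours?
c = 3.0e8                          # m/s
power = 4.77e6 * 1e12              # watts (4.77 million terawatts)
duration = 12 * 3600               # seconds

energy = power * duration                    # ~2.1e23 J
total_reactant_mass = energy / c**2          # kg of matter plus antimatter annihilated
antimatter_mass = total_reactant_mass / 2    # half of the reactants would be antimatter

print(round(antimatter_mass / 1000), "metric tons of antimatter")   # on the order of a thousand tons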
One potential problem that arises with this type of reactor is keeping the flow of the reaction steady. The explosions caused by the mutually-annihilating particles are so intense they could disrupt the incoming flow of reactants, diminishing the production of power and perhaps stopping it altogether. The same technologies used in advanced fusion reactors may be required: extremely powerful magnetic fields, and moving the reactants at very high velocities (perhaps even near light speed) in order to overcome the back pressure of the annihilation flux. The system may also be rapidly pulsed. Because the pions and gamma rays travel outward from the annihilation point at or very near the speed of light, the remnants of each explosion would clear the reaction chamber quickly enough that the engine could probably be pulsed thousands if not millions of times a second without a significant loss in efficiency.

ANTIMATTER FACTORIES
Tech Level: 19
These are devices, either advanced particle accelerators, colliders, nuclear reactors, advanced quantum manipulators, or some esoteric combination of all of those, designed to produce large amounts of antimatter for a variety of applications. In the universe of Star Trek, antimatter is produced at large, advanced, planet-bound facilities, but starships carry their own miniature antimatter factories for use when their stored supply runs low.
Such a factory may still require vast amounts of input energy, with large populations of subatomic particles smashed together at near light speed. Or it may rely on an as yet unguessed-at quantum trick, using sophisticated knowledge of the quantum world but low amounts of energy to simply flip the charge on an electron from negative to positive. The latter technique is how antimatter is produced in quantity in the Star Trek universe.
Because of the vague nature of how these devices would work, their placement on the Tech Level scale is fairly arbitrary. They are placed where antimatter technology begins to become commonplace as typically portrayed in science fiction, usually late in the Space Opera era (Tech Levels 19-20.)
Today, the entire world output of antimatter can be measured in nanograms per year. An antiparticle factory would be able to produce millions upon millions of times that, creating thousands of tons per year, perhaps more.
Given sufficiently advanced technology, it seems possible that antimatter production could eventually be ramped up to such spectacular levels. But whether it could ever be made economical, where it would cost less in resources to produce the antimatter than the value of the energy one could extract from it, is another matter. While antimatter production at this Tech Level may prove to be truly impressive, it may still never reach the break-even point.
However, this may only be true of direct hard costs. The indirect, ‘soft’ benefits of such a technology may still make it worth the investment. The availability of so much antimatter, even if comparatively expensive in and of itself, would transform any society that had access to it. It would find applications in many different fields and open up many potential markets, technological and otherwise. For example, it may well become the deep space propulsion technology of choice, especially in the vast voids of interstellar space where low fuel mass and high energy returns would be paramount. That in turn may open up new resources (such as Oort cloud comets or quicker access to other star systems) which in turn would generate profitable returns. So even though the direct return on investment, production cost vs energy return, may not reach the break-even point, the larger benefits of the technology in the society at large may make investing in the technology worthwhile.



ORBITAL TRAVEL


SPACEPORTS


An artist's conception of Spaceport America, the world's first fully commercial spaceport being built in New Mexico to support space tourism flights.

Spaceport - Tech Level: 9
Oceanic Spaceport - Tech Level: 13
Orbital Spaceport - Tech Level: 14
Starport - Tech Level: 16
Spaceports have existed since the 1950s. Cape Canaveral in Florida, Edwards Air Force Base in California, and the Baikonur Cosmodrome in Kazakhstan are all examples of spaceports. In science fiction, they are an iconic mainstay of most futuristic settings, especially in space opera adventure tales. Mos Eisley (‘a wretched hive of scum and villainy’) from Star Wars is a very well known example.

SPACEPORT
Tech Level: 9
A spaceport is any centralized travel hub designed to handle the launching and landing of space vehicles. While the name has been applied to a wide variety of space facilities, a formal spaceport must handle three main functions:
--It must be able to oversee both the launching and landing of space vehicles. A facility that can only handle launchings or only handle landings is more properly called a Launch Facility or Landing Facility, respectively.
--It must be able to repair and perform maintenance on such vehicles on its grounds.
--It must be able to handle the transport of passengers and/or cargo to and from the aforementioned space-going vehicles.
In other words, they must be to spacecraft what seaports are to watercraft and what airports are to aircraft. Facilities may range from sparse and utilitarian, to sprawling and cutting edge.
A spaceport's location is often key to its success. Because of all the different resources required for the operation of a spaceport, they often need to be at the center of a large network of transportation and distribution. But this is also dependent on the needs of the nation or organization building and maintaining it. During the early Cold War, the need for secrecy about the specifics of this emerging technology was paramount, so new spaceports were located in isolated places where access could be easily controlled.
Even as the need for secrecy faded, spaceports continued to be built in more remote locations, both for the sprawling room that the building, maintenance, support, launching, and landing of spaceships need, and to help separate civilian populations from potential disaster should a launch or landing go wrong (as illustrated by the launch disaster in Xichang in China in 1996.) Even the world’s first fully commercial port, Spaceport America, is located in a desert.
This may not always be the case in the future. As spaceships become cheaper and more reliable in the coming decades, developers may want to cut costs and locate spaceports closer to already existing transportation and population hubs. Sites that are close to, or even adjacent to, facilities such as airports, seaports, train lines, major highways, and so on would be highly desirable. Having the spaceport located close enough to a major city would also be a plus, allowing it to take advantage of a larger potential labor pool and of the commercial development that would spring up in the surrounding area. Older airports and seaports could even be converted into spaceports in whole or in part.
Spaceports that include megastructure launch systems (such as a space elevator or launch tower) may be located on mountaintops in order to ease altitude requirements of such artifacts. However, in most other cases spaceports will be located on large, flat expanses.
Some spaceports in science fiction are converted seaports, and have the spaceships moor in the water for servicing and maintenance, and launch directly from there. This is a motif in some science fiction anime, and helps to rationalize away why some of the ships in that genre are built with a strong resemblance to ocean-going warships (i.e., the need for mooring in water requires certain buoyant design characteristics.) Sometimes the spaceship may even have to fully submerge to be serviced in an underwater facility, as in the classic Gatchaman anime series.
A location as close to the equator as possible is also advantageous for a spaceport, as Earth’s rotation there can give a bigger velocity boost to launches than elsewhere on the planet.
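The size of that boost is easy to estimate. The sketch below considers only the eastward surface velocity imparted by Earth's rotation and ignores finer points of orbital mechanics; the latitudes used for comparison are approximate.

import math

# Eastward surface velocity contributed by Earth's rotation at a given latitude.
EARTH_RADIUS = 6.371e6     # meters
SIDEREAL_DAY = 86164       # seconds

def rotation_boost(latitude_deg):
    return 2 * math.pi * EARTH_RADIUS * math.cos(math.radians(latitude_deg)) / SIDEREAL_DAY

print("Equator:       ", round(rotation_boost(0)), "m/s")      # ~465 m/s of free velocity
print("Cape Canaveral:", round(rotation_boost(28.5)), "m/s")   # ~409 m/s
print("Baikonur:      ", round(rotation_boost(45.6)), "m/s")   # ~325 m/s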
The exact layout and facilities at any individual spaceport will vary wildly from one to another. But major features may include:
-- A nexus of major roads or railways in the region in order to facilitate the large influx of different kinds of resources most spaceports will need. Since some assembly or maintenance facilities may be off-site, railways that can handle large multi-ton spacecraft components, or maybe even whole spacecraft, may be necessary.
-- If the spaceport is located near a coast, it may have an adjacent seaport to help handle incoming and outgoing resources, cargo, passengers, and component parts. In the case of water-berthed spacecraft, the spaceport may be converted directly from an already-existing seaport.
-- Runways to accommodate increased air traffic delivering resources, passengers, cargo, and component parts. This may be a fully developed and already existing airport located adjacent to the spaceport. In some cases, such as with spacecraft that can launch and land on runways, existing airports may simply be converted into spaceports, as is the case with the Mojave Air and Space Port.
-- Landing pads to accommodate vertical take off and landing spacecraft, as is often seen in many science fiction sources.
-- Hangars, bays, or assigned lots for storing spacecraft, and in some cases large buildings for assembly or disassembly of spacecraft from component parts, which also may serve as hangars or bays.
-- Maintenance facilities. These may be handled in individual storage hangars or bays, or spacecraft may need to be moved to a centralized facility. Most spaceports will have a mixture of both, with routine maintenance and minor repairs being done in-hangar, while overhauls and major damage would require moving the craft to a centralized, more extensively equipped facility.
-- Fuel storage facilities. As most spacecraft fuel is potentially dangerous, and some requires special consideration such as cryogenic temperatures, these are usually located a good distance away from the main spaceport facilities proper.
-- Orbital-range or better sensor arrays, including radars, radio telescopes, visual telescopes, infrared sensors, and more. In some cases, however, some of these may be located off-site in order to take advantage of already-existing facilities.
-- Orbital range or better communication facilities.
-- Centralized command and control, aka ‘mission control.’ In airports this is the Control Tower; a spaceport may need to have a more extensive facility than a single tower, but the same principle applies. Most spaceports will also have one or more back-up facilities for use in an emergency.
-- Dedicated emergency services to deal with crash landings, fire, and other unfortunate circumstances.
-- Dedicated security forces.
-- Its own dedicated power source or generators.
-- Cargo and passenger terminals, to facilitate the loading and unloading of both. Unlike in most modern airports, passengers and cargo will likely be ferried out to and from the spacecraft, rather than having the spacecraft taxi right up to the terminal, in order to protect the terminal should something go catastrophically wrong with the spacecraft.
-- Adjacent commercial and tourist districts. In the real world, spaceports accessible to the public are huge tourist draws. In science fiction, they are often the major hubs of commerce and trade. In both cases, many entrepreneurs will set up shop close by, often leading to districts dedicated to serving the spaceports’ personnel, passengers, and visitors. These will often include hotels, bars, restaurants, shops, museums, tourist attractions, ‘red light’ districts and more, often centralized along the major roads leading to the spaceport. The quality of these commercial districts will vary wildly depending on many factors, from spit-polished corporate showcases to grungy and seedy underworld dives.
-- Adjacent military facilities. Some (but not all) spaceports may be considered major military assets by their owning nation or organization, and will have a number of appropriate military assets and units placed nearby in order to protect it. This will be above and beyond the normal security forces most spaceports will have. Military facilities may range, depending on the spaceports’ strategic importance and perceived vulnerability, from a few barracks and a motorpool housing spaceport-attached troops, to a full-scale fully-outfitted base adjacent to the spaceport. In some cases, the spaceport may be created from an already existing military base and still serve in that capacity, as is the case with both the Vandenberg and Edwards Air Force Bases in the US.
-- Space defenses. In science fiction settings, spaceports are almost always the single most important group of targets in any invasion or assault from space, either to be captured for use by the invading force or to be destroyed to deny their use by the enemy. While conventional military facilities attached to the spaceport may guard against conventional assaults, more sophisticated defenses may be needed to guard against orbital bombardment and/or invasions. These may include missile silos, ABM facilities, armored turrets sporting sophisticated weapons like lasers and particle beams, and maybe even deep bunkers housing meson guns. Spaceports may also keep combat drones and dedicated combat spacecraft on hand for such situations as well.
-- Quarantine facilities. The earliest manned lunar missions made post-flight quarantine a mandatory practice, for fear that astronauts might inadvertently bring back a harmful space-borne infection. Today, this is no longer the case, as we know space itself is sterile and the chance of contracting an infection from a vacuum-resistant microbe can be statistically zeroed out. However, in science fiction settings, especially those with extensive and regular contact between far-flung human-populated worlds (due to fundamentally different molecular biology, alien biospheres seem very unlikely to give rise to human-threatening microbes), cross-world disease vectors become a real danger. Passengers and cargo will likely be extensively screened for potential microbial invaders, and if any are detected, will be forced into quarantine for a certain amount of time until the infection can be dealt with. Threats from invasive animal and plant species (from both human and alien worlds) may also require quarantine procedures.

OCEANIC SPACEPORT
Tech Level: 13
These are spaceports built on deep-ocean platforms or artificial islands. They require the technology to build large, anchored, deep-water surface facilities far from any landmass, which is estimated to come online at Tech Level 13. Reasons for building a deep-ocean spaceport may include:
-- For security reasons.
-- To take advantage of certain weather patterns. For example, one proposal for building a space elevator outlined anchoring it onto a massive deep-ocean platform some distance off the western equatorial coast of South America, at a location known for having milder weather and winds on average than most other places along the equator.
--To take even greater advantage of water-berthing for spacecraft, especially (as is the case in some scifi sources) when some of the spacecraft may be very massive and require deep water for their berthing.
-- On science fiction worlds, there may be no landmasses sufficiently close to the equator, so a deep ocean platform may be the best solution.
Oceanic spaceports, because of potential restrictions on surface space, may be able to handle only vertical launches and landings, or ocean launches and landings. Runways may prove problematic.
Support facilities may be located under the main platform beneath the waves, or may be housed in a variety of surface ships of varying size that move from floating berth to floating berth to service each spaceship in turn. Or, it may be a combination of both approaches.
Some very advanced oceanic spaceports may be completely submersible for stealth reasons, rising above the waves only when needed to launch and receive spaceships.

ORBITAL SPACEPORTS
Tech Level: 14
The Spacedock above Earth in the Star Trek universe, which functions as an orbital spaceport. Image copyright Paramount.
This is an orbital facility that performs all the functions of a surface-based spaceport. There is some superficial resemblance between orbital spaceports and deep-space support bases, but the main difference is that a deep-space base is designed to be primarily self-sufficient while an orbital spaceport is dependent on the world it orbits.
Orbital spaceports can become a reality only after launch costs fall to sufficient levels to allow an extensive infrastructure to be put in place in space. While Orbital Vector predicts launch costs may begin to fall substantially starting at Tech Level 13, sufficiently extensive facilities in space to allow a full-service orbital spaceport would likely not exist until one Tech Level later.
Orbital spaceports would likely consist of one or more centralized space stations, maybe even a full-on space colony such as a Bernal Sphere or Stanford Torus, in order to house personnel and facilitate the processing of passengers and cargo. Spaceship berthing, maintenance, fueling, and so on will most likely be done outside of the station in full vacuum in order to maximize space within the station and efficiency of the procedures. The spaceport may have fully pressurized berths for spacecraft, but would likely use them only for very extensive repairs or major overhauls.
The advantages of an orbital spaceport are numerous. Some spaceships (such as the starship Enterprise from the Star Trek universe) are simply not designed to land on a planet or even enter an atmosphere, and must be serviced while in space. Not having to land and launch from deep in a planet’s gravity well or atmosphere may also reduce turnaround time for spacecraft when time is critical. An orbital port could also take advantage of cheap but plentiful solar power. Fuel and other consumables may also be extracted and transported more cheaply from in-space sources such as the Moon instead of lifting them directly from Earth.
But perhaps the best reason for an orbital spaceport is to provide an additional buffer against quarantine dangers, such as from off-world viruses or invasive species. If a dangerous lifeform is detected on an incoming ship, it has a much lower chance of inadvertently escaping into the world’s biosphere from an orbital facility than from a ground-based one. Orbital spaceports might therefore have more extensive quarantine facilities than their ground-based counterparts, as their main purpose would be to deal with this threat.
In science fiction settings that have to deal with a great deal of interworld traffic, a planet may have both orbital and ground spaceports, perhaps working in tandem. For example, in the Traveller RPG setting, most high-tech worlds funnel their space traffic through both an orbital spaceport and a ground-based spaceport. The orbital spaceport, or "up port", inspects all incoming spaceships and cargo, specifically looking for dangerous microbes, invasive lifeforms, and contraband. While some exchange of passengers and cargo is handled in the orbital facility, the bulk of that is left for the ground-based spaceport, or "downport", after the ship passes inspection in space.

STARPORT
Tech Level: 16
Starports are basically spaceports that can accommodate and service starships as well as normal spaceships. The exact nature of a starport would depend on the nature of interstellar travel technology available, but otherwise would closely resemble the spaceports that preceded them. In fact, many already-existing spaceports would likely be converted or expanded into full-blown starports as needed.



AEROSPIKE ROCKETS


L to R: The X-33; Aerospike engine test-firing; Aerospike engine diagram



Aerospike and Linear Aerospike Rockets - Tech Level: 10


Aerospike rocket engines are not really new; the concept has been around for decades, the groundwork having been laid down by Boeing’s Rocketdyne Power and Propulsion Unit in the late 60s and early 70s, when a number of prototypes were test-fired. Advanced aerospike engines were extensively developed and test-fired all the way up through 2000 as part of NASA’s X-33/VentureStar program.
The following description is taken almost verbatim from the official NASA linear aerospike engine newspage, which unfortunately has been taken down since this article was first posted.
"Unlike conventional rocket engines, which feature a bell nozzle that constricts expanding gasses, the basic aerospike shape is that of a bell turned inside out and upside down (much like an ice cream cone with a rounded "point"-P.L.) When the reconfigured bell is "unwrapped" and laid flat, it is called a linear aerospike.
"The linear aerospike features a series of small combustion chambers along the unwrapped bell, also called the ramp, that shoot hot gases along the ramp’s outside surface to produce thrust along the length of the ramp, hence the name ‘linear aerospike.’
"With the aerospike, the ramp serves as the inner wall of the virtual bell nozzle, while atmospheric pressure serves as the "invisible" outer wall. The combustion gasses race along the inner wall (the ramp) and the outer wall (atmospheric pressure) to produce thrust.
"The key to a conventional bell nozzle's level of performance is its width. At high pressure -- i.e. sea level -- the gasses are more tightly focused, so a bell nozzle with a narrow interior surface works best. At low pressure -- i.e. higher altitudes -- a wider interior works best as the gasses will expand farther.
"For example, the initial stage of the Saturn rocket which carried the Apollo astronauts to the Moon featured a narrow nozzle to produce an ideal straight-edged column of exhaust at sea level. However, the command module which orbited the Moon featured a much wider bell nozzle better suited for controlling the combustion gasses in the vacuum of space.
"Since the width of the bell nozzles can’t change to match the atmospheric pressure as the rocket climbs, bell nozzles are normally designed to provide optimum performance at one certain altitude or pressure. This is called a "point design," and engineers accept the performance loss the nozzle will encounter at any altitude other than the one it was designed for.
"The aerospike eliminates this loss of performance. Since the combustion gasses only are constrained on one side by a fixed surface -- the ramp -- and constrained on the other side by atmospheric pressure, the aerospike's plume can widen with the decreasing atmospheric pressure as the vehicle climbs, thus maintaining more efficient thrust throughout the vehicle's flight."
To sum up, the aerospike engine exhaust point has a conical projection in the center and no outer nozzle. This conical projection, or "ramp," helps to shape the exhaust. In the lower atmosphere, the high air pressure around the ramp constricts the exhaust flow into a relatively tight column. At higher altitudes, as the air pressure thins out, the rocket exhaust spreads outward from the ramp. This arrangement allows for steady engine efficiency and performance at all altitudes.
The ramp does not have to taper all the way to a point. Many aerospike rocket designs have used a truncated ramp--where the conical projection is seemingly cut short and rounded. The remainder of the "spike" is formed by waste gas exhaust released from the ramp plane. The pressure exerted by this gas exhaust serves as a "virtual" ramp face, helping to mold the flow of the rocket exhaust just like a physical ramp. NASA's mid-80s PHOENIX SSTO project postulated using a ramp that looked like little more than a rounded bump. The exhaust of waste gasses forced out from the center of this "bump" formed a pressure "spike" that functioned exactly like a physical ramp.
Because of their aerospike design, the engines developed for the X-33 program were 75% shorter compared to conventional rocket engines of comparable output. This means less engine weight and less engine support structure required, allowing for a lighter vehicle and lower launch costs than conventional rockets.



SUBORBITAL PASSENGER TRAVEL


Virgin Galactic's suborbital passenger vehicle SpaceShip Two. Image copyright Virgin Galactic.

Antipodal Bomber - Tech Level: 9
Suborbital Passenger Rocketplane - Tech Level: 11
Suborbital Military Transport - Tech Level: 12
Suborbital Passenger Scramjet - Tech Level: 14
A suborbital flight is any launch that reaches at least the edge of space (about 100 km/60 miles up), but does not have enough velocity to enter a full orbit. Once reaching a target altitude, a craft may deadfall or glide back to earth, or it may use powered flight to maneuver to a particular destination.
Suborbital flights are mostly simple ballistic trajectories. The vehicle launches upward and angles itself so that its momentum will carry it up and over the edge of the atmosphere. The bigger the initial launch velocity, the higher the arc and the farther the vehicle can theoretically go. However, the vehicle will want to avoid going fast enough to enter orbit, as the whole idea of the technique is to use the craft’s own momentum to carry it back down to its destination with minimal additional thrust.
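As a toy illustration of how launch velocity sets range, here is a deliberately simplified sketch. It assumes a flat, airless Earth and a 45-degree launch angle; real suborbital trajectories covering thousands of kilometers need full orbital mechanics and atmospheric modeling, so treat the numbers as order-of-magnitude only.

import math

# Idealized ballistic range on a flat, airless Earth.
g = 9.81  # m/s^2

def ballistic_range_km(launch_velocity_ms, angle_deg=45):
    return launch_velocity_ms**2 * math.sin(math.radians(2 * angle_deg)) / g / 1000

for v in (1000, 2000, 4000):   # launch velocities in m/s
    print(v, "m/s -> roughly", round(ballistic_range_km(v)), "km downrange")
# 1000 m/s -> ~100 km; 2000 m/s -> ~410 km; 4000 m/s -> ~1630 km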
Suborbital flight has usually been a precursor to full orbital travel. Most nations, militaries, and private organizations that have ongoing space programs have usually achieved suborbital trajectories before attaining full orbital flight with their spacecraft. (Though there are some exceptions, most notably the Soviet manned space program.)
The first suborbital launch occurred in 1944, when a German V-2 rocket achieved an altitude of 189 kilometers. The first manned suborbital flight came in 1961, by American astronaut Alan Shepard. Two years later, the X-15 rocket plane, piloted by Joseph A. Walker, became the first reusable aircraft to achieve the suborbital milestone. And most recently, in 2004, Mike Melvill piloted SpaceShipOne in the first privately-funded suborbital flight. ICBMs and other missiles also often use suborbital trajectories to reach their targets in a short amount of time, making suborbital flight a well-proven technology.
Currently, Virgin Galactic and others are planning tourist-oriented suborbital flights to the edge of space and back. In the near future, a number of interests are looking into the possibility of using suborbital travel for point-to-point jaunts, allowing passengers to make trips to anywhere on Earth in a handful of hours or less.
There are some limits and complications, however. The upper ceiling for suborbital arcs for passenger flights is about 500 miles up. Beyond that point lie the Van Allen radiation belts, and passengers at that altitude could be exposed to levels of radiation that may be hazardous to their long-term health. Because of the weight considerations needed for aircraft, a suborbital passenger vehicle probably cannot be too heavily shielded.
In order to extend its range, a specifically designed suborbital craft may use a maneuver called "skip-glide." The vehicle would control its angle of descent, and use its flat-bottomed shape to ‘skip’ off the outer edges of the atmosphere, much like a stone skipping off the surface of a pond. It could do this repeatedly, using skip-glide hops hundreds or even thousands of kilometers long to reach anywhere on the globe. Of course, the passengers may be in for a rough ride, as each ‘skip’ would bring quite a bit of potential turbulence and put considerable thermal stress on the aircraft (it is, after all, a type of aborted re-entry.)

ANTIPODAL BOMBER
Tech Level: 9
An antipodal bomber uses skip-glide and another maneuver called boost-glide to reach almost anywhere in the world to deliver its payload.
Antipodal bombers were first envisioned in Germany in the early 1930s, and the Nazi regime in its latter days had tentative plans to research the technology, but nothing beyond some preliminary designs was ever made.
The idea was revived in the late 1950s with the X-20 project, nicknamed "Dyna-Soar" (short for Dynamic Soarer.) Unfortunately, budgetary constraints in the mid-1960s prematurely ended the project before it could even begin test flights. A decade later, however, much of the design work and testing for the X-20 concept was revived for the Space Shuttle.
Both iterations called for an advanced, delta-winged, rocket-powered bomber to be boosted to the edge of space on top of a more conventional vertical launch vehicle. The size and power of the launch rocket would determine the initial velocity and altitude of the bomber-glider.
Because of the powerful boosters available to it, including the Titan and Saturn rockets, the X-20 version could also have theoretically achieved orbit. In this case it would use a "boost-glide" technique to reach its target. It would initiate a de-orbit burn, still using its shape and the proper angle to ‘skip’ off the outer atmosphere. At its lowest point, it would deploy its payload, in most cases nuclear bombs. After the skip, it would use its rocket motors to push itself back into orbit. This was thought to be a rather brutal maneuver for the pilot, as he would be pulling a very uncomfortable number of G’s both de-orbiting and accelerating up to altitude again.
To aid in a boost-glide mission, the X-20 was often depicted as having an attached rocket booster on its aft end, to give it the extra fuel and thrust needed to make repeated burns both into and out of the edge of the atmosphere. This booster would be jettisoned prior to final descent.
Many consider the antipodal bomber, and especially the X-20 program, one of the great ‘what-if’ scenarios of modern manned space flight. If the project had gone ahead, it would have given the US a reusable winged spacecraft fifteen years before the Space Shuttle, and would have been a potential game changer for space flight for decades afterward.
However, it was intended mostly for military reconnaissance and long-range nuclear strike missions, which have since proven much cheaper to accomplish with automated satellites and ICBMs, respectively. So even if the X-20 project had gone forward, it would have had a hard time finding the proper niche in which to justify its continued great expense.
Still, the technology for the antipodal bomber is long since proven, and it would be interesting to see if any of the emerging fledgling space powers will ever resurrect the technology for other applications.

SUBORBITAL PASSENGER ROCKETPLANE
Tech Level: 11
Rocketplane technology goes back to before the dawn of the space age. Chuck Yeager used one in 1947 to break the sound barrier. Several rocketplanes, particularly the X-15, carried out suborbital flights in the mid-1960s. Most recently, in 2004, the rocketplane SpaceShipOne became the first privately-funded effort to send a man into space.
Virgin Galactic and other companies including Space Adventures, Starchaser, Blue Origin, Armadillo Aerospace, XCOR Aerospace, and Rocketplane Limited are actively pursuing the development of suborbital tourist vehicles. Virgin Galactic, with the development of its SpaceShip Two system, seems to be closest to producing the first commercially available flights.
SpaceShip Two uses an updated version of the X-15 air-launch scheme, with the rocketplane launching from an advanced high-altitude carrier jet. So far there are only plans for it to take passengers on short excursions out of the atmosphere. Basically the carrier craft would spiral up to altitude, the rocketplane would launch on a steep trajectory, and passengers would experience a few minutes of weightlessness before the return flight commenced. It would land at the exact same facility from which it was launched. Other commercial space ventures developing tourist flights are planning vehicles with similar capabilities.
Future versions of these early suborbital efforts hope to introduce point-to-point travel and ferrying passengers up into full orbit to dock with small "space hotel" stations.

SUBORBITAL MILITARY TRANSPORT
Tech Level: 12
ICBMs are a long-proven suborbital technology that can deliver a payload anywhere in the world in under two hours. But does the technique have to be limited to just carrying nuclear bombs? A new joint DARPA-Air Force project called SUSTAIN (Small Unit Space Transport And InsertioN) is researching the use of suborbital flight to deliver US marines and vital support equipment to any point on the globe where they may be needed, many hours ahead of what conventional transports are capable of. The actual vehicle, which is still in the design phase, is code-named Hot Eagle.
Though the project has yet to settle on an initial design, some experts speculate that it may evolve similarly to Virgin Galactic’s SpaceShip Two concept. A high-altitude carrier aircraft would lift the suborbital transport into the upper atmosphere, and after detaching, Hot Eagle would use its own rocket motor to boost itself into a suborbital trajectory. It would then glide down to its destination. Including the carrier aircraft flight, the system could deliver a squad of up to 13 marines and equipment to any suitable landing site in the world in under four hours.
However, the question invariably comes up: what can a squad of 13 marines, or even several spaceplanes' worth of them, accomplish in any wartime situation dire enough to require immediate intervention? The Hot Eagle spaceplane, no matter its final design, will likely still depend on a decent landing strip at the very least in order to touch down safely, meaning it's not very likely it could inject the troops directly into a hot zone. Both the heat from reentry and its sonic booms as it approaches the ground will make stealthy insertion non-viable. Plus, because of the weight considerations needed for suborbital craft, it couldn’t be that well armored, and very likely would not have enough fuel to lift itself off again.
So, basically, once it touched down, Hot Eagle and the squad it carried would be there for the duration, vulnerable to attack, and most likely on their own. Sending troops as well as a very expensive spaceplane into a potentially dangerous situation without support does not seem like sound tactics, no matter the situation.
Rather than being used for troop insertion, Hot Eagle may find its true use as a means of rapid response to disasters and non-military crises. If an outbreak of a deadly disease is reported, for example, Hot Eagle could deliver experts, vital medicines, and equipment needed to deal with the outbreak in a much more timely manner than other means of transport. Other disasters and accidents that could benefit from rapid transport of experts and specialized equipment to the affected area include nuclear accidents, earthquakes, tsunamis, mine collapses, and chemical or biological terrorist attacks.

SUBORBITAL PASSENGER SCRAMJET
Tech Level: 14
Dedicated suborbital craft such as SpaceShip Two and its kin are likely to be precursors of true reusable orbital launch vehicles, as both orbital flights and ferrying passengers to space hotel stations become mainstays of the space tourist trade decades from now.
It's likely that these new reusable craft would be pressed into service both for orbital insertions and for suborbital travel. In fact, from Tech Level 13 or so on, suborbital and orbital interface capabilities will probably merge completely in many vehicles. This is mostly due to the fact that the difference in performance and cost between orbital launch vehicles and suborbital-only craft is narrow to begin with, and it will probably become more practical and economical to produce spacecraft types that can fulfill both roles as the technology advances.
However, this may not always be the case. Dedicated suborbital travel may be taken up by light scramjets, similar to the way smaller jets fly the routes that big airliners eschew. A scramjet is a Supersonic Combustion Ramjet, or SC ramjet. In a conventional jet or ramjet engine, baffles or turbines slow the flow of air through the engine to allow the combustion of fuel. In a scramjet, the airflow isn’t hindered at all, and the engine allows for combustion as air and fuel flow through the engines at supersonic speeds.
Theoretically, super high-performance scramjet engines could achieve well over Mach 15, but more practically scramjets will probably max out around Mach 8 or 10. However, when combined with a rocket engine in the same housing (this is called a combined cycle engine—the engine is capable of acting as a regular jet, a scramjet, and a rocket), the scramjet is capable of bursting up to enough speed to take it into a suborbital trajectory.
The big advantage of scramjets over previous suborbital designs is they do not need a carrier aircraft or separate booster. They can be launched from and land directly at any conventional commercial runway. Because of that, the technology could open up affordable suborbital travel to the mainstream. However, because it uses much of its onboard fuel for take-offs and landings, a scramjet passenger craft would not have a great deal of reserve for maneuvering in orbit, even if capable of reaching such speeds. Thus, all but the largest scramjets will likely remain suborbital craft.



DROPSHIPS



Crew Return Vehicle/Crew Transport Vehicle - Tech Level: 10
Individual Re-entry Module - Tech Level: 12
Drop Ship - Tech Level: 13
Military Drop Ship - Tech Level: 15
Military Drop Ship Decoy - Tech Level: 15
Dropships are a broad category of spacecraft whose primary purpose is transporting cargo or personnel from orbit to the ground as quickly and as efficiently as possible. Most other orbital vehicles discussed in the Orbital Travel Section are dedicated to getting cargo from the ground into orbit; a drop ship’s main purpose is to move payloads in the opposite direction, from space to the surface.

BASICS

Orbital re-entry has been completed successfully on hundreds of space missions, both by manned and unmanned vehicles. Even so, it remains the most dangerous part of any trip into space, as the ship has to deal with many extremes as it careens back to the surface.
First and most famously, there’s the heat of re-entry, caused by the extreme air friction the craft encounters as it screams through the outer atmosphere. The heat on the outside of the vessel can reach in excess of 2500 degrees Celsius.
The most common way of dealing with this heat so far has been with an ablative heat shield on the surface of the vessel expected to take the brunt of re-entry. These are usually made of heat-resistant metals and/or ceramics, designed to vaporize away millimeter by millimeter from the heat.
Another tried and true method, first introduced on the Space Shuttle, is to use tiles made of durable ceramic insulators that can withstand extreme temperatures without deforming. These tiles conduct heat very, very slowly, so much so that only the outer layers of the tiles will become superheated during re-entry.
Secondly, there’s the structural stress the vessel undergoes as it enters the atmosphere at a typical re-entry velocity of about 7 km/sec, or roughly 20 times the speed of a bullet. Typical manned missions pull around 2 to 3 g’s of force, and unmanned payloads have been known to take up to 10 g’s and beyond. This combined with the heat and the buffeting the ship takes as it encounters thicker and thicker layers of air can threaten to rip a craft apart.
There’s also the angle of descent to consider. If a spacecraft comes in at too steep an angle, the air friction will become overwhelming and the ship will burn up no matter how well protected it is. If it comes in at too shallow an angle, its great velocity will "bounce" it off the outer layers of atmosphere much like a flat stone off the surface of a pond, sending it careening back out into space. Most spacecraft, therefore, have a fairly narrow "window" of re-entry to adhere to in order to avoid disaster.
Most of a spacecraft's re-entry velocity is usually bled off from blazing through the upper atmosphere, but once it slows down enough that air friction is no longer a major problem, it still has to worry about making it to the ground in one piece. Slowing the vehicle to a soft landing can take several different strategies. By far the most prevalent way of landing is with a parachute or a parasail, deployed only after the craft reaches the significantly dense parts of the atmosphere. Retro-rockets, so loved by golden-age science fiction, are also used, but usually in conjunction with a parachute. A third method, pioneered by the Space Shuttle, is to give the spacecraft an aerodynamic shell and wings and allow it to glide back to Earth much like a conventional aircraft.
A few other soft-landing methods have been experimented with, but have yet to see widespread use. Aerobraking shrouds have been used on Mars missions, and are being developed by the ESA and private interests. These shrouds are usually meant to work in concert with retro-rockets or true parachutes, however. The Roton SSTO concept introduced the idea of using helicopter-like blades to soft-land a spaceship, and had three successful in-atmosphere tests.
Most of the space capsules from the Cold War Era were in effect small, one-use drop ships, meant primarily to get their crews back to Earth safely after they had been shot into space.

CREW RETURN VEHICLE/CREW TRANSPORT VEHICLE
Tech Level: 10
These were to be small vessels designed to help personnel safely evacuate space facilities in low orbit during an emergency. A prototype, the X-38 Crew Return Vehicle (CRV), was being pioneered by NASA primarily for use as a lifeboat on the International Space Station until budget concerns put the project on indefinite hold in April 2002. The project at the time was 80% complete and had undergone extensive flight-testing, proving the viability of the concept.
The CRV uses an aerodynamically optimized lifting body borrowed from the X-24A project of the 1970s. It also has a rear engine module, which it uses for a de-orbit burn. When this is exhausted, the engine module is jettisoned and the CRV glides back to Earth unpowered like the Space Shuttle, deploying a steerable parafoil parachute for its final descent phase. Its life-support system was designed to support six passengers for up to nine hours, though de-orbit would take two hours at the most.
The CRV with engine module attached.
Though the project is dead at NASA, the European Space Agency (ESA) has expressed interest in using the design as a Crew Transport Vehicle (CTV), basically an updated space capsule, if and when that organization ever begins to accommodate manned missions.

INDIVIDUAL RE-ENTRY MODULE
Tech Level: 12
Videll-style IREMs in action. Art (c) Blair Reynolds.
The Individual Re-Entry Module (IREM) is a bare-bones re-entry system designed primarily as a one-use emergency evacuation measure for facilities in Low Earth Orbit. They appeared in Allen Steele’s novel Orbital Decay and in several Traveller RPG articles, particularly "Rapid Repo" by Greg Videll in Megatraveller Journal #3.
The IREM consists of a hemispherical, foamed, ablative heat shield typically no more than two meters across, a sling to hold the single passenger, a gimballed, manually controlled thruster system for deorbiting, and a parasail for soft landing. Steele’s version was enclosed; Videll’s was open with a barebones framework. In both cases, no provisions were made for life support, as it was assumed that the passenger would be wearing a spacesuit.
Designed more with economy than safety in mind, using an IREM is risky at best. The passenger will typically pull 2.5 to 3.5 g’s for up to ten minutes during descent, and he or she must keep the IREM properly aligned the entire time or risk being incinerated by the heat of re-entry.
Videll’s version came in kit form, complete with an inflatable heat shield mold and a spray canister of ablative heat-resistant foam, and could be assembled in about 20 minutes or so. He also mentioned that IREMs could be used for military purposes, for dropping personnel onto a planet’s surface more stealthily than a full dropship, and by extreme sports enthusiasts, who would use specialized ablative foams that would produce a variety of colored trails during re-entry.

DROPSHIPS
Tech Level: 13
A true dropship is designed to repeatedly carry large payloads from orbit to the ground. Unlike most orbital craft discussed in other articles in this section, a dropship is optimized for carrying cargo down, instead of up, and works most efficiently in that capacity.
Dropships may or may not be able to achieve orbit on their own once on the ground, depending on their exact design, especially if their means of ascent depends on fuel-intensive technologies such as standard chemical rockets. Getting to orbit takes a much greater expenditure of energy than going from space to the ground. They may need to use a separate booster, drop tanks, or be towed by a dedicated lift vehicle in order to get back into space. If they can achieve orbit on their own, much of the weight carried as payload down to the ground may be taken up by fuel on the way back up.
Dropships are usually depicted in one of three different configurations: capsule, spheroid, or spaceplane.
A drop capsule is very much like a modern-day space capsule, basically a small vessel of either conical or spheroidal configuration designed to carry only a few tons or a small group of personnel to the ground. Usually a drop capsule uses retro rockets or a parasail to initiate a soft landing. Many of the SSTO concepts discussed in that section could be modified to work as a drop capsule.
A spheroidal dropship is designed to carry large, bulky payloads that may weigh hundreds or even thousands of tons to the ground. They are shaped generally as spheres or ovoids, with a blunt "nose" at one end and a cluster of powerful rocket engines at the other. They enter the atmosphere nose first, where the heaviest heat shielding is located, then flip and use their powerful retro rockets to initiate a soft landing once in the lower atmosphere.
Fortress-class dropship from the Battletech universe. (c) FASA Corporation
A spaceplane dropship is what its name implies: a space plane, either powered by rockets or a combined-cycle scramjet/rocket engine, optimized to carry cargo from orbit to a planetary surface. These are capable of gliding to the ground unpowered much like the Space Shuttle, but can also perform powered landings if needed because of either adverse conditions or excessive payload weight.
An Avenger-class dropship from the Battletech universe. (c) FASA Corporation.

MILITARY DROPSHIPS
Tech Level: 15
The dropship from the movie Aliens with weapon booms deployed. (c) Twentieth Century Fox
Military dropships have been featured in many science fiction sources. Dropships can be seen in the novel and movie Starship Troopers, the movie Aliens, in the RPG/Video Game universe of Battletech, in the many various incarnations of the anime series Gundam, and in the recent Playstation 2 game titled, appropriately enough, Dropship.
Military dropships are differentiated from their civilian cousins in several significant ways. First, as it is assumed that they will come under fire, they are likely to be heavily armored and armed. Also because of the danger from enemy fire, military dropships may be expected to undergo heavy maneuvering during the descent phase, even if surrounded by a re-entry corona. Finally, some military dropships may carry heat sinks or refrigeration systems to allow the ship to withstand greater re-entry heat, and thus take advantage of steeper descent angles and quicker orbit-to-ground landing times.
Military dropships by necessity would almost certainly be able to achieve orbit on their own, as they may have to drop their cargo and ascend fast while under fire. Dropship armaments may be in the form of hardened turrets or barbettes, or they may be deployed on retractable arms as per the dropship in the movie Aliens.
Because of their inherent maneuverability, spaceplane configurations would probably dominate military dropship designs, though spheroidal configurations may be used for unusually bulky payloads or for dropping payloads where there is no significant atmosphere, such as on the Moon.

MILITARY DROPSHIP DECOYS
Tech Level: 15
If a major military landing operation is expecting heavy resistance fire from the ground, the attackers may employ multiple dummy dropships to draw fire away from the real ones. These vessels would have the same profile and EM signature as the real ones, but would only be cheap shells outfitted with guidance systems, ballast, and maneuvering engines.



INFLATABLE REENTRY SHIELD


Inflatable Reentry Shield
Tech Level: 11
Recently, inflatable heat shields have been the focus of a joint venture between the European Space Agency and the International Science and Technology Centre in Moscow. The system was developed to return small payloads from the International Space Station, independently of the Space Shuttle. Once fully developed, it can be used as a ready-made, multi-use reentry system suitable for any number of unmanned payloads.
The Inflatable Re-Entry and Descent Technology (IRDT) module, when inflated, looks like a large flattened cone with flanges along its outer rim, similar to a shuttlecock, with its payload just inside the cone’s tip. The inflatable material is densely packed with multiple layers under high pressure, with inner layers able to take up the heat burden of the outer layers and help dissipate the accumulated heat. The IRDT module’s shape is also designed to act as an aerobrake, making the use of a parachute redundant, at least for any unmanned applications of the device.
Two modules of the IRDT system were tested in re-entry trials using a Russian Soyuz/Fregat launcher in 2000, with modest success. Further development and research continues.



ORBITAL LAUNCH RAILS



The Rocket Ark from the movie When Worlds Collide escapes the end of the world with the help of a launch rail. Image copyright Paramount Pictures.

Deadfall Launch Rail
Tech Level: 9
Pressurized Launch Rail
Tech Level: 9
Electromagnetic Launch Rail
Tech Level: 10
Launch rails are familiar to just about any model rocket enthusiast. Launch rails are also used in some military missile launchers as well as on aircraft carriers. They help the missile/aircraft stay on-target until it achieves a large enough velocity (about 100 ft/sec) that aerodynamic forces will keep its flight steady. Military launch rails often also use "cradles" to hold the launching missile, which are accelerated independently along the length of the rail to give the rocket an extra velocity oomph.
Scaling launch rails up for use in an actual space launch is an old idea from the dawn of the space age, usually taking the form of a length of elevated track that uses various means to accelerate a rocket-carrying cradle. The concept was abandoned for space travel as such rails turned out to be more expensive and more trouble to engineer than they were deemed worth.
It's possible that emerging nations or private interests without the resources to obtain more advanced technologies may use launch rails to help their fledgling space efforts. Also, while orbital launch rails are somewhat impractical on Earth, on other terrestrial worlds such as the Moon and Mars, which have greatly reduced gravity and little or no atmospheric pressure, they could prove to be a practical and economically viable option for future settlers.

DEADFALL LAUNCH RAIL
Tech Level: 9
A deadfall is the simplest kind of launch rail. One positions the rocket intended for launch on a specially-built wheeled cradle at the top of an incline, and lets it zoom down along a reinforced track. At the bottom, the track curves up again to launch the vehicle into the air. Think of a huge oversized ski-jump, except the "jumper" here is a large multi-ton rocket. The rocket’s engines may be ignited at the top of the deadfall in order to boost the velocity gained, but the exhaust may damage the rail. A more practical method would be to ignite the engines the moment the rocket is clear of the rails and is starting its ascent.
The gain in velocity from such an arrangement would most likely be a pittance compared to the overall velocity needed to gain orbit (around 17,500 mph for low Earth orbit, before counting gravity and drag losses), but in cases where every drop of fuel is precious it could prove invaluable. Such was the case in the classic sci-fi movie When Worlds Collide, where the rocket ark used a deadfall launch rail to begin its Hail-Mary flight for survival.
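To put a rough number on that "pittance" (an illustrative calculation only; the drop heights are made up for the example), an ideal, frictionless cradle rolling down even a very tall incline picks up only around one percent of orbital speed.

# How much speed a deadfall rail actually buys (idealized, frictionless).
import math

g = 9.81                      # m/s^2
v_orbit = 7800.0              # m/s, roughly low-Earth-orbit speed

for drop_height in (50.0, 150.0, 500.0):    # meters of vertical drop (hypothetical)
    v = math.sqrt(2.0 * g * drop_height)    # v = sqrt(2 g h)
    print(f"{drop_height:5.0f} m drop -> {v:5.1f} m/s "
          f"({100.0 * v / v_orbit:.1f}% of orbital speed)")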

PRESSURIZED LAUNCH RAIL
Tech Level: 9
The cradle carrying the rocket is propelled forward by channeled pressurized gasses behind it, either from a huge build up of steam (as is used on modern aircraft carriers) or from an explosive chemical reaction, similar to the way bullets are propelled down a rifle barrel. These types of powered rail-guided systems are also called launch catapults.
While a marked improvement over a deadfall launch rail, a pressurized launch rail still only adds a modest amount of velocity compared to the overall speed needed to achieve orbit, approximately several hundred miles per hour at most. Pressurized launch rails need not be curved like deadfall launch rails, but instead can be angled up in straight lines, even almost completely vertical, depending on the exact design.

ELECTROMAGNETIC LAUNCH RAIL
Tech Level: 10
Magnetic levitation technology is used to propel a launch cradle along a specially-constructed track. This is the same technology used in magnetically levitated (maglev) trains, such as those found in Japan.
Because the launch cradle and its track do not actually touch, just "levitate" close to each other thanks to magnetic repulsion, friction is reduced to near zero, allowing impressive accelerations. Advanced electromagnetic launch rails could give their payload an initial boost of close to the speed of sound.
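A rough kinematic sketch (the track lengths are assumed for illustration) shows what kind of acceleration a maglev rail needs to hand its payload a near-sonic send-off:

# Acceleration needed for a maglev launch rail to reach roughly Mach 1,
# for a few assumed (hypothetical) track lengths.  a = v^2 / (2 L)
v_target = 340.0                               # m/s, about the speed of sound at sea level
for track_length in (500.0, 2000.0, 5000.0):   # meters
    a = v_target**2 / (2.0 * track_length)
    print(f"{track_length:6.0f} m track -> {a:6.1f} m/s^2 "
          f"(about {a / 9.81:.1f} g)")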



LAUNCH GUNS



The HARP gun from the 1960s US Navy Project


Advanced HARP Launch Cannon
Tech Level: 9
Nuclear Launch Cannon
Tech Level: 9
Ram Accelerator Launch Gun
Tech Level: 12
Railgun Electromagnetic Launcher
Tech Level: 12
Coilgun Electromagnetic Launcher
Tech Level: 13
Circular Coilgun Manned Vehicle Launcher
Tech Level: 14
Launch Guns are an idea that can trace itself all the way back to the Age of Reason, when Sir Isaac Newton used an imaginary super cannon on top of a mountain to illustrate how a cannonball could circle the Earth if given a big enough powder charge. Even centuries later, when people were first dreaming of serious space travel, their notions turned toward guns instead of rockets or flying machines. Jules Verne’s From the Earth to the Moon and H.G. Wells’ Things To Come basically used enormous scaled-up cannons to launch mankind’s first forays into the Great Dark.
Launch guns have a number of major engineering obstacles. The projectile will lose velocity continuously on ascent, so its initial muzzle velocity has to be enormous. And because gun muzzles can only be made so long, this means the projectile will have to endure tremendous g-forces on launch, measuring in the thousands of g’s. Obviously, this completely rules out using much of this technology for manned missions.
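The "thousands of g's" figure falls right out of the kinematics: assuming a roughly constant acceleration down the barrel, a = v^2 / (2 L). A quick sketch with made-up barrel lengths:

# Average launch acceleration for a gun-launched projectile: a = v^2 / (2 L)
v_muzzle = 8000.0                              # m/s, roughly what a direct-to-orbit shot needs
for barrel_length in (100.0, 500.0, 2000.0):   # meters (illustrative)
    a = v_muzzle**2 / (2.0 * barrel_length)
    print(f"{barrel_length:6.0f} m barrel -> {a/9.81:8.0f} g average")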
The projectile must also be built to withstand the extreme frictional heating it will undergo in the dense lower atmosphere, as well as the accompanying air resistance. Some calculations estimate that a projectile launched from sea level would lose over 20% of its muzzle velocity just in the first 16 meters of flight. It would be far more practical to build a launch gun at high altitude (at least some fifteen thousand feet up, where the lesser air density would reduce the muzzle energy needed to obtain orbital velocity by about a third.)
Protective launch canisters and jackets would be used to protect many payloads, and can be shed after launch to boost velocity, much like oversized discarding-sabot ammunition. Launch guns can also be used this way as a "first stage" for launching independently powered vehicles, such as small rockets.
Launch guns are best suited for microsatellite launches, putting payloads of 100 kilograms or less into orbit.

ADVANCED HARP LAUNCH CANNON
Tech Level: 9
In the 1960s, the US Navy used surplus 16-inch guns for its HARP (High Altitude Research Project) program. The goal was to launch small probes to high altitude for atmospheric sounding applications. HARP used fin-stabilized projectiles with a discarding sabot. These projectiles eventually achieved an ultimate altitude of 180 kilometers, past the edge of space. Electronic components and sensors used in HARP projectiles demonstrated that such things could be built to withstand the 10,000 g’s of force they had to endure upon launch.
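A rough vacuum-ballistics check (my own arithmetic; the barrel length below is an assumption for illustration, not a published HARP figure) shows how the quoted altitude and g-load roughly hang together:

# Rough consistency check on the HARP figures (air drag ignored).
import math
g = 9.81
apogee = 180_000.0                       # m, altitude quoted above
v_needed = math.sqrt(2.0 * g * apogee)   # vertical speed needed in a vacuum
print(f"Vacuum muzzle velocity for a {apogee/1000:.0f} km apogee: {v_needed:.0f} m/s")

barrel = 36.0                            # m, assumed barrel length for illustration
a = v_needed**2 / (2.0 * barrel)
print(f"Average acceleration over a {barrel:.0f} m barrel: {a/g:.0f} g")
# Drag losses and peak-versus-average effects push the real numbers
# toward the 10,000 g figure quoted above.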
Unfortunately, funding for HARP dried up before it could implement its much more ambitious designs (some of which were already being built when the project was cancelled) that could have put payloads into actual orbit. The Martlet 2G-1 projectile design used a discarding-sabot single stage rocket that could have delivered a two kilogram microsatellite into orbit. The Martlet 3 and Martlet 4 projectiles, in the design stages at the end of the program, were two and three stage rockets respectively that could have put payloads of up to a ton into LEO.
The HARP project is significant in that today almost any moderately-industrialized country (or even wealthier private interests) could recreate the project and move ahead with its more advanced designs. It could, in fact, be one of the cheaper means for a non-space power to "bootstrap" itself up quickly into competition with the known space powers, at least in the field of small satellite launches. However, building a "supergun" in today’s political climate, especially after the Iraqi debacle with one in the early 1990s, might be a dicey prospect for any polity even if the gun is meant purely for space launch purposes.
A variation of the HARP gun is the light gas gun. Instead of using a conventional chemical explosive, the light gas gun uses highly compressed "light" gasses such as helium or hydrogen to propel a projectile.

NUCLEAR LAUNCH CANNON
Tech Level: 9
Any country capable of building nuclear weapons has a door into space. An expensive and politically and environmentally dangerous door, but a door nonetheless.
In 1957, the US engaged in underground nuclear testing of a device called Pascal-B as an experiment to contain radioactive fallout. It was routine as far as such things go, except that Pascal-B was placed at the bottom of a narrow 500-foot shaft, and was topped with concrete and a four-inch-thick steel plate "cap." When the bomb detonated, a high-speed camera recorded the metal cap blasting upward into the atmosphere. Some estimate that if it survived passage through the atmosphere, it had enough velocity to launch out of the solar system. It's estimated that today the Pascal-B cap, if it's still intact somewhere out there, may have passed the orbit of Pluto and is heading into interstellar space.
The Pascal-B incident is as close as one can get to demonstrating the feasibility of launching payloads with a nuclear-powered "cannon." Basically, a small nuke (typically 5 kilotons or less, but larger bombs can be used) is detonated at the bottom of a long, narrow, reinforced vertical launch shaft, its force used to blast a multi-ton payload into space. As with the HARP guns, the projectile would be wrapped in a protective sleeve which it would shed after launch, like a discarding-sabot round. The projectile may also be an independently powered vehicle such as a small rocket for increased velocity.
This may seem a way of launching a spacecraft that would destroy its own payload, but that isn't necessarily the case. In 1954, two steel spheres covered with a protective coating of graphite were suspended only a few meters from ground zero of a nuclear bomb. After the explosion, the spheres were found fully intact miles away, with only a thin layer of the graphite ablated away. (This selfsame incident, incidentally, also indirectly led to the creation of Project Orion.) The protective sleeve of a hardened sabot would be more than enough to protect most payloads.
The cannon itself would not be so lucky. In order to channel the blast properly, the detonation chamber and "muzzle" would have to be fairly constrictive, and no construction material would be able to survive even a small nuclear detonation that close. So even though the projectile could launch without major damage, the cannon itself would most likely self-destruct after only a single shot. There would also be the problem of nuclear fall-out, though diminished somewhat from most of the explosion being contained underground.
But even though nuclear launch cannons would be one-shot affairs, their advantage is that they can launch much heavier payloads than any other type of launch gun mentioned here, perhaps even matching the capacities of modern rockets, depending on the yield of the launching bomb.
It's highly unlikely anyone today would build a nuclear launch cannon with all the less expensive, less dangerous, and less controversial options available. However, they could be useful in certain specialized circumstances. They're fairly easy to construct assuming one already has the bombs, and since they're primarily underground they would be relatively easy to conceal or pass off as another type of construction. They therefore could be useful for launching large concentrations of military assets into space all at once, for planetary defense or for a surprise first strike. They could also be a relatively inexpensive way of launching large payloads from an airless, high-gravity world, where their environmental damage would be a non-issue.



RAM ACCELERATOR LAUNCH GUN
Tech Level: 12
The Ram Accelerator scheme was first proposed by Abraham Hertzberg and colleagues at the University of Washington in 1983. Though ram accelerators have been experimented with in laboratories, no field tests have yet been tried.
The ram accelerator consists of a long, sealed tube filled with a mixture of fuel and oxidizer, such as hydrogen and oxygen. The projectile zooms through the tube, compressing the fuel/oxidizer mix against the sides of the tube and combusting them to produce thrust, much like a sealed-system ramjet. A thin membrane on the end keeps the fuel mix in the tube but is easily penetrated by the projectile.
The longer the tube can be made, the more velocity the projectile can pile on. Different sections of the tube can be separated by thin diaphragms, in order to take advantage of different fuel-oxidizer mixtures that work better at different velocities and pressures.
Because there is no tremendous build up of explosive pressure, Ram Accelerators have the potential to be used more rapidly for repeat launches than HARP launch guns, and would have a longer overall working life. The University of Washington group, which has so far been able to use a Ram Accelerator to launch a 4 kilogram payload to over 4000 kph, hopes to use the technology someday to launch microsatellites into low Earth orbit.
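For a sense of scale (the tube length below is assumed for illustration, not a published figure), the quoted 4-kilogram, 4,000 kph shot works out to a couple of megajoules and a few thousand g's:

# Kinetic energy and average acceleration for the quoted ram-accelerator shot.
mass = 4.0                         # kg, payload quoted above
v = 4000.0 / 3.6                   # 4000 km/h in m/s (~1111 m/s)
ke = 0.5 * mass * v**2
print(f"Kinetic energy at muzzle: {ke/1e6:.2f} MJ")

tube_length = 30.0                 # m, assumed for illustration
a = v**2 / (2.0 * tube_length)
print(f"Average acceleration over {tube_length:.0f} m: {a/9.81:.0f} g")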

RAILGUN ELECTROMAGNETIC LAUNCHER
Tech Level: 12
A simplified diagram of railgun operation
A railgun consists of a pair of long, electrically conductive rails, mounted in an insulating barrel, with the rails connected to a rapidly switching high-current source. An armature on the projectile to be fired completes the circuit, resulting in a magnetic force that drives the projectile down the barrel. This armature is usually actually a plasma arc ignited at the base of the projectile. More simply put, the projectile "rides" the magnetic field created by the current it carries between the two rails as it travels down the length of the barrel. Think of the electrical arcs riding up paired antennae in a mad scientist’s laboratory, only moving much faster and propelling a bullet at the top of the arc.
The muzzle velocities railguns are capable of are astonishing. Railgun systems in laboratories have achieved projectile speeds exceeding 21,000 kph.
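The accelerating force on a railgun armature is usually written as F = 1/2 L' I^2, where L' is the inductance gradient of the rails (typically around half a microhenry per meter) and I is the drive current. A sketch with illustrative values shows why megampere-class currents are involved:

# Railgun driving force: F = 0.5 * L' * I^2   (L' = rail inductance gradient)
L_prime = 0.5e-6                            # H/m, a typical order-of-magnitude value
mass = 2.0                                  # kg projectile, assumed
for current in (0.5e6, 1.0e6, 2.0e6):       # amperes (illustrative)
    force = 0.5 * L_prime * current**2
    print(f"{current/1e6:.1f} MA -> {force/1000:7.0f} kN "
          f"-> {force / mass / 9.81:8.0f} g on a {mass:.0f} kg projectile")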

However, railguns have proven to have a number of drawbacks. Designs based on plasma-arc armatures have difficulties with uncontrolled arcing around the projectile or in the muzzle, and switching such high currents has proven tricky in practice. But the biggest obstacle to developing railguns for practical use is the fact that the rails suffer from deformation and erosion after only a few launches at best, meaning they would have to be constantly replaced. Having to constantly replace the rails could easily prove to be a logistics and financial nightmare in a launch-gun version, making them an unlikely candidate to ever be scaled up for this use.

COILGUN ELECTROMAGNETIC LAUNCHERS
Tech Level: 13
Coilgun operation
Coilguns consist of a series of pulsed electromagnetic coils that accelerate a metal projectile to high velocity. They are more mechanically complicated than railguns, but since there is no direct contact between the projectile and the coils they avoid the erosion and arc-over problems of railguns.
Each coil section along the barrel’s length is switched on rapidly in sequence, pulling the projectile forward, then switched off as the projectile passes so the next coil section can grab it with its magnetic field. In some advanced designs, the coils behind the projectile also switch polarity, using magnetic repulsion to further accelerate the projectile along.
Unlike railguns, coilguns can be made arbitrarily long, allowing for greater potential velocities using gentler accelerations. The main engineering obstacle to this technology is not so much producing enough power or strong enough magnetic fields, but overcoming timing and switching problems.
Because the projectile zooms so rapidly through the barrel, the magnetic fields switching on and off have to be precisely timed. Also, the current and voltage needed to produce the fields themselves take time to build to strength and to fade away, especially within the fleeting fraction of a second the projectile will typically spend in the launch barrel. This can result in a loss of velocity, both from less than optimal field strength as well as slow-fading fields behind the projectile tugging on it and slowing it down. Precision timing programs and hardware are therefore an essential component of any coilgun, and one of the main reasons why they have proven much harder to engineer than their conceptual cousin, the railgun.
NASA has designed and built an experimental coilgun that can accelerate a 10 kilogram projectile to 39,600 kph.
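To get a feel for the timing problem (the coil length and velocities below are illustrative), consider how briefly the projectile sits over any one coil near the end of the run:

# Dwell time of a coilgun projectile over a single coil section.
coil_length = 0.3                        # m per coil, assumed for illustration
for v in (1000.0, 5000.0, 11000.0):      # m/s (11 km/s ~ the 39,600 kph figure above)
    dwell = coil_length / v
    print(f"at {v:7.0f} m/s the projectile crosses a {coil_length} m coil "
          f"in {dwell*1e6:6.1f} microseconds")
# Each coil's current must rise, peak, and collapse inside that window,
# which is why precise switching dominates coilgun engineering.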

MANNED CIRCULAR COILGUN LAUNCHER
Tech Level: 14
The main problem preventing manned missions from using launch guns is, of course, the massive g-forces a projectile would have to endure upon launch.
However, coilguns do not necessarily have to be constructed in straight lines. In fact, circular particle accelerators use similar principles and are a decades-old technology. It would therefore be possible to have a manned ship accelerated slowly in a very large circular coilgun, then diverted to a straight track when it's ready for launch. However, such a launcher would have to be huge, at the very least a few kilometers in diameter, in order to keep the circular accelerations, and the attendant motion sickness and vertigo, tolerable for the crew; in fact, the larger the radius, the better. In one science fiction story, one such coilgun wrapped around the Moon's circumference in order to launch crews and cargo to destinations in near-interstellar space at small fractions of lightspeed.
It might also take quite a while to build up the velocity needed for launch--several hours to several days, depending on the exact velocity desired and how gradual the acceleration is. The circumlunar coilgun described above took over a month to accelerate a manned vehicle up to launch speed.
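Two quick checks using assumed values (my own back-of-envelope numbers, not from any published design) show why the ring has to be so large and the spin-up so leisurely: the crew feels a centripetal load of v^2/r on the final lap, and the spin-up time is simply the exit speed divided by the gentle tangential acceleration. At a full orbital exit speed, keeping a live crew near 3 g's really demands a ring thousands of kilometers across, which is one reason the moon-girdling version in fiction makes a certain amount of sense.

# Circular coilgun: centripetal load on the crew and spin-up time.
v_exit = 7800.0                          # m/s, roughly orbital speed
g = 9.81

for radius in (5_000.0, 100_000.0, 1_737_000.0):   # m (the last ~ the Moon's radius)
    a_centripetal = v_exit**2 / radius
    print(f"r = {radius/1000:9.0f} km -> {a_centripetal/g:8.1f} g on the final lap")

a_tangential = 0.5                       # m/s^2, an assumed gentle spin-up rate
t = v_exit / a_tangential
print(f"Spin-up to {v_exit:.0f} m/s at {a_tangential} m/s^2: {t/3600:.1f} hours")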



LIGHTCRAFT

A lightcraft with laser-ignited air burst.

Lightcraft Micro Satellite Launcher
Tech Level: 13
Lightcraft Cargo Lifter
Tech Level: 15
Lightcraft Passenger Vehicle
Tech Level: 15
Lightcraft are an unusual but undeniably workable idea that uses beamed light from an external source to help "push" a specially designed vehicle into orbit by exploding the air under it. Experiments with such vehicles are being actively sponsored by NASA, the US Air Force Research Laboratory, and private interests like the Foundation for International Non-government Development of Space (FINDS) and Lightcraft Technologies, Inc. (LTI). In 2000, Lightcraft Technologies, Inc. launched a lightcraft weighing 1.8 ounces to a height of 233 ft. using a US Army 10-kilowatt pulsed carbon dioxide laser.
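A rough energy budget for that 2000 flight (my own arithmetic from the figures above) shows how little of the beam's output ends up as altitude at this early stage:

# Rough energy bookkeeping for the 2000 LTI lightcraft flight.
mass = 1.8 * 0.02835            # 1.8 ounces in kilograms (~0.051 kg)
height = 233 * 0.3048           # 233 feet in meters (~71 m)
g = 9.81
potential_energy = mass * g * height
print(f"Energy to lift the craft: {potential_energy:.1f} J")

laser_power = 10_000.0          # W, the pulsed CO2 laser quoted above
print(f"Beam-on time to supply that energy (lossless): "
      f"{potential_energy / laser_power * 1000:.1f} ms")
# The real flight took seconds of pulsing, so most of the beam energy
# goes into heating air rather than directly into lifting the craft.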

LIGHTCRAFT BASICS

The actual lightcraft vehicles being used in experiments today are small, cone-shaped devices with a specially-designed parabolic mirror on their aft ends. The vehicles "ride" along a pulsed infrared laser beam fired from the ground. The reflective surfaces on the underside of the craft focus the beam into a ring, where it heats air to temperatures several times hotter than the surface of the sun (tens of thousands of degrees Celsius), causing the air to expand explosively for thrust. The forward motion of the craft feeds new air into the focusing ring for the next pulse. The lightcraft are spin-stabilized, launched spinning at about 10,000 rpm.
Getting a lightcraft to orbit seems to mostly be about ratcheting up the launching laser’s power and focusing ability, enabling it to "push" heavier payloads farther.
One interesting bit of trivia from the LTI test in 2000 is that the experimenters worked closely with NORAD to time their tests so their laser would not accidentally "blind" any satellite passing overhead. Whether this danger could have an influence on the development or proliferation of lightcraft technology remains to be seen.

LIGHTCRAFT MICRO SATELLITE LAUNCHER
Tech Level: 13
A micro satellite is any satellite that weighs 100 kilograms or less.
Eventually LTI and others researching lightcraft would like to build an actual satellite launcher. While laser launch will probably never haul multi-ton payloads into orbit the way more conventional orbital insertion vehicles can, it does have the advantage of getting many smaller payloads (100 kg or less) into orbit significantly faster.
When a lightcraft runs out of usable air density (about 30 km up), it can switch to an on-board store of liquid hydrogen to use as laser-combusted propellant for the final boost into orbit. A micro satellite weighing 1 kg would need about 1 kg of hydrogen fuel to make orbit.
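That one-kilogram-of-hydrogen-per-kilogram-of-payload figure is at least roughly consistent with the rocket equation, if laser-heated hydrogen can deliver the very high exhaust velocities usually claimed for it. The specific impulse below is my own assumption for illustration, not a number from the lightcraft researchers:

# Tsiolkovsky check on the 1 kg hydrogen per 1 kg payload figure.
import math
g0 = 9.81
isp = 900.0                      # s, assumed for laser-heated hydrogen (illustrative)
m_payload = 1.0                  # kg
m_fuel = 1.0                     # kg
delta_v = isp * g0 * math.log((m_payload + m_fuel) / m_payload)
print(f"Delta-v from {m_fuel:.0f} kg of propellant: {delta_v:.0f} m/s")
# Several km/s is roughly what remains to be gained after the
# air-breathing phase tops out around 30 km altitude.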
The researchers envision eventually orbiting 100 kg payloads with parabolic focusing mirrors of about 1.4 meters in diameter. The launching laser for such a payload would be on the order of 100 megawatts. Alternately, the launching beam need not come from one laser but from many grouped together, creating a single combined pulse of equivalent power; this latter arrangement may be more efficient at focusing the appropriate power on the lightcraft at varying altitudes.
Masers (microwave lasers) have been mentioned as an alternative to visible-light lasers for launching lightcraft. Masers do not have the energy density of their visible-light cousins, and thus lightcraft mirrors would have to be made larger, but they are also considerably less expensive and easier to scale up power-wise.

LIGHTCRAFT CARGO LIFTER
Tech Level: 15
This is a far more ambitious and exotic concept, involving a reflective lifting body which uses beamed power from orbit to create an airspike and magnetohydrodynamic thrust to lift cargo into orbit.
The following description is taken near-verbatim from an article published in Scientific American Presents: The Future of Space Exploration (May 1999), titled "Highways of Light" by Leik N. Myrabo, one of the scientists who pioneered the lightcraft concept at the US Air Force Research Laboratory:
"I have also designed a more sophisticated beamed-energy craft, operating on a different principle...These craft would be better for carrying large cargoes because they can create thrust more efficiently.
"A mirror in the craft focuses some of the incoming beamed energy (from an orbital solar-power station) at a point one vehicle-diameter ahead of the vehicle. The intense heat creates an explosive "air spike" that diverts oncoming air past the vehicle, decreasing drag and reducing the heating of the craft.
"The craft taps some additional beamed energy to generate powerful electrical fields around the rim, which ionizes the surrounding air. It also uses superconducting magnets to create strong magnetic fields in that region. When the ionized air moves through the electric and magnetic fields in this configuration, magnetohydrodynamic forces come into play that accelerate the slipstream to create thrust.
"By varying the amount of energy it reflects forward, the lightcraft can control the airflow around the vehicle. I demonstrated reduction of drag by an airspike in April 1995 in a hypersonic shock tunnel at Rensselaer Polytechnic institute, though with an electrically heated plasma torch rather than with laser power. Tests aimed at generating magnetohydrodynamic thrust, using a 15-cm diameter device, have just begun. A person-sized lightcraft of this type driven by microwaves or by a 1000 megawatt pulsed laser should be able to operate at altitudes up to 50 km and to accelerate easily to orbital velocities."
Myrabo’s cargo lifter would be dependent on orbiting solar-power satellites, basically thin "wheels" constructed in orbit about 1 kilometer in diameter and at most a few millimeters thick. One side would be the solar collector, while the earth-facing side would hold billions of miniature solid-state microwave transmitters. This arrangement could theoretically provide up to 4.3 gigawatts of power to any "lifting" beam aimed at the lightcraft. The lightcraft vehicle could be fed this power gradually, allowing for a five-minute ground-to-orbit flight of no more than 3 g’s of acceleration, similar to what the Space Shuttle experiences upon launch. Or the solar-power satellite could dump all its power into a single, 54-second burst that could zing the lightcraft into orbit at 20 g’s.
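The two ascent profiles quoted above check out with simple constant-acceleration arithmetic:

# Velocity gained under the two beam-powered ascent profiles described above.
g = 9.81
profiles = [("gentle", 3 * g, 5 * 60),       # 3 g for five minutes
            ("burst",  20 * g, 54)]          # 20 g for 54 seconds
for name, accel, seconds in profiles:
    v = accel * seconds
    print(f"{name:6s}: {v/1000:.1f} km/s gained "
          f"(low-orbit speed is roughly 7.8 km/s, plus losses)")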

LIGHTCRAFT PASSENGER VEHICLE
Tech Level: 15
Dr. Leik Myrabo more recently launched the Lightcraft Project at the Rensselaer Polytechnic Institute, which envisions using the above concepts to create an even more advanced 12-man lightcraft orbiter that would regularly ferry cargo and passengers between the ground and space facilities. These lightcraft would be supplemented by ion engines and maglev devices in order to facilitate take-off, landing, and in-space maneuvering. The interior is filled mostly with a helium/oxygen mixture, to make the craft buoyant in the lower atmosphere and to assist in take-off. Passengers would be immersed in liquid suspension tanks for the quick but high-g boosts to and from space.



GRAVITIC DRIVES


The interstellar Mothership from Close Encounters of the Third Kind glides effortlessly over Devil's Tower using an advanced gravity drive. Image (c) Columbia Pictures.

Gravity Pulse Projectile Drive
Tech Level: 17
Gravity Neutralizer
Tech Level: 17
Gravitic Repulsor
Tech Level: 18
Gravitic Enhancer
Tech Level: 18
Point Source Gravity Projector Drive
Tech Level: 19
Gravitic Displacer
Tech Level: 19
Combined Gravity Drive
Tech Level: 20
Gravitics is the name given to applied artificial gravity technology. Though artificial gravity generation and manipulation is still entirely theoretical, the term pops up occasionally in science fiction.
Artificial gravity technology saturates modern popular science fiction, appearing in Star Trek, Star Wars, Stargate SG-1, Babylon 5, Asimov's Foundation and Robot stories, Brin's Uplift novels, Niven's Known Space stories, and a large host of others. For on-screen sources, this is mostly a financial consideration; it's much cheaper and easier to show actors walking around in Earth-normal gravity on spaceships than to spend large percentages of the budget showing them floating about in simulated zero gee. Hence, gravitics in those fictional sources is a mature and ubiquitous technology that rationalizes away this limitation.
In novels, games, and similar sources, many authors also take this route, to avoid the headache of realistically having to depict the complex effects of microgravity on all objects in their stories. Other authors, conversely, just love playing around with the concept of artificial gravity, making it one of the prize toys in their imaginary playgrounds. Also, because cheap gravitics is so prevalent in movie and TV sci-fi, many authors in the modern era just include it in their stories as one of the familiar signature technologies of the genre.
On the science side, gravity is the most well known and easily observed of the four fundamental forces of the universe, yet it is also by far the least understood. While gravity control has always been a popular feature of science fiction, it is also one of those speculative technologies whose exact workings in stories is always left conveniently obscure.
So far, the only way we know how to create a gravity field is the same way nature does it--by piling a lot of matter in one place. In order to produce a gravity field equivalent to the planet Earth's, one needs a mass equivalent to Earth. However, as that could be very detrimental to many practical space flight applications, other means of controlling this force would have to be found.
It would also be possible to create an 'artificial' gravity field by taking a mass of superdense matter and accelerating it to near-light speed, where it would accumulate more mass as it approached c. The gravity field would grow proportionately with its velocity, and if set in a ring or coil, like that of a particle accelerator, it would generate a gravity field that could be moderated by controlling the mass's velocity. Since it would be zipping through the coil so fast, the effects of its gravity would be felt all along the length of the coil without an appreciable drop off in performance. However, this scheme would require truly vast amounts of energy to achieve and sustain, usually at levels that would not be practical for most civilizations to apply.
A method that still relies on a great deal of still-fuzzy theory is to use quantum manipulation to create gravitons--the theoretical carrier particle of gravity. If a dense enough population of gravitons can be created for a given input of energy, practical artificial gravity generation may become possible.
Lacking the ability to do this, devices may instead concentrate on manipulating or enhancing existing gravity fields instead of generating their own from scratch. Again, this would probably rely on some advanced form of quantum particle manipulation we can only guess at with our humble twenty-first century understanding. Manipulating existing gravity fields could allow engineers to strengthen or weaken fields within a given radius or range, or even shuffle fields about, allowing the force to be felt at angles differing from where the field actually originated from.
Another aspect that could arise from the artificial manipulation of gravity is the emergence of technology that utilizes a true anti-gravity, or repulsive, force. There is emerging evidence that anti-gravity of a sort may be at work in the cosmos in the form of so-called "negative matter" or "dark energy", but its effects are very weak and can only be observed over very large--as in intergalactic--distances. An intricate understanding of gravity could also lead to the knowledge needed to understand and manipulate this new force of nature--if it is ever proven definitively to actually exist.
Antigravity would act like gravity, but would repel as opposed to attract. It could counteract gravity fields of like strength.
Another major consequence of gravitics is that gravity fields bend the fabric of space-time. By generating or manipulating gravitons, a gravitic drive may also be warping and stretching the fabric of the universe itself. This in itself would open the door to a huge number of impressive and world-changing technologies we can only begin to guess at. In science fiction, gravity control is often seen as one of the major precursor technologies for producing FTL travel, as the ability to warp space for one superluminal scheme or another usually depends on expert manipulation of gravitic fields.
Gravitics technology, no matter what form it took, would have a profound effect on human civilization in a number of ways. This article is concerned only with its ability to act as a space drive; other aspects of gravitics will be addressed in future articles. Links leading to more thorough and technical explanations on the science of gravity are in the Further Information section below.

GRAVITIC PULSE PROJECTILE DRIVE
Tech Level: 17
If the quantum manipulation of gravity becomes a reality, one of the most basic feats the technology would be capable of would be creating short-lived, artificial pinpoints of gravity using advanced particle accelerators. This leads the way to an odd but workable means of space propulsion. This scheme would use one-shot disposable particle accelerators--which by Tech Level 17 should be relatively cheap and easy to produce compared to today--mounted on projectiles which are shot ahead of the ship.
These accelerators would be pulsed to overload, creating the largest gravitic field they were capable of. The ship, trailing behind, would "fall" into this briefly-lived gravity field. As the field fades, another projectile is shot out, repeating the process until the ship builds up to the desired velocity by constantly falling in one gravity pinpoint after another. If timed exactly right, with one projectile's field taking up the slack just as the preceding one fades away, the ship can accelerate with no perceived discomfort experienced by the crew.
The efficiency of this particular drive scheme would depend on a lot of variables, the most significant of which is how powerful a gravity field the projectiles could create and how long each field would last. The gravity field created would have to exert a force on the ship greater than that exerted from shooting the projectile ahead of the ship, or else there will be no net gain in forward velocity.
Projectiles with weak fields would have to be pulsed close to the vehicle, while those with larger ones would need more distance to avoid pulping the crew. Unmanned vessels can of course use much more intense gravitic fields closer to the ship in order to induce more intense accelerations.
It should be noted that as the ship is drawn to the gravitic field, the gravitic field will also be drawn to the ship. The pinpoint source of gravitons will fall toward the ship even as the ship accelerates toward it. However, if the gravitic projectiles are engineered and employed properly, the artificial gravity field should fade to nothing before it "impacts" and affects the ship's forward motion.
Another hazard would be dealing with the debris of the projectile. The particle accelerator itself would still be in the path of the ship, held to its relative position by the very gravitic field it generated. It would have to be deflected in some way, or as an alternative, the ship can be canted at an angle to the accelerator's gravity field and "fall" into a curved trajectory toward it. The next projectile would be shot at a different angle to correct the forward motion back into the desired direction, allowing the vessel to scribe a languid spiral through space as opposed to a straight line. This might be a bit unpleasant for the crew, however, as being in the ship could feel like a long, extended roller-coaster ride.
In many ways, this is the converse of the Orion Nuclear Pulse Drive. Instead of pushing the vessel with nuclear explosions with projectiles from the stern, the ship is "pulled" with gravitic pulses with projectiles from the bow. Like the Orion scheme, the number of projectiles needed would depend on how intense the acceleration provided by the projectiles would be, and on the type of mission the ship is expected to undertake. For interplanetary hops, probably only a few hundred projectiles would be needed. For interstellar voyages, tens if not hundreds of thousands of gravitic pulses may be necessary.
Even though particle accelerators will likely be much easier to produce at higher tech levels, there is no guarantee that they will be significantly cheaper. This scheme could end up quite expensive if a great many gravitic pulse projectiles are needed. Given the other, much more efficient and powerful drives that will be available at the tech levels when this propulsion scheme becomes possible, gravitic pulse projectile drives will likely be used only in specialized circumstances.



GRAVITY NEUTRALIZER
Tech Level: 17
An alien surface-attack saucer from the movie Independence Day floats mysteriously over a city using what may be a powerful gravity neutralizer. Image (c) Centropolis Entertainment and Twentieth Century Fox.

This device exerts a field that prevents gravitons from outside the field from interacting with mass inside of it. In other words, it cuts off the vehicle wholly or in part from any surrounding gravity field. A gravity neutralizer is also sometimes called a gravitic shield, as it "shields" the mass within from outside gravitational influence.
One method suggested for doing this is to surround the mass with a material that has anti-gravity properties, such as negative matter. However, generating or cultivating enough negative matter, still a theoretical substance, could prove very difficult. One would need as much negative matter for the shell as normal matter in the vehicle in order to completely shield it from surrounding gravity fields.
Another method may use advanced quantum manipulation to generate dense amounts of anti-gravitons along the outer shell of the mass to be lifted. The anti-gravitons cancel out the incoming gravitons from the surrounding masses. The more energy fed into the system, the more anti-gravitons are generated, and the more normal gravitons are intercepted and neutralized.
Gravity neutralizers need not totally cancel out the surrounding gravity field; they can do so partially in order to lighten the load of the vehicle and make it easier to move through other means.
Note that neutralizing local gravitational influence should not be confused with neutralizing the vehicle's mass. A 50-ton gravity-neutralized vehicle will still resist changes to its state of motion as any normal 50-ton vehicle, aside from any direct effects of the gravity shielding. By their nature, gravity neutralizers usually have to be coupled with another means of locomotion in order to get the vehicle to go where the pilot wants.
If used to completely neutralize the local gravity field, air pressure and wind will slowly push the vehicle upward. In many ways, a gravity-neutralized object will act like a hot air balloon, but with no upper ceiling to how far it can rise. Left completely on its own, a gravity-neutralized vehicle would eventually drift up into orbit or beyond after a number of hours.
Masses can enter or leave the gravity-neutralizing field freely. More or less mass within the field may require more or less energy by the device to maintain its effect.
There seems to be a bit of a debate as to what happens to any object that's directly above a gravity-neutralizing field. The gravity field directly below it is cut off, so does that mean it will float off into space as well?
The answer is no. While the gravity directly below it is cut off, it's assumed that close to the surface of a planet the planet's mass is still acting on it from other angles. Imagine yourself fifty feet above a gravity-shielded object anchored to the ground. The gravity neutralizing field is 10 feet in diameter. Directly below you there is no gravity, but gravity is still reaching you at an angle from the surrounding landscape. The amount of gee force you experience may be less than normal, and you may certainly feel lighter, but there is little chance of you floating up unchecked.
The higher the gravity-neutralizing field is, however, the more of the planet's gravitational influence is blocked on an object above it by the field. So if you maintain your fifty-foot distance above the gravity-shielded object, you will feel gravity falling off more and more sharply as you and the vehicle ascend.
Usually a secondary means of locomotion will be employed with a gravity neutralizer in order to make the vehicle much more useful. The neutralizer will negate its weight, allowing its secondary means of propulsion to work much more efficiently, without expending the energy it would normally need to fight gravity as well. A good metaphor for this type of vehicle would be a blimp--the gas envelope provides the lift, while a secondary source of propulsion--propellers--makes the ship go where the pilot wants. Unlike a blimp, however, the neutralizer would work on any type of vehicle that can mount it, whether it be a car or an airplane or a spaceship.
Gravity neutralizers are ineffective as a means of locomotion away from a large source of gravity, like a planet. They will prove useless in deep space, for instance. However, they should work equally well with natural and artificial sources of gravity.

GRAVITIC ENHANCER
Tech Level: 18
The opposite of a gravitic neutralizer, an enhancer multiplies and intensifies the effects of gravity on a mass, by creating many more gravitons around it than it could generate normally. This increase in graviton flux enhances the attraction between the gravitic-enhanced object and any other nearby mass or gravitic field.
Gravitic enhancers' role in propulsion is mostly to work in conjunction with more advanced artificial gravity propulsion schemes, allowing them to generate and use gravity fields of immense strength, but they might sometimes be used on their own in specialized circumstances, such as orbital bombardment with kinetic-energy weapons.

GRAVITIC REPULSOR
Tech Level: 18
The Martian War Machines from the 1953 film version of The War of the Worlds used what appeared to be gravitic repulsors to stay aloft. Image (c) Paramount Pictures.
As our understanding of gravity grows sophisticated enough to allow its outright manipulation, it may lead to the discovery and understanding of a true anti-gravity force. Even though the existence of an anti-gravity force is currently postulated by many theorists, it has yet to be proven to actually exist. However, if it does, gravitic technology would seem to inevitably lead to its manipulation and artificial generation.
As previously stated, both "negative matter" and "dark energy" are theorized to have anti-gravity-like properties. A gravitic repulsor may need to have some quantity of one or the other, either with the exotic substance acting on its own or its effects intensified in some way. Alternately, quantum manipulation may produce carrier particles of the force in sufficient quantity to produce noticeable counter-gravity effects.
True anti-gravity will act like gravity, but in reverse. It will push away matter instead of pulling it closer. The intensity at which it pushes will also be dependent on the inverse square law, unless artificially intensified or redirected in some way. Also, it needs a mass to push against--like a gravity neutralizer, the effectiveness of a gravitic repulsor as a means of propulsion will fall away quickly the farther from a planet it gets, and will be useless in deep space.
In many ways, a gravitic repulsor will act like a gravity neutralizer, basically counteracting the effects of gravity under the vehicle to allow lift. Gravity is not completely neutralized, however; even though gravity directly beneath the repulsor is countered, gravity from the planet's surface will still be acting on it from other angles. Unlike a neutralizer, however, a repulsor can be used for lateral propulsion in its own right. By canting one or more repulsor units on a vehicle at an angle to the ground, the vehicle will gain a lateral acceleration as the repulsor "pushes" against the gravity field of the planet.
A good way to think of them is as roughly analogous to hovercraft. The turbofans on a hovercraft provide lift, and turned at an angle toward the ground, will provide part of their thrust for sideways motion as well. Of course this metaphor can only go so far. A repulsor-fitted vehicle could lift itself into orbit just like a neutralizer, whereas a hovercraft can only lift itself up a few meters. Also, repulsors need a mass to actually push against; angling one completely sideways so that it's not pushing against the planet at all would negate its effect.

POINT SOURCE GRAVITY PROJECTOR DRIVE
Tech Level: 19
This drive scheme is also informally called the "push-me-pull-you" drive. It was one of the main concepts behind Michael P. Kube-McDowell's Trigon Disunity novel series.
In many ways, this is similar to the Gravitic Pulse Projectile Drive. But instead of shooting gravity-generating particle accelerators ahead of the ship, the ship's advanced gravity generators project a pinpoint field of gravity ahead of the ship. The ship falls into this artificial gravity well, and as the ship moves forward, the gravity point source moves with it and maintains a constant distance. A bit similar to a mule following a carrot on a stick tied to its back, a ship employing a Point Source Gravity Projector Drive is constantly "falling" into the gravity field it is projecting ahead of itself.
As with the similar gravitic projectile scheme, it would probably not be one continuous field that the ship would maintain, but rather thousands of pulsed gravitic pinpoints that would begin fading as soon as they popped into existence. A quick succession of these pinpoint sources--maybe dozens or even hundreds a second--would ensure a smooth ride with no perceived discomfort to the crew. Otherwise the ship would have to constantly push a single gravity field ahead of it, and that field may have pseudo-mass, and therefore momentum and inertia, of its own, requiring more energy to move it than the acceleration it provides would be worth.
The pinpoint gravity sources would be drawn toward the ship even as the ship is drawn toward them, but ideally the pinpoint sources would fade back into the quantum foam quickly before they get close enough to adversely affect the velocity of the ship or any internal systems. The efficiency and power of the drive would depend on how powerful the pinpoint gravity fields would be, and how often it could pulse them. As with any other drive scheme, more powerful drives can be used by unmanned ships that do not have to worry about the fragility of a human crew.
This drive could eventually achieve significant fractions of lightspeed. If the gravity point-sources exert a continuous gravitational pull of one G, the ship will approach the speed of light after a year or so. However, the drive would also seem to have an efficiency ceiling, where its effectiveness would begin to drop off significantly. The faster the ship goes, the more relativistic mass it accumulates. Assuming the projected gravity point-sources have a constant strength, that means that as the ship becomes more massive, the less effective the point sources will be at pulling it forward. Rather, the ship will begin pulling much harder on them than they will on it, and its acceleration will drop off significantly and even disappear completely if the process is allowed to continue indefinitely.
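The "one year at one gee" rule of thumb is easy to verify, at least in the naive Newtonian approximation (relativity keeps the actual speed below c, but the order of magnitude holds):

# One gee sustained for one year, Newtonian estimate.
g = 9.81                          # m/s^2
year = 365.25 * 24 * 3600         # seconds
c = 2.998e8                       # m/s
v_newtonian = g * year
print(f"Naive velocity after one year at 1 g: {v_newtonian:.2e} m/s "
      f"({v_newtonian / c:.2f} c)")
# Relativistically the ship would actually be moving at roughly three-quarters
# of c after a year, but the point stands: a year of one-gee thrust
# puts a ship well into the relativistic regime.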
The way around this limitation would be to use greater and more intense gravity point sources the faster the ship goes. This means that, like any other sub-light drive, it would have to expend more and more energy to move the ship the closer to luminal velocity the vessel got. The ultimate upper velocity a Point Source Gravity Projector Drive can go will depend on the strength of the point sources it can project. In the Trigon Disunity novels, ships using this drive were able to routinely achieve 50% to 80% light speed before the drive began dropping off in efficiency.
Another concern is the effect the gravity point sources will have on the interstellar medium the ship will be trying to plow through. The gravity point sources will actually accelerate particles in the ship's path toward the vessel, significantly increasing the hazard of both radiation and potential impacts on the vehicle. So even though the drive could theoretically achieve significant fractions of lightspeed, in most practical instances the drag and hazards of the interstellar medium at those velocities may limit it to 'merely' 10%-15% or so of lightspeed.
There is some concern about whether this drive could ever be made to work, even given the future reality of advanced gravitic technology, as it would seem to violate the Law of the Conservation of Momentum. However, the Point Source Gravity Projector Drive is a reaction drive, only here the reaction is a mutual pull--two gravity fields interacting with each other--instead of the mutual push of exhaust and projectile. The problem here is finding a way to create the pinpoint gravity sources outside of the ship that doesn't adversely affect the ship's forward momentum. The need for highly advanced quantum and gravitational field manipulation in order to do so is one of the reasons the Tech Level for this drive is as high as it is.

GRAVITIC DISPLACER
Tech Level: 19
An odd device found in some science fiction sources, a gravitic displacer takes an existing gravity field and redirects its effects, or "displaces" it, outside of its originating mass. In other words, if a gravitic displacer were directed at you, the gravity field the mass of your body generates would no longer be centered on you, but rather at a chosen spot near you, say six feet to your right. Your body, which still retains its original mass and inertia, would therefore be gravity-neutralized and would begin floating up.
As a means of locomotion, a gravitic displacer is usually used on a small section of the planet a vehicle is resting on. It displaces the field acting under the ship, projecting it from overhead instead of from below, allowing the ship to rise as if it were 'falling' up. (In fact, this is exactly what it is doing--the gravity field that would normally make it fall down is simply being projected from a different location.) The displacer can then project the field ahead of the ship, propelling it forward as it's pulled by the redirected gravity field. Used in conjunction with a gravitic enhancer, this can prove to be a very effective means of propulsion around a planet.
The true oddness of this concept comes from how some have proposed using it as a means of deep-space propulsion. The displacer would take the majority of the ship's own natural gravity field and project it ahead of the ship just like a Point Source Gravity Projection Drive. Combined with a gravitic enhancer that could intensify the field thousands-fold, it would provide a practical means of giving the ship a substantial amount of acceleration. All of the ship's natural gravity field can't be used, otherwise the ship would be completely gravity-neutralized and there would be nothing for the displaced field to "grab" onto.
Whether this could ever be made possible is unknown; the deep-space gravitic displacer drive would seem to severely violate the Law of the Conservation of Momentum, even more so than the previous drive. There is no outside force or energy acting on the ship at all--no star-hot exhaust to push it nor independent gravity field to pull on it--only its own gravity field split in two, literally the ship lifting itself up by its own bootstraps.

COMBINED GRAVITY DRIVE
Tech Level: 20
Tooling around the Everglades with what appears to be a Combined Gravity Drive, from the movie Flight of the Navigator. Image (c) Walt Disney Corporation.
The Combined Gravity Drive is representative of a number of advanced and incredibly versatile gravitic engines found in science fiction.
Basically, this is an all-in-one artificial gravity propulsion system. It can act as, and combine the effects of, a gravity neutralizer, a gravitic repulsor, a gravitic enhancer, and a point source gravity projector. Employing multiple such units allows a ship to perform some truly astounding feats of flight nearly impossible with other drive schemes. Computer control systems read what the pilot wants of the vessel and employ the various effects of its gravity drive to deliver. Need to hover mysteriously in place? The drive neutralizes 90%+ of the ship's gravity, just enough to counter the effects of air pressure pushing it up, so it can float effortlessly. Need to rapidly climb at 20 Gs? The drive creates a powerful repulsor field to push the vehicle rapidly up and away from the planet's surface below. Need to then stop on a dime? The drive creates a gravity point-source behind the vehicle strong enough--perhaps on the order of hundreds of G's--to suddenly stop it dead in mid-air. And so on and so forth. Of course, protecting a biological pilot and crew from the effects of such rapid and potentially devastating accelerations and decelerations would be the responsibility of other systems.
Two on-screen sources show what appear to be advanced gravitic drives of this sort. The first is Close Encounters of the Third Kind, where the aliens' glowing subcraft zip about the landscape, pulling wild maneuvers effortlessly within meters of the ground. The second is Flight of the Navigator, as an alien ship uses its advanced propulsion system to fly rings around its human-created counterparts. In fiction, the alien Kzinti in Niven's Known Space stories used miraculous-seeming "gravity polarizers" to conquer a large swath of interstellar space until they ran into those pesky humans.







