Sci-Fi Tech Coming to a Reality Near You
By thelizman in Technology
Sun Jul 20, 2003 at 04:15:30 AM EST
Tags: Science (all tags)
"Where are the flying cars we were promised?" Avery
Brooks is quite excited, and not in a good way. In a commercial for IBM,
he rails about how disappointed society is because all the great technological
advances we were promised a few decades ago still aren't here. No flying cars,
no space travel, no giant domed cities, no anti-gravity belts, no warp drive
...aside from the remote control, mankind has not made a significant advance
in technology in the last 50 years. The next 50 years promises to be different.
In this article, I will examine disruptive technologies that will likely impact
us in the next half century, some of which we've been waiting for impatiently,
others we've barely conceived of yet.
Sci-Fi's Track Record
So far, Science Fiction has done a better job of predicting trends and disruptive
technologies than mainstream science has. Jules Verne ranks amongst the likes
of Shakespeare as one of the most translated authors of all time, and with good
reason. He wrote about the notion of a space capsule in "From
Earth to the Moon" nearly 100 years before Yuri
Gagarin and Alan
Shepard rode in them. "20,000 Leagues Under the Sea" introduced us to the
notion of an electric submarine less than a decade after the first submarine
was put into service, and half a century before battery powered U-boats terrorized
the oceans. In his story "The Day of an American Journalist in 2890", Verne described
video phones, moving sidewalks (as found in many airports today), cities packed
with towering skyscrapers, calculators, and levitating transport.
Verne was notable for the scope of his vision, but he is by no means alone.
Robert A. Heinlein's sci-fi one-off novel "Starship
Troopers" introduced us to the Mobile Infantry Power Suit - a high tech
armored suit which integrates the soldier, his communication gear, and weaponry.
In 1991, the US Army embarked on a US$2 billion research program into the Land
Warrior System, and today elements of the Land Warrior system are already
being deployed in the field with Special Forces and Ranger units.
Even not-so-serious fiction has produced technology concepts that are impacting
society only recently. The Dick
Tracy comic strip started running in papers in 1931. One of the "gee
whiz" gizmos in Tracy's inventory was a video-phone watch. Today, cell phones
that can transmit still pictures are commonplace, 3G networks are promising
full-motion video, and true to form, Casio has introduced a watch
with a built-in camera and color display. In other circles, the pop-culture
icon Douglas Adams introduced the
"Sub-Etha Network" in his "Hitch
Hikers" series. This network allowed people armed with portable
tablet computers (something Xerox PARC came up with in the early 1980's)
to communicate electronically, and collaborate
on a encyclopedic database of facts.
Other sci-fi technologies have yet to come to fruition. The handheld laser blaster
is a staple of nearly every sci-fi work since Buck Rogers. In truth, lasers
are still too large, bulky, and complicated to serve as weapons. Ironically,
lasers have emerged as having far more peaceful uses, from supermarket checkouts
to storing data on discs. Enter the next 50 years, where the fantastic is on
the precipice of becoming reality.
Warp Drive
You have to have lived
in a cave for the last thirty years to not know about Star
Trek, and at least some aspects of Gene
Roddenberry's fictional universe. A physicist's nightmare, Star Trek presents
a panoply of technologies more or less based in science fact to serve as the
backdrop to stories meant to explore the human condition. As it was originally
conceived in the late 1960's as a "wagon train to the stars", the enabling technology
was "Warp Drive". The how-to of warp drive was never specifically discussed
in early episodes of Star Trek, but it's clear that the science existed at the
time to inspire Roddenberry and other sci-fi writers to imagine
the means to make faster than light travel possible. In "The
Physics of Star Trek", Lawrence Krauss describes how warp drive works. While
Einstein's special theory of relativity (published in 1905) dictates that nothing
that has mass can travel faster than the speed of
light, it does not prevent something with mass from crossing vast distances of linear
space in short amounts of time. The Mexican physicist Miguel
Alcubierre gave serious attention to, and even legitimized, the idea of "warp
drive" in a 1994
paper published while he was at the University
of Wales, College of Cardiff.
The idea is to use a device to create a warp bubble. At the front of the bubble,
space is being compressed, or contracted. At the rear of the bubble, space is
being expanded. Any mass inside the warp bubble would then be pushed along. Interestingly,
within the bubble, space-time remains unchanged. There are no inertia effects,
and no time dilation. To an observer outside of the warp bubble, you would
whisk by at fantastic faster-than-light speeds. Inside the bubble, you would
see the universe whiz by, but for all intents and purposes you are fixed at a
point in your local space. You're sitting still.
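For the mathematically inclined, the shape of this proposal can be sketched with Alcubierre's metric, as it appears (up to sign conventions) in his 1994 paper, in units where the speed of light c = 1:

```latex
% Alcubierre's "warp bubble" metric (c = 1). The bubble moves along x at
% speed v_s(t); f(r_s) is a smooth "top-hat" function equal to 1 inside
% the bubble and 0 far away from it.
ds^2 = -dt^2 + \bigl(dx - v_s f(r_s)\, dt\bigr)^2 + dy^2 + dz^2
```

Inside the bubble (f = 1) the ship is carried along with the geometry itself, and far away (f = 0) the metric reduces to ordinary flat space - which is why an observer inside feels no acceleration.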
Alcubierre's warp-drive paper has passed peer review, and is based in sound
science. Alcubierre himself is considered at the fore of cosmology and theoretical
physics, being mentioned in the same breath as Stephen Hawking. In theory,
an Alcubierre drive would enable practical interstellar travel ...except for
three problems. The first is the amount of energy required to warp space time
- it is the energy contained in the mass of a small planet. The second problem
is the fact that there is no technology - that we know of - which allows the
manipulation of space time. The third problem (which was actually covered in
a Star Trek episode) is the environmental impact of manipulating space time. The
danger in creating rifts in space-time evokes untold horrors ranging from regions
of space where physics no longer apply, to the complete and total destruction
of our universe.
Enter the not-so-credible world of UFOs, government conspiracies, and the legendary
Area 51. To be more precise, the adjacent S-4 complex at the Nellis Test Range
in Nevada. In reality, Area 51 has been acknowledged off-the-record as being
a secret testing ground for captured Soviet aircraft, and high tech DARPA aircraft
projects such as the stealth fighters and bombers. The alleged S-4 site has
only been alluded to in scant reference in declassified government documents,
and if it weren't for a character by the name of Robert Lazar, nobody would
have ever thought much of it.
The story of Bob Lazar
reads like a sci-fi novel itself. Employed by a government contractor in 1988,
Lazar details a shady covert world where proxy corporations are used to hide
the activities of advanced Department of Defense facilities. In particular,
Lazar's specialty was propulsion systems. He was asked to work on vehicles at
an undisclosed complex, and under tight scrutiny. The government, for its part,
denies ever having employed Lazar, but later admitted he had worked as a low-level
consultant to the Air Force. As an employee at S-4, Lazar claims he worked on
reverse-engineering technologies related to captured space craft, which he had
initially assumed were of Soviet design. Later, Lazar says he came to the realization
that the craft were extraterrestrial in nature.
With respect to the technology of the craft, Lazar claims that they use a gravity
generator that is powered by a nuclear reactor. The generator produces gravity
waves which can either cancel out the gravitational field through which they are
projected, or generate their own gravitational field. The operation of
the drive on these craft is very similar to Alcubierre's proposed warp drive.
However, in spite of Lazar's credentials and authority on the issue, his claims
are dubious. The government denies knowledge of S-4, and most of Lazar's claims
remain unverifiable.
If Lazar is to be believed, then we do have the technology in our possession
to manipulate space. Alien technologies aside, we are left to
our own devices, and leading-edge research does show some promise for humankind.
In August of 2001, a Russian physicist named Eugene Podkletnov published
a paper regarding an experiment
wherein a mass suspended from a balance scale over a plate of superconducting
material appeared to lighten when a magnetic field was applied. Podkletnov's
work has still to undergo peer review, and an attempt at replication
by NASA has proven inconclusive. However, at least one scientist
has reproduced Podkletnov's work. John Schurer replicated
the experiment, and showed a 5% reduction in apparent weight of a plastic
disc in the same setup. In general, scientists disagree about what is causing
the effect, but Podkletnov himself postulated that the superconductor changes
the interaction of certain forces such as gravity and magnetism. In a similar
development, the University of South Carolina announced,
and subsequently withdrew, a paper in which they described a device they claimed
could generate a gravitational field to either direct or counter another gravitational
field.
In spite of these developments, there still exists no practical application
of a gravity-manipulating technology. However, it is not impossible to affect
gravity. The real question involves the application of power. Creating
a practical gravitational field would require the energy bound up in the
mass of a small planet, if not a star. A source of energy that plentiful has yet to be explored...or maybe it has.
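To get a feel for the scale of "the energy bound up in the mass of a small planet", a quick back-of-the-envelope calculation with E = mc² suffices (Mars is my own stand-in for "a small planet" here):

```python
# Rough scale of "the energy bound up in the mass of a small planet",
# via E = m * c^2, using Mars as a stand-in for a small planet.
C = 2.998e8          # speed of light, m/s
MARS_MASS = 6.42e23  # kg

energy_joules = MARS_MASS * C**2  # ~5.8e40 J
# World annual primary energy use is on the order of 6e20 J, so this
# is roughly 1e20 years' worth of humanity's current consumption.
years_of_world_energy = energy_joules / 6e20
print(f"{energy_joules:.2e} J, or {years_of_world_energy:.1e} years of world energy use")
```

Even granting generous rounding, the gap between that number and anything we can generate today is the whole problem in miniature.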
Cold Fusion on the Desktop
Philo T. Farnsworth is a name every couch potato should know, but probably doesn't.
It's certainly a name revered by amateur radio enthusiasts (aka "hams"), for
Farnsworth invented the high frequency vacuum tubes which made high frequency
radio communication possible, and even practical. Nearly every part of the television
up until the 1980's owed credit to Farnsworth's research into vacuum tubes.
One innovation that Farnsworth is not so well known for is based on a problem
he had in developing the high frequency vacuum tube. The problem is known as
multipacting, and generally speaking it's something to avoid. Multipacting
is when particles (in this case, electrons) begin to clump together. In so
doing, the particles enter a higher energy state. The danger with this in
terms of vacuum tubes is that excessive multipacting will result in a plasma
field trapped between the elements of the tube, melting them. Understandably,
engineers worked to avoid this phenomenon.
Farnsworth, however, was intrigued by the multipacting process, and particularly
about the ability (indeed, the tendency) to focus the multipacted electrons
to a given point. This ability, known as Inertial Electrostatic Confinement,
would allow the containment of a high energy plasma within a given space. Contemporary
fusion experiments at the time could not contain the high energy plasma. Once
the plasma touched the reactor walls, there was tremendous erosion of the wall,
as well as a drastic loss in power caused by the cooling of the plasma. Today,
fusion reactor designs attempt to use large arrays of electromagnets to contain
the plasma, the most notable being the Tokamak.
Farnsworth began his experiments in the Farnsworth Television Labs at ITT, and
built several different designs. His initial design used a cylindrical chamber
with cylindrical electrodes through which the fuel was injected at high velocity.
The ions would race towards the core, where they were contained by the electrostatic
pressure from the electrodes. The impact of new ions being fired into the core
kept the hottest plasma towards the center, where the fusion reactions occurred.
The rate of reaction was measured by counting the neutron emissions.
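The neutron-counting measurement mentioned above can be turned into a rate and power estimate. Here is a hedged sketch, assuming pure deuterium fuel: the factor of two and the 3.65 MeV average come from the two roughly equiprobable D-D branches, only one of which emits a neutron.

```python
# Sketch: estimating fusor reaction rate and power from a neutron count,
# assuming pure deuterium-deuterium (D-D) fuel. D-D fusion has two branches
# of roughly equal probability (D+D -> T+p at 4.03 MeV, D+D -> He3+n at
# 3.27 MeV), and only the second emits a neutron, so the total fusion rate
# is about twice the neutron rate.
MEV_TO_J = 1.602e-13
ENERGY_PER_DD_FUSION_MEV = 3.65   # average over the two branches

def fusion_power_watts(neutrons_per_second: float) -> float:
    total_fusions_per_second = 2.0 * neutrons_per_second
    return total_fusions_per_second * ENERGY_PER_DD_FUSION_MEV * MEV_TO_J

# A healthy amateur fusor might register ~1e6 neutrons/s:
print(fusion_power_watts(1e6))  # on the order of a microwatt
```

A microwatt of fusion against kilowatts of input power is exactly why the "break even" point discussed below matters.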
Several models of the initial Farnsworth
Fusor were built, but their output was limited by one factor: the use of
particle accelerators to inject new fuel into the core was a slow way of introducing
new fusion fuel. A new design would have to be created to further the research.
The arrival of Robert Hirsch provided a dramatic shift in fusor design. He did
away with the particle accelerators and multiple electrodes. Hirsch instead
proposed a design where two spherical electrodes - one inside the other - would
be surrounded by the fuel in a normal gaseous state. The ions needed for fusion
would be provided by the coronal discharge of the electrodes. The electrodes
would draw the ions to the center of the fusor where they would begin fusing.
This fusor, properly known as the Hirsch-Meeks fusor, became the focus of research
at the labs, and a series of models leading up to the Mark III showed very
high rates of fusion, approaching the "break even" point where the process would
produce more energy than it consumed.
Unfortunately, other forces were at work. Farnsworth Television Labs had been
purchased by ITT in 1949. In 1961, ITT placed Harold Geneen in charge of the
company, and Geneen set about making ITT a profitable company by purchasing
other profitable companies and selling unprofitable assets. Since Farnsworth's
labs weren't producing anything, they got the axe. Farnsworth then turned to
the Atomic Energy Commission, but was snubbed. Fission was the promise of the
day, and Fusion was seen as technically unfeasible. The fusor nearly died.
In the 1980's, big fusion projects resulted in a series of dismal failures.
Fission reactors were producing tons of toxic and radioactive waste, the disposal
of which was a sensitive issue. Highly publicized containment failures - first
at Three Mile Island in 1979, and later at Chernobyl
in 1986 - shook the public's confidence in nuclear power. Amongst all this enters
George Miley of the University of Illinois, who
revived the fusor.
George Miley proposed a device based on the Farnsworth-Hirsch fusor, and sought
funding from the Department of Energy. The controversy over the cold fusion
announcement by Pons
& Fleischmann pushed the idea of fusion further from mainstream science,
but Miley was able to secure funding from Chrysler (now Daimler Chrysler) to
produce fusors as a commercial neutron source.
Today, fusor research is largely the domain of the science
hobbyist, with most new innovations coming from outside the scientific community.
A number of hobbyists work with fusors creating a vast body of experimental
observations. However, it is unlikely that this community will be the source
of the breakthrough reactor design which makes fusors commercially feasible
sources of power. The problem with fusors and other non-equilibrium fusion
devices is that energy leaks out in the form of radiation, most notably X-Rays.
A method must be found to redirect that radiation back into the reaction in
order to make the reaction yield a net surplus of energy. Using light isotopes
such as deuterium as nuclear fuel will still produce a net energy output, but
that output is not sufficient for power generation.
Free Energy, in the GNU Sense of the Word
The field of Quantum physics presents us with a dazzling array of theories which
describe how the universe functions on a subatomic scale. These principles seemingly
defy logic, and often fly in the face of other principles of physics. Often
these theories seem destined to remain simply theories because we lack the technical
ability to prove them experimentally. But one aspect of quantum physics has
been experimentally proven, and the implications are exciting.
In 1948, the Dutch physicist Hendrik Casimir was working on colloidal solutions
at Philips labs. A colloid is a type of mixture in which small particles are
suspended in a liquid. A colleague of Casimir had found that the traditional
explanation for some behaviors of colloids didn't fully account for what was
observed. Through an extraordinary series of calculations, Casimir found that
there existed a new force - an attractive force between two masses separated
by the smallest of distances, and that this force arose from fluctuations in
the quantum vacuum.
The Casimir effect
would have to wait until 1958 to be demonstrated by another scientist at Philips
labs, Marcus J. Sparnaay. Sparnaay set up an experiment where two reflective
surfaces were placed in close proximity. One surface was fixed, the other was
attached to a balance beam. Sparnaay was not able to directly confirm the Casimir
effect, but effectively demonstrated that the experiment did not disprove the
effect under the specified conditions. A direct causal relationship had to wait
even longer. In 1997, Stephen Lamoreaux of the University of Washington did
the most conclusive experiment to date. In the Lamoreaux experiment, a spherical
lens of quartzite was attached to a pendulum, and suspended over a copper plate.
The force was measured as the two surfaces were brought closer to each other.
When the plate and lens were separated by a few microns, the Casimir force slammed
the two together, inducing a twist into the pendulum that was measurable to
within a mere 5% of predicted forces. The Casimir effect had been physically
demonstrated.
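The idealized force Lamoreaux was measuring has a well-known closed form. A sketch of the parallel-plate version follows (Lamoreaux actually used a sphere-and-plate geometry, so these numbers are only indicative of scale):

```python
import math

# The idealized Casimir pressure between two perfectly conducting parallel
# plates separated by distance d:
#     P = (pi^2 * hbar * c) / (240 * d^4)
HBAR = 1.0546e-34  # reduced Planck constant, J*s
C = 2.998e8        # speed of light, m/s

def casimir_pressure(d_meters: float) -> float:
    """Attractive pressure (pascals) at plate separation d."""
    return math.pi**2 * HBAR * C / (240 * d_meters**4)

print(casimir_pressure(1e-6))  # ~1.3e-3 Pa at 1 micron
print(casimir_pressure(1e-8))  # ~1.3e5 Pa at 10 nm -- the force grows as 1/d^4
```

The 1/d⁴ scaling explains why the plates "slam" together only once they are within microns of each other.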
The force that drove these two experiments is not itself a source of infinite
free energy. Rather, the underlying physics demonstrated in the Casimir effect
gave rise to the study of the Quantum Vacuum, and provided greater insight into
something known as the Zero
Point Field, and the Lorentz-invariant
vacuum. Here is where you may be asked to suspend disbelief long enough to absorb
what follows.
First, we'll look at the concept of a vacuum. Most people associate vacuum with
what is called the Classical Vacuum - an area from which all matter has been
removed. However, a Classical Vacuum is not empty space. The void is still filled
with energy in the form of photons and radiation waves. In order to achieve
a true vacuum, you have to cool the space down to absolute zero - the point
at which all things freeze. In theory, at absolute zero there can be no energy
whatsoever. In actual practice, however, it was discovered that energy
still existed, and thus the vacuum was not truly a vacuum, but a Lorentz-invariant
vacuum. The energy that remained was dubbed the Zero Point Energy field. In
theory, the source of this energy is incomplete snippets of energy waves. These
half-filled photon states cannot exist as energy, and they cannot exist as particles,
except under conditions that are defined by a degree of uncertainty. In short,
the very small spaces between atoms are constantly bubbling with elementary
particles that appear out of nowhere, then dissolve back into nothing. This
is often called the Quantum Foam.
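The "zero point" in the name has a precise origin: in quantum mechanics a harmonic oscillator - and each mode of the electromagnetic field behaves as one - can never have exactly zero energy:

```latex
% Energy levels of a quantum harmonic oscillator of angular frequency \omega.
% Even in the ground state (n = 0), the energy is \hbar\omega/2, not zero.
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad
E_0 = \frac{\hbar\omega}{2} > 0
```

Summing that residual ħω/2 over every mode of the field is what yields the enormous formal energy attributed to the Zero Point Field.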
Energy in the Zero Point Field is the leftover remnant of the creation of the
universe, and it is believed to be what is pushing the galaxies farther apart.
It is isotropic - meaning it has the same measurable quantities in all directions
- and it is homogeneous - meaning that it is the same everywhere. Scientists also
believe that the zero point field contains fragments of all possible wavelengths,
from negative infinity, through zero, and on into infinity. Theoretically, that
means that a potentially infinite wellspring of energy exists, and this is part
of where the controversy begins. Free energy defies the laws of thermodynamics
which essentially state that the universe is growing colder and more disordered as
time goes on. Infinite energy would mean that the idea of entropy - in which
everything eventually breaks down - is not valid. However, these concepts are
academic, and it would take volumes to consider both sides of the argument and
their proofs. All we need to accept is that the available energy from the Zero
Point Field is more than we puny humans could ever need.
But how does this Zero Point Field actually cause two plates to slam together
in a laboratory? That is, after all, the definitive proof that the ZPF is a
source of energy. The "how" of the Casimir effect was put forth by Dr. Peter
Milonni of the Los Alamos National Laboratory,
who postulated that the forces arose from an imbalance between available energy
states between the plates, and those outside the plates. Essentially, you had
a finite number of possible energetic states pushing against the surfaces of
the inside of the plates, but you had the infinite number of energetic states
of the entire universe pushing on the other side of those plates. The force
with which those plates are pushed together depends on the surface
area of the plates, and their distance of separation.
But the experiments of Sparnaay and Lamoreaux are hardly templates for a useful
device for exploiting the potential of the Zero Point Field. The Law of Conservation
of Mass and Energy dictates that you cannot get more energy out of a system
than you put into the system. In the case of Sparnaay and Lamoreaux, the energy
that slammed the two plates together is a fraction of the energy needed to both
separate them, and to align them again. A useful method for tapping the Zero
Point Field rests in a different technology - a technology nearly every citizen
on this planet uses daily.
Stick a wire into the air, and shove the other end into the ground, and a current
will flow through the wire. This is caused by radio waves striking the wire. Put
a coil, a diode, and an earphone together, and you can hear these radio waves.
Today, it's simple everyday science. In the days when Tesla and Marconi experimented
with radio, it was magic. Electricity itself was still a new and mysterious force,
but radio waves themselves were completely alien. There was scarcely a body of
theoretical research about the transmission of energy through the mysterious aether.
But this didn't stop scientists of the day from experimenting with these forces.
A dentist named Mahlon
Loomis made what is believed to be the first transmission and reception of
a radio broadcast. The transmitter consisted of a kite anchored within a puddle
of salt water. On the receiving end was another kite also anchored to a puddle
of saltwater via a galvanometer.
Upon applying a large current to a wire attached to the transmitting kite, the
galvanometer at the receiving kite jumped to show a current flowing through the
wire. The year was 1866, nearly 30 years before Marconi
experimented with wireless telegraphy, and gained international fame as the "inventor"
of radio communication.
We know slightly more about the Zero Point Field today than did Loomis, Stubblefield,
Marconi, or Tesla
in their day, but it seems that the answer to tapping the potential of the Zero
Point Field is as simple as it was for tapping the electromagnetic field of
radio waves. Here is the hitch: the most useful frequencies - i.e., those frequencies
from which practical electrical currents can be extracted through such a system
- are in the region of 10^40 Hz. By comparison, gigahertz radar is
10^10 Hz (or 10,000,000,000 Hz), visible light is 10^14
Hz, and gamma radiation is 10^20 Hz. There exists no antenna technology
that can receive frequencies that high. Luckily, there is a way around this,
a sort of "cheat" if you will. Mix two waves of differing frequencies
together, and you get a "beat" frequency that is lower than either
of the two original frequencies. It is possible to create a mechanical structure
that mixes two frequencies together, and then use a more conventional antenna
design to harness the power contained in that beat frequency. If the idea sounds
good, then that probably explains why it's already been patented (#5,590,031)
by Doctor Frank Mead of Edwards Air Force Base.
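The mixing trick described above is ordinary heterodyning, and it is easy to demonstrate numerically. A sketch follows; the specific frequencies are arbitrary choices of mine, picked only to make the beat visible:

```python
import numpy as np

# Heterodyning: multiplying two sine waves yields components at the
# difference ("beat") and sum frequencies. By the product-to-sum identity,
# sin(a)sin(b) = 0.5*[cos(a-b) - cos(a+b)].
fs = 8000                        # sample rate, Hz
t = np.arange(fs) / fs           # one second of samples
f1, f2 = 1000.0, 1040.0          # two nearby input frequencies

mixed = np.sin(2 * np.pi * f1 * t) * np.sin(2 * np.pi * f2 * t)

# Find the two strongest components of the mixed signal's spectrum.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), d=1.0 / fs)
top_two = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
print(top_two)  # the beat (f2 - f1) at 40 Hz and the sum (f1 + f2) at 2040 Hz
```

A lower-frequency antenna tuned to the beat can then extract energy that no antenna could capture at the original frequencies - which is the essence of Mead's proposal.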
Of course, with the promise of such a technology, the first question is why
it isn't front page news. Once again, we are just slightly behind in technology
for such an application. While Mead has experimentally validated such a device,
a working prototype that can produce practical amounts of power will require
manufacturing processes of greater precision than we are capable of today. Every
field of science depends upon advances in the others. Just as electronics
contributed to fusion with the development of fusor technology, the future of
zero point energy will have to wait on innovations in micron-scale manufacturing
coming out of the semiconductor industry.
Programmable Matter
In the hit movies Terminator 2
and Terminator 3, the arch nemesis
was a robot made from "mimetic polyalloy" - basically a liquid metal that could take
on the properties of any other substance. The mimetic polyalloy existed entirely in
fiction, and was created using special effects, but the idea was... to put it simply
... cool. Years later, another work of fiction introduced us to a solid material
with similar properties. In his books "The
Collapsium" and "The Wellstone",
Wil McCarthy introduces a material
that is more or less based on science fact, and not science fiction. Wil McCarthy
himself has an impressive record: he worked for Lockheed Martin as a rocket
scientist (navigation and propulsion systems), holds a degree in aerospace
engineering, and has done some post-graduate work in astrophysics. Prior to becoming
an author, he worked as a robotics engineer. In writing his books, McCarthy drew
upon both his own expertise, and that of colleagues. But first, your CD-ROM drive.
Lasers are big, bulky, and consume lots of power. At least they did for four
decades. Then, quite suddenly, lasers got incredibly small and efficient. A laser
that required an entire room of equipment to function in 1950 now dangles from
a key chain. The breakthrough is the quantum
well. Think back to physics, and you may remember that all matter is made
of energy in a specific state. Some forms of matter, particularly electrons
and photons, can readily alternate between a particle state and a wave state
depending upon certain conditions. A quantum well is a physical construction
that forces an electron to act as a wave instead of a particle. Quite simply,
it is a layer of material so thin that an electron cannot fit inside of it unless
it converts to its wave state. In order to get out of this state, the electron
must achieve an energy state sufficient to get out of the well (or, in keeping
with the weirdness of Quantum Mechanics, it can just disappear and reappear
outside of the well). When the electron finally achieves this energy state,
it jumps out of the well, and then emits a photon so it can return to its normal
state. That photon becomes the laser light that little kids point at the back
of their teachers' heads in class.
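The energy jumps described above can be estimated with the textbook "particle in a box" formula. A sketch follows (an idealized infinite well; real semiconductor wells are finite and use the electron's effective mass, so treat these numbers as scale only):

```python
import math

# Energy levels of an electron confined in an idealized infinite square
# well of width L:  E_n = n^2 * pi^2 * hbar^2 / (2 * m * L^2).
HBAR = 1.0546e-34       # reduced Planck constant, J*s
M_ELECTRON = 9.109e-31  # electron mass, kg
EV = 1.602e-19          # joules per electron-volt

def well_energy_ev(n: int, width_m: float) -> float:
    return (n**2 * math.pi**2 * HBAR**2) / (2 * M_ELECTRON * width_m**2) / EV

# A 10 nm well: the n=2 -> n=1 transition emits a photon carrying E2 - E1.
e1 = well_energy_ev(1, 10e-9)
e2 = well_energy_ev(2, 10e-9)
print(e1, e2 - e1)  # a few milli-electron-volts; halving the width quadruples it
```

The 1/L² scaling is the key engineering lever: shrinking the well tunes the emitted photon's energy, and hence the laser's color.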
A quantum well is a one dimensional device. Add a second dimension, and you
get a quantum
wire (which will lead us to even more powerful lasers and radio antennas).
Add a third dimension, and you get quantum
dots, which is where it gets interesting. A quantum dot is a space created
in a semiconductor material where free electrons are pooled. In optimum conditions,
the electrons arrange themselves in a valence structure as if they were part
of an atom, except without a nucleus. It's the electrons that determine the
chemical behavior of an atom, so in effect each quantum dot is a programmable
atom - its behavior being determined by the number of electrons pooled into
the quantum dot and the geometry of the well. While quantum dots create the
possibility of electronically simulating combinations of elements, they also create
the exciting field of atomic engineering, wherein new elements on the periodic
table are created by flipping a switch. In addition, designer atoms can be created
by altering the geometry of the dot, allowing for entirely new properties.
In the appendix of "The Collapsium", Wil McCarthy explains:
"Lastly, the quantum dots needn't reside within the physical structure
of our semiconductor; they can be maintained just above it through a careful
balancing of electrical charges. In fact, this is the preferred method, since
it permits the dots' characteristics to be adjusted without any physical modification
of the substrate."
The implications of such a material would revolutionize society. Materials
created in software could be copyrighted, and entire industries would rise up
around "Programmable Matter". Wellstone as portrayed by McCarthy could change from concrete
to the trademarked Bunkerlite (the strongest material known to man, according
to the narrator of the book), then to glass. A vandal throwing a brick at a
window would find the brick bouncing off a concrete wall, which then turns back
to glass. By building addressability into the silicon matrix, you could even
create discrete structures within the material, such as electrical circuits,
even whole electronic computers.
McCarthy continues: "So picture this: a diffuse lattice of crystalline silicon, superfine threads
much thinner than a human hair crisscrossing to form a translucent structure
with roughly the density of balsa wood, a structure which, like balsa wood,
is mostly empty space. Except that with the application of electrical currents,
that space can be filled with "atoms" of any desired species, producing
a virtual substance with the mass of diffuse silicon, but with the chemical,
physical, and electrical properties of some new, hybrid material."
Of course, wellstone would not be the end of natural materials. A wellstone
iron rod would not have all the strength of iron. Beat it with a hammer, and
instead of deforming it would shatter into bits of silicon. Strike a wellstone
concrete wall just right, and it may just short out and change into wellstone glass.
The applications of wellstone (or a material like it) are exciting. Unfortunately,
it is still a horizon technology that requires further examination. However,
the technology to bring it about exists now. Wellstone could be created by the
same processes that create the silicon-based semiconductors found in every electronic
device today.
McCarthy, Wil. The Collapsium. New York: Bantam Spectra, 2000.