Kuro5hin.org: technology and culture, from the trenches

Free Information and the Grey Goo Problem

By Carcosa in Technology
Wed Dec 13, 2000 at 02:36:21 PM EST
Tags: Technology

The power of individuals to harm the many has never been greater, and this seems to be part of a trend: technology has empowered individuals to the point where, with sufficient knowledge, they can produce weapons of mass destruction, and as technology advances, this problem will grow in proportion. How do we reconcile freedom of information with advances in biological science, nanotechnology, and even electronics?


It used to be that you could buy dynamite at the hardware store. Farmers used it for blasting stumps, and it wasn't really controlled by the government because it wasn't really much of a problem. People would buy it, they'd blast their stumps, and occasionally they'd blast their limbs into stumps, but that was considered an acceptable risk. Then, it became apparent that to allow unrestricted access to dynamite posed a danger: criminals could use it, there were other ways of pulling stumps, and there was the risk of accidents. So dynamite got regulated.

Nowadays, you can buy things like petri dishes and chemistry equipment fairly easily. An amateur can assemble a lab with all she needs to perform fairly advanced chemistry, and possibly produce munitions with a 1940s level of technology, given sufficient information and determination. Information's available on the net and from various publishers (Loompanics, Amok, Paladin Press), and even in many used bookstores it's possible to find things like Special Forces field manuals relating to improvised munitions.

And there are problems, and there are men like McVeigh who are willing to sacrifice other people's lives for their own distorted principles. And the government's regulating what it can, but explosives are very simple things. When you can make something like that out of urine, how do you really stop people from producing devices to hurt others?

And what about near-future tech like HERF devices? How do you stop someone from rewiring a microwave transmitter to throw a pulse designed to fry electronics? If they have the information, the determination, and some rudimentary level of technical knowledge, it might not be difficult to perform such a feat. Say it could fry or disrupt electronics in a hundred-foot radius. When you consider information hubs, and modern data storage methods, and lines running through central trunk sites that affect literally millions of people, a day-long service disruption has massively widespread effects: like an electrical shock to a nerve ganglion, systems far away are affected.

I'm not going to talk about biotechnology here because the implications are obvious.

Think about middling-far future technologies like nanotech, which is tantalizingly close to being within our grasp. Nanotechnology is a desktop science, a very fundamental manufacturing paradigm that could be practiced, given sufficient information, on a shoestring budget. Let's say you have a nanotech assembler, a machine capable of fabricating machines of arbitrary design on a molecular scale. There are people you don't like, for one reason or another. Maybe you've decided that the whole biosphere needs to go. So you produce a little device that can replicate itself. It's tiny, minuscule, sub-microscopic. It's the size of a virus. It's programmed to replicate itself for a little while, then stop replicating for a month or so, then start replicating itself again.

You give it a pile of dirt to eat, in a large glass tub. Within seconds, the dirt's transformed into an unbelievably fine black dust. You take the dust and you throw it into the air near an airport or in a subway, or you mix it into a supply of food or drugs or clothing. Your agents are carried all over the world by travellers and by the wind. Then a month later they wake up again and mindlessly they begin devouring all organic molecules they come in contact with, turning them into a substance like themselves.

It's Ice-Nine. It's potential armageddon. It's the big dirt nap for everything on the planet, far worse than nukes.

Like it or not, nanotech is coming. It's going to be military first; any new technology must be first tested by using it to hurt people. Then we're going to see nanotech tools in the commercial sector. There may be assisting developments in the technology tree, perhaps in chemistry or bioscience, that will make it easier to develop and implement nanosystems; the ingression of nanoscale manufacture into our lives may well happen very quickly, because much of the R&D work is done. But what if Russia gets it first, or Iraq? What'll become of the Great Satan and all its servant nations then? So the military has a mandate to develop this kind of science, and do it quickly: we must not allow one of our enemies to develop an assembler first.

So how do we control this kind of technology, which can spread massively under its own power: devices that can manufacture themselves for free and cause damage on a planetary scale? The cyberpunk writers who celebrate this sort of personal empowerment often sidestep the problem of fundamental human evil, and of homicidal madness.

You'll notice that the common denominator in many of the preceding paragraphs was the word INFORMATION.

How many incidents of domestic bioterror, or electronic HERF pulse terror, or nanotech-assisted crime-- and it'll be gruesome and mediagenic-- will it take for the public to cry out for the government to do something, anything, to keep them safe?

How can the government possibly regulate this sort of technology once it spreads? How can they prevent lone wolves, or small organizations, from procuring technology infrastructures that cost very little to produce and don't require anything exotic, like uranium? And of course the government will know that they can't do a thing about it, but they'll pass widespread legislation anyway, to "keep us safe" and to consolidate the power bases they've held for hundreds of years that are now suddenly looking like beachfront mansions in Miami. The kind that you see down by the Keys, half-submerged.

Did you know that Aum Shinrikyo appears to have detonated a nuclear device in the Australian outback in 1993? These are the same benevolent angels, remember, who released sarin in the Tokyo subway in '95, IIRC. Not the sort of people you want to trust with supertechnology, or the information to produce it.

Point is: If information wants to be free, and we can't stop it from being free now that it's spread its electronic tributaries and wave tendrils throughout our society... What are the implications for human society? How do we reconcile Kuro5hin's ideals of information freedom and human privacy and personal empowerment with issues of human evil and the rise of technology, which WE ARE ABETTING, capable of producing grievous effects? And don't just reply "There'll be technological safeguards built in" because you all know as well as I do that it's far easier to destroy than create, and that initiative rests with the sudden attacker.

Free Information and the Grey Goo Problem | 61 comments (61 topical, 0 editorial, 0 hidden)
grey goo unlikely (3.60 / 15) (#1)
by streetlawyer on Wed Dec 13, 2000 at 12:16:31 PM EST

Since so far, the human race has abjectly failed to create any machine anything like as efficient as many of the things found in nature, even on a macroscopic scale, would someone perhaps suggest to me how it is that this "grey goo", made up of machines on the same scale as bacteria and dust mites, would be any more of a threat to the world than, well, bacteria and dust mites? Engineering a nanobot in the simon-pure atmosphere of a laboratory is one thing; engineering one that will stand up to a blast of a flea's digestive juices is another.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
counter example (2.83 / 6) (#12)
by motty on Wed Dec 13, 2000 at 01:46:25 PM EST

uh, how about "the internet" as an example of a machine which is "anything like as efficient as many of the things found in nature". it's a vague counter-example i know, but it's no more vague than "many of the things found in nature." while the protocols and the nature of the traffic have evolved over time and continue so to do, the thing itself here seems to me to be the single most efficient and stable machine humans have ever created.

i don't mean "efficient" in terms of whether or not it is currently capable of allowing you to download DVD movies easily, or even large mp3s over a 56K connection. i don't mean "stable" in terms of "never goes wrong ever" - both would be obviously wrong. i mean "efficient" in terms of whether or not the system continues to work and successfully continues to allow arbitrary computers to send arbitrary packets to other arbitrary machines on the network. which it does, has done for a number of years now, and continues to do despite massive growth. i mean "stable" in terms of the fact that the net continues to be there, day in day out - we might get bad days when an undersea cable gets chopped, or AOL's email goes down, and all that traffic suddenly gets routed by an unusual route - but the network itself as a gestalt has a pretty good record of minimal downtime.

the thing that the internet has in common with life on this planet is redundancy. just as the packets can be routed around damaged parts of the network, so life ensures its continued existence by continually covering a large number of bases at once in terms of having a wide range of lifeforms with the characteristics required to survive a wide range of potential futures. eg - cockroaches and certain kinds of bacteria (IANABiologist, so feel free to correct me here) can apparently survive nuclear blasts. so even if we do annihilate ourselves and most of everything else by that means, we *still* haven't managed to be stupid enough to come up with anything that will *actually* kill all life on this planet.

as for how "grey goo" might be a threat to the world, the trick will be to construct a grey goo constructor with the kind of level of redundancy that the net has, or greater. there's no need to engineer a nanobot that will stand up to a flea's digestive juices if, for example, so many get created that there wouldn't be enough fleas on the planet to reduce the (putative) nasty 'bot population significantly.
s/^.*$//sig;#)
[ Parent ]

the internet is many orders inferior to a brain (3.28 / 7) (#13)
by streetlawyer on Wed Dec 13, 2000 at 01:49:40 PM EST

in all sorts of terms. Your comment about nanobot replicators is misplaced; viruses already do this trick, and while they survive, they do not destroy all life on earth, and that's after millennia of breeding.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
the internet is of quasi-organic complexity (3.00 / 3) (#35)
by motty on Wed Dec 13, 2000 at 08:21:21 PM EST

of course the internet is inferior to a brain. i never suggested otherwise. i only suggested that the internet was of quasi-organic complexity as an attempt to respond to your (rhetorical?) suggestion that no human machine had yet approached the kinds of efficiency seen in nature.

my comment about nanobots was not misplaced. you sought to suggest that there was nothing to fear from a nanobot because it was unlikely to survive being swallowed by a flea. i pointed out that there could be a scenario of a nanobot population where the edibility or otherwise of nanobots vis-a-vis fleas was not an issue. seems reasonable.

your point about viruses, though, would appear to be hanging on the end of your last comment with only the most tenuous of links to the previous discussion about nanobots. I am still not a biologist, but aren't viruses essentially parasitic, meaning that strains of virus that tended to destroy all their hosts would be unlikely to survive without them? and as you say, they have been evolving this way for millennia. a whole new kind of nanobot, though, would have to go through a number of iterations of the near-extinction/resurgence cycle to reach a level where it could be safely described as parasitic, and could easily wipe out all kinds of stuff in the meantime.

oh the temptation to have used 'virii' instead... :)
s/^.*$//sig;#)
[ Parent ]

Parasitic nanites (2.50 / 2) (#39)
by spiralx on Thu Dec 14, 2000 at 05:27:32 AM EST

Okay, so not so much in the biological sense of the word, but I think it would be highly unlikely that nanites would be engineered to be able to replicate at will. The simplest solution I can see is to make their replication dependent upon being able to use a certain type of molecule not present in nature, meaning that they can only replicate when a source of this is provided intentionally. In this manner you can control the amount of nanites you wish to produce.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Parasitic Nanites (5.00 / 1) (#45)
by Bernie Fsckinner on Thu Dec 14, 2000 at 02:49:38 PM EST

But what if the chemical is an essential part of human metabolism?

[ Parent ]
Well... (none / 0) (#48)
by spiralx on Thu Dec 14, 2000 at 05:31:47 PM EST

... that would be pretty foolish now wouldn't it? :)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Some more (semi) counter examples (3.66 / 6) (#30)
by StrontiumDog on Wed Dec 13, 2000 at 07:18:43 PM EST

Since so far, the human race has abjectly failed to create any machine anything like as efficient as many of the things found in nature, even on a macroscopic scale,

Give us some time, give us some time, it's only been 50 years or so that we've really started building devastating fuckers. That said, I think nukes are nice examples of powerful macroscopic destructive forces, which have been kept in check largely by human restraint and a generous helping of cowardice. Would that we were Klingons.

made up of machines on the same scale as bacteria and dust mites would be any more of a threat to the world than, well, bacteria and dust mites

May I remind thee, of the effects of the Bubonic Plague or the Spanish Flu. Ask the American Indians, who were decimated by Olde Worlde diseases (some sources estimate up to 80%, yes that figure's from memory, methinks it came from Guns, Germs and Steel, no I'm not going to look it up). I consider that threat enough. I don't demand that homo sapiens go extinct before I consider evil germs a threat; the thought of a repeat of 1918 is enough to give me the willies.

[ Parent ]

Who do you trust? (2.85 / 14) (#2)
by enterfornone on Wed Dec 13, 2000 at 12:18:59 PM EST

Would you rather this information in the hands of a few governments and corporations, or in the hands of many. If it is in the hands of all then everyone has to be accountable to everyone else. If it's in the hands of the few they are accountable to no one.

--
efn 26/m/syd
Will sponsor new accounts for porn.
I dunno ... (3.00 / 3) (#27)
by StrontiumDog on Wed Dec 13, 2000 at 06:38:18 PM EST

Would you rather this information in the hands of a few governments and corporations, or in the hands of many. If it is in the hands of all then everyone has to be accountable to everyone else. If it's in the hands of the few they are accountable to no one.

On the one hand, your sentence makes sense. On the other hand, it makes sense only if the damage that can be caused by one individual is limited. Put another way: the chances something will go wrong increase with the number of people involved. That's the problem with arming everyone in New York City with tactical nukes, for instance: sure most people are more trustworthy than the gubmint, no doubt about that, can't trust the fscking gubmint, but there are 15 million people in NYC and with 15 million nukes someone's guaranteed to press the little red button. I guarantee ya someone will, if only to piss off the apartment manager or to see what the little red button does. That's a different ballgame from the gubmint holding all 15 million nukes. In that ball game all New Yorkers get to vote on whether the button gets pressed or not, and barring dimpled chads it only gets pressed when 50% + 1 votes come in.
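The "someone's guaranteed to press it" intuition is just complementary probability. A quick sketch; the per-person probability here is an invented illustrative figure, not anything from the comment:

```python
# Back-of-envelope sketch of the "15 million nukes" intuition.
# The per-person probability p is a pure assumption for illustration.
p = 1e-6          # assumed chance any one person presses the button in a year
n = 15_000_000    # people (and nukes) in the comment's hypothetical NYC

# Probability that at least one of n independent people presses the button:
p_at_least_one = 1 - (1 - p) ** n   # effectively certain for these numbers

# Expected number of button presses:
expected_presses = n * p            # around 15 per year

print(f"P(at least one press) = {p_at_least_one:.6f}")
print(f"Expected presses      = {expected_presses:.1f}")
```

Even with an individually tiny p, multiplying opportunities by millions of actors drives the chance of at least one catastrophe toward certainty, which is the asymmetry the comment is gesturing at.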

[ Parent ]

stupid argument. (3.00 / 3) (#32)
by Nyarlathotep on Wed Dec 13, 2000 at 07:48:32 PM EST

That is a flawed argument. The truth is that the amount of damage done by corporations and governments increases when they have no competition, i.e. when Joe Islamic Fundamentalist is prohibited from learning about biotech and starting a bioengineering company to undo the damage Monsanto did with its terminator genes.

Anyway, terrorism and "Grey Goo" are totally insignificant (they only impact a few hundred people) when compared with the costs of not opening up biotech, as it's quite easy to argue that those costs are on the order of starvation and/or economic enslavement for billions.

If the U.S. listens to Luddite bastards like Bill Joy and really does try to keep biotechnology from the third world, then companies like Monsanto will exploit the third world in ways never before imagined. If we let this happen then we really would deserve to have all the chemical, biological, and nanotech weapons the rest of the world can develop thrown at us.


Campus Crusade for Cthulhu -- it found me!
[ Parent ]
Nuts with nukes (2.00 / 2) (#43)
by DaveP37 on Thu Dec 14, 2000 at 12:27:25 PM EST

but there are 15 million people in NYC and with 15 million nukes someone's guaranteed to press the little red button. I guarantee ya someone will, if only to piss off the apartment manager or to see what the little red button does.

If that's all the better you think humanity is, maybe a little forced [unnatural] selection via nuclear weapons is what the race needs.

Eliminating the gene that makes people like to live in large cities would certainly change the nature of the human race. Some might argue that such a change would be for the better.

No, I'm not advocating we start a eugenics program, but given the large numbers of mis-, mal-, and non-adapted humans there are, it's something to ponder.



[ Parent ]
On the Gray Goo problem. (4.65 / 20) (#3)
by Christopher Thomas on Wed Dec 13, 2000 at 12:29:42 PM EST

It turns out that a "grey goo" scenario wouldn't work the way you fear; the "goo" would actually behave more like lichen or crab grass.

The problem is energy. Fabrication takes energy, because of the energy delta between the raw material and the product, and because of inefficiency during fabrication, and because of the large difference in entropy you'll have (you're trying to turn moderately structured matter into extremely highly structured matter, and to maintain that fantastic level of structure in the face of environmental and thermal degradation; Gibbs Free Energy arguments make this spontaneous only with the expenditure of effort).
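The thermodynamic point can be put in one line; this is just the standard Gibbs relation, nothing specific to nanotech:

```latex
\Delta G = \Delta H - T\,\Delta S
```

Turning loosely structured feedstock into highly ordered product means the entropy change is negative, so the -T(Delta S) term is positive; unless the assembly chemistry is strongly exothermic, Delta G stays positive and the process is non-spontaneous. The assembler has to pay for the ordering with an external energy input, which is exactly the constraint argued above.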

Guess what? The nanomachines will be limited to the same sources of energy as biological life - sunlight, and oxidation of hydrocarbons. This suggests that, barring a huge jump in efficiency over biological life, they'll have roughly the same characteristics. Lichen has no social conscience; it tries to replicate itself anywhere there is a favourable environment. Yet, lichen doesn't overrun the earth. Similarly, I think that "gray goo" will end up complementing the biosphere, rather than supplanting it.

As for the argument that nanomachines _will_ be vastly more efficient than biological machines... I remain skeptical. Bacteria, among the earliest of the self-replicators, have been evolving for billions of years. If a vastly more efficient construct could exist, IMO it probably would by now.

Thus, while "grey goo" would probably have noticeable effects, I seriously doubt that it would be the world-eating plague that fiction postulates it to be.

Arguments about efficiency from evolution (4.16 / 6) (#16)
by meeth on Wed Dec 13, 2000 at 03:18:50 PM EST

The problem with arguments like 1) bacteria have had billions of years to evolve, 2) they aren't amazingly efficient, so 3) nanomachines aren't going to be amazingly efficient, is that the processes of evolution and human design are fundamentally different. Evolutionary change isn't purposeful (that is, "mother nature" does not have design criteria in mind when making changes), so if a species reaches a local maximum in reproductive fitness, it may stay at that maximum rather than moving to a global or higher local maximum. Since human design is purposeful, we are more likely to work towards those higher maxima.

Anyway, this isn't to say that nanotechnology is going to be efficient, just that it might be. My guess is that, like AI, the engineering/programming difficulties are going to be more severe than nanotech boosters believe.

[ Parent ]

Precisely (3.83 / 6) (#18)
by cpt kangarooski on Wed Dec 13, 2000 at 03:56:38 PM EST

This is why we don't see cheetahs with wheels - evolution doesn't have any specific direction.

On the other hand, if people in a lab really wanted to create a nanite that could acquire energy from a wider variety of sources than any bacteria, they probably could, given time. While I expect that small-scale nanotech will be quite difficult (it's really working with terrifically complicated large molecules) the larger scale stuff that's more akin to building viruses and bacteria will be easier. There are after all a lot of examples to look at.

--
All my posts including this one are in the public domain. I am a lawyer. I am not your lawyer, and this is not legal advice.
[ Parent ]
Here's to thermodynamics! (4.16 / 6) (#17)
by xtal on Wed Dec 13, 2000 at 03:26:54 PM EST

Yes! Amen! Someone else has finally seen the problem with nanotech - how are those machines going to get their energy? That's why bacteria didn't evolve into Grey Goo - they're hopelessly specialized for their particular method of getting energy input. Change the environment a bit, and everything dies. That's why the oceans aren't great big gobs of Green Goo (algae), and one could argue that the tropics are a lot more hospitable than a lot of cities.

Nanotech, being self-replicating, is complex. Complex things of all forms are subject to deterioration over time - even diamonds left in a room for eons will eventually degrade, since diamond is only metastable (specifically, it slowly reverts to graphite). Mind you, we're talking timescales of billions of years, but you get the idea.

You could argue this further. Assume that there are other advanced civilizations out there (I love how scientists will jizz all over themselves thinking about bacterial life on other planets, but the concept of other alien scientists is something to laugh at.. yeah, whatever). So, assume someone out there must have done this before. How come the whole universe hasn't been reduced to grey goo?

Nano machines for doing repair in the body will certainly come into existence. The extent to which they can spread will be limited by whatever energy source they use to do their work, and they will likely be highly, highly specialized for a specific task - like assembling carbon dust into carbon fiber sheets. As for generic replicators overrunning the planet... if it was indeed possible, nature would have done it already with bacteria. Although, I suppose there's the argument for man acting as nature's agent to create a non-carbon-based lifeform, but I digress.

Worry more about people making plain old biological viruses (the US government is one of the world leaders here). Those have the potential to exterminate all (human) life on earth, no questions asked. The human genome project will make it even easier to find security holes (heh) in our immune systems for others to exploit. Ugh. It's a good thing that the most lethal viruses are the most fragile (see above about entropy), or they kill their hosts so quickly they can't spread very far (ebola). Now, a resilient form of AIDS that's light enough to be airborne...


[ Parent ]

why the whole universe isn't gray goo... (3.80 / 5) (#22)
by Malor on Wed Dec 13, 2000 at 04:07:10 PM EST

Well, even if you postulate alien scientists unleashing gray goo, interstellar space isn't exactly loaded with things to 'eat'. :-) Probably you wouldn't see more than a couple of star systems get 'eaten' before the alien civilization figured it out and stopped going to those stars.

And if they were all in one solar system, well, they'd probably all die gruesome deaths but unless they actively launched probes loaded with goo to other stars, the problem would remain isolated.

Another thought: maybe someone HAS come up with a gray goo, and figured out how to get it to replicate across solar systems. Maybe that's why the night sky is so quiet.... :-)

[ Parent ]

Energy density (3.80 / 5) (#21)
by speek on Wed Dec 13, 2000 at 04:05:08 PM EST

Then it's just a question of energy density. 500 years ago, a single individual could muster the energy of one ox and apply it to a single problem. Nowadays, individuals can zap things in a microwave with 1400 watts of power. We drive cars with 200 horsepower. Surely you don't think the energy densities available to individuals are going to stop progressing now, do you? So, what happens when an individual can command energies equivalent to a nuclear bomb? Sure, they can't destroy the earth in one gray goo go (sorry, couldn't resist ;-), but they can do scary amounts of damage.

I think the problem will be real enough, soon enough, that you are going to see governments getting more involved in regulating what individuals have access to freely. Privacy is going to go away. I don't see any way around it.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

root mean squared (2.66 / 3) (#24)
by weirdling on Wed Dec 13, 2000 at 06:18:29 PM EST

However, each nanobot you create has a fraction of the parent's power, so the decrease will be frighteningly fast: bot A splits into two or three hundred bots (series B), which in turn split into two or three hundred bots each, and so on. By level three, we've got 27 million bots, and each one has one twenty-seven-millionth of the original power. Now, unless there is an external energy source available to them - which would seriously jeopardise the ease with which they can be surreptitiously deployed - for each series-C bot to have 1 watt-hour of power, the original bot would have to have at least 27 million watt-hours, and, counting the inefficiencies of production, I'd bet closer to a billion watt-hours...
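The arithmetic here can be checked directly; a minimal sketch using the comment's own figures (300 offspring per split, 1 watt-hour per third-generation bot), ignoring production losses:

```python
# Sketch of the energy-dilution argument: if each bot divides its stored
# energy among 300 offspring (the comment's figure), the per-bot share
# shrinks by a factor of 300 every generation.
branching = 300
generations = 3

# Population after three generations of splitting:
bots = branching ** generations          # 300^3 = 27 million

# Energy the original bot must carry so each level-3 bot gets 1 Wh,
# before counting any production inefficiency (which only makes it worse):
required_wh = 1 * bots

print(f"bots at level {generations}: {bots}")
print(f"minimum stored energy needed: {required_wh} Wh")
```

The geometric blow-up is the whole point: any design that relies on stored rather than scavenged energy hits an absurd up-front energy requirement within a handful of generations, which is what the follow-up comment about environmental refueling is responding to.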

I'm not doing this again; last time no one believed it.
[ Parent ]
presumably nanobots would be refuelable (4.00 / 4) (#33)
by planders on Wed Dec 13, 2000 at 08:14:01 PM EST

root mean squared laws probably wouldn't apply to a 'grey goo' scenario. the nanobots would presumably use a basic chemical fuel that they could gather from the environment. stored power might be available for unusual situations on a moderately complex nanodevice (one approximately the size of a living bacterium, as opposed to one composed of just a few protein-sized molecules) but most devices would get fuel for reproduction from the environment.

[ Parent ]
cannot stop the flow (2.30 / 10) (#4)
by gregholmes on Wed Dec 13, 2000 at 12:32:06 PM EST

Obviously, the flow of information can't be stopped by any method that wouldn't be worse than the problem. So, the solution is to develop detection and countermeasures for such technology.



Bill joy bullshit (3.23 / 17) (#5)
by Nyarlathotep on Wed Dec 13, 2000 at 12:36:09 PM EST

This is the same argument that Bill Joy makes and it is totally full of shit. The choice here is pretty simple folks:

1) Trust the individual, who *may* (VERY small probability) do a nasty thing which influences about a hundred people. You should remember that homemade biological weapons do not have the delivery systems or quantities of military biological weapons, so we are not even talking about wiping out cities here, people.

OR

2) Let only the corporations and government have control of the new technology. We *know* *almost* *every* corporation and government who controls this technology will do nasty stuff with it. Now, the corporations' nasty stuff will generally have a slightly smaller impact on any one involved individual, but it will impact billions of individuals. Plus, the one thing really preventing them from doing many of these nasty things would be the availability of the technology to their consumers.

Example: The government listens to Bill Joy and makes it illegal to teach bioengineering to non-Americans. Well, this is a pretty fine way to make sure that Monsanto can continue to force third world countries to buy its seeds. Anyway, this case shows that choice (2) means: condemn millions of citizens of third world countries to economic slavery to Monsanto and/or starvation. I think I'll take the risk that one of those people will build a biological weapon and set it off near me.

Campus Crusade for Cthulhu -- it found me!
Biological weapons (3.00 / 1) (#36)
by Spinoza on Wed Dec 13, 2000 at 09:25:03 PM EST

Here's a thought. The US government hasn't had a biological weapons program for quite some time. (Of course, this is unproven.) The Soviet Union did, right up until its collapse. Some of the scientists who worked on those weapons ended up in the US, working with the US government to develop countermeasures to their weapons. Wonder what happened to the scientists who didn't move to the US?

It's a bigger problem than you appear to perceive. The delivery systems for bio-weapons are not nearly so hard to create. Anthrax variants can be dropped from any aircraft (say, a crop-dusting plane flying over a city of 50,000?). The spores survive for days and can spread for a hundred miles. This is a far cry from affecting 100 people, and probably well within the ability of some terrorists.

Furthermore, Ken Alibek, who was one of the top Soviet bio-weapons scientists, mentions in this article his suspicion that the Soviets developed a smallpox/ebola hybrid. He claims that this hybrid would have been highly contagious and invariably fatal. The contagion alone would be a highly effective delivery system. His estimate of how much this research would have cost is "a few million dollars", based on the costs of similar research. Of course, he is engaged in some degree of speculation here. Also, this is Russian, not US, biotechnology, and as such is probably already available to unstable governments and some terrorist organisations.

The point here is that the US government may be justified in doing what they can to limit the spread of this technology after all. They can't do much to stop ex-soviet scientists from selling their knowledge, but they can prevent further education in those sciences, to some degree. Still, it's hard to think of something more scary than ebolapox.

There's two sides to this argument, and both seem to result in human suffering.

[ Parent ]

The problems with limiting information. (3.90 / 11) (#6)
by Christopher Thomas on Wed Dec 13, 2000 at 12:47:29 PM EST

You seem to be proposing limiting information as one possible solution to the problem of rogue individuals wreaking mass destruction. The problem is that there are at least two factors that make this extremely difficult and maybe impossible:

  • Entropy.
    Call it "information wants to be free". Call it "asymmetrtic difficulty". The problem is that it is much easier to spread information than it is to prevent the spread of information. Remember the "Anrchist" files that were floating around the BBS scene years ago? Remember before that, when sheets of jokes and urban legends would propagate via blurry photocopies? Remember the water cooler, where information propagates the old-fashioned way? It would be extremely difficult to remove a piece of knowledge that's already been released, and extremely difficult to stop leaked knowledge from becoming widespread. Thus, I question whether even the idea of limiting knowledge is practical.

  • The level of restriction you'd need.
    The second problem is that making explosives really isn't that complicated. The methods can be derived from high school chemistry knowledge (albeit with a safety risk). Similarly, you'd have to remove all knowledge of radioactivity to prevent nuclear technology from being rediscovered, and radioactivity finds its way into many aspects of our lives (hey, where is this bit-rot in ceramic-packaged memory chips coming from?). Heck, building _guns_ is easy - look at the number of spud-guns around. Sure, they're the poor man's weapon, but I sure wouldn't want to get hit by one. In summary, to remove the ability to produce destructive weapons, you'd have to bring the general public back to a medieval level of knowledge. This would substantially degrade their quality of life.

Given the difficulty in eliminating the problem, I think we'd be better off trying to a) restrict the necessary materials for some of these stunts, and b) structure society such that it would be difficult to cause a catastrophic failure (a redundant distributed communications network, good ventilation and isolation in buildings to limit chemical attack, good fire codes, etc.).

To ameliorate nuke and nanotech problems, you're probably best off just distributing the population geographically. This already happens naturally, and will work even better down the road when space colonization finally becomes practical. A corollary of this is that population density must be kept low enough that the expected number of people building, for example, nukes in any nukeable region is much less than one. This will reduce problems to manageable levels.
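The "much less than one" criterion above can be sketched as a toy expected-value calculation (my own illustration; the per-person probability is an invented, purely illustrative number, not something from this comment):

```python
# If each person independently has probability p per year of attempting to
# build a nuke, a region of N people sees an expected N*p attempts per year.
def expected_builders(population, p_per_person):
    """Expected number of would-be nuke builders in one region per year."""
    return population * p_per_person

p = 1e-9  # assumed per-person, per-year probability (made up for illustration)
for n in (1_000_000, 10_000_000, 100_000_000):
    print(n, expected_builders(n, p))
```

Keeping each "nukeable region" small keeps the expected count comfortably below one, which is the commenter's point.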

You touch on some real problems there ... (4.00 / 2) (#29)
by StrontiumDog on Wed Dec 13, 2000 at 06:58:50 PM EST

... as Pandora found out, once my wardrobe is opened it's impossible to stuff the clothes back in again. But the solutions you propose won't work; the grey goo problem exists on a galactic scale with Fred Saberhagen's berserkers: anywhere humans go, bloodthirsty nanos can go too. Distributing people geographically will help somewhat against small-scale terrorist nukings, but it is scarcely effective against large-scale stuff.

[ Parent ]
Restrictions as a means... (4.33 / 3) (#31)
by Mad Hughagi on Wed Dec 13, 2000 at 07:22:47 PM EST

I have some major concerns with restrictions. A couple points to note:

- Restrictions imply that there is a good side. While I support the idea that the whole of humanity is the final moderator, we currently live in a very fragmented world where many people hold their own associations paramount. In the end it comes down to judging who is going to be allowed to do things, and therein lies the problem of power abuse. Just because one group believes it is best suited to pursuing a certain area of research in a responsible manner doesn't mean that another group doing the research for different ends won't come to the same conclusions. Restrictions are one step closer to a totalitarian state, and it is fairly evident from history that they were a major part of most dictatorships. The intent with which the research is applied is the key, not the act of researching in and of itself.

- Restrictions only apply to people who follow the rules in the first place. Just because a certain group forces restrictions on its members doesn't imply that another group will follow those restrictions, even if it is in compliance at the surface level. Just look at the failed efforts to curb nuclear proliferation in southern Asia - even though most of the western nations opposed developments in this area and definitely denied access to the technology, it still happened (I'm not making any judgement here, just giving an example). Another case in point where restriction has failed miserably is the war on drugs - regardless of which side you are on, I think it is safe to say that it has failed terribly, and this is why people are starting to explore different methods of dealing with drug abuse. Restricting research is a restriction of information, and while it may work on the majority, there is nothing to prevent subversive (not meant in a bad way) groups from carrying out whatever research they please. With the increasing influence of multinational corporations (some of which have more power than most nation-states), I see this as an even greater dilemma.

So, in the end I would have to say that imposing restrictions on research is a lost cause. If we are to effectively protect our existence on this planet as a global entity, it will require a great deal more than control measures, since they can never be absolute. Personally I believe that cultural and social development is our only hope. Until we can collectively agree that the good of humanity, and of the Earth for that matter, are the most important factors in making a technological decision, we will have to deal with groups that intend to wield technology for their own benefit. When I hear of things like this it always reminds me of Carl Sagan (in my opinion one of the greatest individuals of this century). He had very lofty dreams (which I share) of a mankind that could make it past the 'self-destruction' stage. I only hope that one day people will be able to look back and say that we made the right choices. It might be a bit too utopian to consider, but personally I think it's the only way we can go.


HUGHAGI INDUSTRIES

We don't make the products you like, we make you like the products we make.
[ Parent ]

I'm glad somebody gets it! (3.00 / 1) (#41)
by 0xdeadbeef on Thu Dec 14, 2000 at 11:13:17 AM EST

Or, at least, thinks like I do.

I'll go one step farther and point out that the kind of control measures necessary to prevent the development of this technology would create a tyranny that justifies the development of this technology... as a weapon. Nothing breeds self-righteous and suicidal militants like an oppressive government, and that's exactly what we will have if the egalitarians lose and the authoritarians win.

in other words, guns : car bombs :: nanotech weapons : grey goo

The solution is to let everybody have the technology. Sure, every few years some nut will try to destroy the world, but at least there'll be a million others with the technology to stop him.

[ Parent ]
You can't STOP a bullet with a gun (5.00 / 1) (#50)
by zakalwe on Fri Dec 15, 2000 at 09:04:25 AM EST

The whole problem with the grey goo situation is that those million others with the same technology can't stop them. If you release a self replicator, you have exponential growth, and even if another self replicator could stop a released one, by the time countermeasures could be taken, the original replicator would have enough of a head start that it would always outnumber the countermeasure.
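The head-start argument above can be made concrete with a toy model (my own sketch, with invented numbers: both replicators grow exponentially at the same rate, and the goo is released some hours before the countermeasure):

```python
import math

R = 1.0    # common growth rate per hour (assumed)
T0 = 24.0  # head start of the rogue replicator, in hours (assumed)

def population(t, released_at, rate=R):
    """Size of an exponentially growing population at time t."""
    if t < released_at:
        return 0.0
    return math.exp(rate * (t - released_at))

# The goo:countermeasure ratio stays fixed at exp(R * T0) forever --
# the defender never closes the gap, no matter how long it runs.
for t in (25.0, 50.0, 100.0):
    print(t, population(t, 0.0) / population(t, T0))
```

Only a countermeasure with a strictly higher growth rate, or one pre-deployed before the attack, changes the outcome - which is why "a million others with the technology" doesn't automatically help.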

There are really only 3 solutions to this problem:

  1. Authoritarian Restriction

    Advantage: Restrict the technology to as few people as possible, in the hope of minimising the chance that one of them is a nutcase. With just 10 people having the technology, there's a good chance they won't destroy the world. With 6 billion, there's going to be someone crazy enough to do it.

    Disadvantage: It's an authoritarian 1984-type government, with all that that implies. Even if you consider the survival of humanity worth this, there's still no certainty that it will stop the problem (e.g. what happens when the oppressed masses rebel, and the government decides its only option is to use nanotech?)

  2. Kill all the scientists, Burn the books, go back to a medieval society

    Advantage: Good chance of success if done now, before the technology is even developed. The human race will probably survive until the next major meteor strike, at least.

    Disadvantage: You need to ask? No computers!

  3. The human race suddenly matures and resolves never to use the technology for evil. All is well in the world.

    Advantage: Brotherhood of cosmic harmony. No danger of eliminating ourselves.

    Disadvantage: And we accomplish this how? And what happens when some alien race doesn't do this, but accidentally (or otherwise) destroys the universe?
    (ObSF: Forever Peace - Joe Haldeman)

None of the alternatives are terribly pleasant, and I'm not sure that 1 and 2 are any better than just letting the race be wiped out, but just saying "Don't worry, it won't happen" is wrong - it could happen, and ignoring the possibility is stupid. Personally, I'm just hoping the problem doesn't arise in my lifetime.

[ Parent ]
Self-replicators are hard to build (3.92 / 13) (#7)
by error 404 on Wed Dec 13, 2000 at 01:03:21 PM EST

at any scale.

The challenge in the grey goo is to make a machine that can make copies out of pretty much arbitrary materials under pretty much arbitrary conditions. Not an easy task at any scale.

Has anyone built one yet - any size? No, robots that can assemble similar robots from parts don't count. I'm talking about raw materials to complete, working machine, without intervention. Including power.

I suppose you could point out that eventually, maybe decades or centuries from now, the technology will be there. But so will other technologies. Humans have always been capable of hurting other humans. And since the harnessing of fire, humans have been capable of mass destruction. Humans have always muddled through, keeping each of us from killing off the whole group with a combination of social convention and power and pleasure and fear and restriction of information and resources and, mostly, making a culture that is more pleasant (at least for those who are capable of causing Big Trouble) than being dead. So far, it has worked well enough (for sufficiently low values of "enough"). One of these days it won't, or the Sun will run out of hydrogen, or a big rock will hit the planet, and we will become extinct. Life is like that.


..................................
Electrical banana is bound to be the very next phase
- Donovan

It doesn't have to be every single thing... (3.33 / 6) (#19)
by Malor on Wed Dec 13, 2000 at 03:59:35 PM EST

All a grey goo device would have to do is bind all the oxygen it could find to whatever was nearby. Oxygen reacts with most things, and with a goo pushing it, you could deprive pretty much the entire globe of oxygen in a few days. That would be quite effective at destroying humanity. Some pockets might survive for awhile, but there probably wouldn't be enough of an industrial base left to split any new oxygen before the reserve supplies ran out.
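The "few days" claim above can be sanity-checked with a rough doubling count (my own back-of-envelope; every figure here is an order-of-magnitude assumption, not something from the comment):

```python
import math

O2_MASS_KG = 1.2e18    # rough mass of free O2 in Earth's atmosphere (assumed)
SEED_MASS_KG = 1e-15   # mass of the single starting nanobot (assumed)
DOUBLING_HOURS = 1.0   # replication doubling time (assumed)

# Doublings needed for the goo's total mass to match the oxygen it must
# process, assuming each generation handles roughly its own mass of O2.
doublings = math.log2(O2_MASS_KG / SEED_MASS_KG)
days = doublings * DOUBLING_HOURS / 24.0
print(doublings, days)  # roughly 110 doublings, i.e. under five days
```

Exponential growth makes the headline number almost insensitive to the assumptions: changing the seed mass by many orders of magnitude only shifts the answer by a handful of doublings.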

The big limitation on any nanotech device is going to be *heat*. Nanodevices do work, and work generates heat as a waste product. Plus, many chemical reactions (like binding oxygen to things) also release heat.

This is a guess, as I'm not trained in the field, but I'm betting that ultimately heat dissipation is probably going to be the Achilles' heel of nanotechnology.

[ Parent ]

OK, not arbitrary materials, oxygen + ? (3.75 / 4) (#23)
by error 404 on Wed Dec 13, 2000 at 06:02:09 PM EST

Still, you need a machine that can copy itself without intervention. It has to be able to start with raw materials in a fairly arbitrary form, like dirt, sand, seawater, or rock, and obtain energy from the environment.

And I don't see that happening soon on any scale.


..................................
Electrical banana is bound to be the very next phase
- Donovan

[ Parent ]

Is the Human creature inherently Evil or Good? (3.41 / 12) (#8)
by Dakkon on Wed Dec 13, 2000 at 01:03:56 PM EST

In the sort of personal empowerment the Cyberpunk writers talk about, they often somewhat sidestep the problem of fundamental human evil, and of homicidal madness.

Your entire article rests on this single statement, and it is a statement that I happen to have some disagreement with.

Before I get into that, however, let me say that I think the hypothetical situation you have created does have a certain possibility and probability to it. Nonetheless, there have been doomsayers in the past about many different new technologies, dogmas, philosophies and so forth. How many of these doomsayers have been right? Not too many, I would guess.

I also think you sidestepped a very important consideration with regard to nanotechnology. You noted that for certain technologies you need certain strange things, such as uranium, that are less than easy to obtain. By the same token, you note that the resources needed for making nanobots are quite a bit simpler. How is the government supposed to control this when all you need is a pile of dirt? The problem I see with this thought is: what makes you think that people are going to have easy access to a nanoassembler? Surely you don't think that such a device falls under your "shoestring budget" anecdote? Sure, the pile of dirt needed to build the nano devices is easy to get, but an assembler would surely cost billions of dollars to build. I feel no concern that a terrorist group would be able to gain use of one of these devices. As for an enemy nation building such a device, well, we've managed to keep most nations in the world from obtaining nuclear warheads, and stopping the spread of nanotechnology should be even easier. Constructing a nuclear warhead is child's play compared to the manufacturing capabilities and infrastructure needed to build a machine capable of constructively manipulating atoms.

This brings me to a point that makes this all moot. From your quote, one can assume that you have a fundamental belief in one of two things. Either, mankind is inherently evil, or there is a level of evil inherent to mankind such that there will always be those who are so deranged that they will stop at nothing to harm others.

I am, for lack of a better term, what is commonly referred to as a "Christian". I have some fundamental and critically important differences with mainstream Christianity, but that is irrelevant for the moment. This is why I disagree with the conclusion/thesis of your article. I choose to believe that mankind as a whole is maturing. I choose to believe that God, through his providence, is slowly but surely leading us to a more peaceful and loving future where all men are treated equally and where hate, and crime, and immorality are so universally disdained that we need not worry about locking our doors at night, let alone extreme things such as terrorism.

I'm certainly not trying to force this belief on anyone, but this is what I believe, and as such, I'm not concerned about paranoid hypothetical extremist situations such as this. Basically, I feel that by the time nanotech and other technologies of this level become available, mankind will have matured enough that this will be less of a concern.

Dakkon

Man is maturing (2.33 / 9) (#15)
by komisch on Wed Dec 13, 2000 at 02:03:28 PM EST

You say that you believe that mankind is maturing and that God is leading us to a more peaceful and loving future. My question for you is this: Why do you believe this? How do you explain the horrors that have swept this world, such as millions of Jews being killed, mass genocide in Russia during Communism, and religious wars that threaten to destroy the Middle East? This sort of thing has happened throughout all of history, so what makes you think that in the next 50 or so years man is going to mature so much that we will not need to worry about the technology being used for evil? I believe that mankind can mature, but I am also a realist and believe that man as a whole is flawed and needs help to achieve maturity. But I do not believe that in the next 50 years we will see a massive change in man's nature that will make genocide and the use of technology to kill others a non-issue.
"You are repose and gentle peace, You are longing and what stills it..." Friedrich Ruckert
[ Parent ]
50 years? (2.83 / 6) (#20)
by Dakkon on Wed Dec 13, 2000 at 04:02:11 PM EST

I don't happen to recall saying or implying anywhere in my post that it would happen in 50 years. But then, I also don't think that we will have nano tech in 50 years. I never said that it was a fast process.

You have to look at the whole of human history to see it. Not just a given 50-100 year segment, but 1000 years or more. Look at the height of the Roman Empire. Rampant slavery and debauchery. People would stay up all night drinking, eating, purging, and sleep all day, only to do it again the following night. Sure, much of the war and hatred that goes on currently in Asia Minor and eastern Europe is horrible. But it's nothing compared to the absolute loathing that some races felt for one another during earlier centuries. How about the Mongol hordes? Killing for the hell of it. Sure they wanted land and wealth, but there is plenty of evidence of the unbelievably cruel acts they committed. Hell, you brought up the genocide in Russia and the Holocaust. Do you think there is a chance in hell that ANY country could ever get away with something that evil ever again? China is bordering on genocide, admittedly, but even they don't dare go too far. Is mankind maturing? You bet your ass they are. Are we going to miraculously start behaving like completely moral creatures with complete love and respect for one another in a mere 50 years? HELL NO! But in 100 years? 150 years? 500 years? 1000? Yes. I believe we will.

How do you explain the horrors that have swept this world such as millions of Jews being killed, mass genocide in Russia during Communism, religious wars that threaten to destroy the Middle East?

I don't know why I am letting myself be baited by this, but I'll answer anyway.

To start with, there is an important question that must be answered, which is the tenet of my entire system of belief. What makes us Human? My answer is free will. To be perfectly clear about this, God has given us the gift of free will: freedom of choice, freedom to do as we please, think as we please. This is what makes us human. If that freedom is taken away from us, then we are no longer human. If God forces us to do something, such as forcing us to love Him, love our fellow man, or believe in Him, then our freedom has been taken away, and we are no longer human. There are of course other things that go along with being human, such as the capacity to give and receive love. But free will is the most fundamental quality of humanity.

Now, suppose for the moment that God forbids anyone from harming another person. Well, what if that's what I want to do? What if I don't give a damn, I want to hit that person because he made me mad! If I don't have the choice to do that, then my free will has been taken from me. And don't throw out arguments about society forbidding me to do that; there is a BIG difference there. If society tells me not to do something, I can choose to ignore it and disobey the law. If God tells you to do something, can you disobey Him? Not unless He allows you to do so. And why on Earth would God allow you to disobey Him? Because if He didn't permit bad things to happen, if He didn't allow people to disobey Him, then He would be taking away our free will, and thus what makes us Human.

Dakkon

[ Parent ]
Plebian (2.33 / 3) (#42)
by 0xdeadbeef on Thu Dec 14, 2000 at 11:54:16 AM EST

People would stay up all night drinking, eating, purging, and sleep all day only to do it again the following night.

And you think that has changed? Apparently you aren't invited to the good parties. :-)

[ Parent ]

Unfair Rating Alert! (1.00 / 2) (#53)
by unfair_rating_alert! on Fri Dec 15, 2000 at 11:24:15 PM EST

The previous comment is currently rated at a 1.83, which shows that it's been rated by several individuals, yet it's insightful and relevant!

---- Canned Text ----

This comment was provided by unfair_rating_alert!, a troll account created strictly to look for intelligent comments unfairly rated below 2.00. You may not agree with the contents of the previous post; however, if you're fair, you should agree that it didn't deserve a rating of less than 2.00. To preserve the integrity of this troll account, no comments from here will be rated, as it's simply too easy to open multiple accounts to stack a rating. The purpose of this account is not to affect or change individual ratings, but to show bias within the rating system. Therefore, this account will not post topical or editorial content, rebuttals, story submissions, comment ratings, or story votes. Readers are encouraged to reconsider a rating and act according to their conscience.

[ Parent ]

Er, about the nukes ... (3.80 / 5) (#25)
by StrontiumDog on Wed Dec 13, 2000 at 06:24:12 PM EST

Well written comment, but the following sentence ...

As for an enemy nation building such a device, well, we've managed to keep most nations in the world from obtaining nuclear warheads, stopping the spread of nanotechnology should be even easier.

... Well, what can I say? In 1945 only the US had the Bomb. In 1946 only the US and the Soviet Union. Now it's the US, Russia, Kazakhstan, Georgia, Ukraine, France, Great Britain, China, India, Pakistan, possibly Israel; among the countries that do not yet have the Bomb but could build one on short notice are Japan, Germany, and Canada; Argentina, Brazil and South Africa have all had rudimentary nuclear programmes; Iraq, Iran and North Korea need active thwarting to prevent them from becoming nuclear powers.

One can only hope stopping the spread of nanotechnology will be as easy.

[ Parent ]

Actually, it's worse than you think. (4.00 / 1) (#55)
by Miniluv on Sat Dec 16, 2000 at 05:16:15 PM EST

Israel does in fact have the bomb, and if Israel does, so does South Africa. SA may or may not have turned its warheads, if they were constructed, over to the UN when it held its first mixed-race elections and apartheid ended. I link these two because historically they've shared weapons technology.

AFAIK Canada does not possess fast-breeder reactors to produce weapons-grade uranium or plutonium. They might be able to get the uranium necessary for a single-stage atomic device, but those aren't useful above a tactical level. Sure, they'd make decent terror devices, but a maximum yield of ~40 kt isn't that serious a threat. Japan is in the same boat, to my understanding, as are the South American countries, though they may have received either material or reactor designs and funding from the USSR during the Cold War.

Design and fabrication aren't the issues anymore when it comes to nuclear weapons; it's purely a matter of acquiring three difficult-to-get components: weapons-grade uranium, plutonium, and tritium. The first two are highly restricted in their sale in any quantity, as you do not need the same quality for nuclear power production as you do for nuclear weapons fabrication. Tritium is available in small quantities with a fair degree of ease, but you need significant quantities to produce a multi-stage nuclear device.

All of the above is, however, irrelevant, because nuclear weapons aren't the only weapons of their ilk. Chemical and biological agents are the coming thing in mass destruction. With genetic engineering becoming so much easier and more effective, imagine what would happen if someone fortified Ebola with some cancer genes to increase survivability in the wild, and perhaps mixed in something from the common cold to increase transmissibility? Nuclear weapons have a limited effect area; even the 500-megaton city-busters that'll spread fallout across a continent cannot compare with dropping a few canisters of a violently contagious virus with an 80% mortality rate in a few international airports.

The problem is that nuclear weapons research doesn't look like much else when you find a pattern of equipment and material purchases. Chemical and bio-agent research looks like... well, a lot of legitimate things. On the surface, the CDC in Atlanta might look either harmless or like a bioweapons research facility. They have all the necessary equipment either to work on making these diseases less deadly, or to make them even worse. The same goes for Merck, Monsanto, Pfizer, et al. How about Ortho weed killer? Its research is remarkably similar to that necessary for fabrication of VX gas. We are reasonably sure Iran possesses these weapons, and we know Iraq does, as they used several during the Persian Gulf War.

Information control is far beyond useless at this point, even ignoring the arguments of entropy and so forth. The knowledge is out there, and you just can't stuff everything back into Pandora's box.

"Its like someone opened my mouth and stuck a fistful of herbs in it." - Tamio Kageyama, Iron Chef 'Battle Eggplant'
[ Parent ]

Actually, yeah it is. (4.00 / 1) (#57)
by delong on Mon Dec 18, 2000 at 05:24:35 AM EST

Israel has several, if not many, bombs. It is widely believed its three subs are nuclear-armed.

South Africa had the bomb, but voluntarily relinquished it years ago.

Iraq would have had the bomb, if the Israelis hadn't had the good sense to bomb the breeder and lab.

Derek

[ Parent ]
alleged nuclear blast in australia (3.46 / 13) (#10)
by TuxNugget on Wed Dec 13, 2000 at 01:26:42 PM EST

My BS detector went off on this, so I thought I'd check it out. If this were just pure conspiracy theory, a lot of kooks would have picked up on it. There really isn't as much crackpot stuff on Google about this as you might expect. There are, however, a few serious investigations.

For example, the US National Science Foundation called it a seismic mystery, after ruling out, of course, the nuclear blast theory.

I do note that this is only a 3.6 magnitude tremor. Those are not exactly rare in seismic areas. So what does an underground nuclear test look like on a seismograph, anyway?

Nuclear tests and seismographs. (4.00 / 6) (#11)
by Christopher Thomas on Wed Dec 13, 2000 at 01:37:07 PM EST

I do note that this is only a 3.6 magnitude tremor. Those are not exactly rare in seismic areas. So what does an underground nuclear test look like on a seismograph, anyway?

Like a single sharp event with very strong locality, instead of the usual event, mushy in space and time, caused by many slippages in different parts of a fault.

Someone was proposing looking for meteor strikes in seismic data a few months back, as they similarly generate sharp events from a very localized region.

[ Parent ]
Crossed lines (none / 0) (#54)
by mesh on Sat Dec 16, 2000 at 09:56:09 AM EST

I'm from/in Western Australia, and I recall a few years ago there was a bit of a fuss about the Aum Supreme Truth cult owning a station in the outback north. IIRC there were possibly some chemical experiments going on, but that was as much as was reported.

The most likely explanation would be that the media coverage simply got various events mixed up, and the story was then further "Chinese-whispered" by conspiracy theorists into a nuclear explosion. The British did carry out nuclear testing in the north-west many years ago; this is probably another link in the twisted story.

Contrary to popular belief, the outback is not that empty - there are many stations, though fairly sparse - and I'm sure it would have been noticed if there were a nuclear blast site.

[ Parent ]

Yours too? :-) (none / 0) (#56)
by khallow on Sun Dec 17, 2000 at 03:55:19 PM EST

Well, when I looked on Google, I ran across this, which talks about new terrorist nuclear threats using Aum Shinrikyo as an example.

"Independent development of a nuclear weapon is the most demanding of these acquisition prospects. Assuming the terrorist organization has access to considerable financial resources, the main challenge in nuclear weapon development lies in obtaining a sufficient amount of fissile material. A small amount of highly enriched uranium (HEU) could be used in a simple gun type design while a supply of plutonium would have to be used in a more complicated implosion device. Aum undertook extensive operations to secure a sizeable uranium supply. In 1993, the group's Minister of Construction began making frequent trips to Australia in search of a suitable mining site. In his notebook, he devoted ten pages to descriptions of the quality of the uranium at various properties.

"With the help of electrodes, laptops and other testing equipment, Aum finally decided on Banjawarn, a 500,000-acre sheep farm remotely located 375 miles northeast of Perth. The group purchased the property for $400,000, along with eight mining leases for $4,700 each. Aum later requested a ship and 44-gallon drums in order to export the uranium, presumably back to Japan to attempt enrichment. On 28 May 1993, Australian observers noted a seismic explosion that sent shockwaves through the area for hundreds of miles.[10] Witnesses in the vicinity of the Aum property reported a bright bluish flash at the time of the explosion. The event was explained as a meteor impact, but no crater was found in the area."

One thing to remember is that actually detonating a nuclear device in the open atmosphere would probably be detectable world-wide - especially by paranoid types like the US military or nuclear plant facilities (paranoid about worker radiation-exposure limits). I speculate that some types of explosives (particularly the kind used to initiate nuclear explosions) may generate this profile: namely, an explosive with an extremely high detonation pressure might generate a sharp seismic pulse. Needless to say, I'm not an expert on blowing things up (honest!), so I may be getting this all wrong.

On the other hand, if a meteor did innocently smack into the ground near the ranch, then they would have a very strong incentive to bury any obvious signs (like a "football field sized" hole as cited in the NSF article above). They wouldn't need that kind of attention.

Stating the obvious since 1969.
[ Parent ]

Read some Neal Stephenson (3.33 / 9) (#14)
by Frigorific on Wed Dec 13, 2000 at 02:01:22 PM EST

He has some interesting ideas about a world where nanotech is common and many people have, in fact, created nanobots that are trying to destroy all life. Others, however, immediately created counter-nanobots, which destroy the life-endangering 'bots. In other words, it turns into just another arms race. The name of the book is The Diamond Age, btw.
Who is John Galt? Rather, who is Vasilios Hoffman?
Unregulated nanotech (3.00 / 1) (#34)
by Spinoza on Wed Dec 13, 2000 at 08:20:19 PM EST

This situation exists in "The Diamond Age" because the means of production (the Matter Compiler) are so ubiquitous. Also, the world is largely balkanised into various distributed nations. This is in some sense akin to a world composed of tiny nations which all have nuclear weapons technology. I think it is fairly safe to assume that if the risk of widespread nanotech is sufficiently great, the nations in which it is developed will probably move to prevent it falling into anyone else's hands.

Something more like the situation in William Gibson's "Idoru", where a nanotech device is considered equivalent to the most devastating nuclear weapon it could create, is a more likely scenario.

If you are looking for a futuristic vision of a nanotech world, however, "The Diamond Age" is certainly Stephenson at his best.

[ Parent ]

No ban will work (3.66 / 9) (#26)
by weirdling on Wed Dec 13, 2000 at 06:24:33 PM EST

The Molotov cocktail is a type of weapon that is very easy to make, easy to understand, and widely used. Can you ban it? No ban will realistically work. Despite what people think, making a nuclear bomb isn't really banned; owning large amounts of high-grade radioactives is. Banning the ownership and sale of U-235 and other necessary elements is what keeps Joe Schmoe from blowing up his basement, er, neighborhood when he accidentally reaches critical mass.
Anyway, in the future it will be much harder to stop things: get a fabricator, feed it raw materials - steel BBs, for instance - and get a gun out the other side. Technically, no laws were violated, because the federal serial-number laws apply to guns that are sold, not to ones you make yourself. Now, local laws may have been violated...
What's more, the gun can be used in a crime and discarded, and be very hard to trace back to the fab. Technology will always make last generation's superweapons available to the average tin-pot dictator and the generation before's to the masses.
The solution is to get our eggs out of one basket (earth).

I'm not doing this again; last time no one believed it.
That ain't no solution (4.00 / 3) (#28)
by StrontiumDog on Wed Dec 13, 2000 at 06:48:15 PM EST

The solution is to get our eggs out of one basket (earth).

Starting a colony on Alpha Centauri doesn't make wiping out N billion people on Earth any better, nor does it prevent the same thing from happening to the good citizens of A. Centauri. In fact AFAIK it was Fred Saberhagen who introduced the concept of Berserkers: self-replicating spacefaring machines that spread through the universe, using material from any solar system they encounter to make copies of themselves, whose sole mission is the destruction of all life they encounter. That's the grey goo problem, only on a galactic scale.

[ Parent ]

To what end? (3.00 / 1) (#37)
by ZanThrax on Thu Dec 14, 2000 at 12:15:22 AM EST

As I understand it, the grey goo problem is considered to be the result of an accident, or at least bad programming. I can't see how Berserkers could be anything but an intentional act. Given this, I can't see any reason for releasing such a thing into the universe. Why would such a thing be done, especially considering the risk of self-destruction if the Berserker or one of its offspring comes back to the originating system, or one of its colonies.

Before flying off the handle over the suggestion that you're a cocksucker, be sure that you do not, in fact, have a cock in your mouth.
[ Parent ]

The only difference ... (3.00 / 1) (#38)
by StrontiumDog on Thu Dec 14, 2000 at 05:15:03 AM EST

... is one of scale. The intent is relatively unimportant; if accidents involving planet-wide grey goo can happen to a Type I civilisation, they can happen to a Type II civilisation with interstellar grey goo. Similarly, Earthbound terrorists can unleash evil nanos just as easily as galactic terrorists can unleash Berserkers.

Er, did I just say "easily"?

[ Parent ]

Berserkers are much harder to build (none / 0) (#46)
by weirdling on Thu Dec 14, 2000 at 02:53:14 PM EST

Technology may make a simple grey-goo situation possible, but I doubt it; in any case, a space navy can see a berserker coming. Perhaps some long-term, long-range, very small, hard-to-see device could be made, but that would require technology vastly in excess of the common military's, and terrorists seldom have that level of capability. Essentially, a berserker created by terrorists would have to be powerful enough to defeat a standing navy built by presumably better-financed and higher-tech military types.

I'm not doing this again; last time no one believed it.
[ Parent ]
Age old conflict (3.25 / 4) (#40)
by QuantumAbyss on Thu Dec 14, 2000 at 10:34:01 AM EST

At base I think this is a conflict over who has the power. There is a hidden assumption in what you are saying: that a government will act more reasonably with weapons of mass destruction than an individual would. Sometimes that is true, sometimes it isn't.

What we're facing here is something akin to the Cold War, but on an individual scale. If the means and information to build weaponry are in the hands of everyone, then there are only a couple of options to control it - and they will create themselves naturally, as they always have. One is that protection can be bought by those who have enough money (this can happen on many scales). Another is that governments will still have the biggest and the best, and every time an offensive technology is created a defensive technology is soon to follow. There are many other associations that will come about aside from this. Could these new technologies throw sections of the world into some pretty bad states? Yes, just as every major shift in technology has. Some win, some lose.

Of course the thing that is really concerning you is ultimate destruction. Well, ultimate destruction is always a possibility. But controlling information and access to resources doesn't always work. It worked with nuclear weapons because there was a substance that could be easily controlled - now that control is slipping. Who knows, such a tactic might work for nanotechnology as well, but probably not in the long run. The alternative to a controlled structure is leaving the base technology that allows some sort of defensive action out in the open. Yes, we may all need to be walking around with our own little "nano-defenders" inside of us. Does it leave people who are poor in a bad position? Most likely. Is there anything we can do about it? Probably, but we're talking revolution, and it may take the arrival of the technology itself before that is doable.

Finally, I'd like to get back to what I said at the beginning. You are assuming that governments will act more ethically with these technologies than other groups. That isn't always true. While a government isn't likely to wipe out all life, it might do other things. To an extent people need to be empowered with technologies so that they can defend themselves from the government. I don't care if the government wants to limit my access to something material - I do care if they want to limit my access to information. Information is power and without a reasonable amount of information kept in the hands of everyone, groups like the government are likely to get out of control.

Science is not the pursuit of truth, it is the quest for better approximations to a perception of reality.
- QA
Destructive Individuals vs. Governments (3.66 / 3) (#44)
by Agripa on Thu Dec 14, 2000 at 01:06:50 PM EST

Given sufficient information and determination, one can make explosives with modern levels of effectiveness. TNT, nitroglycerin, tetryl, and lead azide are by no means obsolete and RDX (used in C4) is a very real possibility if either you can get the precursors or get set up to make them. The selection of modern explosives used in the military and industry is more dependent on a nation's industrial base and raw materials than the manufacturing process itself.

Small precise milling machines suitable for making everything in a modern firearm except the barrel are available to anyone. The barrel itself is easy to make on a lathe but the rifling requires considerable modification of the lathe and takes quite a bit of time to get right.

Dig up a TWT or klystron and a directional antenna to pump out 100 kilowatts effective radiated power and you can hose everyone's cell phone within a block. Jamming all of the cell phones over 20 square miles is even easier. Spread spectrum modulation schemes work well, but when you use up all of your processing gain so that you can have more users and lower power for a given range, you make yourself susceptible to interference.
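The processing-gain trade-off described above can be sketched numerically. This is the standard direct-sequence link-budget arithmetic; the chip rate, data rate, and required SNR below are illustrative values chosen for the example, not figures from the comment.

```python
import math

def processing_gain_db(chip_rate_hz: float, data_rate_hz: float) -> float:
    """Processing gain of a direct-sequence spread-spectrum link, in dB."""
    return 10 * math.log10(chip_rate_hz / data_rate_hz)

def jamming_margin_db(gain_db: float, required_snr_db: float,
                      losses_db: float = 0.0) -> float:
    """How much stronger than the desired signal a jammer can be
    before the link drops below its required SNR."""
    return gain_db - required_snr_db - losses_db

# Illustrative numbers: a 1.2288 Mchip/s spreading code carrying 9.6 kbit/s.
gain = processing_gain_db(1.2288e6, 9600)          # roughly 21 dB
margin = jamming_margin_db(gain, required_snr_db=7.0)
print(f"processing gain {gain:.1f} dB, jamming margin {margin:.1f} dB")
```

Raising the per-user data rate (more users, less spreading per user) shrinks the ratio inside the logarithm, which is exactly the used-up-processing-gain vulnerability the comment points at.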

With the exception of super-difficult-to-procure materials, governments will be ineffective in their control without a police state. Historically, the cost of police states in the twentieth century has been at least 55.9 million lives in various genocides. Given this number, I am hardly worried about individuals having access to anything up to nuclear materials, and even then I am not sure I would care about anything that was not useful in a fission reaction. Incidentally, it is not the fission or fusion bomb that bothers me, but the radio-cobalt or radio-strontium bomb.

We seem to have a shortage of intelligent criminals or anarchists bent on mass destruction. I am more worried about terrorist nation states using exotic means of destruction and our own government using some not so exotic methods. They are both a lot more difficult to control and have resources far exceeding an individual's.

By the time nanotech fabrication reaches the level of the milling machine that fits in my garage, I expect the equivalent of airport xray machines and drug sniffing dogs (not that I agree with either of these) to be ubiquitous in an attempt to save me from myself.


Grey Goo coming to a theater near you. (3.00 / 3) (#47)
by johndhays on Thu Dec 14, 2000 at 05:18:47 PM EST

Go down to the video store, and in the Sci-Fi section you will find most of humanity's apocalyptic visions. Which of these movies accurately predicted the present-day terrors we all face?

Was it Soylent Green, ZPG, Them, The Andromeda Strain, Godzilla, A Boy and His Dog, Running Man, 2001, The Forbin Project? Hell, even Star Trek: The Next Generation has been surpassed by present-day tech.

Every geek wants Blade Runner but fears 1984. The future will not turn out like you want or fear. Before nanotech gets off the ground it might be surpassed by an even cheaper, safer, more powerful technology and all your worry will be for nought. Meanwhile HERF guns and biotech weapons are a pipe dream.

Meanwhile, I think I'll go cultivate some micro-organisms to produce a volatile, toxic substance. Beer.

Watch out for giant radioactive lizards!



Pollution will kill our Society fastest (3.00 / 2) (#49)
by turtleshadow on Fri Dec 15, 2000 at 12:29:19 AM EST

Your insights into nano technology and viri are good. However technology relies on physics and chemistry to exist.

The substantial amount of environmental pollution that is occurring is a far greater and longer-lasting threat than any nanobot.

Chemicals that alter endocrine interaction affect all life; these chemicals simply can't be hosed down with bleach and are nearly impossible to filter out of the environment. Disrupting proper endocrine interaction impairs or destroys the basic tenet of life -- reproduction.
Nanobots and viri, it could be argued, can be manufactured to do the same; however, these delivery methods are highly volatile and could be counteracted in time.

Chemical reactions just need the right ingredients and conditions to exist -- no program required!

From what I understand, the chemicals that harm endocrine interactions just have to be exposed (contact, inhaled, ingested) to the subject. Some, in enough quantity, act immediately and irreversibly -- often before the realization that exposure has occurred and treatment is sought. Others are cumulative; daily doses are hard to monitor but ultimately result in an "adult" that is dysfunctional and/or unable to reproduce.

I believe ALL mammals, avians, fish, and insects share some common endocrine processes, so the food chain impact is quite real.
You're right, all that grey matter should be focused --- on cleaning up the environment, as without a sustaining biosphere for Man -- all external technology threats are moot.
Regards,

Turtleshadow

re: Pollution will kill our Society fastest (none / 0) (#52)
by pallex on Fri Dec 15, 2000 at 09:44:03 AM EST

"Your insights into nano technology and viri are good " Well, Bill Joys are, from the essay (somewhere at wired.com) `Why the future doesnt need us`, anyway. Credit where credits due and all that...

[ Parent ]
Underestimating humans (3.00 / 1) (#51)
by JonesBoy on Fri Dec 15, 2000 at 09:40:48 AM EST

<ahem> but I have a problem with your nanobots, HERF and "supertechnology". Why would a nanobot be any more destructive than a good ol' virus? Who gives a damn about someone jamming a cell phone? I would actually enjoy that one. If push comes to shove, someone running around blowing up electronic devices will meet someone with a low-tech device like a pointy stick. I put my money on the low-tech guy walking away.

Sarin, anthrax, and plague are all wickedly easy to come by, easy to manufacture, and particularly nasty if released. They are nothing new, either. They used germ warfare in the dark ages! They would get a diseased person into an area and just keep them trapped until their "typhoid mary" got everyone sick. Yet none of this happens today. I think your homicidal-maniac model of the human is way off. We are actually quite compassionate. We have all these abilities for mass destruction, yet nobody uses them. If some quack does try, I bet nobody would help; they would probably turn that person in. Humans have a conscience for a reason.

Blaise Pascal said, if we all stand still nobody will get hurt. Progress will always lead to potential bad things, but that is no reason to stop progressing. I doubt if any of your worries will come true. Only in James Bond movies will people use some complex scientific process to end the world as we know it. The true sickos will use low tech weapons to kill by their own hand. After all, if they kill with a knife, they did the killing and they are infamous (jack the ripper). If they kill with a nanobot, the nanobot is the danger and they are forgotten (go ahead. name the guy that dropped the A-bomb. Or assembled it.).


Speeding never killed anyone. Stopping did.
(a bunch of) replies (3.00 / 1) (#58)
by anonymoushero on Mon Dec 18, 2000 at 08:04:09 PM EST

> Why would a nanobot be any more distructive than a good ol virus?
   Ignoring your lack of spelling...
   Because nanobots aren't naturally evolved. If a virus evolves (mutates) so that it's *very* fatal, it does *not* get passed on, and hence doesn't get reproduced. We're talking about a system that bypasses that and can make billions of viruses that would never reach such a population density if formed naturally. Also, thrown in for good measure, this is a designed solution, not an evolved solution. All viruses come from something that was interested in staying alive in its current environment -- and most don't have multiple methods of doing stuff, unless they needed them for survival. A natural virus wasn't 'created' for a purpose other than survival, whereas nanobots could/would be.



> Meanwhile HERF guns and biotech weapons are a pipe dream.
   I wish all my pipe dreams were as real...

      Biotech:
   Tell that to the 12 people who died, and the 3,794 (5,300) injured people who had sarin gas 'spread' on them on the subway lines in Tokyo in 1995.
   Tell that to the 68 people and hundreds of animals who died of anthrax in and around Sverdlovsk, now known as Ekaterinburg, an industrial city in the Urals, in April 1979.

      HERF:
http://www.cadre.maxwell.af.mil/warfarestudies/iwac/IWCBT/EWherfcanal.htm

> In the early fall of 1992, a U.S. naval ship entering the
> Panama Canal forgot to turn off its radar systems, which
> operate on the same principle as HERF, but in the form of
> HPM, or high power microwaves. In this case the Canal
> Zone computer systems got zapped! The radar hits were so
> strong that nearby computers were fried and had to be
> replaced.

   Chechen field commander Salman Raduyev's gang had used, during its raid on Kizlyar, a device to disable the police radio communication system. As a result, lack of coordination sharply decreased the effectiveness of the actions taken against it by the law enforcement bodies.

   I could go on...



> [BBs -> guns -- could] be used in a crime and discarded,
   You mean they could be used, and then returned to steel BB form... Who keeps evidence around anyways?



> created nanobots that are trying to destroy all life.
> Others, however, immediately created counter-nanobots,
   And how fast do you think this would be? Fast enough to beat exponential replication (assuming power sources for nanite reproduction)?
   Here's someone who obviously believes as you do:

> governments will still have the biggest and the best and
> everytime an offensive technology is created a defensive
> technology is soon to follow.
   Like the defenses to Nukes?

   Umm, the only defense we have is MAD. Which is purely psychological... And doesn't work on Saddam/crazy-types. Or even versus sneakiness: say you (Chinese) sneak an ICBM into, oh, Russia, from one of the breakaway republics and launch it at the US. Bing! End of *two* problems.

   The problem with military technology is that the offensive always leads, and defenses *may* catch up.

   See the comment above for the time-restraints on defenses which are *necessary*.



   The power issue thread has a point, but lots of things (besides sunlight) can be used for power. Heat, biologicals, fuels (sugar, petro), as well as electrical energy.

   Problem w/oxygen binding is that plants produce more from CO2, so you have to nuke that too. If liberating oxygen (from CO2) weren't so energy intensive, I'd say kill the plants all over the world by getting that 'free' carbon (for nano-tubes, diamond coating, food) from the 'free' CO2. I mean, who really needs a plant/biosphere anyway...


An aside:
   Nations that were/are nuke powers: South Africa, Israel. I *think* Israel has something like 160 nukes? South Africa gave up the 6 secret nuclear weapons they'd developed when the last white president found out about them. But I can't find a date for this, as de Klerk(?) did it in secret.
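The exponential-replication question raised earlier in this comment ("fast enough to beat exponential replication?") can be made concrete with a toy model. Every number here -- the doubling time, the response delay -- is invented purely for illustration, not a prediction about any real system:

```python
def replicator_population(t_hours: float, doubling_time_hours: float,
                          seed: int = 1) -> float:
    """Population of an unchecked self-replicator after t hours,
    assuming unlimited feedstock and a fixed doubling time."""
    return seed * 2 ** (t_hours / doubling_time_hours)

# Toy scenario: one replicator doubling every hour, with a counter-measure
# that takes two days to detect, design, and deploy.
head_start = replicator_population(48.0, 1.0)
print(f"population when the counter-measure arrives: {head_start:.3g}")
```

The only point of the toy model is that a fixed response delay is paid for exponentially, which is why any plausible counter-nanobot defense would have to be pre-deployed rather than improvised after detection.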


attribution: -- Ender, Duke_of_URL (none / 0) (#59)
by anonymoushero on Mon Dec 18, 2000 at 08:07:31 PM EST

Always helps to include who's posting :)

[ Parent ]
Mail me (none / 0) (#61)
by Carcosa on Tue Dec 19, 2000 at 10:19:26 PM EST

Whoever wrote the original reply (Ender?) get in touch, I'm interested in some of your sources.

stexaxaveb@herd.plethora.net

xaxa is antispam

[ Parent ]
South African Nukes (none / 0) (#62)
by anonymoushero on Sat Aug 11, 2001 at 01:21:24 AM EST

In April 1997 South African Deputy Foreign Minister Aziz Pahad admitted that an unexplained nuclear explosion detected in the Indian Ocean on 22 September 1979 was a South African nuclear test.

South Africa destroyed its arsenal before 10 July 1991, and announced the program on 24 March 1993.

-- Ender, Duke_of_URL

[ Parent ]
I'm kind of surprised (3.00 / 1) (#60)
by Carcosa on Tue Dec 19, 2000 at 10:15:34 PM EST

I'm surprised how many people posted dissenting opinions in reply to my original post.

There seemed to be three general threads:

1) It can't happen technologically, Carcosa is a crackpot
2) It can't happen because human beings are too fundamentally good
3) Carcosa is against Information's Desire To Be Free and is therefore EVIL, maybe more so even than Bill Joy.

To objection #1, I rebut:
Bioweps not a reality? Do some reading, but not too soon after a meal.
I especially suggest that you look into Unit 731 and Doctor Ishii. Biotech weapons are being used on a daily basis.

As for the idea that biotech weapons can't eat a biosphere because viruses (it's not virii) can't, as the post below this one states, viruses aren't designed. They rely on host populations to spread and maintain an existence. Thus, they've evolved in fact to _maintain_ host populations instead of getting rid of them completely. Layman reference: The Selfish Gene by Richard Dawkins.

Ethnic weapons are coming. They may already be here. It appears that South Africa has a weapons program very interested in this, and it's possible that Iraq's engineered camelpox virus has a preference for those outside the ruling Iraqi ethnic line. Reference: The Plague Warriors; forget the author, think it's Presidio Press, 1999.

Rebuttal to #2: Hiroshima, Nagasaki, Auschwitz, Stalin. How do you like it that Aum had/has a bioweapons program? How do you like it that Aum designed much of the new Japanese missile defense grid? (Once the J. gov found out, it had to be scrapped. The hugely expensive grid, that is...)

#3: I want information to be free; at the same time free information may have a price and it may be more than the normals are willing to pay.

Wake up, people, the world's changing real quick and it's not all Mondo 2000 techno-ecstasy. It's up to us-- and we ARE an intellectual elite of sorts-- to find common ground with other cultures, and to build controls into the technologies we design, before more problems start. I'm hoping that I'm wrong about my predictions on this kind of thing, but I've got a sinking feeling.

Free Information and the Grey Goo Problem | 61 comments (61 topical, 0 editorial, 0 hidden)