It used to be that you could buy dynamite at the hardware store. Farmers used it for blasting stumps, and it wasn't much controlled by the government because it wasn't much of a problem. People would buy it, they'd blast their stumps, and occasionally they'd blast their limbs into stumps, but that was considered an acceptable risk. Then it became apparent that unrestricted access to dynamite posed a danger: criminals could use it, there were other ways of pulling stumps, and there was the risk of accidents. So dynamite got regulated.
Nowadays, you can buy things like petri dishes and chemistry equipment fairly easily. An amateur can assemble a lab with everything she needs to perform fairly advanced chemistry, and possibly to produce munitions at a 1940s level of technology, given sufficient information and determination. The information is available on the net and from various publishers (Loompanics, Amok, Paladin Press), and even in many used bookstores it's possible to find things like Special Forces field manuals on improvised munitions.
And there are problems, and there are men like McVeigh who are willing to sacrifice other people's lives for their own distorted principles. The government regulates what it can, but explosives are very simple things. When explosives can be made from something as common as urine, how do you really stop people from producing devices to hurt others?
And what about near-future tech like HERF devices? How do you stop someone from rewiring a microwave transmitter to throw a pulse designed to fry electronics? Given the information, the determination, and some rudimentary technical knowledge, it might not be a difficult feat. Say such a device could fry or disrupt electronics within a hundred-foot radius. When you consider information hubs, modern data storage methods, and lines running through central trunk sites that serve literally millions of people, a day-long service disruption has massively widespread effects: like an electrical shock to a nerve ganglion, systems far away are affected.
I'm not going to talk about biotechnology here because the implications are obvious.
Then think about middle-to-far-future technologies like nanotech, which is tantalizingly close to being within our grasp. Nanotechnology is a desktop science, a very fundamental manufacturing paradigm that could be practiced, given sufficient information, on a shoestring budget. Let's say you have a nanotech assembler, a machine capable of fabricating machines of arbitrary design on a molecular scale. There are people you don't like, for one reason or another. Maybe you've decided that the whole biosphere needs to go. So you produce a little device that can replicate itself. It's tiny, minuscule, sub-microscopic: the size of a virus. It's programmed to replicate itself for a little while, then stop replicating for a month or so, then start replicating again.
You give it a pile of dirt to eat, in a large glass tub. Within seconds, the dirt's transformed into an unbelievably fine black dust. You take the dust and you throw it into the air near an airport or in a subway, or you mix it into a supply of food or drugs or clothing. Your agents are carried all over the world by travellers and by the wind. Then, a month later, they wake up again and mindlessly begin devouring every organic molecule they come in contact with, turning it into a substance like themselves.
It's ice-nine. It's potential Armageddon. It's the big dirt nap for everything on the planet, far worse than nukes.
Like it or not, nanotech is coming. It will be military first; any new technology must first be tested by using it to hurt people. Then we'll see nanotech tools in the commercial sector. There may be assisting developments in the technology tree, perhaps in chemistry or bioscience, that will make it easier to develop and implement nanosystems; the ingress of nanoscale manufacturing into our lives may well happen very quickly, because much of the R&D groundwork is already done. But what if Russia gets it first, or Iraq? What becomes of the Great Satan and all its servant nations then? So the military has a mandate to develop this kind of science, and to do it quickly: we must not allow one of our enemies to develop an assembler first.
So how do we control this kind of technology, which can spread massively under its own power: devices that can manufacture themselves for free and cause damage on a planetary scale? In all their talk of personal empowerment, the cyberpunk writers often sidestep the problem of fundamental human evil, and of homicidal madness.
You'll notice that the common denominator in many of the preceding paragraphs was the word INFORMATION.
How many incidents of domestic bioterror, or electronic HERF-pulse terror, or nanotech-assisted crime -- and they'll be gruesome and mediagenic -- will it take before the public cries out for the government to do something, anything, to keep them safe?
How can the government possibly regulate this sort of technology once it spreads? How can it prevent lone wolves, or small organizations, from procuring technology infrastructures that cost very little to produce and don't require anything exotic, like uranium? Of course the government will know it can't do a thing about it, but it'll pass widespread legislation anyway, to "keep us safe" and to consolidate the power bases it has held for hundreds of years, which are suddenly looking like beachfront mansions in Miami: the kind you see down by the Keys, half-submerged.
Did you know that Aum Shinrikyo appears to have detonated a nuclear device in the Australian outback in 1993? These are the same benevolent angels, remember, who released sarin in the Tokyo subway in '95, IIRC. Not the sort of people you want to trust with supertechnology, or with the information to produce it.
The point is this: if information wants to be free, and we can't stop it from being free now that it's spread its electronic tributaries and wave tendrils throughout our society, then what are the implications for human society? How do we reconcile Kuro5hin's ideals of information freedom, human privacy, and personal empowerment with human evil and the rise of technology -- which WE ARE ABETTING -- capable of producing grievous effects? And don't just reply "there'll be technological safeguards built in," because you all know as well as I do that it's far easier to destroy than to create, and that the initiative rests with the sudden attacker.