The Coming Technological Singularity

By Pac in Technology
Tue Nov 14, 2000 at 01:16:02 PM EST
Tags: Science

Vernor Vinge is one of the best science-fiction writers working today, the author of "A Fire Upon the Deep" and its prequel, "A Deepness in the Sky", both Hugo winners. I have recently (ok, yesterday) found and read Vinge's essay The Coming Technological Singularity: How to Survive in the Post-Human Era.

"Singularity" is not a work of fiction but an essay about the near future (2005-2030) possible technological advances and specially the coming of the Singularity, the post-human intelligence.


The main idea of the essay is, to quote the abstract, that "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." The idea is a powerful one, and the essay's tone of inevitability inspires more sadness than fear. As Vinge puts it, "I'd be more comfortable if I were regarding these transcendental events from one thousand years remove ... instead of twenty."

But is it really inevitable? Will post-human intelligence arrive in such a short time frame? And will it necessarily wipe the human race as we know it today from the face of the planet?

One criticism I could put forth against the essay's thesis is that it draws mainly on evolution for its conclusions about the possible outcomes of the Singularity. Would it not be possible for a higher intelligence to also have a higher moral sense? Vinge discards this possibility quickly, but isn't there a chance that we could develop Iain Banks's Minds instead of the instruments of our own extinction?

Poll
If another species ever becomes dominant on Earth, humans will
o Be extinct 5%
o Become Matrix-like parts of biological machines 3%
o Be shown in freak shows throughout the galaxy 9%
o Close their eyes to see if it goes away 22%
o Ask for a manual recount of neurons 6%
o Be calm, cool and collected until everything falls into place 2%
o Call 911 0%
o Get a six-pack, turn the TV on and watch the end of civilization from the couch 49%

Votes: 131

Related Links
o The Coming Technological Singularity: How to Survive in the Post-Human Era


The Coming Technological Singularity | 48 comments (33 topical, 15 editorial, 0 hidden)
Singular misconception (4.15 / 13) (#8)
by jabber on Tue Nov 14, 2000 at 12:25:59 PM EST

It is often said of the Singularity, that it will be the end of Culture, Civilization and Humanity as we know it. The "as we know it" part is usually said in a low, muffled voice, so as to make the first part of the statement seem more dramatic.

My view is that the Singularity won't end anything. Becoming coupled with AIs, bionics, whatever-hot-tech-you-wish, is only going to extend the current trend of human evolution.

We are already dependent on technology. We don't work the land with our bare hands, we wear glasses and pacemakers, we make phone calls and search the Internet with Google. We sit in tin cans that orbit the planet and we extend our senses to the very edge of the Cosmos with Hubble, Chandra and our other sensory-amplification prosthetics.

Places like K5, IRC, and even the venerable newsgroups have already done much to bring humanity a few steps closer to a singular 'group-mind' - but these new technologies are only continuing the trend set in motion by the printing press, the cave paintings and the concept of language.

Yes, it may come to pass that mechanized reasoning will render human decision-making moot. After all, if Deep Blue could beat Kasparov at his own game, then once we understand human decision-making in an abstract sense, we ought to be able to make a machine that applies that thinking at a rate of several gigahertz. How is this different from using a knife instead of our teeth?

And if the machines that we design to do our thinking for us one day become independent and decide that we are extraneous... I'll blame the programmers. :)

[TINK5C] |"Is K5 my kapusta intellectual teddy bear?"| "Yes"

Maybe forgotten... (none / 0) (#44)
by Malachi on Fri Nov 17, 2000 at 10:21:24 AM EST

While we do not till the farms with our hands anymore.. our migrant workers might.. or maybe their distant relatives.. since only 50% of the world has a working telephone, isn't it a bit overdramatic to think that the world is a speeding bullet? I think America and parts of Europe are trendsetters, but we have way too many impoverished people in the world to think that we're going to eradicate, educate, and enhance the whole damn thing.. Wars exist right now that you don't know about.. tens die every second because of something man is helping to induce or fails to fix properly.

If the bad could realize they could do good as easy as they do bad, the world would be a better place.

We've got a long road to travel. I'm not sure where we're going, but it's going to be a very rocky ride.

Keepin it real,
-M
We know nothing but to ask more questions.
[ Parent ]

Geh... (3.10 / 10) (#13)
by trhurler on Tue Nov 14, 2000 at 01:02:20 PM EST

Singularity people piss me off. These are the same people who said travelling faster than the speed of sound would destroy civilization, and the same ones who said detonating a nuke would destroy the entire atmosphere, and the same ones who said that everything from genetic engineering to the water wheel was going to be the end for all of us.

They've always been wrong. They'll be wrong again. Just watch and wait. Artificial intelligence may or may not happen, but humanity will go on, and we'll be better off, too. I am willing to bet that I will live twice as long as my parents, or even longer, and that I'll spend the majority of that time reasonably wealthy and comfortable.

George Thorogood has good advice for these people: Get a haircut, and get a real job. Oh, and switch to mochas - those lattes are getting boring. And wear something besides black suits and string ties. Jesus.

--
'God dammit, your posts make me hard.' --LilDebbie

Humans are invulnerable? (3.33 / 3) (#22)
by SIGFPE on Tue Nov 14, 2000 at 03:46:37 PM EST

These are the same people who said travelling faster than the speed of sound would destroy civilization
Do you have a reference for this?
the same ones who said detonating a nuke would destroy the entire atmosphere
Do you have a reference for this also? And an explanation of how exactly you would have demonstrated this to be false at the time.

I'm trying to figure out what you're trying to say. Are you merely asserting that humans are invulnerable or is there a deeper message that I'm missing?
SIGFPE
[ Parent ]
Cites? (3.25 / 4) (#25)
by trhurler on Tue Nov 14, 2000 at 05:14:06 PM EST

Do you have a reference for this also? And an explanation of how exactly you would have demonstrated this to be false at the time.
I lack both references, but there were people who were afraid of both. The point isn't that they are -literally- the same people, because obviously most of those people are dead. The point is that the attitude is the same. As for how you'd prove that a nuke won't destroy the atmosphere, that's quite simple. You apply the laws of thermodynamics. You know the theoretical maximum yield of your nuclear device. You know the thermal properties of air, and you can estimate heat transfer from the atmosphere. You have a reasonable estimate of how much air there is. The result, if you do the calculation, is that a nuke would have to be big enough to literally vaporize the planet to have even a chance of destroying the whole atmosphere, and no nuke anyone could conceivably build can do that. All that math was available at the time; the people doing the wailing were incompetent.
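
To make that concrete, here is a minimal sketch of the arithmetic in Python. The yield, atmospheric mass, and heat capacity are round-number assumptions of mine, not figures from anyone's actual safety analysis:

  MT_TNT_J = 4.184e15          # joules per megaton of TNT
  yield_j = 100 * MT_TNT_J     # assume a 100-megaton device, larger than any ever built
  atmosphere_kg = 5.1e18       # approximate total mass of Earth's atmosphere
  cp_air = 1005.0              # specific heat of air, J/(kg*K)

  # If the entire yield went into heating the whole atmosphere evenly:
  delta_t = yield_j / (atmosphere_kg * cp_air)
  print("mean temperature rise: %.1e K" % delta_t)   # about 8e-5 K

Even if all that energy somehow stayed in just a thousandth of the atmosphere, the rise would still be under a tenth of a degree.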

Now, if you want to claim that I can't mathematically demonstrate the impossibility of a singularity at this point, you're right. However, once you have a technology in mind, then maybe someone can. For now, advocating a singularity is like claiming the existence of God - as long as you don't have to say just what it is or how it works, nobody can refute you, but that doesn't make you right.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
You need a better argument (3.00 / 1) (#28)
by SIGFPE on Tue Nov 14, 2000 at 07:14:29 PM EST

You apply the laws of thermodynamics...the people doing the wailing were incompetent.
Absolutely. But I think that in this case the people 'wailing' aren't all incompetent and they're also not simply doomsayers. These views are shared by many professionals in computer science. They are making a point that is more deserving of a direct rebuttal than simply the hand waving "sounds like what lots of other people have said in the past" argument that you've given. People arguing for the existence of a singularity are not simply making an assertion like "God exists". They usually describe a plausible step-by-step scenario for how such an event can come about.
SIGFPE
[ Parent ]
A plausible step by step scenario? (4.50 / 2) (#30)
by trhurler on Tue Nov 14, 2000 at 09:57:07 PM EST

Yeah, with a big "and then a miracle happens" in the middle, maybe. The fact is, nobody anywhere today has any idea of how to construct a viable artificial intelligence. Forget the design; nobody could fabricate it if he COULD design it.

That said, even if we assume that their big miracle occurs, we then have two possibilities.

One, the intelligence is fundamentally like us: it organizes concrete entities into concepts and concepts into more abstract concepts. If this is the case, while it might be better at this than we are, it might not be; we don't have any way of predicting that right now. If it isn't better, then their scenario won't work, which now means we have an assumption AND a miracle. If it is better, then their scenario would work if and only if there was some compelling reason this thing had interests which were necessarily opposed to our own. That seems unlikely. Unlike us, it won't die, which means it has no need to procreate. It will consume minimal resources compared to a human being, which means that it can probably provide for its own sustenance in its spare time. The environmental threats we consider so important are irrelevant to it; it has no need for nature. Odds are, it would be a lot like a lazy, hyperintelligent, immortal human being, only lazier and more intelligent. It might be very active mentally, and might make a good scientist or whatever. However, being immortal and essentially without any need for resources tends to cut off all the reasons intelligences do things like war on one another. (For that matter, it tends to cut off the will to live, but that's another matter.)

Or, two, the intelligence is fundamentally unlike us. In this case, speculation as to its nature is totally premature; we have no idea what it would be like, what goals it would pursue, the means it would use to pursue them, the capabilities it would have, and so on. These could range from unbeatable and aggressive to nearly helpless and passive. We would have no way of knowing this until we knew how to build it, and maybe not even then.

In other words, anyone telling you some hyperintelligent mother brain is going to come along and kick our asses is talking out his own; that scenario is no more or less likely than any other future you can make up which doesn't totally violate any known laws of physics. I don't care how respected he is, how many degrees he possesses, how old his slide rule is or how new his computer is - he's talking to attract attention and to hear the sound of his own voice.

How can I be so confident? Simple. This idea started with evolution fanatics. I don't mean ordinary people who believe in a valid scientific theory; I'm referring to people who have no sense of reality and think evolution occurs independent of evolutionary pressures. To them, being is reason enough to do anything and everything to make yourself supreme on the "evolutionary ladder." What they don't realize, with all their degrees and so on, is that once you reach the point where your environment has minimal effect on your survival, evolution is irrelevant. Human beings are probably not evolving much anymore, if at all. We probably never will, short of engineering ourselves. Any intellect we create will be in a similar position, but will end up being more durable and less impacted by environmental concerns even than we are. The only hope we have of ever naturally evolving further, and in this case further probably does not mean in ways we'd find desirable, is to put ourselves into extremely hostile environments and live most or all of our lives there. The same goes for this superbrain they're dreaming about. I'm talking, REALLY hostile, too. As in, most of us would die.

Yes, I've seen the "small stupid things aggregate into bigger smarter things" scenario, too. First off, there is no evidence that this can happen; biological intelligences certainly don't do it, and the odds are that once a "brain" is formed, you can't just randomly smack on more pieces like plugging stuff into expansion slots in your PC. As a result, the odds of this scenario working are about zero. And secondly, even if they did, they'd then be smart, which gets you back to all my earlier points. The funny thing is, movie makers understand this implicitly, but so-called experts don't have a clue: without pressure, evolution doesn't exist. Without evolution and/or high resource demands, conflict doesn't exist. Without evolution and/or conflict, even if their singularity occurs, it will mean nothing but benefit for humanity.

The sad part is, these are the same people who think rational self interest means climbing a pile of fresh corpses to plant your flag - and the two mistakes are in fact only one.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
What can I say? (3.50 / 2) (#35)
by SIGFPE on Wed Nov 15, 2000 at 12:43:55 PM EST

Except that I think you are right on target!

To them, being is reason enough to do anything and everything to make yourself supreme on the "evolutionary ladder."
This kind of thinking does seem to mar the thought of many otherwise intelligent people. Additionally, it's easy to say that even with evolutionary pressure, intelligence is inevitable because higher intelligence means a greater chance of survival. But this is not necessarily the case at all - after all, the cockroaches are doing fine without much in the way of intelligence.

I think I largely agree with you: It seems to me that should AI's appear, there would be little competition between humans and AI's. Instead what would happen is, to use the language of evolutionary biology, radiation into a new niche - the virtual world. However, where I disagree with you is this: I think that there is a small, but not insignificant, chance that what I've just said is incorrect - and it seems to me plainly obvious that AI's would have the ability, if not the motivation, to wipe out humans.
SIGFPE
[ Parent ]
Wipe out humans? (3.00 / 1) (#37)
by trhurler on Wed Nov 15, 2000 at 01:41:53 PM EST

it seems to me plainly obvious that AI's would have the ability, if not the motivation, to wipe out humans.
It seems to me plainly obvious that this would depend on their as-yet-unknown capabilities and also on choices we make along the way. Furthermore, it would depend on the relative dispersion of humanity (we're not limited to one planet), the number of extant AIs vs humans, the resources available to each, and so on through a hundred or a thousand or maybe a million other factors. Merely being smarter, even if they were smarter, which nobody has yet demonstrated they would be, does not guarantee victory in a conflict. Stupid people and animals slaughter their intellectual betters on a daily basis. :) Sure, it might be possible, but then, it might not. We just don't know yet. Which, by the way, is why this topic has the same appeal to some people that religion has to many others... they think they're too "rational" to have faith in a deity, but they haven't renounced the idea of faith, except in name - they'll gladly have faith in some technological happening they can't possibly begin to rationally justify belief in.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
Nukes Destroying Atmosphere (4.00 / 2) (#33)
by sigwinch on Wed Nov 15, 2000 at 10:29:47 AM EST

As for how you'd prove that a nuke won't destroy the atmosphere, that's quite simple.

Actually, it's not at all simple. The concern was not heat, but thermonuclear deflagration. I.e., they were worried that a nuclear bomb could start an expanding nuclear chain reaction in the atmosphere. (Or perhaps even in the lithosphere, although the distinction is pretty much moot for the planet's inhabitants.) And this concern was well founded. At the time of the Trinity test, high-energy physics was comparatively primitive. Nobody knew if, for instance, the high neutrino flux would catalyze other nuclear reactions and radically lower the threshold energy in the surrounding atmosphere. Looking back over 50 years of high-energy experiments, it is obvious that there were no hazards, but at the time the only argument against it was physicists' intuition.

--
I don't want the world, I just want your half.
[ Parent ]

Similar concern now... (4.00 / 2) (#36)
by dennis on Wed Nov 15, 2000 at 01:05:27 PM EST

...with the upcoming accelerator experiment at Brookhaven, which has a reportedly small chance of creating "strangelets," initiating a chain reaction that would convert the entire planet to strange matter. My question: how do you figure probabilities, when what you're talking about is physical law? When the sample size is one, probability is meaningless.

My (somewhat tongue-in-cheek) hypothesis: the reason we haven't received intelligent signals from space is that this experiment does create strange matter. High energy physics is such a fundamental field of inquiry that every technological civilization tries it....

[ Parent ]

Accelerator Experiments Not A Hazard (4.50 / 2) (#38)
by sigwinch on Wed Nov 15, 2000 at 02:02:20 PM EST

The energies produced by particle accelerators are tiny and insignificant compared to cosmic rays. For example, this article (which I found linked from this Scientific American article) talks about a cosmic ray with a measured energy of 50 joules! For comparison, a slow .22 bullet from a pistol is 100~200 J. If strange matter from particle collisions were a problem, everything would have been destroyed long ago. Particle accelerators are billions of times lower in energy, and thus are nothing to worry about.
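
To put numbers on that (a rough sketch; the collider figure is an assumed round number for a heavy-ion machine of that era, not taken from either linked article):

  EV_PER_J = 6.242e18                 # electron-volts per joule
  cosmic_ray_ev = 50.0 * EV_PER_J     # the ~50 J event cited above: about 3e20 eV
  collider_ev = 2.0e11                # assume ~200 GeV per colliding nucleon pair
  print("energy ratio: %.1e" % (cosmic_ray_ev / collider_ev))   # about 1.6e9

On the order of a billion, which is the point: nature has been running a far harsher version of the experiment for eons.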

--
I don't want the world, I just want your half.
[ Parent ]

Singulatarians != Rifkinites (4.50 / 4) (#24)
by crasch on Tue Nov 14, 2000 at 04:06:37 PM EST

I think that your characterization of most people who think the Singularity will happen does not jibe with my experience. Very few, in my experience, think that the Singularity will be a _bad_ thing. They are _not_ Luddites. Indeed, some welcome the advent of the Singularity with a fervor matched only by those in millennial religious sects. I think you may be confusing them with Jeremy Rifkin and his crowd, who have yet to meet a technological advance that they like.

[ Parent ]
Yes, I forgot about those people... (3.50 / 2) (#29)
by trhurler on Tue Nov 14, 2000 at 07:15:50 PM EST

To be fair, there are two crowds. There's the doom and gloomers, who I already described. Then there's the cult fanatics. They're not concerned with how, or even if, this can happen, but rather with a fanatical insistence that it will. They're even more pathetic than the latte swillers; they swear this "singularity" will happen despite having less reason to believe that than an average person has to believe that he will win the lottery in his lifetime.

I realize I'm being harsh, and this'll probably be the second post of mine in this story to get a 1 rating from some pompous latte swilling twink or borg-dreaming-utopian who hasn't read my bio. However, the point I'm trying to drive home is that normal mentally healthy people do not believe in things like this without a better reason than "some famous AI guy said so." AI guys have been predicting a total success in their discipline for about 40 years, and at any given point in that timespan, the breakthrough was "just around the corner." So far, we have search engines that fail to find what you actually wanted and furbies. I'm not impressed.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
AI != I (3.18 / 11) (#15)
by acestus on Tue Nov 14, 2000 at 01:12:32 PM EST

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.
I've never felt really clear on just what these anxious futurists mean when they say 'superhuman intelligence.' The author of the propagated essay quotes Good:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.
This presupposes that a machine can have any sort of intellectual activity. I do not believe it can. The famous Turing test demands that a computer be able to impersonate a woman as well as a man can impersonate a woman, and Turing admits that it is not a test of intelligence, but a test for the appearance of intelligence. While it may be possible for a computer program to successfully impersonate a woman (though I doubt even that), I do not believe that it is possible to create any sort of real, 'living', creative intelligence, especially given the fact that we can only begin to understand human intelligence. That is, a chatbot may fool you for an hour, but it will not write the Great American Novel.

jabber commented:

Yes, it may come to pass that mechanized reasoning will render human decision-making moot. After all, if Deep Blue could beat Kasparov at his own game, then once we understand human decision-making in an abstract sense, we ought to be able to make a machine that applies that thinking at a rate of several gigahertz. How is this different from using a knife instead of our teeth?
The difference is that Deep Blue is not thinking. Neither, despite their adverts, is the Sega Dreamcast. They are computing. Deep Blue does not play chess the way Garry Kasparov plays chess. Kasparov looks at the board and intuits the best moves, from which he chooses a course of action, with a mind to strategy and the future of the game. Deep Blue computes, with brute force, billions of possible futures, and picks the path leading toward the greatest number of 'good' futures. That is not thinking, but computing.
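
What "computing" means here can be sketched in a few lines of Python; moves and evaluate below are hypothetical stand-ins for a real engine's move generator and scoring function, and a program like Deep Blue is of course vastly more elaborate:

  def minimax(position, depth, maximizing, moves, evaluate):
      # Exhaustively enumerate every line of play to a fixed depth and
      # score it. No intuition anywhere: generate, recurse, compare numbers.
      children = moves(position)
      if depth == 0 or not children:
          return evaluate(position)
      scores = [minimax(c, depth - 1, not maximizing, moves, evaluate)
                for c in children]
      return max(scores) if maximizing else min(scores)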

If we take this analogy and apply it to the quest for AI, we end up with a computer that will, perhaps, pass the Turing Test, but that is not intelligent. Given a stimulus (Acestus says, "Hello, Computer! What's up?"), the computer can apply huge amounts of brute force to formulating a response. (Computer says, "Hi. Not much.") That is not how humans think, and I don't think that it will lead to a self-aware computer. It will just lead to a really good online RPG.

There are numerous problems with AI as a concept, let alone as a 'killer app.' The chief of these, I think, is the disambiguation problem. (This may be because I favor linguistic philosophy.) Before buying into the hype, one interested in AI should look at the theory behind it -- theory born just before the electronic computer, and perhaps more comprehensible for its lack of computer jargon. Forgive the Amazon link, but I would suggest this excellent book by John Haugeland.

Acestus
This is not an exit.

You misread Turing (4.25 / 4) (#18)
by Novalis on Tue Nov 14, 2000 at 02:29:59 PM EST

This presupposes that a machine can have any sort of intellectual activity. I do not believe it can. The famous Turing test demands that a computer be able to impersonate a woman as well as a man can impersonate a woman, and Turing admits that it is not a test of intelligence, but a test for the appearance of intelligence. While it may be possible for a computer program to successfully impersonate a woman (though I doubt even that), I do not believe that it is possible to create any sort of real, 'living', creative intelligence, especially given the fact that we can only begin to understand human intelligence. That is, a chatbot may fool you for an hour, but it will not write the Great American Novel.

1. The Turing test does not involve impersonating a person of a specific gender - you misread the article. Find it, and re-read it. It involves impersonating a person in general.

2. It tests for the appearance of intelligence. Turing argues that the appearance of intelligence is equivalent to actual intelligence - kinda like "I think, therefore I am". This seems, to me, to be correct.

3. The test shall go on until testers are satisfied. You wouldn't be satisfied until it composed an original work of fiction? So ask that question!


If we take this analogy and apply it to the quest for AI, we end up with a computer that will, perhaps, pass the Turing Test, but that is not intelligent. Given a stimulus (Acestus says, "Hello, Computer! What's up?"), the computer can apply huge amounts of brute force to formulating a response. (Computer says, "Hi. Not much.") That is not how humans think, and I don't think that it will lead to a self-aware computer. It will just lead to a really good online RPG.

You are assuming that AI will be approached the same way chess was. It won't be. Or rather, it has been, and that has failed. That doesn't mean it's impossible - it just must be done differently. Remember, on a low level, all your brain does is push electrons and chemicals around.



-Dave Turner
[ Parent ]
Turing, Chess, and Novels (3.50 / 2) (#32)
by acestus on Wed Nov 15, 2000 at 07:09:26 AM EST

1. The turing test does not involve impersonating a person of specific gender - you mis-read the article. Find it, and re-read it. It involves impersonating a person in general.
As I understand it, Turing's original 'imitation game' did indeed involve imitating a woman -- my insane professor of formal logic drilled that into our heads as one of the illustrations of Turing's, uh, eccentricity.

Clearly, he may have been mistaken. Once my copies of the notes are unpacked (I'm moving at the moment), I'll look it up. For now, I'll cede that I could be wrong. I believe, though, that the gender issue is just dropped in nearly all later forms of the test.

2. It tests for the appearance of intelligence. Turing argues that the appearance of intelligence is equivalent to actual intelligence - kinda like "I think, therefore I am". This seems, to me, to be correct.
I believe, actually, that he argues that the appearance of intelligence is the only testable criterion, as there is no good test for intelligence itself -- but he does not argue that they are the same.
3. The test shall go on until testers are satisfied. You wouldn't be satisfied until it composed an original work of fiction? So ask that question!
If these two individuals (AI and Fred) were locked in a room long enough for one or both of them to produce a novel, I'd probably judge the novel-writer to be the AI, as it hadn't gone mad or starved. (Ha ha.) I'm not sure, though, that this sort of question is reasonable for the imitation game, and I think that the application of the game (meant for human-like behavior) as a test for actual cognizance is... ill-founded.

You are assuming that AI will be approached the same way chess was. It won't be. Or rather, it has been, and that has failed. That doesn't mean it's impossible - it just must be done differently. Remember, on a low level, all your brain does is push electrons and chemicals around.
No, I'm just arguing against jabber, whom I quoted. He seemed to argue that a brute-force take on computer intelligence was a good take -- and I disagreed.

Acestus
This is not an exit.
[ Parent ]
Turing's paper (3.50 / 2) (#39)
by Novalis on Wed Nov 15, 2000 at 02:45:41 PM EST

The original paper (OCR'd, it looks like).

It seems to be the case that Turing thinks that the question "Can machines think?", in itself, is fairly meaningless. So, he made up a test. For a machine to succeed in the test, it would have to do all of the things that we regard as thinking (including original compositions). BTW, that was a 2 minute Google search.

-Dave Turner
[ Parent ]
The test isn't what I always thought! (none / 0) (#46)
by error 404 on Wed Nov 22, 2000 at 12:18:36 PM EST

Here is how it looks to me from the paper:

  1. The imitation game is set up with a man and a woman. This is, essentially, a control group.
  2. The man is replaced by a machine. So, in essence, the computer is programmed to emulate a woman.
  3. The results of the two imitation games are compared. The AI passes the test if it is as good at imitating a woman as a man is.
Yow! Computers in drag!

Now, Turing doesn't make a big deal about the gender, other than constantly comparing the computer to a man (possibly using "man" in the generic sense) but it is pretty clear in the setup.


..................................
Electrical banana is bound to be the very next phase
- Donovan

[ Parent ]

Better HTML version (3.75 / 4) (#16)
by caadams on Tue Nov 14, 2000 at 01:43:01 PM EST

A better HTML version of the essay can be found here.

--Cliff (who thinks someday every website will simultaneously display a story about the Singularity)

This is NOT new (3.57 / 7) (#17)
by Mendax Veritas on Tue Nov 14, 2000 at 02:05:59 PM EST

People have been predicting revolutionary changes of this type for a long time. For example, back in the late 1800s, a guy named Adams calculated (extrapolating, like Vinge and others, from the rate of technological development) that humanity would harness infinite power by 1920. Well, obviously that didn't work out. There were also some biologists and life extension researchers back in the '60s and '70s who thought that by the year 2000, we'd have the whole aging problem solved and we'd be a race of immortals. That doesn't seem to have happened either. We haven't even managed to cure cancer, which some experts thirty years ago thought we'd have by 1980.

This notion of a "singularity" (whether called by that name or not) occurring around 2010 or so has been circulating at least since the 1970s. I think Terence McKenna was one of the first to popularize that idea.

I think there's a fundamental problem with using extrapolative techniques to come to dramatic conclusions like this. A trend can be observed, in retrospect, but it's a bit trickier to predict where it's going. You never know but that we may run into some limit that puts an end to Moore's Law, or our own ability to manage and synthesize all this stuff we're developing.

My guess is that the world of 2010 will be different from that of 2000, but probably no more so than the world of 2000 is different from that of, say, 1970.

Singularity would not end humanity. (3.50 / 6) (#19)
by root on Tue Nov 14, 2000 at 02:47:08 PM EST

The mere argument that a revolutionary breakthrough will occur that will transform the face of AI is nothing new or imaginative. AI seems to be split into two camps: the "old school" symbolic manipulators, who believe in sets of laws and rules that govern behaviour. They take a very mathematical and "hard" approach to AI. Then you have the "new school" neural network and biological modeling types. They attempt to root out of biology a model they intend to simulate.

AI raises interesting questions: is simulated life really life? Is simulated intelligence really intelligence? Neither camp seems to realize that they are both looking at portions of the human brain and how it works. They are too focused on modelling the entire human intelligence model out of one lego block. The human mind is not fully understood yet, so a biological approach won't work (yet). Hard mathematics and symbolic manipulation (rulebases and the like) are good for specific intelligences. These specific tasks are not, however, what make us human.

I also sincerely doubt that human intelligence is the only model of intelligence out there. There is most likely a "better way to do it (TM)." This better way is not always going to come to us as a gestalt "AHA!", nor will it always be so evident through years of painstaking research. Sometimes (actually most of the time) it is a little bit of both.

I have no doubt artificial intelligence will one day exist. I do however doubt it would end humanity. If anything, it will extend humanity, raising our own consciousness to a higher level. We'd have a full understanding of what truly defines intelligence. We'd also be capable of exploring other models of intelligence, and of learning from these models more than we can learn limited by our own model of intelligence. Imagine an intelligence that can understand an n-dimensional problem with the greatest of ease, but has trouble identifying which color goes better with that new rug you just bought. You start to see the extrapolation: within our own model of intelligence, there are many sub-models of intelligence that need to be broken down and understood, and then the interaction between them must be understood. Then it is a matter of assembling the pieces in the proper combination. The proportions will vary from person to person, but on the whole humans are comprised of several different models of intelligence.

Saying that we will one day create a better model that will displace us is foolish. It will merely enhance us.

evolution's font? (4.16 / 6) (#20)
by iGrrrl on Tue Nov 14, 2000 at 03:34:01 PM EST

Funny, just today I was reading Jaron Lanier's responses to the Reality Club's comments on his ONE HALF OF A MANIFESTO on edge.org. Lanier thinks the whole idea of the singularity is unlikely.

I'm not so sure. My view is he's right on crunchy technology (the chips and salsa), but may be wrong on wet technology (meat). I am persuaded by the arguments that AI isn't what it's cracked up to be, and that it will never replace or surpass the humans that create technology. What I think will change us is biotechnology.

The rate at which biotech is moving has stunned me. I've cut and pasted DNA from back when only a few companies sold the very few tools available. There was a lot of DIY in those days, whereas you can buy kits now. And the kits are generally a year to six months behind the curve.

One of the big things holding back biotech at this point is actually crunchy technology -- bioinformatics. There is so much information being generated in biomedicine that no one can synthesize it all. When the information technology catches up to the information being produced, more is in store than cloned sheep and flies with eyes on their knees.

It's not that I think the post-human humans will be engineered, though that seems more likely to me than it did five years ago. I worry more about continued stratification of society based on 1) access to biotechnology limited to the wealthy and 2) discrimination based on genetics.

Of course, the latter point could be argued as "evolution in action," the ethics of which can be debated. Certainly the evolutionary argument has been applied to the idea of the Singularity -- that the intelligent machines will evolve beyond human understanding or control. I'm not an evolutionary biologist, but perhaps I can argue against the appropriateness of thinking about these ideas in terms of evolution, specifically Darwinian evolution.

Most biological evolution (which I'm defining as a change in species through time) takes place based on DNA which mutates in a random fashion. In other words, the traits selected either for or against arise blindly. Lamarckian theories posit that the animal's behavior can result in heritable traits (the baby giraffe's neck is a bit longer for the parents having stretched toward leaves) -- an idea rejected in the Darwinian camp and only taught seriously in Communist countries. The evolving intelligent machines would rely on Lamarckian (desired) rather than Darwinian (random changes that work or fail) methods.

In molecular biological evolution, no morals or ethics are involved. Blind chance determines genetic change, and external survival pressures determine whether the change remains in the population. Lamarckian ideas hold some implied moral value, so it seems to me. The effort of trying to reach a food source is thought to benefit the offspring, whose necks are just that much longer.

Darwinian evolution throws out most of the changes. Fortunately for living things, we reproduce like crazy, and there's room for that kind of slop in the system. The reason this system would not work for machines is, I think, fairly obvious. Machine evolution, should it ever happen, will follow an accelerated Lamarckian model, since the machines will manufacture the next generation of machines. The next generation will supplant the one which created it, perhaps even cannibalize the "parents" for components. This implies motivation on the part of the machines -- an ethic, if you will, of continued betterment with self-sacrifice.

As a population, humans don't tend to behave that way. I doubt our creations will.
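
The contrast between the two models above can be caricatured as update rules (purely illustrative; the one-dimensional "fitness" is a made-up stand-in for how good a design is):

  import random

  GOAL = 10.0                              # the "best" design, known only to us

  def fitness(x):
      return -(x - GOAL) ** 2

  def darwinian_step(population):
      # Blind variation plus selection: mutate at random, keep the
      # fittest, and throw most of the changes away.
      mutated = [x + random.gauss(0, 1) for x in population]
      return sorted(population + mutated, key=fitness)[-len(population):]

  def lamarckian_step(population, rate=0.5):
      # Directed change: each generation is deliberately built closer to
      # the goal by its predecessor. Nothing is wasted, and it converges
      # far faster - which is why machine evolution would be accelerated.
      return [x + rate * (GOAL - x) for x in population]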

--
You cannot have a reasonable conversation with someone who regards other people as toys to be played with. localroger

Just a thought: (3.00 / 1) (#42)
by kjeldar on Thu Nov 16, 2000 at 12:07:18 AM EST

...an ethic, if you will, of continued betterment with self-sacrifice. As a population, humans don't tend to behave that way.

What do you call war?



[ Parent ]
sacrifice vs threat (4.00 / 4) (#43)
by iGrrrl on Thu Nov 16, 2000 at 09:25:55 AM EST

kjeldar asks what about war. It's a very reasonable question. Any number of self-sacrifice for the Good stories are available, so why do I think humans are bad at this?

Here's my take on it. Humans are very good at responding to immediate threats with great acts of heroism. From my American POV, WWII is a sterling example of both individuals and populations working for the perceived Good. However, I think the Viet Nam war is a similar example. Much of the population did not perceive the situation in Viet Nam as a threat to themselves or the country. Rather, they felt the immediate threat was our own government sending our young men to fight. Many protesters sacrificed for what they thought was the Good. And again, those who were fighting and who agreed with the government (not always coincident) heroically fought for what they believed was the Good.

But I don't want to get bogged down in historical politics. I want to point out the difference between an immediate threat and a distant one. Americans use up more resources than any other country. You can calculate your personal ecological footprint here. We have a nice comfortable lifestyle, in general, and don't want to give up SUV's, dinosaur-shaped chicken nuggets, and air travel. The "threat" is too far distant, and the solutions do not involve great heroic acts, but rather long-term changes in our cultural behaviors and expectations. We are very bad at that.

I'm not a hair-shirt environmentalist, by any means. I do think there are technologies that will allow us to have the lifestyle we like with a smaller burden on the planet, but the short-term interests of big corporations do not favor such investment and development. It is happening, but much more slowly than it could.

To tie this back to machine evolution: The point could validly be made that the programming of the machines in question would include the constant desire to better The Machine at the expense of the individual machine. And perhaps the imperative to self-preservation could be made subject to this over-riding quest for improvement. I confess to a dollop of anthropomorphism, but do not withdraw my main point, which is that the Darwinian model is not the right one for such discussions.

--
You cannot have a reasonable conversation with someone who regards other people as toys to be played with. localroger
[ Parent ]

Why do people still consider AI a horsepower issue (4.00 / 5) (#26)
by bjrubble on Tue Nov 14, 2000 at 06:29:10 PM EST

Vinge seems to be going the Kurzweil route and assuming that, should the horsepower be there, AI will somehow inevitably follow. I just don't get this.

From where I'm sitting, thought seems to be an emergent property of highly chaotic and intricately balanced systems. Decomposing and simulating such a system is quite easy next to actually understanding and tuning it. Just because you've built a weather simulation that can predict a hurricane coming onshore at a certain time and place, doesn't mean you know how to "design" a hurricane (ie. set all the environmental values that would result in a hurricane) that will come onshore at a time and place you want. These two tasks are not comparable to each other at all.

I always think about AI in light of genetic programming (which I believe any successful AI will be based upon) -- the end results of these experiments are horrendously complex algorithms that no human could ever decompose, yet which often work more efficiently and robustly than human-designed ones. The robustness is particularly compelling -- the ability of biological systems to adapt and to successfully deal with less-than-pristine information is matched far more capably by evolved algorithms than designed ones. This approach to AI also "feels" right to me because it more closely models the way the human brain (and every other intelligence of which we know) was developed. But it pretty much closes the door on ever easily decomposing and understanding "thought" -- like "hurricane" or "checkmate" it remains an attribute fairly easily recognizable in a system, but far more difficult to engineer into it.
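
For readers who haven't seen one, the flavor of these experiments can be conveyed by a toy genetic algorithm (a generic sketch, not any specific published result; real genetic programming evolves program trees rather than bit strings):

  import random

  TARGET = [1] * 32                          # stand-in for "the right answer"

  def fitness(genome):
      return sum(g == t for g, t in zip(genome, TARGET))

  def evolve(pop_size=50, generations=200, mutation_rate=0.02):
      pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
      for _ in range(generations):
          parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
          pop = []
          while len(pop) < pop_size:
              a, b = random.sample(parents, 2)
              cut = random.randrange(len(TARGET))          # crossover
              child = a[:cut] + b[cut:]
              pop.append([1 - g if random.random() < mutation_rate else g
                          for g in child])                 # mutation
      return max(pop, key=fitness)

  print(fitness(evolve()), "out of", len(TARGET))

Nothing in the loop "understands" the problem; solutions emerge from variation and selection alone, which is exactly why the evolved results can be so hard to decompose.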

I'd be interested in some references (none / 0) (#27)
by SIGFPE on Tue Nov 14, 2000 at 07:07:09 PM EST

the end results of these experiments are horrendously complex algorithms that no human could ever decompose, yet which often work more efficiently and robustly than human-designed ones.
It's fun to imagine that this is the kind of result that genetic programming might result in but can you actually give some substantial examples of this?
SIGFPE
[ Parent ]
Hard to find (4.00 / 3) (#31)
by bjrubble on Tue Nov 14, 2000 at 10:08:27 PM EST

After an hour of Google I've given up finding any site that definitively addresses this -- unfortunately the characteristics I claimed in GAs (complexity, obfuscation, etc) are shared by the problem domains to which they're generally applied -- but I did find a bunch of unsubstantiated comments, which is the next best thing!

"This has proved a very powerful way of generating solutions to very difficult problems, but with the drawback that the resultant code is all but incomprehensible to humans"

"How this circuit does what it does, however, borders on the incomprehensible. It just works."

"Evolutionary design methods frequently produce solutions that are incomprehensible to the observer"

"Some of the circuits generated by the genetic algorithms in this chapter are not easily understandable. That is, although they can be tested for functional performance, it is not obvious how they work."

The strongest point, I think, is the parallel with biological evolution. Biological systems are swimming with subtle interactions and dependencies, which makes any piece of the system extremely difficult to remove once in place. We've never been able to ditch our reptilian brainstem, and our internal organs still need something resembling seawater. Evolution tends to work by adding or tweaking pieces, and the result is that the genetic code usually gets larger as time goes by. 95% of human DNA serves no evident purpose, and even this figure is extremely dubious because it's so difficult to disentangle the genetic code from its phenotypical expression. Genetic algorithms are created through the same process, and tend to share many of the same characteristics.

[ Parent ]
Thanks!! (none / 0) (#34)
by SIGFPE on Wed Nov 15, 2000 at 12:26:49 PM EST

He he! I like to chase people up when they make unsubstantiated comments. Usually people get defensive and don't have much to say to justify their claims. But this time it's paid off. Thanks for those very interesting links! I hope I can find enough time to read them.
SIGFPE
[ Parent ]
Ever read Gödel, Escher, Bach? (3.00 / 1) (#40)
by livewire on Wed Nov 15, 2000 at 11:13:13 PM EST

This article instantly had me leafing through my copy of GEB. I always used to think that there was something a bit mystical about how the human mind worked. Not supernatural, but obfuscated; somehow on a level that we couldn't reach. After all, something seems off about an intelligent entity being able to model itself completely. Encapsulate its intelligence within its intelligence, if you will. But last summer I blazed through GEB and a lot of my previous intuitions were overturned. Douglas Hofstadter's arguments are not particularly complex, but are backed with lots of logical models. In the end he questions the intrinsic nature of meaning in symbols and therefore attacks the "specialness" of the mind.

First off, he says that the brain and its complex neural activity appear to be shuffling around symbols, such as "mother". And these symbols are linked to others in a web of association, like "mother" to "father". Basically, when you are thinking about your next move in chess, patterns of symbols are being shuffled around, linked in different ways, sometimes looping back to each other, sometimes reinforcing each other, etc. Since the brain exists in the real world, and therefore appears to be governed by rules, one could theoretically plot this web on paper.

Secondly, he says that strings of symbols (i.e. character strings) are divorced of any inherent meaning but rigidly follow certain rules. SS + S = SSS, etc. Therefore, if the structures of thought can be plotted at all, complex as hell as they may be, there will always be an isomorphic simpler system (i.e. recursive structures like for loops) that can be computed on your average supercomputer. Basically, all things in reality can be reduced to formal mathematical systems. If the brain works on one high level (or at least we are conscious only of the highest levels), and a computer works (or we are conscious of, or we program) on a lower level, there will be an isomorphism that allows both to do the same thing.

Now this is a half-assed and wrong explanation of what that great book has to say (because I'm leaving out a big part, Godel's theorem), but it's the gist I get from it. If anyone has also read the book and has a better explanation, that would be great.
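
For a taste of what "strings of symbols divorced of any inherent meaning but rigidly following rules" means, here is a tiny formal system in roughly the spirit of the book's pq-system (my own notation, not Hofstadter's exact one):

  def is_theorem(s):
      # A string like "SS+S=SSS" is a theorem iff the S-counts add up.
      # The checker manipulates symbols only; no notion of "meaning".
      left, right = s.split("=")
      a, b = left.split("+")
      return set(s) <= {"S", "+", "="} and len(a) + len(b) == len(right)

  def produce(s):
      # The production rule: append one S to a summand and to the result.
      left, right = s.split("=")
      a, b = left.split("+")
      return a + "S+" + b + "=" + right + "S"

  t = "S+S=SS"                  # the axiom
  for _ in range(3):
      t = produce(t)
  print(t, is_theorem(t))       # SSSS+S=SSSSS True

The derived strings happen to be interpretable as true additions, but the system itself never "knows" that - which is the point about isomorphism.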
bow down before the one you serve
erm, my tabs got vaporized (none / 0) (#41)
by livewire on Wed Nov 15, 2000 at 11:15:14 PM EST

oh shit you have to put in your own html tags
bow down before the one you serve
[ Parent ]
Well hoo-ray. (2.00 / 1) (#45)
by ksandstr on Sat Nov 18, 2000 at 11:10:07 AM EST

Jeez. Another lump of cyberpunk utopianist post-consumer male bovine nutrition. These people would be more useful to the whole tech thing if they did something real (like learning about the actual technology) instead of writing these head-in-the-clouds fluffy feel-good "how-its-gonna-be" articles about "the coming technological change" that really don't accomplish anything.

Read this piece for enlightenment.



Fin.
Intelligence isn't well defined, nor sufficient (4.00 / 1) (#47)
by error 404 on Wed Nov 22, 2000 at 01:12:43 PM EST

My computer can do a number of things that used to be considered "intelligent" better than I can. Even some pretty advanced ones - those translation sites aren't perfect, but they translate English to French better than I do. And for those who actually speak French, the computer is still (by some definitions of intelligence) more intelligent, because it can translate more languages.

But there are a few things no computer has, and none appears to be moving toward in the next few years:

  • motivation No will to take over the world, no will to survive, no will to ANYTHING. You build that hyper-intelligent machine and it answers your questions. Run out of questions, and it sits and spins. Deep Blue can whip your ass at chess. But is Deep Blue going to cheat? No, not unless it is programmed wrong. Is it going to seek out new opponents? No, it will just sit there, maybe with a prompt blinking, or at most a video-game come-on soundtrack.
  • abstracted goals For singularity, the machine has to make a "smarter" machine. What the hell does "smarter" mean, in terms that a computer can deal with? And, more importantly, in terms that the computer can pass on to Junior.
  • creativity In order to make that better machine, you need better ideas, and so far, the only ideas computers "have" are those they were programmed with, except for some neural net stuff that is interesting, but nowhere near self-replicating with improvements.
None of this is necessarily impossible to simulate eventually, but we aren't making progress toward any of it at anything like the "double every 18 months" rate. By 2010 we will have computers that are annoyingly slow by 2011 standards, doing some things that we don't think are real practical now. (My guess: I see rudimentary text analysis by then. The software will "know" what a post is about most of the time, with complicated sentence structures or tricky messages designed to slip through spam filters fooling it. People will laugh at how wrong the software gets it. Some of us will have agent software skimming really big discussion fora for topics of interest and replying to marginally interesting posts automatically.) And singularity will be about ten years off.
..................................
Electrical banana is bound to be the very next phase
- Donovan
