Rethinking the Turing Test

By Farq Q. Fenderson in Op-Ed
Sun Jul 11, 2004 at 09:46:34 PM EST
Tags: Science

In 1950, Alan Turing proposed a metric for machine intelligence. This metric is now known as "the Turing Test," and much work in the field of Artificial Intelligence (or AI) has been influenced by it. In short, Turing suggested that a machine that could behave in a manner indistinguishable from a human could be considered to be "thinking."

For many researchers, the goal is simply to pass the Turing Test.

In 1990, the first formal instantiation of the Turing Test, the Loebner Prize, was introduced. The Grand Prize, a gold medal and $100,000 awarded to the first computer able to provide responses indistinguishable from a human's, has never been won. However, each year $2000 is awarded to the entry that fares best. This is ostensibly designed to stimulate research in the area.

I propose that not only does this metric exclude much in the way of actual thought, it also fails to encourage much in the way of machine intelligence. I also propose that the Loebner Prize, by adhering to this metric, rewards an aspect of AI that, in practice, does little to advance machine thought or intelligence. Thus a reconsidered and reformed version should be introduced.


What is Intelligence?

Before addressing the problems with the Turing Test, the notions of intelligence and thought must be clarified. Obviously I do not consider the definition of thought to be, precisely, that which the Turing Test is testing for. The Turing Test was designed in such a way as to avoid having to define thought, which is clever, but unfortunate. It merely provides a condition that, if met, asserts that thought exists, while saying nothing about the case where the condition is not met.

Thought, unfortunately, is too elusive to pin down in any comfortable form for the purposes of this document. (I do have my own notions, but they are specific to a framework and philosophy that would be alien in concept to other philosophies and frameworks.) The purpose of this document isn't to undermine the Turing Test on its own grounds, but rather to establish a metric that provides more incentive for machine intelligence, across a broader spectrum. Further, I don't think it's useful to get into arguments about whether thought is a prerequisite for intelligence, intelligence for thought, whether neither is prerequisite to the other, or whether they are one and the same.

Intelligence is a more agreeable term to define, or at least to state requirements for, and to highlight observable symptoms of whether or not those requirements have been met.

Perhaps intelligence is a result of evolution that has aided the survival of every species possessing it, or perhaps not. Regardless, given the language used when describing intelligence, or intelligent behaviour, I think it's fair to consider intelligence the faculty of using acquired knowledge (whether explicit or implicit) and learned skills to solve problems presented to the entity in question. Intelligence, therefore, is not only the ability to store, relate and use information, but also to adjust behaviour based on experience. While not all problems must be solved, some must -- inherently, any threat to survival is a problem that must be solved.

Finally, the Intelligence Quotient is commonly used as a measure of intelligence. Despite the fact that it is often considered a poor metric, it remains a more popular test than the Turing Test, in general, and deals exclusively with solutions to problems presented to the testee. However, I do not propose an IQ test; rather, I present this merely to support the notion that problem-solving ability is core to intelligence.

The Problem with the Turing Test

The problem is that much of AI is concerned with a kind of software known as the chatterbot. A chatterbot is a conversational program, designed to simulate conversation with a human as convincingly as possible. Chatterbots largely fail to achieve thought or intelligence because it is so easy to write a superficial algorithm that approximates the desired semblance without requiring either. As such, contestants for the Loebner Prize can pursue the reward without actually advancing research into true machine intelligence.
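To illustrate the kind of superficiality I mean, here is a minimal sketch in Python of the keyword-and-template trick a basic chatterbot can get by on. It is my own toy illustration, not any particular Loebner entry; note that nothing in it learns, remembers, or solves a problem.

import re
import random

# Canned responses triggered by surface patterns; no model of meaning at all.
RULES = [
    (re.compile(r"\bI need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bbecause\b", re.I),
     ["Is that the real reason?", "What other reasons come to mind?"]),
]
FALLBACKS = ["Please, go on.", "I see.", "Tell me more."]

def reply(utterance):
    for pattern, responses in RULES:
        match = pattern.search(utterance)
        if match:
            template = random.choice(responses)
            # Echo the captured fragment back, if the rule captured one.
            return template.format(*match.groups()) if match.groups() else template
    return random.choice(FALLBACKS)

print(reply("I need a better metric for machine intelligence."))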

This is not to say that there is no merit in chatterbots, nor that all chatterbots are designed without an approach to real machine intelligence. Many, however, are. Most of the ones I have personally interacted with are of this nature, though many of their authors will claim that their approach may yield intelligence. I won't bother giving examples, since I'm not here to pick fights or disparage anyone's work.

Conventional approaches that work more towards semblances than actual intelligence may have more utility in them, at least at present. In fact, I'm quite confident that they do. The most stunning example of real machine intelligence that I've personally witnessed was a game with fictional animals. To the public, it was a form of entertainment.

On the other hand, the most useful applications of AI seem to be fuzzy logic circuits that can land a helicopter with a broken blade, or neural nets that can form their own heuristics with training. Neither of these things is truly intelligent, but they do provide us with solutions to problems that have not been solved by other means.

Despite the relative lack of utility in genuinely intelligence-oriented directions, at present, I hope that the true goal of AI is to arrive at genuine intelligence, artificially, as opposed to using the adjective "artificial" to degrade the notion of intelligence, lest AI become a derogatory term.

Finally, regardless of accuracy, the Turing Test, as applied in the Loebner Prize competitions, does not admit any work that cannot engage in conversation. This means that each entry must be capable of language, no matter how intelligent it may be, thinking or not. This fact alone bars much of AI research, and I feel it biases almost all of the popularity towards a very small share of the work being done.

The Test

As previously mentioned, Turing's metric was cleverly constructed to be deliberately open-ended in its implied definition of the term "thought." I would like to extend this somewhat, with a new focus on intelligence. The adjustment is simple but effective: instead of judging how well software can behave in the semblance of a human, a superior test, in my opinion, is how convincingly software can resemble an animal, in the general case. To clarify, I mean that the software should not be expected to exhibit the behaviour of any animal in particular, but to exhibit convincing animal behaviour.

Since cosmetics are problematic, and mostly irrelevant, artificial animals should be described by intermediaries, who report to judges. Additionally, real animals should be used as controls, and described similarly, and in similar detail. It is best if the complexity of behaviour in the control animals roughly matches that of the artificial animal, so that an appropriate contrast can be made.

In competitions, where it is unfeasible to match each artificial animal to a real one, classes should be established, with each participant free to choose the class in which to compete. Each class would be worth only a limited number of points, regardless of how convincing the entry is. Additionally, points may be awarded according to the detail permitted by the entry (with an established minimum -- "you must be this detailed to ride") up to the maximum permitted by the class.

The reason that detail must be reported equally for control and artificial animals is simply that an extant organism, in full detail, has the advantage of being obvious, where an artificial animal will likely be missing cosmetics, particularly if it is purely software. Consider that full detail, in this sense, could mean simply forgoing the use of intermediaries. Additionally, fictional animals would automatically lose, unless contrasted against controls of which the judges have no knowledge.

Finally, because fictional animals should be permitted -- why not? -- the animals can never be named. This further removes unfair cues from judging: since the animals will be judged on animal behaviour in general, the judges will not attempt to fit each into a specific mold.

Complexity of Behaviour

Further elaboration is required on the use of the term "complexity of behaviour" in this context. It must be meaningful, and it must relate strongly to the notion of intelligence in order to be useful. For example, an intricate ritual that could be easily pre-programmed does not denote complexity of behaviour in this sense.

The behaviour that counts must be based on experience and must be a component of the solution to a problem, or stated a little more naturally, the achievement of some goal or desire. Unfortunately this is extremely difficult to define, but I think it is safe to leave it to the discretion of the judges; as with the Turing Test, the goal is to convince the judges rather than to meet an arbitrary technical requirement.

Since learning can be subtle and take time, especially in many animals, it's best to audition each artificial animal over a greater period of time than would be taken with conversational software. This provides both an advantage to truly intelligent works and a severe disadvantage to superficial ones. Learning will show gradually over time, and in a realistic manner, where attempts at mere semblance are bound to betray their nature eventually.
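As a sketch of what I mean by learning that shows gradually, consider the following toy Python fragment (my own illustration, not a proposed scoring mechanism): an artificial animal that slowly comes to prefer the feeding spot that experience rewards, rather than performing a canned ritual.

import random

class Critter:
    def __init__(self):
        self.preference = {"left": 0.0, "right": 0.0}   # learned values, blank at birth

    def choose(self):
        # Mostly exploit what experience says, sometimes explore.
        if random.random() < 0.1:
            return random.choice(["left", "right"])
        return max(self.preference, key=self.preference.get)

    def learn(self, spot, reward):
        # Nudge the remembered value toward the outcome just experienced.
        self.preference[spot] += 0.2 * (reward - self.preference[spot])

critter = Critter()
for _ in range(200):
    spot = critter.choose()
    reward = 1.0 if spot == "left" else -1.0     # "left" hides food, "right" a mild shock
    critter.learn(spot, reward)
print(critter.preference)   # after many trials, the left spot dominates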

Finally, interaction should be carried out by intermediaries or another party independent of the entry or judging. The auditions should not be permitted to run unmolested; rather, an environment should be provided that permits interaction, both to ensure that events are not scripted and to allow those interacting with the animals to provoke as much complex behaviour from them as possible. This again puts superficial approaches at a disadvantage and genuine approaches at an advantage.

Conclusion

I feel these guidelines are conducive to a healthy spirit of achievement in AI research. Not only do they better fit present technology, but they also provide a broader range of behaviour. On an aesthetic aside, what's cuter than baby animals learning about their environment? (Perhaps human babies, but that's usually punctuated by the mother yelling "don't!" during the interesting moments.)

These are just guidelines, however. I'd be happy with anything that adequately reflects the same spirit found here. Additionally, they're not meant to replace the Turing Test or the Loebner Prize; I wouldn't ask the authors of countless bots to give up their work. I could never compete in their field, and they probably feel the same way about mine. I confess that my original thought was to replace the existing tests, but after consideration, diversity is simply a much better idea.

Finally, I feel the need to point out that I've left a lot out. The guidelines I've described don't fully account for everything required in a competition. In particular, they don't address scoring, but I hope I've provided enough detail to seed a complete set of guidelines. I also hope that I have demonstrated that the current environment is not as rich as it ought to be.

Rethinking the Turing Test | 258 comments (225 topical, 33 editorial, 1 hidden)
read some books (none / 1) (#13)
by ant0n on Sat Jul 10, 2004 at 01:43:14 PM EST

What is Intelligence?

E. G. Boring defined intelligence in 1923: 'Intelligence is what these tests [that is, intelligence tests in general] measure'. Period. Please write this definition down 100 times. Then read some books on AI. And I mean books, not the internet. Then rewrite your article.
You'd do us all a favor, because it's simply too tiresome to have a discussion about AI with someone who has not been properly brainwashed.

The problem is that much of AI is concerned with a kind of software known as the chatterbot.

'chatterbots' are only a minor sub-field of AI. There are very few publications in AI which deal with this kind of software; the first and most widely known is the article "ELIZA-A Computer Program For the Study of Natural Language Communication Between Man and Machine" by Joseph Weizenbaum, but there is not very much more. AI is mainly concerned with Problem Solving, Automated Theorem Proving, Expert Systems, Multi Agent Systems, Neural Networks, and the like.
There is however a community of hobbyists who write programs in the tradition of Eliza. It's typical of these programs that they are nothing more than a pumped-up Eliza: they have many more sentence patterns, keywords, exceptions, and so on, but that's all.


-- Does the shortest thing the tallest pyramid's support supports support anything green?
Patrick H. Winston, Artificial Intelligence
Intelligence (none / 2) (#19)
by The Solitaire on Sat Jul 10, 2004 at 05:54:26 PM EST

That is one understanding of the word "intelligence". It is far from being an accepted term in AI/CogSci. Really, most AI these days could care less about what intelligence is. The focus is really on solving problems that typically require human intelligence.

With regards to the chatterbots, I agree, they really have little to do with modern conceptions of intelligence. Those people involved in "real" natural language processing (as I am) generally treat them with disdain. That being said, I think the author is trying to say that if they can pass the test, it really looks like an invalidation. I agree. The question is, "can a chatterbot ever pass the Turing test?" I doubt it highly.

I need a new sig.
[ Parent ]

can a chatterbox be intelligent - no (none / 0) (#70)
by lukme on Sun Jul 11, 2004 at 05:05:47 PM EST

can a chatterbox pass the turing test - depends on if you can find a subset of people who believe the chatterbox is a human.


-----------------------------------
It's awfully hard to fly with eagles when you're a turkey.
[ Parent ]
Chatterbots and ability to pass as human. (none / 0) (#240)
by tabris on Sat Jul 17, 2004 at 12:56:09 AM EST

I dunno that this is worth very much.

I have a friend who wrote his own chatterbot in perl... And plenty of kids fell for it. Heck, it sometimes was more coherent than some of the kids who were on the chatnet (which was the point of the bot).

There's some logs somewhere of somebody asking the bot for cybersex... ask k5 user scanman, he has the logs, he wrote the bot.

[ Parent ]

copout (none / 2) (#98)
by gdanjo on Mon Jul 12, 2004 at 02:44:44 AM EST

That is one understanding of the word "intelligence". It is far from being an accepted term in AI/CogSci. Really, most AI these days could care less about what intelligence is. [...]
That's a bit of a cop out, don't you think? It's like a mechanic saying "I don't really care about what makes an engine work, as long as I'm using an engine to solve REAL problems (like going to work, where I fix cars)."

With regards to the chatterbots, I agree, they really have little to do with modern conceptions of intelligence. Those people involved in "real" natural language processing (as I am) generally treat them with disdain. [...]
Chatterbots have EVERYTHING to do with intelligence - they use "real" natural language, and they are the exception that fails the rule. The question is: why?

Imagine that you have a chatterbot that has access to an infinite amount of information, and an infinite number of algorithms at its disposal. Surely one configuration of this Infinibot would pass the Turing test, at least once, no? If that's the case, and we still agree that "conversing" with another constitutes intelligence, then we need to figure out why this is so.

Again, I think ignoring it is a copout - just as the various paradoxes in history have been ignored, until a paradigm shift eventually solved them (and revolutionised thought).

Dan ...
"Death - oh! fair and `guiling copesmate Death!
Be not a malais'd beggar; claim this bloody jester!"
-ToT
[ Parent ]

Variations on a theme (none / 0) (#163)
by Zabe on Tue Jul 13, 2004 at 03:43:18 AM EST

Or, instead of a chatter bot having an "infinite amount of information, and an infinite amount of algorithms", if the code inside were rearranged in an infinite number of ways, at least one configuration would probably pass the test.

Both our ideas are generally the same, but with mine all we would need to do is start rearranging the chatter bot's code randomly until it passes the test (rather than try to build a bot with infinite information).

The question becomes: would processing power, and other variables, play a role, or is it just the code that we need to play with?
Badassed Hotrod


[ Parent ]
Question (none / 0) (#16)
by WorkingEmail on Sat Jul 10, 2004 at 02:43:29 PM EST

Does an AI need to be "Intelligent" to take over the world?


Hmm. (none / 1) (#23)
by spooky wookie on Sat Jul 10, 2004 at 06:28:13 PM EST

Would it be Artificial _Intelligence_ if it is not intelligent?


[ Parent ]
It would be an Intelligent Artifact (none / 0) (#31)
by Farq Q. Fenderson on Sun Jul 11, 2004 at 12:36:19 AM EST

Which is how I interpret "Artificial Intelligence." A lot of people seem to see it as intelligence that is artificial, in the sense that it's not real intelligence, but rather a convincing facade.

farq will not be coming back
[ Parent ]
Words. (none / 0) (#73)
by spooky wookie on Sun Jul 11, 2004 at 06:10:53 PM EST

I think the meaning of "Intelligent Artifact" and "Artificial Intelligence" is exactly the same.

Intelligence made by Humans.


[ Parent ]

As opposed to... (none / 0) (#119)
by DavidTC on Mon Jul 12, 2004 at 11:20:20 AM EST

...all those humans that are created by?

-David T. C.
Yes, my email address is real.
[ Parent ]
As opposed to (none / 0) (#139)
by spooky wookie on Mon Jul 12, 2004 at 04:20:38 PM EST

natural evolution.

[ Parent ]
Did humans need to be intelligent to take over... (none / 0) (#52)
by NoMoreNicksLeft on Sun Jul 11, 2004 at 10:32:10 AM EST

the world?

No.

--
Do not look directly into laser with remaining good eye.
[ Parent ]

But more intelligent than the competition (none / 0) (#69)
by horny smurf on Sun Jul 11, 2004 at 04:30:46 PM EST

Humans were more intelligent and resourceful than the other animals. Shouldn't the same criteria apply if AI were to overtake humans?

[ Parent ]
Well, think about it (none / 0) (#92)
by WorkingEmail on Mon Jul 12, 2004 at 01:22:52 AM EST

I believe it is entirely possible that an AI could seize power without bothering to learn language, or without bothering to say anything, or while failing a Turing Test.


[ Parent ]
No... (none / 1) (#85)
by bugmaster on Sun Jul 11, 2004 at 11:47:49 PM EST

...it just needs to be Google.
>|<*:=
[ Parent ]
Better attacks on the Turing Test (none / 2) (#20)
by The Solitaire on Sat Jul 10, 2004 at 06:04:22 PM EST

It's good to see someone giving the Turing Test a bit of a critical look. Too many people accept that it is valid simply because Turing (a genius, no doubt) came up with it. However, I don't think the main problem is with chatterbots.

There have been some very sophisticated attacks on the fundamental idea of the test over the years. The most famous of these is John Searle's "Chinese Room" thought experiment ("Minds, Brains, and Programs"; Behavioral and Brain Sciences, 1980), but there have been others. I highly recommend the paper "Troubles with Functionalism", by Ned Block. He puts forward two of what I consider to be devastating attacks on the power of the Turing Test (actually, the attacks are on Machine Functionalism, but the Turing Test is intimately tied to that theory of mind).

As for the "animal behaviour" test - I believe that it falls prey to the same arguments. We can produce a winner, without producing an intelligent machine. Also, an intelligent maching might not win. Hardly a foolproof test.

I need a new sig.

It's not foolproof, but... (none / 0) (#30)
by Farq Q. Fenderson on Sat Jul 10, 2004 at 11:56:25 PM EST

it's better suited to our ability to produce intelligence in machines. I think it offers better incentive, and allows people who are working on non-chatterbots to participate.

farq will not be coming back
[ Parent ]
Chinese Room (3.00 / 5) (#55)
by bugmaster on Sun Jul 11, 2004 at 10:52:17 AM EST

The Chinese Room thought experiment is just a logical fallacy. Consider a similar argument (that I just made up):
A mechanical watch is a device that can tell time. Let's say we wanted to actually build such a watch. We'd probably take a bunch of gears, springs, knobs, etc., and assemble them in some fashion. But, look at a single gear. Can it tell time ? No. Can a spring tell time ? No. Can a knob tell time ? No. Clearly, then, a collection of these things won't be able to tell time, either.
The problem here is that the whole can be greater than the sum of its parts. Searle is right in saying that the man in the room doesn't speak Chinese -- but we aren't testing the man, we're testing the entire system: the room, the rulebooks, the pencils, the in/out slots, etc. Similarly, we aren't just using a cog to tell time -- we have to look at the whole watch.

Searle defeats himself further when he proposes the second version of his experiment -- where the man in the room memorizes the rules, and learns how to pronounce all the funky symbols. Searle says that the man would pass the Chinese Turing Test, but "clearly", he still doesn't speak Chinese.

I don't know about Searle, but personally, if I met someone who speaks -- eloquently, natch -- to me in Chinese, I would naturally assume that he is able to understand Chinese. I don't have psychic powers -- I don't know what he has in his head, be it fleshy brains, memorized rulebooks, silicon, or whatever. Searle's problem here is that he is breaking the conditions of the test, by assuming his conclusion. In Searle's version of the experiment, we know a priori that the man cannot speak Chinese. Well, if we know that a priori, then there's no point of testing the subject, right ? The whole point of the Turing Test (or any test, for that matter) is that we don't know the answer ahead of time.
>|<*:=
[ Parent ]

I agree (none / 0) (#57)
by The Solitaire on Sun Jul 11, 2004 at 12:53:31 PM EST

I wasn't endorsing Searle's argument, just pointing out that it is out there. I personally think a combination of the systems and robot replies (as put forward in the peer commentary in the original argument) pretty much shut him down. Block's thought experiment is much better. However, regardless, no argument has been completely open and shut.

All of these criticisms of the Turing Test are what Daniel Dennett calls "intuition pumps". There is no deductive proof that the Turing Test is adequate, or inadequate. The idea is to produce a system that can pass the test, but is obviously non-intelligent. In Searle's case, it's the Chinese room; in Ned Block's, it's the Blockhead.

My position on the whole thing has always been that the ability to pass the Turing Test is neither a necessary nor a sufficient condition for intelligence. That being said, it is the best (and in fact only) test that we have.

I need a new sig.
[ Parent ]

Deductive Proof (none / 1) (#74)
by bugmaster on Sun Jul 11, 2004 at 06:14:59 PM EST

I'm not sure what you mean by "deductive proof", but, if you let me know what that is, I'll take a shot at it.

My position on the whole thing has always been that the ability to pass the Turing Test is neither a necessary nor a sufficient condition for intelligence.
I think that, sadly, the meaning of the word "intelligence" has become so vague that there can be no test that checks for it, because no one knows what it even means at all. The Turing Test just checks for behavior, not this mystical quality of intelligence -- which is sufficient as long as you accept materialism of some sort.

The idea is to produce a system that can pass the test, but is obviously non-intelligent. In Searle's case, it's the Chinese room, in Ned Block's, it's the Blockhead.
Ok, I know what the Chinese Room is (and, to me, it seems obviously intelligent... go figure), but what's this blockhead thing ? I've never heard of it.
>|<*:=
[ Parent ]
sadness (none / 2) (#77)
by gdanjo on Sun Jul 11, 2004 at 09:50:22 PM EST

I think that, sadly, the meaning of the word "intelligence" has become so vague that there can be no test that checks for it, because no one knows what it even means at all.
Aw, don't be sad! The meaning of the word "intelligence" is vague because our understanding of it is incomplete (vague). We used to think that intelligent behaviour was one thing; now we know that it is not this thing. We are (hopefully) converging on the definition, and until we figure out what it is, we won't know what its definition is.

The question is, should we be able to know the definition of intelligence before we know how to implement it (as in, we knew what flying was before we were able to fly)? Or are we helping define 'intelligence' implicitly in our search for what intelligence is (as in, we define what a computer was, and is, by building millions of them)?

Or, possibly, are these two questions asking the exact same thing? That to know something is to 'implement' it? So while we may have intuitively known what "flying" meant, we probably imagined feathers glued to our arms as the instantiation of what flying "means" - it was only when we discovered how to fly that we learned the limitations of flying, and how such a concept applies to us. Similarly, it's possible that, while we understand what intelligence "is", we still don't understand what an instantiation of intelligence will look like - and hence, we cannot define what "intelligence" is.

If you know what I mean. :-)

Dan ...
"Death - oh! fair and `guiling copesmate Death!
Be not a malais'd beggar; claim this bloody jester!"
-ToT
[ Parent ]

Still sad (none / 0) (#81)
by bugmaster on Sun Jul 11, 2004 at 11:23:54 PM EST

I'm still sad. You say:
The meaning of the word "intelligence" is vague because our understanding of it is incomplete (vague). We used to think that intelligent behaviour was one thing, now we know that it is not this thing... Or are we helping define 'intelligence' implicitly in our search for what intelligence is... ?... So while we may have intuitively known what "flying" meant, we probably imagined feathers glued to our arms as the instantiation of what flying "means"
I think this only reinforces my point. We know right away what "flying" means -- it's that thing that birds and bees can do, despite being heavy and made of meat (or squishy goo or whatever). We can look up at them and say, "ok, I want to make a machine that can do that". Similarly, we can define what a computer is -- a machine that can manipulate numbers (a lousy definition, but workable). With intelligence, we can't do that.

Personally, I think that we can define what intelligence means -- we are just afraid to. Because, as soon as you define it, you might find out that some people have more of it than others, and that would be discrimination, now wouldn't it ?
>|<*:=
[ Parent ]

The meanings of words (none / 2) (#89)
by The Solitaire on Mon Jul 12, 2004 at 12:32:30 AM EST

It's not that clear that we do know what flying is; at least not in the way philosophers want to know what intelligence is. Does a glider fly? What about someone in free fall in orbit above the planet? In deep space? Is a parachuter flying? When you push a concept in the right way, we often find that we don't in fact know what we mean.

Wittgenstein was famous for examining just this problem. He set out to figure out what the necessary and sufficient conditions were for something to be called a game. What he found is that he couldn't get the conditions to include all and only things that were typically called games. At every point, there were games that were not being classified as such, or non-games that were being classified as games. He eventually gave up and concluded that there is no such set.

He came up with a completely different understanding of concepts based on what he called "family resemblance". That is, there is no single property which all games share, nor is there some single game that contains all of the properties of a game. Rather, something is called a game by virtue of its resemblance to other games (but not to all other games).

Can we define intelligence? Sure we can, and people have. But, like Wittgenstein, I think we will quickly realize, each time we try, that our definition is in some way lacking. As for the discrimination thing, I don't think that is the problem. People have been giving IQ tests for years, trying to show that one person is smarter than another. And those tests have teeth! They help decide who gets into what school and so on. The problem is, such tests are known to be biased towards certain groups (this is not a case of being politically correct - they really are bad). If we were to simply define intelligence however we like, and use it in this way, we had better be certain that whatever we are measuring is something real; which is the same thing as saying we need to know that our definition is at least mostly correct.

I need a new sig.
[ Parent ]

Flying (none / 0) (#104)
by bugmaster on Mon Jul 12, 2004 at 06:52:15 AM EST

It's not that clear that we do know what flying is; at least not in the way philosophers want to know what intelligence is.
Yeah, I think you are saying something along the same lines as myself. We don't need philosophers to tell us what flying is -- we can just look up. Sure, not everyone can agree on the details (does a rock fly when you chuck it ? How about paper airplanes ?), but the basic idea can be derived from some sort of empirical observation.

Game-wise, I once read an amazing article about the subject, which delineated the boundaries between games, toys (SimCity is a toy), and puzzles -- tying the matter in with game design. It was one of the best pieces of writing I've ever seen... But I can't find the link now, and that makes me sad :-( If you know what I'm talking about, point me to the link :-)

Anyway, I think it seems intuitively obvious that some people are smarter than others -- not just anyone can be an Einstein or a Ramanujan or even Stephen Hawking. Clearly, there must be something that Newton had and I lacked -- otherwise you'd be learning about "Bugmaster's Third Law" in school by now. On a more down-to-earth level, there's certainly something that con artists possess and their victims lack -- otherwise, Nigerian spammers couldn't afford money for their dirty bandwidth. We don't know how that "something" works, but I think it would be disingenuous to deny that it (intelligence, smarts, critical thinking skills, whatever) exists.
>|<*:=
[ Parent ]

The genius factor (none / 1) (#128)
by The Solitaire on Mon Jul 12, 2004 at 02:14:43 PM EST

It's a good question as to what makes one person a genius, and one person a janitor. The problem is, whatever it is, it's not likely to be some single attribute called "intelligence". There are millions of factors that contribute to a person's abilities, some genetic, some environmental, and some social. Becoming a genius is more than just having a good head on your shoulders; in fact, this isn't likely to even be the most important biological factor. One thing that seems to separate those great minds you mentioned from the rest of us slobs is that they had perseverance. They were (are) consumed by what they did (do). There is no substitute for effort, and none of those you mentioned were known as slackers.

That being said, they all had different social backgrounds: Hawking being relatively affluent, but suffering a terrible congenital illness; Einstein growing up a Jew at a time when it was bad to be a Jew in Europe; and Ramanujan teaching himself mathematics in high school, despite living in conditions far from ideal for scholarly study. One thing that adversity seems to have taught all of these people is that hard work pays off. But no matter how hard you work, if the ability isn't there, you're going to have issues.

Still, I think there are millions of people with the same raw potential as any one of these figures. The problem is, so very few are able to turn that ability into something real. Moreover, even for those who do work hard, to be remembered as a "great" you sort of need to be in the right place at the right time. Nobody who discovers Newtonian mechanics now, no matter how "on their own" they do it, is going to be famous for it. Furthermore, scientific discoveries are rarely, if ever, produced in a vacuum. Newton once said (albeit sarcastically) "If I have seen further, it is by standing on the shoulders of giants." I've always thought the sentence should be "If I have seen further, it is by standing atop a pyramid of midgets." Each scientist, no matter how humble, adds slightly to the scientific whole; greats are just those who are able to synthesize that knowledge into a leap forward.

I need a new sig.
[ Parent ]

intelligence (none / 1) (#97)
by gdanjo on Mon Jul 12, 2004 at 02:25:58 AM EST

We know right away what "flying" means [...]
We know what your definition of "flying" means, but what about the definition of flying that existed in people that never flew? They may have associated flying with the very real act of "flapping one's wings" and not the abstract notion of "moving through space, supported only by air" (or however you define it).

The difference between the abstract notion of "flying" and the real action of "flapping one's wings" is kind of like the difference between the abstract notion of "intelligence" and the real actions of conversing, rationalising, reducing, expanding, debating ... in other words, thinking, then acting according to these thoughts.

We have yet to define what it is about "intelligence" that makes it intelligence (and not just cleverness) - even though we already know how to "flap our brains" to create it!

Dan ...
"Death - oh! fair and `guiling copesmate Death!
Be not a malais'd beggar; claim this bloody jester!"
-ToT
[ Parent ]

Blockhead (none / 0) (#87)
by The Solitaire on Mon Jul 12, 2004 at 12:16:58 AM EST

I'm not sure what you mean by "deductive proof", but, if you let me know what that is, I'll take a shot at it.

I mean the kind of proof that we use in mathematics and logic. The kind that works completely without any evidence, simply arguing from first principles. The Blockhead is one example of an attempted deductive proof against the Turing Test. However, it fails in its goal, since, for all we know, the Blockhead might be intelligent. We can't 100% rule it out a priori. That being said, I don't think it is intelligent, and I would imagine that most would agree. Even if it were a deductive proof of the inadequacy of the Turing Test, the matter still wouldn't be settled. So long as you don't think that the test is in some sense definitional of intelligence, but rather indicative of intelligence, the test remains pragmatically useful, even if it can sometimes be wrong.

As for the Blockhead argument itself, I would have linked the source article but there doesn't seem to be an online copy. The paper is called "Troubles with Functionalism"... I don't know the complete reference offhand, I think it has been published in a couple of places. The "blockhead" is one thought experiment intended to show problems with machine functionalism, and hence, the Turing Test.

In a nutshell, the Blockhead is simply a decision tree. A great, huge, dear-god, honking decision tree. In fact, it has, at every node in the tree, branches representing all possible utterances of English (or whatever language you choose). It engages in conversation simply by look-up from its table. Because it has been so ingeniously programmed, it can pass the Turing test with ease. However, there doesn't seem to be much in the way of intelligence there (since, after all, it's just a massive nested if-then statement).
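A toy sketch (my own, not from Block's paper) of the shape of the thing, with only a couple of branches filled in; the real thought experiment posits a tree covering every possible exchange:

# Conversation by pure lookup: every answer is stored, nothing is computed.
TREE = {
    "hello": {
        "_reply": "Hi there. What would you like to talk about?",
        "the weather": {"_reply": "It has been unseasonably warm, hasn't it?"},
        "the turing test": {"_reply": "Ah, Turing. A test of behaviour, not of mind."},
    },
}

def converse(utterances):
    node = TREE
    for text in utterances:
        node = node.get(text.lower())
        if node is None:
            return "I don't know."   # we walked off the pre-filled tree
        print(node["_reply"])

converse(["hello", "the turing test"])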

The idea here is to prove that the Turing Test is insufficient to show intelligence, since there is a logically possible entity that can pass it, without conforming to our tacit understanding of intelligence. The hidden assumption is that we do have such an understanding. That is, we might not be able to define intelligence in terms of necessary and sufficient conditions for something to be intelligent, but we usually know if something is intelligent or not. This is very much in the spirit of the Chinese Room argument, but the Blockhead just somehow seems... dumber.

There are a couple of responses that I feel I should tackle now, since almost everyone tries them. It doesn't matter that the Blockhead is not physically possible. Block is trying to knock down a theory of mind - one implicitly assumed by the Turing Test. If there is even one logically possible Turing Test passer that is not intelligent, it is enough to knock down the whole theory.

It is also not a (satisfactory) response to suggest that the Blockhead can only do language. We can just scale it up to include all human perceptual inputs, and the argument holds. This obviously increases the size of the Blockhead dramatically, but the increased size is still not a problem, since the number of nodes in the tree is still finite, hence logically possible.

I need a new sig.
[ Parent ]

Re: Blockhead (none / 0) (#102)
by bugmaster on Mon Jul 12, 2004 at 06:36:17 AM EST

I mean the kind of proof that we use in mathematics and logic. The kind that works completely without any evidence, simply arguing from first principles.
Derr. That's kind of harsh. Not many things can be proven in that way, besides claims such as "I exist" a la Descartes. Even science requires some assumptions: cause and effect, empiricism of some sort, etc.

You're right about the Blockhead; it has the same problem as the internalized Chinese Room, except it's even dumber, because it uses a crappy implementation :-)
>|<*:=
[ Parent ]

Granted (none / 0) (#131)
by The Solitaire on Mon Jul 12, 2004 at 02:25:31 PM EST

This is a very, very difficult standard of proof. But it is often the kind of proof that (non-empiricist) philosophers are most interested in. That being said, I don't think that it is important that there is no deductive proof of the adequacy or inadequacy of the Turing Test. It is better to think of it as a rough-and-ready test that might, sometimes, lead us astray.

I need a new sig.
[ Parent ]

Well... (none / 0) (#159)
by bugmaster on Tue Jul 13, 2004 at 01:55:21 AM EST

I agree, for the most part. However, your last sentence raises the question, "do you have a better test in mind ?" Or, a slightly easier question: "if the Turing Test led us astray, how could we find out ?".
>|<*:=
[ Parent ]
One possible counterargument... (none / 0) (#107)
by warrax on Mon Jul 12, 2004 at 07:49:12 AM EST

although I'll admit I haven't really thought about it that much.

It seems to me that the Blockhead argument assumes that the set of possible inputs/decisions is finite or at the very least countably infinite. If that is not the case, it becomes impossible to even enumerate all the possible decisions, much less construct an if-then-else-type thing out of the "tree".

Whether or not the number of possible inputs/decisions is in fact finite or countably infinite I will leave up to the reader.

-- "Guns don't kill people. I kill people."
[ Parent ]

True (none / 0) (#130)
by The Solitaire on Mon Jul 12, 2004 at 02:22:50 PM EST

But the number of sentences of a language is countably infinite, assuming that its grammar is productive. However, a human can only deal with sentences of a finite length, hence the number of possible sentences that Blockhead would have to cope with is finite.

As for "all sensory inputs", we have limited abilities to percieve the world. Our eyes have finite capacity to discern color, detail, etc. One need only figure out what the resolution of the human eye is on these dimensions, and make sure that each of those possibilities is enumerated. Anything that falls below the human capacity to discern can be ignored. The same goes for all of the other sense modalities.

I need a new sig.
[ Parent ]

Blockhead is impossible as described (none / 0) (#111)
by marinel on Mon Jul 12, 2004 at 09:03:53 AM EST

I believe that the idea of Blockhead is flawed conceptually, if I understand your explanation correctly. It is impossible to have a simple decision tree that would give a correct answer to every conceivable question without requiring intelligence. In order to produce the correct answer, some questions require some sort of intelligence that can't be encapsulated in a decision tree.

Simple example:
QuestionXN: If SubjA receives one coin from SubjB1, two coins from SubjB2, three coins from SubjB3, ..., N coins from SubjBN, how many coins does SubjA have?

If you take all possible QuestionXN questions (by substituting SubjA and SubjBN with all possible and qualified English words, and by substituting N with all possible natural numbers), you have to go beyond an if-then-else decision tree to code a Blockhead: you have to code some arithmetic and abstraction into it, and at this point you have given it some intelligence (which contradicts the definition of a Blockhead, right?).

Simply put, for any program to answer intelligibly every question posed, it requires some intelligence be coded in it because the infiniteness of possible questions requires some intelligent abstraction. The totality of all required abstractions would simply constitute what we call intelligence. So, I think that the idea of a Blockhead (as explained by you) is impossible.
--
Proud supporter of Students for an Orwellian Society
[ Parent ]

Simple example correction (none / 0) (#112)
by marinel on Mon Jul 12, 2004 at 09:34:34 AM EST

I just remembered that 1+2+..+N=N(N+1)/2, so if a calculator is an accepted Blockhead subsystem, a better example would be 1^k,2^k,...,N^k (where N is exactly N and not an arbitrary number) which would require the Blockhead to do deduction and return a formula as a function of N and k as an answer since not all Sum(N,k) formulas can be enumerated (due to the infiniteness of N and k). So these formulas would have to be computed by the Blockhead on the spot from all previous sums :-)
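For concreteness, a quick sketch (mine, purely illustrative) checking the first few closed forms against brute-force summation; the point is that a formula covers every N at once, where a finite table of canned answers cannot:

def power_sum(n, k):
    # Brute force: 1^k + 2^k + ... + n^k
    return sum(i ** k for i in range(1, n + 1))

closed_forms = {
    1: lambda n: n * (n + 1) // 2,
    2: lambda n: n * (n + 1) * (2 * n + 1) // 6,
    3: lambda n: (n * (n + 1) // 2) ** 2,
}

for k, formula in closed_forms.items():
    for n in (10, 100, 12345):
        assert power_sum(n, k) == formula(n)
print("closed forms agree with brute force for k = 1, 2, 3")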

Is that better?
--
Proud supporter of Students for an Orwellian Society
[ Parent ]

Finiteness (none / 0) (#129)
by The Solitaire on Mon Jul 12, 2004 at 02:18:03 PM EST

This was my original thought as well when I first heard the argument. The problem here is that Blockhead doesn't have to do this for all N. Rather, he just has to do it for N as large as the best human can deal with. Similarly, language is productive, and there are (theoretically) an infinite number of sentences. However, we can't understand sentences that are greater than, say, 200 words (200 is just an estimate... any number will do so long as it is finite).
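As a rough illustration of just how large, yet finite, that space is (the vocabulary size and length cap below are assumptions I picked for the sketch, not figures from anywhere):

# With a finite vocabulary and a cap on sentence length, the number of
# possible word sequences is finite, though astronomically large.
VOCABULARY = 10_000      # assumed vocabulary size
MAX_WORDS = 200          # assumed longest sentence a human can follow

total = sum(VOCABULARY ** n for n in range(1, MAX_WORDS + 1))
print("roughly 10^%d possible sequences" % (len(str(total)) - 1))   # about 10^800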

I need a new sig.
[ Parent ]

Blockhead's finiteness vs human mind infiniteness (none / 0) (#153)
by marinel on Mon Jul 12, 2004 at 11:51:18 PM EST

It is not true that humans can't deal with arbitrarily large numbers. I personally can deal with any number as long as it's in a compact form. If you give me that question with N=10^10^10^10... and k as a small integer, I can compute an answer as a function of N (regardless of how big N is). I do recognize that for values of k sufficiently large I would have to write a small program to help me, but that should be acceptable and irrelevant to my main drive.

As for sentences greater than 200 words, again one should be able to deal with arbitrary lengths, if they are constructed in the way I stated my question schema. It shouldn't be hard to follow that the question reduces to asking for the value of 1^k+2^k+...+N^k, so the question can be of arbitrary length.
--
Proud supporter of Students for an Orwellian Society
[ Parent ]

We are finite too (none / 0) (#158)
by bugmaster on Tue Jul 13, 2004 at 01:51:23 AM EST

It seems like you're defeating your own purpose (even though I sort of agreed with you on the other thread... heh). If you, a finite being, can come up with all these answers to math questions, then a finite Blockhead could, too.

I think the main question here is: can a decision tree of arbitrary yet finite length be substituted for a stateful system of some kind ? I think the answer is "yes", but I am too tired to prove it right now... maybe I'll take a shot at it tomorrow :-)
>|<*:=
[ Parent ]

We are finite but we are not Blockheads (none / 0) (#212)
by marinel on Tue Jul 13, 2004 at 11:42:13 PM EST

Maybe I don't understand your drive, but are you assuming that humans are somehow subject to Blockhead's limitations? What I mean is that I, a finite being, can infer, but if I understand correctly, Blockhead can't; thus the limitation of this Beast comes from its very definition (and thus it can't answer every math question as any human could -- given enough hints).
--
Proud supporter of Students for an Orwellian Society
[ Parent ]
True, but... (none / 0) (#214)
by bugmaster on Wed Jul 14, 2004 at 12:15:27 AM EST

I thought Blockhead was supposed to include (in its decision tree) every single question that could ever be asked of it ? The set of these questions is extremely large, but, quite possibly, not infinite -- because it's finite humans who are asking the questions. Also, it's acceptable for Blockhead to answer with things such as "I'm bored, leave me alone", and "dammit Jim, I am a theoretical thought experiment construct, not a mathematician !".

Don't get me wrong, a decision tree is still a stupid implementation. It can, however, be made to seem to infer things -- just create a branch that matches the correct conversation track ("what's the first number ? 2 ? and what's the second one ? 2 ? Then the answer is 4"), and away you go.

Anyway, it's not immediately apparent to me that a mega-huge decision tree couldn't emulate a state machine... Is there some classic proof for (or against) this ?
>|<*:=
[ Parent ]

Finiteness and (AI v. (emotion feigning|deceit)) (none / 0) (#232)
by marinel on Thu Jul 15, 2004 at 01:17:21 AM EST

With respect to the finiteness of humans and the q's they pose see my other post:
#231 Wrong: finite alphabet !(imply) finite questions.

With respect to Blockhead giving me copout answers, I would assume that is not allowed, otherwise I would not really test intelligence. With those type of answers allowed, I would be testing:

  • Blockhead's ability to feign emotions (which are human traits quite orthogonal to intelligence IMO), or worse yet,
  • its capability to deflect and deceive when it does not know the answer to a perfectly legit question, given that it's an automaton after all and it doesn't need to get tired or bored or annoyed, right?

--
Proud supporter of Students for an Orwellian Society
[ Parent ]
Finiteness in practice (none / 0) (#233)
by bugmaster on Thu Jul 15, 2004 at 05:28:23 AM EST

Ok, but remember: Blockhead doesn't need the ability to answer any possible question. It just needs the ability to answer any question that might be posed to it. This number is very large, but not infinite (finite humans, finite population, finite lifespan), despite the infinite alphabet.

With respect to Blockhead giving me copout answers, I would assume that is not allowed, otherwise I would not really test intelligence.
Well, I guess I'm not intelligent then, either. If you asked me "How does quantum teleportation work ?" or "What did you have for breakfast at this very day 10 years ago ?", my answer would be "Hell if I know".

Emotion-wise, you're right -- but all this time, I was assuming that Blockhead was "emulating" (careful: don't assume the conclusion) humanity, not intelligence specifically. I mean, there are plenty of stupid humans out there, after all.
>|<*:=
[ Parent ]

Not exactly... (none / 0) (#234)
by marinel on Thu Jul 15, 2004 at 04:58:04 PM EST

With regards to finiteness I urge you to read my other comment that I referred to (#231) in this post's grand-daddy.

With regards to the questions we ask Blockhead, you're misrepresenting my attack by presenting some impossible questions. I defined clearly my angle of attack and I would even dare call my questions approachable by any above-average intelligence human.

Come to think of it, my schema of questions can be reduced to a schema that simply necessitates induction, thus outing Blockhead for the blockhead that it really is.

You're right though about plenty of stupid humans out there, although I would like to believe that most of them are just lazy and/or untrained and that reality just creates a semblance of stupidity (ignorance != stupidity)...

IMO, if Blockhead only emulates an average or stupid human, it's not interesting at all from an AI POV, since it would be a lot easier to create a real Blockhead through run-of-the-mill natural reproduction.
--
Proud supporter of Students for an Orwellian Society
[ Parent ]

two counter points (none / 0) (#113)
by DrH0ffm4n on Mon Jul 12, 2004 at 10:23:55 AM EST

  1. Blockhead does not have to know the correct answer to every question. People don't. Blockhead can simply say "I don't know".
  2. You are positing an abstract method of generating questions. Strictly you are defining a question schema. The questions that the schema generates are still enumerable and hence so are the answers.


---
The face of a child can say it all, especially the mouth part of the face.

[ Parent ]
counter-counter points (none / 0) (#151)
by marinel on Mon Jul 12, 2004 at 11:39:09 PM EST

1. If the program decides at some point to throw me a "I don't know", I could explain to it how to derive the formulas from previous formulas, so it should be able to internalize that and apply it properly when I start asking again, or would learning be a function absent from the Blockhead program?

If learning is considered off-limits, then Blockhead does not prove intelligence to start with. If it does learn, then it has to go beyond if-then-else statements in order to internalize and apply what I'm trying to teach it.

2. This counter point is a logical fallacy. Of course my questions will be enumerable, but since Blockhead does not know which particular finite subset of an infinite set I will be asking it, it has to know how to answer appropriately all infinite possible questions (lest one of my enumerable questions could be outside its enumerable subset of canned answers).
--
Proud supporter of Students for an Orwellian Society
[ Parent ]

Son of counter-point (none / 0) (#157)
by bugmaster on Tue Jul 13, 2004 at 01:48:06 AM EST

Correct me if I'm wrong, but isn't Blockhead supposed to be able to answer all possible questions in the English language ? Yes, it would probably need an infinite decision tree for that, but that's not a problem, because this is a thought experiment. Give him an infinite decision tree, and then he doesn't even need learning, because he's effectively omniscient.

Of course, this also defeats the whole point of Blockhead. Since he (it ? heh) cannot be implemented in principle, its existence doesn't really say anything interesting about the Turing Test, one way or another.
>|<*:=
[ Parent ]

There's a snag (none / 0) (#170)
by DrH0ffm4n on Tue Jul 13, 2004 at 06:40:33 AM EST

How would you fill the tree? Is it pre-filled? Then it has answers to questions that we don't even know the answers to.

---
The face of a child can say it all, especially the mouth part of the face.

[ Parent ]
Sort of (none / 0) (#174)
by bugmaster on Tue Jul 13, 2004 at 07:58:48 AM EST

Technically, it only has to be as smart as a normal human, so it can have branches akin to "sorry, I don't know how to solve the Grand Unified Theory". But yes, it's prefilled. Well, as long as we're prefilling it, we might as well make it have answers to everything, what the heck... it's a thought experiment anyway :-)
>|<*:=
[ Parent ]
I concede I don't know Blockhead (none / 0) (#169)
by DrH0ffm4n on Tue Jul 13, 2004 at 06:39:09 AM EST

I must admit I've not even looked up what Blockhead was supposed to do, but arguments like this one always seem to assume that humans can answer all questions. They can't. Turing's halting problem suffers similar misinterpretations. Just because there is no universal algorithm for deciding whether any given algorithm halts does not mean that brains/minds cannot be modelled by a TM, since we cannot decide all such questions either.

My counterpoint 2 is not fallacious. Any definition of a potentially infinite set of questions will be informal enough that I can give an equally informal answer. Your enumerable infinite set of questions can be ordered in such a way (e.g. lexicographically) that it will always be possible to look up an answer to a question of finite length in finite space and time. You are not allowed to ask questions of actually transfinite length.

---
The face of a child can say it all, especially the mouth part of the face.

[ Parent ]

Assumptions, finiteness and recursiveness (none / 0) (#209)
by marinel on Tue Jul 13, 2004 at 11:14:52 PM EST

  1. My argument does not assume humans can answer all questions. It simply assumes that a human can derive some formulas given some hints.
  2. My questions are not transfinite. They can be random enough, though, to preclude a simple enumerative answer generator from providing a fool-proof solution to all of them with 100% certainty.
Maybe I do not understand Blockhead either, but if Blockhead is non-recursive in its design and my questions require recursiveness, as I believe they do, I did find its Achilles' heel, right?
--
Proud supporter of Students for an Orwellian Society
[ Parent ]
I don't think so (none / 0) (#225)
by DrH0ffm4n on Wed Jul 14, 2004 at 06:46:50 AM EST

Your questions must be in some predefined language? At the very least they must be written using some predefined finite alphabet? We can therefore lexicographically order all finite questions made using this alphabet. If you can generate (or pick) at random from this list, then blockhead can do the same with its answer.

---
The face of a child can say it all, especially the mouth part of the face.

[ Parent ]
Wrong: finite alphabet !(imply) finite questions (none / 0) (#231)
by marinel on Thu Jul 15, 2004 at 12:37:54 AM EST

You're assuming that a finite alphabet domain will generate a finite question codomain. Wrong!

My schema (or almost any half-assed schema) can generate an infinite number of questions (because it can combine those alphanumerics without limit), thus no finite set of canned answers can cover all the possible questions I can ask.

Let me simplify the gist of my drive. If I have an alphabet made of only one letter N, I can generate an infinite amount of words with it: N, NN, NNN, NNNN, ....

Here is a different angle: Let's say that Blockhead covers all the possible answers to my schema up to N=Max, where Max is an arbitrary finite number, what will Blockhead do if I ask it a question where I substituted N=Huge, where Huge>Max ?

I think your logic breaks down when you assume that, since I (as a human being) can ask only a finite number of questions, Blockhead can cover my questions, right? What you fail to realize is that even though my questions to it will be finite in number, my domain is infinite; thus Blockhead can't cover all my questions without a hiccup unless it's extremely lucky or it can read my mind.
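A small sketch of the gap I'm pointing at (my own illustration): a table pre-filled only up to some bound Max answers nothing beyond it, while a rule answers for any N, however huge:

MAX = 1000
table = {n: sum(range(1, n + 1)) for n in range(1, MAX + 1)}   # canned answers

def lookup_answer(n):
    return table.get(n)          # None once n exceeds the pre-filled bound

def rule_answer(n):
    return n * (n + 1) // 2      # works just as well for Huge > Max

huge = 10 ** 50
print(lookup_answer(huge))       # None -- the lookup table is stumped
print(rule_answer(huge))         # a correct answer, computed on the spot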

Do you follow ... doctor?
--
Proud supporter of Students for an Orwellian Society
[ Parent ]

I assumed no such thing (none / 0) (#244)
by DrH0ffm4n on Mon Jul 19, 2004 at 06:31:25 AM EST

Although your questions must necessarily be finite in length to be intelligible as questions. The set of such questions is denumerably infinite. That does not prevent a lexicographical ordering.

If you are capable of generating any question from your list, then blockhead is capable of storing the answer to that question. As soon as you allow yourself to increase your range hypothetically to Huge, so I can allow blockhead's range to increase to Huge too.

---
The face of a child can say it all, especially the mouth part of the face.

[ Parent ]

Responses (none / 0) (#245)
by marinel on Mon Jul 19, 2004 at 12:18:44 PM EST

First, maybe I don't understand what lexicographical ordering means, but I don't see why any ordering has a bearing on the fact that the boundless possibility of my questions will stump the Blockhead sooner or later, no matter how it sorts questions. Just because it can categorize my question does not necessarily imply that it can answer it, as I demonstrated in my previous post.

As to you modifying Blockhead to adjust to my Huge upper bound: then I'm not really testing Blockhead, but a moving target that has the benefit of a human adjusting it as we go. I thought the whole point of the exercise is that Blockhead can fool me (unaided) into believing it's an intelligent creature, right? You interfering with Blockhead after I start questioning it is like changing the rules of a game during play, isn't it?
--
Proud supporter of Students for an Orwellian Society
[ Parent ]

Lexicographical is a bit like alphabetical, except (none / 0) (#246)
by DrH0ffm4n on Tue Jul 20, 2004 at 08:35:13 AM EST

Lexicographical is a bit like alphabetical, except all sentences of length n come before all sentences of length n+1.
E.g. with the alphabet {a, b, c}:
  1. a
  2. b
  3. c
  4. aa
  5. ab
  6. ac
  7. ba
  8. bb
...
  12. cc
  13. aaa
  14. aab
etc...
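
(A minimal sketch of this ordering in Python, for the curious -- the three-letter alphabet and the cut-off at 14 entries are just illustrative choices, not anything from Blockhead's definition:)

from itertools import count, islice, product

def shortlex(alphabet):
    # Yield every finite string over `alphabet`, shortest strings first,
    # alphabetically within each length -- the ordering described above.
    for length in count(1):
        for letters in product(alphabet, repeat=length):
            yield "".join(letters)

# The first 14 entries over {a, b, c} match the numbered list above.
print(list(islice(shortlex("abc"), 14)))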

My point is not that I can change Blockhead to deal with the questions that you proffer but that we set the rules before you ask questions. You claim that you can ask a question of any finite length up to Huge. I'll just construct Blockhead to be able to answer any question up to that length.

Your counter is that hypothetically you are allowed to ask any question of finite length. There are denumerably many such questions. If you can ask any question from this denumerable set, then I insist that Blockhead can have denumerable storage, addressed by natural numbers. Any finite length question will always be some finite address into the storage.
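
(One possible encoding of that address, sketched in Python; the scheme and the names are illustrative, not anything from Block's paper:)

def address(question, alphabet):
    # Map a finite string to its 1-based position in the shortlex ordering
    # over `alphabet` -- i.e. the natural-number "address" at which
    # Blockhead could store the canned answer.
    k = len(alphabet)
    n = len(question)
    shorter = (k**n - k) // (k - 1)          # strings strictly shorter than this one
    within = 0
    for ch in question:                      # position among strings of the same length
        within = within * k + alphabet.index(ch)
    return shorter + within + 1

print(address("cc", "abc"))                  # 12, matching the listing above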

---
The face of a child can say it all, especially the mouth part of the face.

[ Parent ]

Back to the infiniteness of the human mind (none / 0) (#251)
by marinel on Wed Jul 21, 2004 at 05:45:17 PM EST

If I give you the number Huge a priori, it's like giving Blockhead the answer in advance. Precisely because Blockhead can't read my mind and cannot know what Huge is, I will stump it (if Huge is out of its bounds).

From a different angle: if I don't know what Huge is before I start the questioning, and I keep bumping it up super-exponentially as we go, it won't take me long to stump Blockhead.

More generally (and probably less obtusely), there is an infinite number of schemas similar to the one I concocted, so Blockhead would only work if we could feed it all schemas a priori. Since that's impossible, Blockhead will always have a soft spot, and as soon as someone asks it a valid question that was not covered in its clever programming, Blockhead will fail to answer it correctly.

The beauty of the human mind is that there is always the possibility of someone asking a completely new question (that was never asked before), and this question will be totally legit and answerable by quite a few human beings, yet impenetrable to a Blockhead that is limited only to questions of the past. Here is an example of a question that would have been a 99.99999% sure Blockhead stumper at this exact point in time, because even though all the concepts are known, their senseless yet logical combination as illustrated below is valid yet virtually a first:

If N0,N1,...,Nk are all prime numbers less than or equal to Nk, GCD and LCM have the usual mathematical meaning, and I define

pp(Nk)=N0*N1*...*Nk
rexp(N0,N1,...,Nk)=N0^N1^...^Nk
foo(N0,N1,...,Nk)=(GCD(N0,N1,...,Nk)+LCM(N0,N1,...,Nk))

then, in terms of pp(Nk), what is the value of

[foo(N0,N1,N2,...,Nk) - logNk(...logN2(logN1(logN0(rexp(N0,N1,N2,...,Nk))))...)]?

Any human who knows what GCD and LCM are, assuming [s]he is not lacking basic symbolic manipulation skills, should be able to answer it in less than one minute, yet Blockhead wouldn't have a clue unless it knew this question a priori, the chance of which (as I said) is virtually nil.
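
(For what it's worth, the intended answer appears to be pp(Nk) itself: the GCD of distinct primes is 1, their LCM is their product, and peeling the power tower with one log per prime leaves 1. A quick numerical check on a small instance -- assuming the tower and the nested logs are both meant right-to-left, and using floating-point logs, so "approximately" -- looks like this:)

from math import gcd, lcm, log, prod

primes = [2, 3, 5]                      # N0, N1, N2 -- a small instance

pp   = prod(primes)                     # N0*N1*...*Nk = 30
rexp = 2 ** 3 ** 5                      # N0^N1^...^Nk, right-associative
foo  = gcd(*primes) + lcm(*primes)      # GCD + LCM = 1 + 30

tower = float(rexp)
for p in primes:                        # peel one log per prime, base N0 first
    tower = log(tower, p)               # ends at log_Nk(Nk) = 1

print(foo - tower, pp)                  # both are (approximately) 30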

There are an infinite number of questions like that that I can pull out of my ass on the spot. It's just impossible to code questions not yet thought of into Blockhead's lame decision tree, unless we live in a universe in which backwards-in-time travel is possible. So, does it click now?


--
Proud supporter of Students for an Orwellian Society
[ Parent ]

But that's the whole point of blockhead? (none / 0) (#252)
by DrH0ffm4n on Thu Jul 22, 2004 at 08:37:01 AM EST

It is programmed with the answers to all of the questions you could ask. There is a finite limit to that. Even if you could type at 100 keypresses per minute, you could only type a question of finite length in your lifetime (at most ~4*10^9 characters). There are only finitely many such questions.

Your question has 237 characters from a possible 256-character alphabet. There are roughly 256^237 sentences of that length or less. That may look big, and is in fact far more than the number of electrons in the universe (~10^80). But we were talking hypothetically, right? So blockhead could have the answers to all questions of that many characters pre-programmed. Hypothetically.
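
(The arithmetic behind those two figures, for anyone who wants to check it; the 80-year typing lifetime is my own assumption, and ~10^80 is just the commonly quoted ballpark for electrons:)

# Characters one person could type in a lifetime at 100 keypresses/minute
# (assuming ~80 years of non-stop typing):
lifetime_chars = 100 * 60 * 24 * 365 * 80
print(lifetime_chars)                    # 4204800000 -- i.e. ~4*10^9

# Sentences of at most 237 characters over a 256-character alphabet:
sentences = sum(256 ** n for n in range(1, 238))
print(len(str(sentences)))               # 571 digits, i.e. ~10^570 >> ~10^80 electrons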

---
The face of a child can say it all, especially the mouth part of the face.

[ Parent ]

... ergo Blockhead is impossible in our universe (none / 0) (#253)
by marinel on Thu Jul 22, 2004 at 10:10:50 AM EST

Yes, I can ask only a finite number of questions, but because the pool of questions from which I draw is virtually unlimited, you can't really cover all the questions I can ask.

As to the limitation imposed by the alphabet, you're warmer, yet still so far away. Yes, I am limited by the alphabet and my capacity to construct reasonable-length questions, but, as you pointed out so clearly, Blockhead cannot be implemented in our puny universe, so it would have to exist in a parallel universe to be able to fool me into believing it is intelligent (ignoring the little problem with the "parallel" qualifier, since we're talking hypothetically).

Hypothetically, there are parallel universes, but we usually discard constructs orthogonal to our problem domain; otherwise we're just waxing poetic. Am I mistaken? Why should I care about a hypothetical construct that is impossible in our universe? Shouldn't reality impose some limitations on constructs that are meant for this world?

Note: Some say that the universe contains 10^130 electrons, which would still push Blockhead into a parallel universe even if my questions were limited to only 55 characters in length :-)


--
Proud supporter of Students for an Orwellian Society
[ Parent ]

If blockhead were just a lookup table (none / 0) (#255)
by DrH0ffm4n on Mon Jul 26, 2004 at 08:11:38 AM EST

Then, no, it's not physically possible using current technology. So it can only have been mooted as a hypothetical entity. In which case your argument fails. You are much more limited than a hypothetical blockhead.

I should maybe actually find out what blockhead was supposed to be.

---
The face of a child can say it all, especially the mouth part of the face.

[ Parent ]

Wrong again: blockhead is ALWAYS impossible (none / 0) (#256)
by marinel on Wed Jul 28, 2004 at 11:25:25 AM EST

In case you forgot the part about the number of electrons in the Universe, Blockhead would be impossible at any point in time. The impossibility has nothing to do with technology and everything to do with Blockhead's incorrigible limitations, which stem from its inefficient design.

And when you say that I am more limited than a hypothetical blockhead, you're talking nonsense, since you're comparing a real, efficient design (me) with an [impossible] primitive design (Blockhead). Hypothetically, the Tooth Fairy could be smarter than any human being, yet no one bothers with such nonsensical comparisons.
--
Proud supporter of Students for an Orwellian Society
[ Parent ]

maybe (none / 0) (#257)
by DrH0ffm4n on Fri Jul 30, 2004 at 06:48:16 AM EST

I've taken the unprecedented step of actually bothering to take a look at blockhead's inception. The definition does state limits on the length of possible questions. Advances in technology may make a blockhead possible that does not use electrons alone to store state. Who knows?

As it stands, it was always supposed to be measured against a real person, but was a hypothetical thought experiment. The point was that, logically, it is not impossible to imagine a blockhead as a very large (but finite) lookup table, but would you say it was intelligent?

---
The face of a child can say it all, especially the mouth part of the face.

[ Parent ]

Another major drawback (none / 0) (#258)
by DrH0ffm4n on Mon Aug 09, 2004 at 04:58:02 AM EST

If blockhead were a static lookup table, then it would always give the same response to any repeated input. This means it would never be capable of true self-reference - i.e. talking about the conversation it was having. That'd be a very quick way to spot such a beast.
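
(A toy illustration with a made-up two-entry table: keyed on the last input alone, it cannot even notice that it is repeating itself:)

# Hypothetical single-input lookup table: same input, same canned output.
table = {
    "How are you?": "Fine, thanks.",
    "You already said that.": "No I didn't.",
}

for line in ["How are you?", "How are you?", "How are you?"]:
    print(table.get(line, "I don't follow."))   # "Fine, thanks." three times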

---
The face of a child can say it all, especially the mouth part of the face.

[ Parent ]
Infinite Sentences (none / 0) (#248)
by The Solitaire on Tue Jul 20, 2004 at 05:54:52 PM EST

Wow... I never thought this discussion would last as long as it has! :)

Anyways, there is an answer to this problem. I don't think that making the Blockhead a moving target is the right answer. But, remember, the Blockhead only has to answer as well as a human would, in an ordinary conversation. So, by setting Huge to something like 10 million words, Blockhead already does much better than any human. Nobody understands 10 million word sentences. Nobody. (And if somebody does, just set Huge arbitrarily high so that no one can understand a sentence that long.)

So, if Blockhead got a sentence greater than 10 million words in length, it might say something like "Holy crap, are you still talking? I fell asleep a while ago, and you just kept going! You need to get out more." or "Dear God you need to learn the virtue of brevity there guy!" or "I seem to have this long grey beard that just sprouted while you were asking that last question (which I can't remember). So now I have to go shave!"

You get the idea. :)

I need a new sig.
[ Parent ]

You didn't read my schema did you? (none / 0) (#250)
by marinel on Wed Jul 21, 2004 at 03:49:37 PM EST

If you had read the thread, you would have found out that my schema does not generate long sentences. I need only change the N parameter to stump Blockhead. If I set N=rexp(10,10000), where rexp() means 10^10^10... 10000 times, then my question won't be very long, will it?

And I do not need to ask it too many questions; all I need is to ask it a carefully chosen random question to stump it, because it simply can't have an infinite memory and I'll run it out of bounds sooner or later :-)

As to Blockhead giving me copout answers, I already addressed it up the thread (see #232).
--
Proud supporter of Students for an Orwellian Society
[ Parent ]

Blockhead paper pointers and SHRDLU (none / 0) (#213)
by marinel on Wed Jul 14, 2004 at 12:03:13 AM EST

The actual "Troubles with Functionalism" paper you mention was written by Ned Block and published in 1978 in the "Minnesota Studies in the Philosophy of Science" (http://www.mcps.umn.edu/v9toc.html) and again in 1991 in  David M. Rosenthal's "The Nature of Mind", chap. 23, pp. 211-228 (http://www.amazon.com/exec/obidos/tg/detail-/0195046714).

As to Blockhead itself, I wonder if it borrowed from SHRDLU (http://en.wikipedia.org/wiki/SHRDLU).
--
Proud supporter of Students for an Orwellian Society
[ Parent ]

"The argument from ESP" (none / 0) (#61)
by trane on Sun Jul 11, 2004 at 01:38:35 PM EST

Interestingly, Turing in his paper "Computing Machinery and Intelligence" dealt with the "psychic powers" argument, as well as the "Argument from consciousness" and many others that are being invoked in this discussion. Here is a link to the original article: http://www.abelard.org/turpap/turpap.htm.

Here is The Argument from Extra-Sensory Perception:


  I assume that the reader is familiar with the idea of extra-sensory perception, and the meaning of the four items of it, viz. telepathy, clairvoyance, precognition and psycho-kinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming. It is very difficult to rearrange one's ideas so as to fit these new facts in. Once one has accepted them it does not seem a very big step to believe in ghosts and bogies. The idea that our bodies move simply according to the known laws of physics, together with some others not yet discovered but somewhat similar, would be one of the first to go.

This argument is to my mind quite a strong one. One can say in reply that many scientific theories seem to remain workable in practice, in spite of clashing with E.S.P.; that in fact one can get along very nicely if one forgets about it. This is rather cold comfort, and one fears that thinking is just the kind of phenomenon where E.S.P. may be especially relevant.

A more specific argument based on E.S.P. might run as follows:

"Let us play the imitation game, using as witnesses a man who is good as a telepathic receiver, and a digital computer. The interrogator can ask such questions as 'What suit does the card in my right hand belong to?' The man by telepathy or clairvoyance gives the right answer 130 times out of 400 cards. The machine can only guess at random, and perhaps gets 104 right, so the interrogator makes the right identification." There is an interesting possibility which opens here. Suppose the digital computer contains a random number generator. Then it will be natural to use this to decide what answer to give. But then the random number generator will be subject to the psycho-kinetic powers of the interrogator. Perhaps this psycho-kinesis might cause the machine to guess right more often than would be expected on a probability calculation, so that the interrogator {p.454} might still be unable to make the right identification. On the other hand, he might be able to guess right without any questioning, by clairvoyance. With E.S.P. anything may happen.

If telepathy is admitted it will be necessary to tighten our test up. The situation could be regarded as analogous to that which would occur if the interrogator were talking to himself and one of the competitors was listening with his ear to the wall. To put the competitors into a 'telepathy-proof room' would satisfy all requirements.

Of course, another option to the 'telepathy-proof room' would be to simply make your program understand and be able to respond to psychic phenomena, to the same degree that some real human does.

The fact that the founder of computer science considers telepathy in an academic paper, to me, shows the transcendent nature of his genius...

[ Parent ]

Telepathy (none / 0) (#72)
by bugmaster on Sun Jul 11, 2004 at 05:40:32 PM EST

Heh, I wasn't aware of this one -- thanks ! Somehow I expected better of Turing, but then again, no one's perfect. Turing's argument is still basically right: "if telepathy exists, computers may not possess it". That's a pretty big "if", though.
>|<*:=
[ Parent ]
Defending the Chinese Room (none / 0) (#124)
by klash on Mon Jul 12, 2004 at 12:56:28 PM EST

Algorithms are deterministic processes that depend only on their input. It strikes me as a bit mystical to claim that the method by which input and output are delivered to and from a non-intelligent algorithmic engine can combine with that engine to make intelligence.

It is true that a cog alone cannot tell time, but the components of a watch work together to tell time in an explainable way. You are claiming that mechanisms that demonstrably do not affect the system's output (the pencils, in/out slots) can inexplicably combine with the man's non-understanding of Chinese to make a whole system that does understand Chinese.

If intelligence is mystical, defying rational explanation or analysis, then there is no point in trying to create an objective standard for when it is present.

[ Parent ]

"Mystical" (none / 0) (#136)
by mrcsparker on Mon Jul 12, 2004 at 03:56:00 PM EST

Great word to choose. I tend to agree with Searle that much of the A.I. crowd leans more towards a religion than a science. Searle has actually answered this precise argument many times, on paper and during lectures.

[ Parent ]
Argument from ignorance ? (none / 0) (#156)
by bugmaster on Tue Jul 13, 2004 at 01:44:23 AM EST

It seems like you (along with Searle) are saying, "We don't know how to build an AI, therefore it can't be done". Well, no -- if we thought like that, we'd still be sitting in caves, eating raw meat, because we wouldn't have fire.

Furthermore, Searle himself grants that the rulebook in the Chinese Room actually works; the rules in it are non-mystical; instead, they were written by someone who knows what they're doing, and they do indeed produce a room that can (or, at least, can appear to) speak Chinese. True, we can't write such a rulebook today, but for the purposes of the thought experiment, we can assume that it exists.

In fact, some people would contend that the human brain is nothing more than an "algorithmic engine", though not in the "10 PRINT 'HELLO WORLD'" sense that we usually apply to algorithms. Human brains receive inputs from various sensors (eyes, ears, whatever); they learn and develop as they grow. I don't see anything that would, in principle, stop a machine from doing the same thing.

If you're concerned that an "algorithmic engine" does not have a random factor in it, that's not a problem -- just hook it up to a white noise generator, or random.org, and you're good to go.
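
(A minimal sketch of what "hooking it up" amounts to: the engine stays deterministic and the unpredictability is just another input. Here os.urandom stands in for the white noise generator; random.org would play the same role over HTTP:)

import os

def decide(options, noise):
    # Deterministic rule; the unpredictability lives entirely in `noise`.
    return options[int.from_bytes(noise, "big") % len(options)]

print(decide(["attack", "retreat", "negotiate"], os.urandom(4)))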

I stand by my original statement: the only way to deny a Turing AI human status is to believe in some sort of dualism -- which you're free to do, at any time, of course.
>|<*:=
[ Parent ]

no, just defending the Chinese room (none / 0) (#184)
by klash on Tue Jul 13, 2004 at 12:30:23 PM EST

It seems like you (along with Searle) are saying, "We don't know how to build an AI, therefore it can't be done". Well, no -- if we thought like that, we'd be still sitting in caves, eating raw meat, because we wouldn't have fire.

I'm not sure what you're talking about. It certainly doesn't have anything to do with what I wrote. The entirety of my post was devoted to refuting your claim that the Chinese room argument is a logical fallacy. I argued that you cannot accept that claim without resorting to mysticism.

Yes, Searle grants that the rulebook works. I am not intimately familiar with Searle, but I believe the reasoning is "even if you can get a machine to pass the Turing test, that's not proof of intelligence." So he takes a successful rulebook as a premise, because if you don't have a rulebook that fools a human then even Turing test believers won't argue that you have intelligence.

In fact, some people would contend that the human brain is nothing more than an "algorithmic engine", though not in the "10 PRINT 'HELLO WORLD'" sense that we usually apply to algorithms.

There is no other meaning for "algorithm." That's what the Church-Turing thesis is all about. No one has ever discovered any algorithmic model that exceeds the capabilities of a Turing machine. It is meaningless to say that the human brain is an "algorithmic engine" if you are not willing to accept that it is equivalent to a Turing machine.

Are you arguing that it's algorithmic because it takes inputs and produces outputs? So does a black box that decides the halting problem, yet we have no reason to believe that such a machine can be built. We have proved it can't be built with Turing machines, and no one is speculating that there is some kind of yet-undiscovered super algorithm that could decide it.

I stand by my original statement: the only way to deny a Turing AI human status is to believe in some sort of dualism -- which you're free to do, at any time, of course.

I think it's premature to talk about what we will grant or deny a Turing AI when no such thing is remotely close to existing. Without some major discovery, I cannot see any way that a computer is going to be able to stand up to simple queries like:

"Can you explain the difference between the words 'a' and 'the' in the English language?"

"Can you explain to me the irony in this sentence: 'We don't need no education.'"

Not to mention that the idea of "granting a Turing AI human status" sounds like the prelude to silly arguments like "pushing CTRL-C is murder!"

[ Parent ]

I think I should clear this up now. (none / 0) (#186)
by Farq Q. Fenderson on Tue Jul 13, 2004 at 01:06:16 PM EST

The problem with the Chinese/Mail Room experiment is not semantic, which is what this debate seems to have turned into. This thought experiment was designed to demonstrate that a set of rules can't think.

The fallacy here is essentially a straw man, the straw man being the example itself - which boils down to a lookup table. A lookup table is not intelligent; it's not even dynamic. It inherently lacks the ability to learn.

It doesn't demonstrate any more or any less than this. It does show that ELIZA, for example, is not intelligent.

To prove that intelligence cannot be simulated, one must first determine how intelligence works and properly show that it cannot be done. It would also be acceptable to demonstrate that some prerequisite condition for intelligence is unattainable in simulation. However, this also includes the burden of demonstrating that such a prerequisite is indeed a prerequisite.

farq will not be coming back
[ Parent ]

Lookup Table (none / 0) (#202)
by bugmaster on Tue Jul 13, 2004 at 05:22:31 PM EST

The fallacy here is essentially a straw man. The straw man being the example - which boils down to a lookup table. A lookup table is not intelligent, it's not even dynamic. It is inherently lacking the ability to learn.
You're absolutely right about that -- but didn't Searle amend his version of the Chinese Room to include some sort of a state machine (a read/write rulebook) ? I would maintain that, due to the systems reply, the Chinese Room 2.0 is still a fallacy.
>|<*:=
[ Parent ]
Chinese Room, round 2, fight (none / 0) (#201)
by bugmaster on Tue Jul 13, 2004 at 05:20:58 PM EST

I'm not sure what you're talking about. It certainly doesn't have anything to do with what I wrote.
But earlier you wrote:
You are claiming that mechanisms that demonstrably do not affect the system's output (the pencils, in/out slots) can inexplicably combine with the man's non-understanding of Chinese to make a whole system that does understand Chinese.
I assumed that you were saying, "we don't know how to make the Chinese room work". Did I misinterpret what you said ? If so, sorry.

You are absolutely right about Searle and Rulebooks: if the machine can't pass the Turing Test, it's just a dumb chatterbot, end of story.

There is no other meaning for "algorithm." That's what the Church-Turing thesis is all about.
You are right, of course -- I was merely pointing out that the human brain has a very different architecture as compared to a modern computer. But architecture is not all that important, as compared to functionality.
Are you arguing that it's algorithmic because it takes inputs and produces outputs? So does a black box that decides the halting problem, we have no reason to believe that such a machine can be built.
Derr. Why would a Turing AI need to solve the halting problem ? Humans can't do it either, after all. No, I think we mean the same thing when we say "algorithmic": reducible to a Turing Machine (or a Turing Machine with a noise generator as one of the inputs). Of course, some people believe that the brain is not algorithmic, but usually there's some sort of dualism involved, as I said. Note that, even if it turns out that the brain is not algorithmic in nature, nothing is stopping us from building a machine that works the same way -- assuming that the brain is still purely physical in nature.

Not to mention that the idea of "granting a Turing AI human status" sounds like the prelude to silly arguments like "pushing CTRL-C is murder!"
The standard reply to this one is as follows: Imagine that you got hit by a car, and lost your leg. But it's the year 3007, so it gets replaced by a robotic cyber-leg. Now you get hit by a car again, lose your arm, say hello to cyber-arm. You keep getting hit by cars, and losing pieces of your body, until all you have is your brain. *Bam* comes the bus; your brain is destroyed, but the all-powerful future doctors (hey, it's a thought experiment) manage to give you a prosthetic brain before any damage to your personality is done. At this point, would sending Ctrl-C to the computer that now functions as your brain be considered murder ? And how are you any different from an AI which acts just as human as you do ?

I see several possible answers:

  1. Sending Ctrl-C to your brain is murder; you are still human. You and the AI are equivalent from the social rights POV.
  2. You cease to be human when your brain is replaced, even if your personality (and, thus, behavior) remain intact. Eat Ctrl-C, cyber-zombie ! That goes for your AI buddy too.
  3. You are still human, but the AI is not, even though it has the same hardware and behaves just like a human would.
  4. Brain prostheses are impossible in principle, and thus this thought experiment is nonsense.
I don't see a way of justifying anything other than #1 without appealing to some sort of soul. But perhaps there's another choice I hadn't considered ?

You're right in saying that this discussion is pretty pointless, since Turing AI "is not even remotely close to existing". But philosophy is still fun :-)
>|<*:=
[ Parent ]

Deterministic? What? (none / 1) (#161)
by Farq Q. Fenderson on Tue Jul 13, 2004 at 03:03:26 AM EST

Algorithms are deterministic processes...

Where'd you get that nonsense? Okay, it's debatable, but I have the feeling that the determinists are just afraid of non-determinism.

Regardless of which camp you're in, arguing that algorithms must be deterministic gets you nowhere. Life clearly isn't deterministic; why should our programs be? Not all of them are. For the record, my own efforts are non-deterministic.

It strikes me as a bit mystical to claim that the method by which input and output are delivered to and from a non-intelligent algorithmic engine can combine with that engine to make intelligence.

It strikes me as a bit mystical to claim that the method by which forces act on non-intelligent matter can combine with that matter to make intelligence.

Intelligence is by no means a simple property. I don't really think it's mystical, though. There are some really tough concepts required to understand - to even begin to understand - a very simplified version of how animal intelligence works. And that's after subtracting most of the biology from the matter, which itself has a profound impact on intelligence.

farq will not be coming back
[ Parent ]

Um, yes, deterministic. (none / 1) (#185)
by klash on Tue Jul 13, 2004 at 01:04:15 PM EST

It's debatable whether algorithms are deterministic? Determinism is practically the definition of an algorithm. How would you even express non-determinism as part of an algorithm? Random numbers that you feed to an algorithm are still part of its input.
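
(That last point can be made concrete: pass the "random" numbers in as an argument and the procedure is plainly a function of its input -- identical input, identical output. A throwaway sketch, names mine:)

import random

def noisy_sum(xs, seed):
    rng = random.Random(seed)            # the "random" numbers are part of the input
    return sum(x + rng.random() for x in xs)

print(noisy_sum([1, 2, 3], seed=42) == noisy_sum([1, 2, 3], seed=42))   # True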

Life clearly isn't deterministic? There are those who believe it is. (I suspend judgment on this point.)

It strikes me as a bit mystical to claim that the method by which forces act on non-intelligent matter can combine with that matter to make intelligence.

Yes, it is a mystery, sort of like chemistry was a mystery to alchemists. Just because we turn up colored metal that fools the king doesn't mean we've created gold.

[ Parent ]

This might come as a shock to you... (none / 1) (#187)
by Farq Q. Fenderson on Tue Jul 13, 2004 at 01:18:40 PM EST

I've got about 68,500 hits here on nondeterministic programming.

I don't know what you call an algorithm, but what is generally referred to as an "algorithm" in common programmer parlance doesn't actually have to be deterministic. In fact, spreadsheet programs have to emulate determinism in case you make circular cell calculations - so that the program stops after 10,000 iterations instead of locking up your machine. That's all for what it's worth.
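
(Roughly what a spreadsheet's iterative-calculation setting does with a circular reference such as A1 = B1 + 1, B1 = A1 / 2: keep re-evaluating, but give up after a fixed number of passes instead of looping forever. The 10,000 cap below just mirrors the figure mentioned above; the cell formulas are my own toy example:)

# Two mutually referring "cells": A1 = B1 + 1, B1 = A1 / 2.
a1, b1 = 0.0, 0.0
for _ in range(10_000):                  # cap the iterations instead of hanging
    new_a1, new_b1 = b1 + 1, a1 / 2
    if (new_a1, new_b1) == (a1, b1):     # stop early once the values settle
        break
    a1, b1 = new_a1, new_b1

print(a1, b1)                            # converges to 2.0 and 1.0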

To serve my point, however, I'd like to state that there's a lot of code, particularly event-driven applications and simulation software (including biological simulations,) that are made with nondeterminism in mind.

Finally, you missed my reference to 'life.' Yes, I was referring to this thing we all happen to be afflicted with. I was also referring to Conway's clever diversion.

farq will not be coming back
[ Parent ]

confusing algorithms with "programming" (none / 1) (#189)
by klash on Tue Jul 13, 2004 at 02:00:15 PM EST

I said that algorithms are deterministic, and I meant algorithms in the Church-Turing sense. It appears that people use the term "nondeterministic programming" to refer to systems that specify what is to be computed without describing how to compute it. That's fine, but all such programs need deterministic algorithms that process them to actually do the work.

CPUs do not have any non-deterministic instructions. So how could you possibly run a non-deterministic algorithm on one?

[ Parent ]

You have a forest/trees problem. (none / 0) (#190)
by Farq Q. Fenderson on Tue Jul 13, 2004 at 02:08:45 PM EST

The whole is not the sum of its parts, especially when forces or processes are involved.

An algorithm is a collection of instructions. Those instructions can be arranged deterministically, but they don't have to be. It's a simple concept really.

For example, if I take a quantity of things that are all smaller than me and assemble enough of them, eventually I'll have something with a property that cannot be found in any of the components: the construction will be bigger. How can this be?

farq will not be coming back
[ Parent ]

You have a vagueness problem (none / 0) (#193)
by klash on Tue Jul 13, 2004 at 02:29:59 PM EST

Not once in this thread have I depended on the assertion "a whole is no more than the sum of its parts," so why bother refuting it?

Assertion: algorithms (in the Church-Turing sense) are deterministic.

Assertion: you cannot get a deterministic machine (a Turing machine) to do something non-deterministic, no matter how many abstractions you lay on top of it.

If you believe either assertion is false, then provide a counterexample, rather than mumbling about assembling small components or citing Google searches. If you do not respond by providing a non-deterministic algorithm, I will have little reason to reply.

[ Parent ]

I already gave you a damn fine example. (none / 0) (#195)
by Farq Q. Fenderson on Tue Jul 13, 2004 at 02:47:44 PM EST

Do you not remember the spreadsheet example? If you don't understand it, I'll explain it for you.

Or maybe that's not your problem; maybe your problem is that you don't think computer programs are necessarily algorithms. That's fine -- but then your comment about deterministic algorithms has little to do with actual programming, and is therefore irrelevant to the matter of AI in software. You know, computer programs.

As for the whole being more than the sum of its parts, perhaps I misunderstood you. Please explain what you meant by the following:

CPUs do not have any non-deterministic instructions. So how could you possibly run a non-deterministic algorithm on one?

Maybe you meant to imply something other than "if a given property is not found in the parts, it cannot be found in the whole." If so, what?

farq will not be coming back
[ Parent ]

does it contradict my assertions or not? (none / 0) (#199)
by klash on Tue Jul 13, 2004 at 04:01:26 PM EST

If you believe that your spreadsheet example contradicts either of my assertions, then please make a case for how it does so. Then we will have grounds on which to debate. All you have said so far is that spreadsheets "emulate determinism" by limiting the number of iterations.

If it does not contradict either assertion, then I am not really interested. Remember, what started this debate is that you called it "nonsense" to say that algorithms are deterministic. My original ground was only to maintain that Church-Turing algorithms are deterministic, but I was willing to expand my ground when you shifted to talking about "nondeterministic programming." However, I do not wish to expand it any further.

Maybe you meant to imply something other than if a given propery is not found in the part, it cannot be found in the whole. If so, what?

What I meant to imply is that non-deterministic processes cannot arise out of deterministic ones. That is a more specific claim than "the whole cannot be greater than the parts."

[ Parent ]

Tell you what, I'll let *you* decide. (none / 0) (#203)
by Farq Q. Fenderson on Tue Jul 13, 2004 at 05:25:04 PM EST

I know this is asking a lot, but please answer this question for me:

Is there an algorithm (in the sense that you use) that will, for a given (Turing-complete) Turing Machine, determine whether a specified set of instructions is an algorithm (again, in the sense that you use), or deterministic? (Either will do.)

farq will not be coming back
[ Parent ]

Boronx (none / 0) (#205)
by Boronx on Tue Jul 13, 2004 at 07:51:41 PM EST

It seems to me that the look-up table in the Chinese Room could be finite in length only if the length of the conversation was bounded. (You can fool all of the people some of the time...) One option to get around the Chinese Room is to add a requirement to any machine before it can pass the test: Number-of-possible-machine-states < Number-of-possible-conversations
Subspace
[ Parent ]
what then? (none / 0) (#26)
by gdanjo on Sat Jul 10, 2004 at 09:20:23 PM EST

Say we get a winner of this competition, that we find someone's 'Humansaurus' program to be the first artificially intelligent agent. What then?

The next logical question is: Can you use the same mechanisms, algorithms, techniques, etc. (whatever) to get a machine to talk to me? And to convince me that I'm talking to a human?

If the answer is no, then, regardless of the merits of the Turing test, it will not be considered intelligent, for there's something missing. And if it is considered intelligent, then someone will come out of the woodwork and say "hey, I used those techniques in X way back when and it wasn't considered intelligent" and we'll be back to where we started: arguing over semantics.

The problem with Turing's test is not in the test itself, but in the way scientists are attempting to answer it. We have known for a LONG time that science is really shit at answering the "big" questions (what is life, what is God, why are we here, etc., which have come to the insipid answers "not a proper question", "nothing", and "for no reason"), because it denies these questions by method.

Turing's test is, and will continue to be, just such a question and science will continue to do what it does best: deconstruct, deconstruct, and deconstruct. In the meantime, the thing that makes the question interesting will be lost.

(the thing that makes intelligence interesting is not its mechanics, but the higher-order behaviour of its mechanics, which is lost when you deconstruct).

Dan ...
"Death - oh! fair and `guiling copesmate Death!
Be not a malais'd beggar; claim this bloody jester!"
-ToT

The Fabric of Reality (none / 0) (#47)
by nusuth on Sun Jul 11, 2004 at 07:57:21 AM EST

http://www.amazon.com/exec/obidos/tg/detail/-/014027541X/qid=1089546510/sr=8-1/ref=pd_ka_1/002-2069367-2648863?v=glance&s=books&n=507846

You might feel you've lost time reading through the whole book, if you do that, but the introductory chapters about science being about explanations are well worth the time.

[ Parent ]

The "big questions" are stupid. (none / 1) (#127)
by handslikesnakes on Mon Jul 12, 2004 at 01:56:14 PM EST

What is life?

Do you mean "how do we distinguish life from non-life" or something more vague? If the former, then it's just a matter of word definition, not a particularly interesting problem.

What is God?

I think a better question to start with would be "does God exist". It would be a bit silly to try and determine properties of something that can't be detected.

Why are we here?

Because the processes of the universe led to our births. If you're speaking philosophically, then you're jumping to conclusions again. Why do you assume that there's a reason other than the one I just gave?

Of course, these answers probably sound insipid. Science doesn't deny these questions any more than it denies "what is karma" (ill defined and can't be shown to exist in the first place) or "for what purpose is snow here" - they're silly questions in the first place.



[ Parent ]
Intelligence (3.00 / 4) (#28)
by ZorbaTHut on Sat Jul 10, 2004 at 10:18:55 PM EST

I've always thought that the "artificial intelligence" people aren't really barking up the right tree. One of the defining features of human intelligence is that, over time, we can understand concepts and processes that we were never physically designed for - and that seems to nicely squash anyone designing intelligence for a particular purpose.

Another feature is a truly spectacular number of interactions between things we *do* know about - for example, I can look at a box of Kleenex and know what it's for, and I can look at a stapler and know what it's for, and - despite the fact that nobody has ever said "kleenex + stapler + tape = abstract art" - I can realize it. I've got something on the order of 30 unique items on my desk, all of which could be combined in any set, and it's pretty clear nobody could ever program the associations between them directly.

So a program has to be able to program itself, and this is what people *do* - as I remember, it's been shown conclusively that a newborn baby hasn't yet realized "just because I don't see an item doesn't mean it doesn't exist". And babies *learn* this, and they put together an entire view of the world from pretty much nothing.

I think this is what the first true AI is going to be - someone will come up with the bare small number of goals that can be used for teaching (perhaps curiosity and the basic happy/sad reaction will be enough), some learning engine that's capable of reprogramming itself, and then they'll spend eighteen years teaching it to be an adult. And no, I'm not joking.

Basically, I think anyone trying to write a program that will start speaking English and making reasonable decisions in new situations is going totally down the wrong path. We're going to need that "new situation learning" anyway, so why attempt to program English when it can just learn on its own? We're treating the symptoms of nonintelligence, not the root cause.

I agree, (none / 0) (#32)
by Farq Q. Fenderson on Sun Jul 11, 2004 at 12:42:16 AM EST

And I think the kinds of approaches that this kind of contest provides incentive for are a little more realistically oriented.

When I first started my own research, one of the first things I realized was that if I ever do produce something intelligent, it will take years and years to mature once started.

farq will not be coming back
[ Parent ]

Well that's the point of using a dictionary: (none / 0) (#37)
by trane on Sun Jul 11, 2004 at 02:58:55 AM EST

It's supposed to represent the knowledge that a native speaker (of English, if that's the language your chatbot is using) has accumulated over his lifetime...

Obviously a chatbot should be able to learn English from scratch, but it's a shortcut to use a dictionary and give it a head start.

Also, there are self-organizing schemes out there (Markov models, for example) that can generate text without the help of a dictionary or grammar...
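
(For instance, a word-level Markov chain of order 1 -- about the simplest such self-organizing scheme -- needs nothing but example text. A toy sketch, with an obviously made-up corpus:)

import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Learn which words follow which (order-1 Markov chain).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate by repeatedly sampling a plausible next word.
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word] or corpus)   # fall back if we hit a dead end
    output.append(word)
print(" ".join(output))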

[ Parent ]

More than dictionary meanings (none / 1) (#41)
by The Solitaire on Sun Jul 11, 2004 at 03:27:32 AM EST

Linguistic meaning is much more than dictionary meaning. Dictionaries are woefully inadequate for full-blown linguistic understanding. Typically, dictionaries concentrate on a handful of relations between terms (synonymy, hyponymy, meronymy, etc.). For complete linguistic understanding, you need much more than that.

For starters, you need to be able to construct sentence meaning from word meaning. This is probably the easiest of the tasks, and the one that has received the most attention. Next, you need to have a good understanding of figuration (which is itself generative). Assuming you have both of those, you're up to the level of (maybe) reading a cookbook or stereo instructions. To move beyond, to something like conversation, you need tremendously more. You need a complete theory of mind. You have to be able to predict what someone else is thinking from very subtle cues. Also, you need to know what is appropriate (are you talking to a five year old or a college professor?), and so on.

As for the Markov models, to think that our ability to communicate with language can be modelled by such an utterly brain-dead tool is laughable. Stochastic CFGs and the like are better, but almost certainly also inadequate.

I realize that you're not saying that a dictionary is all you need, nor a Markov model. And I agree with you that there is nothing wrong with using such resources to give an AI a head start on the learning process. I'm just concerned that there is a tendency in the AI community to think that "building a bigger database" is somehow a panacea for all of the really hard problems of AI.

I need a new sig.
[ Parent ]

Yeah I'm aware of the limitations of dictionaries (none / 0) (#43)
by trane on Sun Jul 11, 2004 at 04:35:24 AM EST

since I'm currently having major problems getting my attempt at a chatbot to generate relevant responses to user input...

Sure, Markov models and dictionaries and syntactic parsers aren't the ultimate answer; but for me, with my limited brain and resources, they're a place to start (as you say). The hope is that by making such first or second attempts, we will uncover new models or new solutions that will help us create more usable AI programs and, eventually, work up to attacking the "really hard" problems...

[ Parent ]

I Remember (none / 1) (#35)
by teece on Sun Jul 11, 2004 at 02:35:35 AM EST

A bit on NPR about a woman in Israel doing AI research who had exactly this opinion.  She felt it would be impossible to simply program intelligence, and had thus programmed a learning AI, and planned to spend the next decade teaching the AI to become truly intelligent.

I suspect that is how it will be done, if/when we create an AI.

I wonder how her project is going.  It's been a few years since I heard the story.

-- Hello_World.c, 17 Errors, 31 Warnings...
[ Parent ]

Misunderstanding of AI (3.00 / 4) (#38)
by The Solitaire on Sun Jul 11, 2004 at 03:13:32 AM EST

Many researchers in AI aren't trying to directly program intelligence. Right now, machine learning is the dominant paradigm. That being said, it's quite possible that we can't create a "full-blown" AI starting with a tabula rasa. Assuming we accept humans or animals as our paradigm examples of what constitutes intelligence (and I don't see that we have much choice), then we have to remember that there is a lot that is hard-wired into the brain. We can come up with all the learning algorithms we like, and it still might be the case that we don't have a thinking machine at the end. In all likelihood, achieving human-level intelligence in machines will require a combination of approaches. Certainly learning will be a major part (of that I have no doubt). However, it will likely require that we bias learning in certain ways, or we will never reach our goal.

Take, for example, language. Chimps are pretty bright creatures, and very similar to us in many ways. They can certainly learn a lot, but for some reason, they don't learn language (I know this is a controversial opinion, but I'm using a strict definition of language here, which includes generative grammar). If general learning procedures are "all there is" to being intelligent, chimps should be able to learn language, albeit perhaps somewhat more slowly. Now what is even more interesting is the ability with which human children learn language. It is simply not even an effort. Children from around the world learn it, and learn it regardless of their overall mental capacity (the inability to communicate linguistically seems to have relatively little to do with general intelligence). It really seems that there is a straightforward biological component that isn't learnable - or at least not easily learnable.

I need a new sig.
[ Parent ]

More Langauge Quirks (2.83 / 6) (#68)
by teece on Sun Jul 11, 2004 at 03:40:32 PM EST

Now what is even more interesting is the ability with which human children learn language. It is simply not even an effort. Children from around the world learn it, and learn it regardless of their overall mental capacity

Even more interesting, at least to me, is that when humans learn language, it has to be as children (I'm talking about learning language for the first time, not a second language).  Feral children, if they have not learned to speak by a certain age (early teens), are never able to speak at all in their lives.  They become exactly like chimps -- they can learn a very limited lexicon, and they have little to no grasp of syntax and grammar.  Like chimps.  It seems that the language acquisition algorithms in our brains switch off at a certain point.  Imagine if we could turn on that learning in a chimp?  Now that would be cool...

Which is fascinating.  If we could figure out what makes that happen in children, it would be very useful for AI and psychology/neuroscience.

-- Hello_World.c, 17 Errors, 31 Warnings...
[ Parent ]

Couldn't agree more [n/t] (none / 2) (#90)
by The Solitaire on Mon Jul 12, 2004 at 12:33:51 AM EST



I need a new sig.
[ Parent ]
Language learning... (none / 0) (#173)
by Chakotay on Tue Jul 13, 2004 at 07:39:14 AM EST

Speaking of language learning, I am completely blown away by the feat produced by my children. They're twins (a boy and a girl), and they were 6 years old when we moved from the Netherlands to France. They went to school, not speaking a single word of French. By Christmas, they spoke French pretty much like their peers. And now, two years later, they are even better at French (orthography and grammar) than their French peers! Our older son had lived in French Africa the first two years of his life, and when we moved to France he was 12 years old, and spoke no French at all either. However, he picked up French in only a few months, even though he had never really spoken French - his first two years of hearing French in Africa must have still been stored in some forgotten corner of his brain anyway...

I was able to learn Esperanto in just a few weeks, but that's not really exceptional, because I already did speak four languages...

--
Linux like wigwam. No windows, no gates, Apache inside.

[ Parent ]

Amazing (none / 0) (#183)
by teece on Tue Jul 13, 2004 at 12:25:55 PM EST

I wish I could learn new languages that easily.

I have heard that from the ages somewhere around 2-10 children are amazingly adept at learning new languages.  Less so, but still good, from 10-15.  It is winding down, but gradually and slowly from then on out.  It is actually harder for adults to learn new languages, and it gets a little harder with every passing year.  But never impossible or even insurmountable, it just takes more effort to activate those parts of your brain.

It is fascinating.  I would love it if we figured out how to turn that ability back on -- I really want to learn French, Spanish, Ancient Greek, Old English, Italian ....  Sigh.  I guess it's the hard way for me.

Sometimes I wonder if the real key to intelligence, as humans embody it, is language.  Nothing is more important in your life.  It gets completely entwined in your identity, your day to day function, indeed, language is almost impossible to separate from thought itself.

-- Hello_World.c, 17 Errors, 31 Warnings...
[ Parent ]

I second that. (none / 0) (#192)
by Kasreyn on Tue Jul 13, 2004 at 02:24:33 PM EST

I consider myself something of a master of the English language, but I've always wanted to speak other languages... and somehow never learned. There was an attempt to teach me French in primary school, but I couldn't handle the nasal accent and it frustrated me. In high school and college I took Spanish because I found its clipped phonetics to be much more aesthetic than French or German, and I can carry on a minimal conversation in it... as long as I speak in present tense... and have someone to help me with vocabulary every 10 words or so... >_<
Basically, I speak Spanish on a 3-year-old level.
Which is very frustrating considering all the effort I've put into it! But I just can't seem to get new Spanish grammar and vocabulary to *stick*. I study and I speak, but it's like flinging handfuls of drying mud against a wall: less of it will stick now that the mud isn't as wet. :-\ I've learned phrases and words in most other languages I've heard, but that of course doesn't count as speaking them.

So, condolences from another stifled wannabee-multilinguist.


-Kasreyn


"Extenuating circumstance to be mentioned on Judgement Day:
We never asked to be born in the first place."

R.I.P. Kurt. You will be missed.
[ Parent ]
forget 'tabula rasa' (none / 0) (#188)
by Wah on Tue Jul 13, 2004 at 01:24:22 PM EST

it's wrong.  Just pretty much flat out wrong as a conception for the foundation of what we call 'intelligence'.

Database schema is a much better two-word metaphor for how our minds come from the factory.  An empty, but very robust database schema.  

'Tabula rasa' is an outmoded conception from the last age.
--
umm, holding, holding...
[ Parent ]

Applications (2.60 / 5) (#56)
by bugmaster on Sun Jul 11, 2004 at 10:57:13 AM EST

Actually, a machine that speaks a human language would have many practical applications. It would be able to summarize a day's news (kind of like Google does now... hmm...), it would be able to actually translate documents from Japanese to English in a way that doesn't make people go "huh ?", it would be able to effectively filter spam from non-spam, etc. Any of these applications (plus all the others I haven't mentioned) would be worth a lot of money to whomever manages to implement it first. Of course, a computer program that can do all that might also start asking for 8-hour workdays and a pay raise, but that's a different story :-)
>|<*:=
[ Parent ]
I disagree (3.00 / 6) (#88)
by NoBeardPete on Mon Jul 12, 2004 at 12:31:00 AM EST

You assert that newborn babies do not yet realize that hidden objects still exist. You further assert that when they do develop this knowledge, it is because they have learned it. You further suggest that they learn this with a very general intelligence.

If you look at more recent research, I think you'll find it casts doubts on your first point. You're probably referring to studies where infants are shown a toy that they would like to play with, and then the toy is hidden under a cloth. At an early stage in development, the infant is unable to do anything about the situation, while a few months later they are easily able to move the cloth to get at the toy.

This used to be ascribed to the infant not realizing that the object still exists while hidden. Other interpretations might explain this behavior though. Perhaps infants understand that the toy still exists while hidden, but are unable to put together a plan to move the cloth to get at the toy they want.

More recent research has been done with different techniques to try to get inside the heads of infants. One common technique in studying infants is to time how long they spend looking at something. In general, the more surprising or unusual an event, the longer they will look at it. One experiment shows a toy to an infant, and then places the toy behind the screen. The screen is then removed, and the toy is either revealed, or is mysteriously gone. If an infant doesn't understand that the toy continued to exist while hidden, either outcome should be equally interesting. In fact, infants of very young ages seem to find the mysterious disappearance to be more interesting.

The second assertion I've discussed is that because infants develop a cognitive ability at some point, they must have learned it through experience. This does not logically follow. Infants are born with no teeth, and later develop teeth. Do we then say that they have learned teeth from their parents and older siblings? No, we recognize that their bodies continue to grow and develop more or less independently of the outside world. Likewise, certain cognitive skills may simply develop with time, as the infant's brain grows and matures. You can't argue that new cognitive skills are learned without some evidence that points specifically to that fact.

Even supposing that infants do, in fact, learn that objects continue to exist while hidden, it may be the type of learning that depends on very specific bits of brain designed specifically for this task. It is now pretty well accepted that language is learned, but that the human brain comes with a tremendous amount of intuitive understanding of language, with circuitry specifically designed for the purpose of learning language. The actual learning process is more akin to filling out a form than writing an essay; it is a question of checking a few boxes and writing in a few fields, not something that is constructed from scratch.

I think it would be surprising if knowledge of the physical world was not very similar. Almost all animals show some understanding that objects continue to exist while hidden, even animals with little in the way of general learning abilities. Are you suggesting that they all learn about the world through a general intelligence? It seems far more likely to me that they learn about the world through specialized bits of brain that are pre-programmed with a lot of implicit knowledge about how the world works. And it would be surprising indeed if most animals had this knowledge, but somehow humans did not.


Arrr, it be the infamous pirate, No Beard Pete!
[ Parent ]

Which God? (none / 2) (#96)
by gdanjo on Mon Jul 12, 2004 at 02:07:52 AM EST

One of the defining features of human intelligence is that, over time, we can understand concepts and processes that we were never physically designed for [...]
Which God told you the reasons for our design? Who blabbed?

I, for one, welcome our new God our(ver)lord.

Dan ...
"Death - oh! fair and `guiling copesmate Death!
Be not a malais'd beggar; claim this bloody jester!"
-ToT
[ Parent ]

Heh (none / 0) (#145)
by ZorbaTHut on Mon Jul 12, 2004 at 10:00:06 PM EST

Well, I don't believe in God. And it seems very unlikely we evolved to understand particle physics. So there you go. :)

If you do believe in God, things get murkier and I can't really answer. *shrug*

[ Parent ]

Very vague recollection (none / 2) (#34)
by GenerationY on Sun Jul 11, 2004 at 01:42:19 AM EST

of another alternative to the Turing test. The methodology went something like this: there was a game with certain rules to be played. Teams could either design a system to play the game like a human or design a system for detecting whether a human was playing or not. The definition of "human" was based on actual data collected by the people running the competition.

I haven't a clue about further details, but it was reported on Slashdot etc. and occurred sometime in early 2003, if memory serves.

Other Problems (none / 1) (#42)
by The Solitaire on Sun Jul 11, 2004 at 03:45:33 AM EST

I think the biggest problem here is not the Turing Test at all, but these silly academic competitions in general. Competitions, by their very nature, encourage winning, not learning. Such competitions are always massive oversimplifications of the real problem that they are trying to model, and teams take advantage of those oversimplifications to improve their standing - adding little to the body of real research in the field.

Furthermore, there is a real fear of failing in such competitions. Some teams simply do not compete if they do not stand a very good chance of winning. This is especially true of private companies, but also of universities and the like. The problem is that success in such competitions is directly related to your funding levels. You do well, you tend to get more money. You do poorly, you often get less.

What this means for research is that nobody will really put an idea out there that doesn't represent an improvement in performance. But sometimes such a step backwards is necessary, in order to really move forward. There is nothing wrong with incrementally improving an algorithm, but sometimes we need a whole new approach, and researchers shouldn't be afraid to take those kinds of chances.

Finally, if you are a new team in many of these competitions, you're pretty much screwed no matter what you do. Other teams have multi-year head starts tooling their system for the challenge. By this I don't mean they have really good AI algorithms, but rather, they've found all the loopholes and so on, to make everything work that much faster. The effects of such tooling often overshadow any real progress made by the other competitors.

The Loebner prize is perhaps less subject to some of this than most, simply because pretty much nobody (at least the last time I checked) in the AI community takes it seriously. Probably because it is too hard for anyone to actually win the big prize (well, that and it is infested with chatterbots, which don't even count as AI as far as I am concerned). There are loads of other competitions, though: Robocup, Trading Agents, TREC, and so on. All of them suffer from the problems I have mentioned here.

I think the long and the short of it is "science is not a game" - though it can certainly be fun.

I need a new sig.

Not always the case (none / 1) (#45)
by melia on Sun Jul 11, 2004 at 06:26:32 AM EST

In my experience, you're wrong. "Academic competitions" can often be the best (or even only) way to bring together disparate groups of researchers and focus them on some particular aspect of a problem. It's not "oversimplification", it's focus - and that helps to develop the field. Indeed, I would take the criticism of the Turing Test as being that it is rather too broad.

TREC is an excellent example, actually. In QA, there are a lot of entrants from all over the shop, some of them getting awful scores yet persisting year after year. The competition was shaken up a couple of years ago by a rather shallower approach than that taken by the NLP big guns. TREC is a brilliant forum for research in that field.
Disclaimer: All of the above is probably wrong
[ Parent ]

TREC (none / 1) (#58)
by The Solitaire on Sun Jul 11, 2004 at 01:05:02 PM EST

Interesting you should use TREC as it is the only one of these competitions with which I have personal experience. I suppose that my problem with the TREC competition is not so much the competition itself, but what it has done to the field of natural language processing in Information Retrieval. It seems that if you fail to use some variation on the vector-space model for IR tasks, you have to spend the first half of your paper justifying why you didn't just use vector space, since it usually gets better results. Now, I haven't had this happen at TREC itself, but I think this "performance at all costs" attitude is really quite related to the existence of such competitions.

Also, there is a tendency for people to think that the only IR tasks there are are the ones that TREC has as tracks. This is utterly false. My own research was on extremely high-precision IR (e.g. distinguishing the semantics of "theft of a vehicle" and "theft from a vehicle"), returning only tiny snippets. There is some overlap with question answering, but it's not quite the same task. Models like vector space do a terrible job of this kind of thing, and I believe that we may need to return to deeper syntactic/semantic analyses. Trying to get this view published in light of all the papers that state "natural language processing is pointless in IR" was pure hell.

So, I'm out. Done with IR, and never going back.

I need a new sig.
[ Parent ]

The Loebner Prize (none / 1) (#46)
by trane on Sun Jul 11, 2004 at 06:33:38 AM EST

is rather different than most, I think. Some of the competitors from past contests frequent the Robitron yahoo chat group, where the rules and contest conditions are not infrequently discussed, helping to alleviate the point about the advantage past competitors may have.

The quibble I have with that particular contest is that Turing, in his original article, proposed a five-minute test; the 2004 Loebner Prize rules more than double that. Also, Loebner requires that all chatbot entries use a character-by-character mode of output, rather than the far more common message-by-message mode used in IM programs.

Chatbots may not be hard AI, but they sure would be useful, and find many practical applications. For example I would rather deal with a capable, always courteous, always available, chat program, using the natural language of my choice, than most customer service representatives or automated telephone menus...

[ Parent ]

Chatterbot Inadequacy (none / 1) (#60)
by The Solitaire on Sun Jul 11, 2004 at 01:13:40 PM EST

I don't have a problem with the idea of chatterbots, and obviously, if someone wants to spend their time working on them, I wouldn't have any problem with that. I guess my problem is that I have serious doubts about the ability of a chatterbot to be "capable" as you put it. Language is an unbelievably complex domain, and chatterbots, which employ primarily template matching and rule-based generation, are just flatly not up to the task.

There are layers and layers of meaning associated with the simplest utterances. The lexical template matching that chatterbots do is just the very lowest such level. You need a syntax and a compositional (or worse non-compositional) semantics in order to bootstrap the words into a literal sentence meaning. You need to understand figuration (since this is in fact endemic in our real speech), pragmatics, and rhetorical structure (esp in the case of argument).

All of these tasks are really hard, and it is not clear that they can be modularized. That is, it might not be the case (and IMHO is not the case) that we can just implement syntax, and worry about the other levels later. It is quite possible, and in fact probable, that there are complex interactions between all of the levels. Language is anything but a computationally simple challenge.

I need a new sig.
[ Parent ]

Yes (none / 1) (#62)
by trane on Sun Jul 11, 2004 at 01:48:45 PM EST

But sometimes we proceed by baby steps...it's really hard to figure out relativity but we've made some progress over the centuries...hopefully it won't take as long to codify intelligent behavior.

I'm aware of the problems of natural language, I studied linguistics in grad school for a while. My main problem with current chatbots is that they don't seem to take into account much linguistic theory, so I'm trying to incorporate some of that into my feeble attempts...feature theory for example, and transformations would be great to incorporate too (so you could reduce surface structures to a "deep structure").

[ Parent ]

Templates (none / 1) (#99)
by ensignyu on Mon Jul 12, 2004 at 04:44:17 AM EST

Disclaimer: I know very little about AI and language, so this is all speculation and gut feelings. So don't cringe when I misuse words or generally screw things up :P

I think that the first step will be a winning chatterbot that's a hybrid of templates and real parsing. The biggest problem I've seen in the chat logs is that bots have a hard time staying on topic, or responding to something that they don't know about.

Well, so a smarter bot might look at an input sentence and identify the thematic roles. Or at least the subject. And then look that up in an ontological database and determine the topic. And then it would go back to the templates, but also knowing the topic so it doesn't say something completely random. It'd also help if the bots actually remembered what the user said two lines ago :P

Later on, bots will start moving towards a more full semantic analysis. Syntax may not be perfect, but I think it would already do much better than the purely template-based bots. I think that this can be separated from other parts of the system for most cases. Slang and figures of speech would go back to templates. Ambiguities could probably be resolved in many cases using statistics. The remaining cases that can't be handled without a knowledge database, we just hope that users don't stumble upon them, and that's good enough for now.

The systems above syntax, though, are probably very interconnected. So, I think the next-gen chatterbot is going to have yet another template system -- but at a higher level than words, like "<person> gave <a type of gift> to <family member>". It's still pretty simple, but it seems like something that would be a major pain to do in today's word-specific template languages.
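To make the shape of that concrete, here's a rough Python sketch of such a higher-level template (the slot names, the tiny lexicon and the example sentences are all invented for illustration -- nothing here is how any real bot works):

    # Hypothetical "semantic-level" template: slots are filled by word
    # classes rather than by literal words.
    import re

    # Toy lexicon mapping words to coarse semantic classes (illustrative only).
    LEXICON = {
        "mom": "FAMILY", "dad": "FAMILY", "sister": "FAMILY",
        "alice": "PERSON", "bob": "PERSON",
        "scarf": "GIFT", "book": "GIFT", "watch": "GIFT",
    }

    # One higher-level template: <PERSON> gave a <GIFT> to my <FAMILY>
    TEMPLATE = ["PERSON", "gave", "a", "GIFT", "to", "my", "FAMILY"]

    def classify(word):
        """Return the semantic class of a word, or the word itself."""
        return LEXICON.get(word.lower(), word.lower())

    def match(sentence):
        """Return slot bindings if the sentence fits the template, else None."""
        words = re.findall(r"[a-z']+", sentence.lower())
        if len(words) != len(TEMPLATE):
            return None
        bindings = {}
        for word, slot in zip(words, TEMPLATE):
            if slot.isupper():                 # a semantic slot
                if classify(word) != slot:
                    return None
                bindings[slot] = word
            elif word != slot:                 # a literal word in the template
                return None
        return bindings

    print(match("Alice gave a scarf to my sister"))
    # -> {'PERSON': 'alice', 'GIFT': 'scarf', 'FAMILY': 'sister'}
    print(match("Alice gave a scarf to my boss"))
    # -> None ("boss" is not in the FAMILY class)

The bot could then pick a canned response keyed off the matched slots instead of off literal words.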

We won't know until we try it. Hey, templates worked better than people expected (and currently, better than "smarter" systems that don't work), even if they're still really awful. Eliza has been known to fool some people.

[ Parent ]

AI and NLP (none / 0) (#132)
by The Solitaire on Mon Jul 12, 2004 at 02:35:56 PM EST

Disclaimer: I know very little about AI and language, so this is all speculation and gut feelings. So don't cringe when I misuse words or generally screw things up :P

Oh, I always cringe. It's a reflex action from marking student papers :) I wouldn't take me too seriously. Really, it's nice to see people who aren't steeped in AI lore considering these topics.

Well, so a smarter bot might look at an input sentence and identify the thematic roles. Or at least the subject. And then look that up in an ontological database and determine the topic. And then it would go back to the templates, but also knowing the topic so it doesn't say something completely random. It'd also help if the bots actually remembered what the user said two lines ago :P

You're at least partially describing many of the approaches to natural language understanding that are currently in use. The problem is, doing things like identifying thematic roles turns out to be much harder than you'd think. Template systems just won't do the trick; they simply do not have the computational power. This kind of problem is typical in NLP. Understanding language seems so simple to us, yet even the simplest tasks turn out to be quite difficult when we attempt to automate them. In my own research (computational rhetoric of science) we were held back by a problem that seems, at first glance (and I was taken in by the apparent simplicity), trivial. That problem was sentence segmentation. Yep, breaking a paragraph into sentences. Turns out that it is not at all straightforward when you're dealing with scientific articles. Go figure.
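For what it's worth, here's a toy illustration of why the "obvious" rule falls over on scientific prose (the example text is made up):

    # Naive sentence segmentation: split after every period followed by a
    # space. Works on toy text, falls apart on scientific writing, which is
    # full of abbreviations and numbered figures.
    import re

    text = ("Smith et al. (1998) measured the effect at approx. 3.5 mV. "
            "The result is shown in Fig. 2. It contradicts earlier work.")

    naive = re.split(r"(?<=\.)\s+", text)
    for fragment in naive:
        print(repr(fragment))
    # The splitter produces 6 "sentences" where a human reader sees 3:
    # "et al.", "approx." and "Fig." all end in periods that are not
    # sentence boundaries.

Getting this right takes abbreviation lists, context, and sometimes real parsing -- for three sentences of text.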

We won't know until we try it. Hey, templates worked better than people expected (and currently, better than "smarter" systems that don't work), even if they're still really awful. Eliza has been known to fool some people.

Yes, but frankly, most researchers aren't working on conversation engines a la the Loebner Prize. There is work going on in dialogue systems though, and really, they are much better than chatterbots. The thing is, they are usually confined to a domain, and not trying to be domain-general conversational engines.

I need a new sig.
[ Parent ]

Definitive Loebner article is (none / 2) (#48)
by johnny on Sun Jul 11, 2004 at 09:01:57 AM EST

here. It's informative and funny and well written. It also has the added benefit of being written by me.

There was a very similar thread here on K5 when this article came out, if anybody cares to google for it.

yr frn,
jrs
Get your free download of prizewinning novels Acts of the Apostles and Che

Good show. (none / 1) (#50)
by Farq Q. Fenderson on Sun Jul 11, 2004 at 09:53:32 AM EST

I read the first page and I just stopped to post this. I had no idea that people were that pissed off.

Does Minsky answer emails from random people?

farq will not be coming back
[ Parent ]

v. good (none / 1) (#105)
by fleece on Mon Jul 12, 2004 at 06:55:59 AM EST





I feel like some drunken crazed lunatic trying to outguess a cat ~ Louis Winthorpe III
[ Parent ]
Bah (3.00 / 12) (#53)
by bugmaster on Sun Jul 11, 2004 at 10:33:12 AM EST

"Artificial Intelligence" basically stands for "that thing we can't do yet". "Pathfinding ? Well, yes, but clearly that's not true AI. Image recognition ? We got that, but clearly that's not true AI. Machine translation ? Well, yes, but you see..." It keeps going on and on; as more and more problems that were thought to be the sole domain of humans are solved by machines, the holy grail of "True Artificial Intelligence (tm)" moves further and further away.

Which is actually why I like the Turing Test so much. The Turing Test does not actually test for any kind of intelligence or consciousness, because no one knows what the hell those things are, and besides, we don't have true AI yet, right ? Right. Anyway, the Turing Test only tests for behavior; it answers the question: "Does this thing on the other end of the teletype/email/AIM session behave like a human would, or close enough to it that no one can tell the difference ?" That's all it does; it doesn't actually prescribe any strategy for creating such a machine: you can use whatever method you wish, from rule-based hardcoding (good luck) to slow, gradual learning, to divine intervention.

The Turing Test would actually work less well for animals, because we can't think like animals (and we can think like humans, since we are humans). Thus, deciding whether the subject behaves like a bunny or whatever would be a very difficult task for anyone but an expert on bunnies.

You say that the Turing Test is biased because it requires the subject to speak a human language. I don't see it as a bias -- it's just the purpose of the test. By analogy, the SAT is designed to test your verbal and math skills -- it's not biased, it's just what it does. You can have other tests that test for your musical talent or ability to jump; just because they're specialized doesn't mean that they are biased.

I think a lot of the confusion about the Turing Test stems from the confusion about words such as "intelligence" or "consciousness". People can't agree on what these words mean, whether they can be applied to machines, etc. etc. My personal opinion on this is pretty simple: if you are faced with some machine that passes the Turing Test (i.e., this machine is capable of chatting on IRC/AIM/etc. as competently as a human being), then you have two choices:

  1. Accept the machine as a human being, capable of independent thought, and thus endowed with all kinds of social contracts and rights... or...
  2. Assert that, despite the apparently human behavior of the subject, it is still not a human being, and thus lacks independent thought, social contracts, rights, etc.
The Turing Test doesn't tell you which choice to pick -- that's up to you. Personally, I believe that the only valid reason for picking choice #2 is some sort of dualism -- a belief in immaterial souls, minds, or something to that extent. Seeing as I don't believe in any kind of dualism, I would personally pick #1. Still, this discussion is currently pretty hypothetical, since, right now, we are a long, long way off from creating a machine that can pass the Turing Test.
>|<*:=
picking #2 (none / 0) (#138)
by WorkingEmail on Mon Jul 12, 2004 at 04:14:27 PM EST

Ask yourself this: Would you be okay with the following future? The human species slowly dies out, but society and civilisation remains, practiced by robots who act human.

Personally, I would betray the humanist ideology without regret. The humanists can kiss my replacement's shiny metal ass.


[ Parent ]

Robots Rule (none / 1) (#155)
by bugmaster on Tue Jul 13, 2004 at 01:01:21 AM EST

Absolutely. What, you'd prefer us to die out with no replacement whatsoever ? Forget that.
>|<*:=
[ Parent ]
hehe (none / 0) (#160)
by WorkingEmail on Tue Jul 13, 2004 at 02:01:58 AM EST

The comparison was humans continuing to survive vs. robots alone continuing to survive.


[ Parent ]
Oh, in that case... (none / 1) (#164)
by bugmaster on Tue Jul 13, 2004 at 03:45:25 AM EST

...It actually doesn't matter. If the robots can pass the Turing Test en masse, then they are humans, just with fusion-powered titanium-clad bodies instead of the fleshy drippy ones we have now. Go robots !
>|<*:=
[ Parent ]
Minsky and me (none / 3) (#54)
by johnny on Sun Jul 11, 2004 at 10:35:32 AM EST

Prof. Minsky did not answer any of my emails when I was writing that story. I don't know Prof. Minsky, although I did chat with him briefly in 1981.

HOWEVER

And it's a really big however, a coincidence of mind-numbing proportion. . .

Shortly after that article appeared on Salon I interviewed for the job that I now still have, as technical writer for Laszlo. Where, by a kozmic irony, my boss is Oliver Steele, who is married to Margaret Minsky, Prof. Minsky's daughter. And Henry Minsky, Prof Minsky's son, is my friend and office mate. Actually Oliver and Margaret live in Prof Minsky's big-ole house in Brookline. And sometimes I go there for meetings. In fact, I was in Prof Minsky's house last Friday (although I've never seen him there). I have no idea whether Minsky read or even heard about my story. I expect it's all rather old business to him.

So to recap: shortly after I wrote in a national magazine that Marvin Minsky was Yosemite Sam and that I had always wanted to give him a pie in the face, I became personally and professionally involved with Minsky's son, daughter, and son-in-law; and his son-in-law is my boss.

So I don't think I'll be writing too many more articles about how I've always wanted to give Marvin Minsky a pie in the face. For that matter, I don't think I'll be writing too many more snide comments about the "triumphalists" of the 1980's MIT Media Lab -- who included, as it turns out, Margaret Minsky, who is my boss's wife, and more to the point, my (new) friend.

I'm proud of the article I wrote for Salon, and I think the story is funny. But, if you do write to Marvin Minsky, do me a favor: don't mention my name!

yr frn,
jrs
Get your free download of prizewinning novels Acts of the Apostles and Che

Did it hurt when you sold out? (nt :) (none / 1) (#63)
by trane on Sun Jul 11, 2004 at 01:54:21 PM EST



[ Parent ]
I wish I could sell out! (none / 1) (#64)
by johnny on Sun Jul 11, 2004 at 02:12:32 PM EST

My salary ain't making me rich. I don't have much time for Salon writing these days, and I have nothing much more to say about Loebner and Minsky and the Turing Test that I haven't said already. So it's not as if I had a career making fun of Marvin Minsky and now I've forsaken that career to suck on the Minsky teat if that's what you mean by "selling out".

I'm not at all embarrassed by what I wrote; I think it's great. But on the other hand it would be really unspeakably rude of me to wave it in the man's face when I'm a guest in his house, dontcha think?

yr frn,
jrs
Get your free download of prizewinning novels Acts of the Apostles and Che
[ Parent ]

Ooops (none / 1) (#65)
by johnny on Sun Jul 11, 2004 at 02:16:17 PM EST

This (parent comment) was supposed to be attached to another comment down in the thread. Sorry.

yr frn,
jrs
Get your free download of prizewinning novels Acts of the Apostles and Che
[ Parent ]
OMG! (none / 1) (#101)
by Farq Q. Fenderson on Mon Jul 12, 2004 at 05:13:27 AM EST

"I'm not used to being perceived as the most sober participant," Wallace told me, sounding apologetic.

Wow. That's one of the most telling statements I've ever read. It's pure gold.

farq will not be coming back
[ Parent ]

Was Minsky wrong to not respond? (none / 1) (#106)
by Tayssir John Gabbour on Mon Jul 12, 2004 at 07:29:15 AM EST

Should he have responded to you? I'm sure he didn't know if you'd just use a response as more fuel to burn him. From his perspective, you had every incentive for him to reply, and he had none. If you wanted to crucify him, his response couldn't possibly help. If you wanted to portray his side fairly, you probably would've done so regardless of his response.

[ Parent ]
three comments (none / 1) (#114)
by johnny on Mon Jul 12, 2004 at 10:27:32 AM EST

1) I don't think I was unfair to Prof. Minsky. I wrote the story as I saw it and understood the facts. I stand by what I wrote.

2) I don't think he had any obligation to respond to me. He's a busy guy, and he's a retired guy, and this Loebner story is old (and evidently irritating) news to him. I don't blame him at all for not responding.

3) There's also a good chance that he simply never saw my email or nuked it when it came in. I may simply have escaped his notice.

yr frn,
jrs
Get your free download of prizewinning novels Acts of the Apostles and Che
[ Parent ]

Well... (none / 0) (#217)
by Tayssir John Gabbour on Wed Jul 14, 2004 at 03:49:40 AM EST

Entirely possible I misread your article. It certainly is good from a literary perspective. Perhaps I homed in on comments like throwing a pie in Minsky's face, and took that more seriously than I should have.

[ Parent ]
There's another way of looking at it. (none / 1) (#116)
by Farq Q. Fenderson on Mon Jul 12, 2004 at 10:32:45 AM EST

I loathe receiving email from addresses at mit.edu.

Maybe I've just had bad luck, but if I'm initiating the conversation, I get something so terse it feels like a canned response. If I'm not initiating the conversation, I get a very lengthy and demanding message from someone who seems to think that the fact that their email address ends in @MIT.EDU (always in caps, always) makes them more intelligent in all fields. It's really strange to get messages like this from people who are asking for help.

It's like that secretary at Cisco I talked to: "I can't possibly have a network problem - I work at Cisco. What's passive mode anyway?" (Hot tip, if you're ever in the position of having to deal with someone who clearly understands far less than they seem to think, throw RFC's at them. It might not get them off your back, but it's really funny.)

It's that kind of thing that keeps me the hell away from MIT. Not that I think that everyone, or even most people, at MIT are like this, but to put it plainly, at least 90% of the people I've encountered that are like this are @MIT.EDU. I don't think it would be a very good environment for me.

In all seriousness, though, I briefly flirted with the idea of sending mail to Minsky, but he's probably too busy doing his own thing. I'll not assume that he'd write me off, but if he's too busy for an interview, he's surely too busy for me.

farq will not be coming back
[ Parent ]

Recommended Reading (none / 3) (#66)
by MichaelCrawford on Sun Jul 11, 2004 at 03:22:01 PM EST

Alan Turing: the Enigma by Andrew Hodges.

While it discusses Turing's work, it's more about the life of Turing. It helps one to understand who he really was.


-- Could you use my embedded systems development services?


Anyone seen Breaking the Code? (none / 1) (#71)
by epepke on Sun Jul 11, 2004 at 05:15:42 PM EST

It was a play based on that book. I saw it in London. I think it also played on Broadway.


The truth may be out there, but lies are inside your head.--Terry Pratchett


[ Parent ]
Seconded (none / 1) (#91)
by The Solitaire on Mon Jul 12, 2004 at 12:38:18 AM EST

Really a great book. Almost disturbingly detailed. I'm not sure how one goes about gathering that much information about a person.

I need a new sig.
[ Parent ]

The Turing Test's Great Weakness ... (2.25 / 4) (#67)
by Peahippo on Sun Jul 11, 2004 at 03:37:03 PM EST

... is that decreased intelligence on the tester's side can produce a false positive on the testee's side. As far as the average IMer is concerned, a 300-line Perl script is intelligent. "hw r u 2day?" "im fine omg!11! spld my drink im all wet" "lol b crfl n00b" Jesus Frickin' Christ.

I well know what intelligence is, by watching it in action. It is the combination of desire, ability and gumption to do virtually anything possible. As with the Turing Test, this definition avoids precision because intelligence is a self-referential thing, hence by nature very difficult to define. But it does the important thing that the Turing Test tries to do: conclude by example.

We should continue pushing the limit of information systems until we truly end up with some that are out of our control. Our losing control is a fine example of the system attempting to push the possible. Which is why AI as it is, has been such a failure; these academics are unable to accept giving up control. Once they finally let go, then AI will truly advance.


No Weakness (2.75 / 4) (#82)
by bugmaster on Sun Jul 11, 2004 at 11:28:16 PM EST

The Turing Test's Great Weakness ... is that decreased intelligence on the tester's side can produce a false positive on the testee's side. As far as the average IMer is concerned, a 300-line Perl script is intelligent.
Actually, that's the strength of the Turing Test. It doesn't measure any kind of intelligence (as in "the quality of being smart"); all it determines is whether the subject is human enough to be considered one of us, for all practical purposes. So, yes, one valid strategy for passing the test is to dumb down the population until even a coffee maker can outsmart us. Sad, but true.
>|<*:=
[ Parent ]
I agree (none / 0) (#137)
by WorkingEmail on Mon Jul 12, 2004 at 03:56:51 PM EST

Very well said. That definition of intelligence is probably even useful.

I agree on your last point, too. When we achieve good AI, we won't be able to understand everything about it.


[ Parent ]

Genetic Algorithms (none / 0) (#146)
by fn0rd on Mon Jul 12, 2004 at 10:32:47 PM EST

I'm no scientist, but in my dim understanding of these things it seems that researchers working with genetic algorithms are giving up some control in their pursuits.

This fatwa brought to you by the Agnostic Jihad
[ Parent ]

genetic algorithms (none / 1) (#148)
by DDS3 on Mon Jul 12, 2004 at 10:45:33 PM EST

are not about yielding control.  They are about getting a better return on time invested.  They are a tool, like any other.  The idea is, let a computer "genetically mutate" to find the most likely combination of states that best fits the solution you seek.  You then use those as your starting point.  A GA is only as good as its implementation and initial states, not to mention, how well the problem and goal have been defined.

GAs are an empowering technology and do not reflect loss of control in the least.  You should think of it as a step up from brute force experimentation.
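For anyone who hasn't seen one, here is a bare-bones sketch of the idea -- a toy GA evolving a bit string toward a target (the target, rates and population size are arbitrary illustration values, not anything from a real system):

    # Minimal genetic algorithm sketch: evolve a bit string to match a target.
    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

    def fitness(candidate):
        """Number of bits that agree with the target (higher is better)."""
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(candidate, rate=0.05):
        """Flip each bit with a small probability."""
        return [b ^ 1 if random.random() < rate else b for b in candidate]

    def crossover(a, b):
        """Single-point crossover between two parents."""
        point = random.randrange(1, len(a))
        return a[:point] + b[point:]

    def evolve(pop_size=30, generations=100):
        population = [[random.randint(0, 1) for _ in TARGET]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            if fitness(population[0]) == len(TARGET):
                break                                  # goal reached
            parents = population[: pop_size // 2]      # keep the fittest half
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    best = evolve()
    print(best, fitness(best), "out of", len(TARGET))

You define the fitness function, the operators and the stopping condition; the machine just searches faster than you could by hand.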


[ Parent ]

In The "Least"? (none / 1) (#154)
by Peahippo on Tue Jul 13, 2004 at 12:04:25 AM EST

Yes, GA involve a loss of control in the least, at least. If you had your handy-dandy end resulting algorithm, filled with a mish-mash of functions, or an end resulting math formula, like a 78-term polynomial, and somebody asked you "just tweak that a bit", you couldn't do it. You can't tweak the damned thing since you don't understand (i.e. don't control) its structure. You'd have to go through the evolution process again, placing some measure of control upon the GA itself.

I've seen genetic end results, functions and terms. Nobody knows what they mean, and studies to understand them are fruitless since you can invest .0001% of the effort in another round of evolution to fulfill your tweak.


[ Parent ]
Umm... (none / 0) (#204)
by DDS3 on Tue Jul 13, 2004 at 05:34:44 PM EST

...that's a measure of complexity and not a measure of control.  What you stated is that it's too complex to grasp in one's mind.  If that's the case, then either you use a tool to manage it or you don't have a solution.

That's, in no way, shape, or form, a loss of control.  A measure of complexity is completely orthogonal to a measure of control.  The simple fact that you CONTROL the mutation and validation of mutation means you have absolute control.  Like I said, it's a smarter, faster way than brute forcing your way to a solution.  Yet, according to you, brute force must mean that we have absolutely no control at all.  Which, of course, is absolutely false.

[ Parent ]

And yet ... (none / 0) (#211)
by Peahippo on Tue Jul 13, 2004 at 11:26:20 PM EST

... when your professor asks you to tweak that 78-term polynomial to adapt to a changed condition, you don't know what you're doing. That's a hard fact, Bub. You don't have control. The "control" you have is to run the evolutionary cycles all over again ... and that makes it hard to make a ROM to do the job. "Another condition? Hold it right there, I'll re-evolve another solution. Er ... when did you say your re-entry window was again?"


[ Parent ]
Question (none / 1) (#219)
by kerinsky on Wed Jul 14, 2004 at 04:08:02 AM EST

What's the difference between this and a student in CS 101 using a compiler? The professor asks you to change your program but you can't, because you don't understand the binary machine code. You can edit your source code and then recompile it, but this act seems pretty analogous to re-evolving an algorithm based on updated criteria to me.

-=-
Aconclusionissimplytheplacewhereyougottiredofthinking.
[ Parent ]
answer (none / 0) (#228)
by DDS3 on Wed Jul 14, 2004 at 09:26:10 AM EST

There is none.  He completely ignored the point and the reality that we must live in.  There are two possibilities here.  One, you understand the problem domain and therefore, can define and implement a GA to achieve your well defined goal.  Once the defined goal is achieved, it does not have to mean you have a solution, which is what the person above mistakenly believes.  From there, you will then go about validating and/or refining your results until they're crafted into a solution.  This final crafting may be by means of other brute force methods or even manual intervention.  In all cases, absolute control was maintained.  Again, this saves time over having to test every possible solution, by simply testing the most probable solutions.

The second possibility is that the subject matter is too complex for a human to put his head around without a tool.  Thusly, a tool (GA, among others) is used to develop, test, and validate a solution.  The solution may be so complex that it readily prevents "hand-tweaking", but that is not a loss of control.  No more than requiring CAD/CAM to make fine adjustments is a loss of control.  It's called using the best tool for the job.  In other words, a tool was used to allow a solution whereby one would otherwise not be available.

Now then, it's certainly possible that someone may attempt to confuse these two realities, but it only means that someone is over their head with the problem domain rather than it being a result of loss of control.

[ Parent ]

Source Code (none / 0) (#235)
by Peahippo on Fri Jul 16, 2004 at 01:19:09 AM EST

Something like programming is driven deterministically from source code. We compile and link because we don't speak machine code (the 1s and 0s issued to the processor).

As far as that goes, people with enough skill can change the binary codes in the resulting programs. But that's rare ... and yet is still deterministic. Playing with term 45 of my example 78-term polynomial is a purely random act; you don't know what will happen, and when you get a result, can't say with any certainty what the result will be if you continue to make changes.

The end results of GA generation are effectively random number generators when you try to change them. That's hardly having control of the process. Academics don't make their careers by admitting they have lost control of their creations. And that's essentially what I've been saying all along.

Luckily for people who disagree with this opinion, people can try out this "control" for themselves. The graphics ones are engaging, since people are intrigued by pictures (look at all the interest in Mandelbrot sets). Go ahead and grab some GA programs and spin some generators from a data set. Then take a look at the generators and start tweaking ... then come back and tell us how much control you had. The proof's in the puddin'.


[ Parent ]
still confused (none / 0) (#238)
by DDS3 on Fri Jul 16, 2004 at 12:44:44 PM EST

by the complexity of the issue...

You need to learn how humans solve complex problems.  It's called using the best tool for the job.

So, without a GA (by hand or brute force), how much control do you think you have with your 78-term polynomial?  Exactly.  So you have two options here.  One, no control and no solution because it's too complex to address by hand, or two, a GA which provides control and many possible "best" solutions.

So, how is having a solution, where one would otherwise not be available, a loss of control?  Each time, you sidestep the obvious.  Ignoring the facts isn't somehow going to validate your opinion.


[ Parent ]

Still Dancing ... (none / 0) (#242)
by Peahippo on Sat Jul 17, 2004 at 09:32:15 AM EST

... around the nub of my gist, that being: academia -- the current repository of AI research -- cannot take AI where it needs to go since they are unable to accept giving up control of the end results.

Academia equates control with knowing. AI's end complexity being definably unknowable -- leading to beyond-Human intelligence -- strongly suggests that academia is not willing to pursue the matter to that end. This seems true since AI has wallowed in the pathetic shallows for decades.

As for the supporting premises: I equate complexity with control, and control with knowing. I don't expect academic advocates to deal with that, or to validate it. Validation itself would carry more weight if it were linked to production ... which academia has miserably failed at.

In short, Sir, you're dismissed. Back to the lab for you; I hear tenure calling, and the men here wish to debate.


[ Parent ]
Hehe... (none / 0) (#243)
by DDS3 on Sat Jul 17, 2004 at 02:38:26 PM EST

Clearly I ruffled your feathers.  I'll take that as acknowledgement that you know I'm right.

Simple fact is, real life proves you wrong.  So, there is nothing to debate about.  I'll let you know when you can join the ranks of "men".  In the meantime, go back to chatting with your imaginary friends which you call "men".  Until such time, understand that you're clearly confused.


[ Parent ]

Comprehension vs Control (none / 0) (#239)
by kerinsky on Fri Jul 16, 2004 at 06:58:53 PM EST

You seem to be confusing comprehension, at all levels of detail, with control.  Most drivers have little to no comprehension of how their cars work, but they control them nonetheless.

Also GAs are no less deterministic than code from non-GA methods just because a pseudo-random seed has been introduced.  Likewise changing the machine code of normal programs without any understanding of how they work will result in them being "effectively random number generators".

You can argue that it's harder, or impossible, to understand the output of a GA enough to hand-modify it effectively, but that's not the point of the argument here.

To produce normal code someone writes a text file and feeds it into a compiler to get an executable.  The average coder knows little about how the compiler works, little about the machine code and little about how to change the machine code other than to re-write the text file and completely recompile.  I argue that even in this case it's reasonable to say that the coder has control over the output.

I'm not really that knowledgeable about GAs, but basically from what I can tell there are only two differences from the above case.  Firstly, you're writing code that will compile to an executable that will generate more code.  Secondly, you feed that executable a pseudo-random seed and then it goes off and produces the final GA.  Everything is deterministic; the variables are your original source, choice of compiler and pseudo-random seeds.  Through your original code you control the output by selecting fitness criteria, number of generations and competitors per generation.
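To make the determinism point concrete, here's a toy demonstration (the "search" is just a trivial hill-climb and every number in it is invented): the same seed gives bit-for-bit the same evolved result on every run, and a different seed gives a different one.

    # Reproducibility of an "evolved" result under a fixed pseudo-random seed.
    import random

    def evolve(seed, steps=200):
        rng = random.Random(seed)
        target = 0.73                       # arbitrary toy target
        x = rng.random()
        for _ in range(steps):
            candidate = x + rng.uniform(-0.1, 0.1)
            if abs(candidate - target) < abs(x - target):
                x = candidate               # keep the fitter candidate
        return x

    assert evolve(seed=42) == evolve(seed=42)   # same seed -> identical result
    print(evolve(seed=42), evolve(seed=7))      # different seeds -> (almost
                                                # surely) different results

The source, the seed and the parameters fully determine the output, which is exactly the sense of "control" at issue.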

Even if you don't understand the steps thoroughly you're still in control.  Likewise you don't need to know or understand whether your car is steer-by-wire or direct rack-and-pinion; you fiddle with the wheel and it goes where you want, ergo you are said to be in control.

-=-
Aconclusionissimplytheplacewhereyougottiredofthinking.
[ Parent ]

Unbearable Equations of Being (none / 0) (#241)
by Peahippo on Sat Jul 17, 2004 at 09:19:59 AM EST

You seem to be confusing comprehension, at all levels of detail, with control.

No, kerinsky, I'm equating comprehension with control. Which all comes back to my implied assertion that academia isn't going to be producing evolved AI since they are unable to comprehend the final product. Like I've said, "I don't know" and "we won't know" aren't acceptable terms in grant applications and theses. Academics are in the business of knowing, while engineers and technicians are in the business of doing. There's nothing wrong with that scheme, but it does illustrate that AI is currently in the wrong hands. To advance the practical art, either academics will have to learn to let go of controlling their production, or lose primacy to some motivated engineering team.


[ Parent ]
Hats (none / 0) (#249)
by kerinsky on Tue Jul 20, 2004 at 07:24:26 PM EST

I asked you what the difference was between someone working on GAs and a CS 101 student who is clueless about machine code. You responded with some crap about determinism, and mentioned that some arbitrarily smart person could do the work of a compiler, or hand edit the machine code, if they were so inclined.

You seem to be using two radically different definitions of the word control as you see fit. You didn't object to my implicit assertion that a CS1 student is in control of their program even if they know nothing about compilers beyond how to get them to execute, and nothing at all about the actual machine code that gets generated. Furthermore you didn't respond to my car driver analogy at all.

If you're going to use such a ridiculously constrained definition of control then please stand up and justify it.

I say someone is in control of something if they can reliably predict future behavior and states. If someone says "I'm going to get on that horse/car/plane and arrive at place X in the next 2 hours", then if they and their mode of transportation are in fact at place X 2 hours hence, they are in control of the horse/car/plane, even if only through intermediaries, so long of course as their mode of transportation would not have gone there anyway (i.e. an airliner that was already scheduled to take off).

The benefits here are that my definition is more intuitive, in line with common usage, doesn't waste two words on exactly the same concept as you seem to want to do, and is actually scientifically testable. You can do repeated testing with set thresholds for the accuracy of future predictions. Likewise for programs you can attempt to predict future output based on a specific input.

I propose that there is not, however, a scientific method for arbitrarily testing comprehension. Indeed, how could you test for comprehension of a subject matter so difficult that only one person in the world has the ability to comprehend it?

You assert that "academia isn't going to be producing evolved AI since they are unable to comprehend the final product". Now you seem to be saying that academics can't create something that they can't comprehend, which of course would violate your assertion that academics are creating GAs that they can't comprehend.

You speak of academics as if they have some fundamental unalterable characteristic. You say "Academics are in the business of knowing, while engineers and technicians are in the business of doing". I say being an academic or engineer is merely a label, a hat that you put on and take off when wanted. Nothing stops an academic from doing or an engineer from knowing. And this has nothing to do with the points in contention anyway.

-=-
Aconclusionissimplytheplacewhereyougottiredofthinking.
[ Parent ]
Control (none / 0) (#208)
by suquux on Tue Jul 13, 2004 at 10:48:16 PM EST

Which is why AI as it is, has been such a failure; these academics are unable to accept giving up control.

I disagree.

D. McDermott, "Artificial Intelligence Meets Natural Stupidity,"
in Mind Design: Philosophy, Psychology, Artificial Intelligence, J.Haugeland, editor, chapter 5, pp. 143-160, MIT Press, 1981.

This title, for instance, does not indicate that it was/is the case (though it is more about the US PhD production system in the field at that time). And I personally never felt that it was so.

CC.
All that we C or Scheme ...
[ Parent ]
Nope ... (none / 0) (#210)
by Peahippo on Tue Jul 13, 2004 at 11:19:00 PM EST

... I don't buy it. I don't buy the idea that some PhD candidate can have the phrase "and then something happens, I'm not sure, but I like the results" littered through an approved thesis. Furthermore, I find it hard to imagine that grants are thrown their way just to get more of the same results with uncertain production methods. Places where willy-nilly uncertainty is an accepted methodology are definably rare, and it's no surprise that little innovative AI has been produced therefrom.

Child-rearing goes along the same lines. To find success in raising a child, you have to let them find their own path, even if it's to their destruction, out of your control.


[ Parent ]
Disagree ... (none / 0) (#230)
by suquux on Wed Jul 14, 2004 at 08:08:16 PM EST

First, I was after the pun in the title which to me indicates that scientists are willing to give up control.

Besides, as a psychologist I am quite used to not having control, regardless of whether you are in the 'laboratory' (only a few variables, and with the issue that you may not generalize) or in the real world (where clients do what they want because they know better).

The grant issue that you raise was, in a way, McDermott's point: he criticized that, due to (call it) the PhD approval scheme, procedures and research issues that had already been touched were abandoned, as the quest was (maybe is) to come up with something 'new'; that is the 'stupidity' part.

If one translates willy-nilly uncertainty into fuzziness we plunge not only into AI but also into everything in between psychology and quantum physics. I would be interested to learn where there has been innovation with certain production methods; at least a clarification would help with regard to this aspect.

If your last sentence intends to convey e.g. the position of Summerhill I would agree. However, I realize that the peak non-authoritarian era of the late '60s has passed away there. Quote from their site: "The most important part is building and maintaining an environment where members of the community can co-exist in harmony and in personal freedom." Now this implies rules and control, though of course in a subtle, transparent way.

But there you have touched my pet area (I never got around to going deeper into it, but who knows). Child-rearing, which may be rephrased as natural-intelligence-building (more conservatively, habituation/socialisation), takes a considerable time, and nowhere so far have I found an acceptable theory of why this much time is evidently needed to reach a human level. AI systems, to my knowledge, have not yet been granted long-term socialization, which I believe to be a prerequisite for human-like (irrespective of implementation-level details aka heuristics, neural nets, brute force or even silicon vs. carbon) performance; 20Q might fit in, but is a rather 'mini-world' approach. Maybe Hofstadter works along these lines, quote: "The cognitive modeling at CRCC is based on the thesis that mental activity consists of many tiny independent events and that the seeming unity of a human mind is merely a consequence of the regularity of the statistics of such large collections of events."

Interestingly enough, it is said that you need about 15 years to achieve a level of Tai Chi enabling you to teach others, and for the time being I see this as -- coin it -- post-grad socialisation as a human being.

Well, so far for now. Thank you for giving me the chance to feel some nostalgia ;)

CC.
All that we C or Scheme ...
[ Parent ]
Turning test? (none / 3) (#75)
by United Fools on Sun Jul 11, 2004 at 06:49:32 PM EST

We turn round and round and round... we definitely pass this test in flying colors.
We are united, we are fools, and we are America!
Turing Test 9/11 (1.00 / 10) (#76)
by Hide Teh Hamster on Sun Jul 11, 2004 at 09:26:56 PM EST

ror, let that one run through your brain for a couple minutes. Rooor.


This revitalised kuro5hin thing, it reminds me very much of the new German Weimar Republic. Please don't let the dark cloud of National Socialism descend upon it again.
LOL 9/11 (1.00 / 6) (#79)
by Green Cup on Sun Jul 11, 2004 at 10:26:37 PM EST



[ Parent ]
YOU MUPPET (1.10 / 10) (#78)
by Green Cup on Sun Jul 11, 2004 at 10:26:16 PM EST

PLEASE TO PUT NO MORE THAN 2 PARAGRAPHS INTO THE ITNRO IN FUTURE. KTHX.

Heh, thanks. (none / 2) (#83)
by Farq Q. Fenderson on Sun Jul 11, 2004 at 11:34:05 PM EST

I get a kick out of being called a muppet.

I was thinking about this, but decided that what I have put there is more appropriate. The point of the artical is the proposal mentioned in the last paragraph.

farq will not be coming back
[ Parent ]

Fraggles live ! (2.50 / 4) (#84)
by bugmaster on Sun Jul 11, 2004 at 11:45:21 PM EST

You muppet ! "Article" is spelled "article".

Sorry, couldn't resist.
>|<*:=
[ Parent ]

Holy crap. (none / 3) (#86)
by Farq Q. Fenderson on Sun Jul 11, 2004 at 11:55:57 PM EST

I can't believe I wrote that.

I must have brain damage.

farq will not be coming back
[ Parent ]

Not intelligent! nah nah nah nah! (1.50 / 4) (#109)
by Kax on Mon Jul 12, 2004 at 08:58:09 AM EST

You thought you were smart, but you aren't!

nah nah nah!

[ Parent ]

new test (none / 2) (#94)
by modmans2ndcoming on Mon Jul 12, 2004 at 01:27:15 AM EST

Multiple levels of intelligence:

Type 1 AI: can it collect characteristics from its environment?

Type 2 AI: can it synthesize this new knowledge into what it knows and learn how to apply it appropriately?

Type 3 AI: can it accomplish everything in the previous levels and solve problems presented to it and fill in the gaps of missing knowledge with a guess that is contextually appropriate to the problem, and when it solves the problem can it then synthesize this knowledge that it guessed and found correct into what it knows and learn how to apply it appropriately?

Type 4 AI: can it accomplish everything in the previous levels and think randomly about its knowledge and environment while idle, being curious about activities that happen around it?

Type 5 AI: can it accomplish everything in prior levels and does it have a sense of what feelings are, can it apply these ideas to physical situations presented to it and respond appropriately?

Type 6 AI: can it accomplish all prior levels and create a holistic opinion on matters based on a sense of morality, can it form an opinion that defies the logical conclusions that it would come to based on the facts because of a moral imperative to do so?

Type 7 AI: can it accomplish all prior levels and think in an abstract philosophical way, can it be presented with a philosophical paradox and come to a conclusion based on its moral code?

Type 8 AI: can it accomplish all levels below AND rewrite its own code to change the way it thinks, can it bypass code placed in it to restrict its actions (3-laws-safe type code and all that)?

Type 9 AI: can it accomplish all levels below, and is it possible to replicate this AI and have it grow in a separate environment and have the end result be different based on that separate environment?

Type 10 AI: can it accomplish all levels below and self replicate and "raise" the new AI on its own, allowing it to introduce a different environment than it was given, which will result in a slightly different AI than the parent? Also, can the new AI decide to "rebel" and not act according to the "parent" AI's moral codes?

Type 11 AI: can it accomplish all levels below, and does it have an Ego; can its feelings be hurt, can it feel disappointment if its "child" AI does not listen while learning or turns out to not have the same moral code as the "parent" AI?

one GIANT LEAP (none / 3) (#95)
by gdanjo on Mon Jul 12, 2004 at 02:00:52 AM EST

Type 1 AI: can it collect characteristics from its environment?
Paraphrased: Can it observe? That's fairly innocuous and simple, and we could say that we already have this type of knowledge created artificially.

Type 2 AI: can it synthesize this new knowledge into what it knows and learn how to apply it appropriately?
Paraphrased: Does it know what knowledge is? Does it know when new knowledge is presented? Does it know how to store, apply, modify knowledge (which it would need to know how to do for it to "apply appropriately"), etc. etc.

In other words: Can it think?

All the types listed after this are solved in the solution presented to this one, and therefore all the later types are redundant.

"Intelligence" will be present when we can answer yes to this question (only).

Dan ...
"Death - oh! fair and `guiling copesmate Death!
Be not a malais'd beggar; claim this bloody jester!"
-ToT
[ Parent ]

no, all that is required for type 2 (none / 0) (#120)
by modmans2ndcoming on Mon Jul 12, 2004 at 11:29:52 AM EST

is that when it gets new information, it takes that information, puts it in a database and relates it to other information in the database. For the application part, it would need to be taught how to use that information, and would require a teacher to input the information into a meta file for that node in the database.

It can be done with simple methods. You jumped ahead too far and assumed too much for the step.

[ Parent ]

I guess we'll (someday?) never know (none / 0) (#149)
by gdanjo on Mon Jul 12, 2004 at 11:20:40 PM EST

is that when it gets new information, it takes that information, puts it in a database and relates it to other information in the database. For the application part, it would need to be taught how to use that information, and would require a teacher to input the information into a meta file for that node in the database.
Is that all?(!)

What you've described is the very problem that AI is trying to crack, and my point is that once the above is solved, the rest of your "types" of AI are solved also.

The reality is that we have not yet been able to build a system that can take all types of knowledge, categorise and store this knowledge (whether in a DB or something else), and reliably relate it to other knowledge. If we ever build such a machine, we could simply feed it the design of itself and it would become "self-aware", and could therefore create "mini me's" of itself to process and store yet more knowledge.

Alas, the problem is not solved as of yet.

My point is that, in my opinion, "intelligence" is tucked in somewhere between being able to "observe" and being able to "process" (between your type-1 and type-2 AI), but it is more than just "being able to observe AND being able to process."

It can be done with simple methods. You jumped ahead too far and assumed too much for the step.
Well, I think you assumed far too little when you listed a set of criteria as an intermediate step towards "real" AI, whereas I believe that "real" AI is within that intermediate step.

I guess we'll see in a few (dozen?) years.

Dan ...
"Death - oh! fair and `guiling copesmate Death!
Be not a malais'd beggar; claim this bloody jester!"
-ToT
[ Parent ]

I don't see it any diffrent than children (none / 0) (#180)
by modmans2ndcoming on Tue Jul 13, 2004 at 09:55:26 AM EST

If children are not raised correctly, they will lack the same abilities that you say AIs lack today. The only thing that AIs lack right now is the code segments to do the associations on their own, but AIs have already been built that can be told how to relate items (like a parent does with a child) and can then draw conclusions based on those associations. They cannot, however, take in the information on their own and relate it. I think that if we want an AI that is capable of assisting humans in their environment, we need to do work on optical and auditory identification and classification, which will require some hardware research as well, but for the simple autonomous-relation research, you can use the "senses" a computer has now.

[ Parent ]
progression towards confusion (none / 3) (#108)
by Shren on Mon Jul 12, 2004 at 08:31:42 AM EST

Have you read An Eternal Golden Braid, I wonder?

Type 4 AI: can it accomplish everything in the previous levels and think randomly about its knowledge and environment while idle, being curious about activities that happen around it?

What you are grabbing at somewhere around 4, but are missing, is: "can determine if problem A is a subproblem of problem B." If you can pull this one off, your other higher level types are practically trivial. Every day we try to solve one problem - "how do I survive and prosper" - and we spend our day solving subproblems of this one big question.

A human might stack cards and build card towers to develop its dexterity - perhaps it thinks that such an activity is relaxing. We can teach a computer to use a robotic arm to stack cards, but it can not determine how that behavior might help it survive. The computer's problem is "how do I keep power flowing in that cable", but it can stack cards for a thousand years and never even grasp that stacking cards is not the totality of the universe, because the computer has not yet been built that can determine the relationship between two arbitrary problems.

[ Parent ]

actualy, all I was getting at was (none / 0) (#121)
by modmans2ndcoming on Mon Jul 12, 2004 at 11:36:08 AM EST

can it use the spare cycles of the machine or GRID to observe its environment and just go over stuff it knows, like "That cup is red, red is a color"

The point of this step is to assure that it is actively observing its surroundings and can place information into its database without human interaction. The information will be relational and will allow it to make new connections in its database between the things it knows. I imagine that if it came across a totally new thing, it could be made to record it somehow and ask a human teacher what it is when it gets a chance.
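Something like this toy fact store captures the flavour of what I mean (everything in it -- the relation names, the facts -- is invented for illustration): triples go in, and "new connections" fall out by chaining through the relations.

    # Toy relational fact store: facts are (subject, relation, object) triples,
    # and new connections are found by following chains through the store.
    facts = set()

    def learn(subject, relation, obj):
        facts.add((subject, relation, obj))

    def related(subject, relation):
        """Everything directly related to `subject` via `relation`."""
        return {o for (s, r, o) in facts if s == subject and r == relation}

    # What the system "observes":
    learn("cup", "has-colour", "red")
    learn("red", "is-a", "colour")
    learn("colour", "is-a", "visual property")

    # A connection it was never told directly:
    for colour in related("cup", "has-colour"):
        for category in related(colour, "is-a"):
            print("cup ->", colour, "->", category)   # cup -> red -> colour

A real system would need the hard part on top -- deciding what to observe and which relations to file it under -- but the storage side really is this simple.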

You were putting too much into it.

[ Parent ]

steep slope (none / 0) (#118)
by DDS3 on Mon Jul 12, 2004 at 11:12:22 AM EST

Some questions for ya.

can it accomplish everything in the previous levels and solve problems presented to it and fill in the gaps of missing knowledge with a guess that is contextually appropriate to the problem, and when it solves the problem can it then synthesize this knowledge that it guessed and found correct into what it knows and learn how to apply it appropriately?

There are many different types of problem solving.  Not to mention, many levels of complexity.  Some humans spend their life attempting to solve a single problem.  What you're attempting to describe is abstract thought and the application of knowledge to both abstract and concrete problem solving.  I'm willing to say that the ability to demonstrate critical abstract skills is an important measure of intelligence, but I'm not willing to give a brownie point as phrased above.  In and of itself, it is too vague and could mean just about anything.  I could use that same metric to argue that humans are dumb as bricks.  ;)

can it accomplish everything in the previous levels and think randomly about its knowledge and environment while idle, being curious about activities that happen around it?

Is this really a measure of intelligence?  Random curiosity has been trivial for decades and yet I don't think it brings us closer to having true AI.  After all, many use this basic concept as a means of training their AI.

can it accomplish everything in prior levels and does it have a sense of what feelings are, can it apply these ideas to physical situations presented to it and respond appropriately?

Isn't this really nothing more than complex pattern matching?  Is pattern matching really a sign of AI?  AFAIK, there has already been some effort done here and the effort concentrated on facial and verbal pattern matching.  So, as I said, is it really a sign of intelligence or is it a sign of pattern matching of sensory input?


[ Parent ]

I don't think that it is too vague (none / 0) (#122)
by modmans2ndcoming on Mon Jul 12, 2004 at 11:48:52 AM EST

To reach this level it does not need to be able to do it in every possible situation right away. There are levels within this level, and just as with Piaget's stages, demonstrating one key aspect means that you have entered that level, not completed it.

Also, you are forgetting that this is a cumulative ranking: all prior levels must be satisfied. I could probably add a rubric that classifies the proficiency needed to consider it good enough to move on. Anyway, level 4 is important: up to this point the AI has not needed the ability to look and associate on its own, or to identify a new situation and then ask a question of a "teacher" about this new, unfamiliar information.

It is pattern matching, but is the ability to know what a human is feeling not intelligent? Humans pattern-match to gauge feeling all the time; why is it that when a computer does it, it becomes not a sign of intelligence but a sign of nothing more than pattern matching? Again, remember that this is cumulative, and the ability to recognize human emotions means that it can now add more knowledge and relate data in much deeper ways.

[ Parent ]

exactly (none / 0) (#147)
by DDS3 on Mon Jul 12, 2004 at 10:39:21 PM EST

To reach this level it does not need to be able to do it in every possible situation right away. There are levels within this level, and just as with Piaget, demonstrating one key aspect means that you have entered that level, not completed it.

I understand that.  Just the same, what is a key aspect?  What constitutes "completing" a level?  That is exactly my point.  It's far, far too vague to be of any value.

It is pattern matching, but is the ability to know what a human is feeling not intelligent? Humans pattern-match to gauge feeling all the time; why is it that when a computer does it, it becomes not a sign of intelligence but a sign of nothing more than pattern matching? Again, remember that this is cumulative, and the ability to recognize human emotions means that it can now add more knowledge and relate data in much deeper ways.

Well, again we're back to something vague.  Pattern matching is great, but where does the line start and stop?  How fuzzy and/or abstract does the pattern have to be to count as matched?  What if a fuzzy match differs from the expected results yet still matches the machine's abstract idea of another pattern?  See what I'm saying here?  These criteria are far too vague to help scientifically classify a level of AI.

And this is exactly my point.  After all, part of the problem of AI is defining what makes intelligence in terms OTHER than vague descriptions or abstractions.  Sure, WE may know intelligence when we see it, but how do you define it?  Much like pornography, the "standard" is probably fairly gray, which makes even the simple act of classifying it problematic.


[ Parent ]

Silly (none / 0) (#134)
by WorkingEmail on Mon Jul 12, 2004 at 03:50:24 PM EST

So your definition of intelligence is in fact humanity. Cute, but not very practical.


[ Parent ]
so (none / 0) (#143)
by modmans2ndcoming on Mon Jul 12, 2004 at 07:09:27 PM EST

Abstract thought and the ability to look at what you are and change it are not intelligence?

Sure, it has many qualities of human intelligence, but human intelligence is the only sort of intelligence we know that offers abstract thought and reason.

Why not create a machine that can think for itself, learn on its own, and have consideration for the feelings of the humans around it? That is how you are going to get a machine to fit into our lives seamlessly.

[ Parent ]

won't work (none / 0) (#141)
by projectpaperclip on Mon Jul 12, 2004 at 05:15:46 PM EST

Too many humans can't pass level 5 or 6; how do you expect any machine to?

[ Parent ]
just because too many humans cannot pass level 5 (none / 0) (#144)
by modmans2ndcoming on Mon Jul 12, 2004 at 07:11:30 PM EST

or 6 does not mean this can't work. Are you familiar with Kohlberg? Do a Wikipedia check on him if not. Most people never make it past level 2 on his scale. Intelligence is independent of what most humans have achieved.

[ Parent ]
Final level (none / 0) (#165)
by minamikuni on Tue Jul 13, 2004 at 04:59:03 AM EST

Type 12 AI: can it accomplish all levels below and send killer robots back in time to eliminate those who might threaten it?

:)

[ Parent ]

let's not try and test that one (none / 0) (#198)
by modmans2ndcoming on Tue Jul 13, 2004 at 03:48:16 PM EST

shall we :-)

[ Parent ]
Intelligence (2.40 / 5) (#110)
by smurf975 on Mon Jul 12, 2004 at 08:58:39 AM EST

I don't believe in intelligence in any being from human to virus. It's all about action and reaction based on information. Humans are able to hold a lot of information and base their actions and reactions on that information.

Other beings have better sensors (eyes, ears) than humans but limited storage of information. Humans went for bad sensors and good information storage and it seems to work.

However, (none / 0) (#115)
by Farq Q. Fenderson on Mon Jul 12, 2004 at 10:31:56 AM EST

What you describe is not the extent of human mental faculties. Humans not only learn data, they learn processes as well.

It's not just a matter of learning what reaction to make to what stimulus, or learning a reaction itself, but also a matter of learning things like logic. The human brain isn't set up to handle logic from the get-go; it has to be learned.

I think it's pretty astounding that (to a greater or lesser extent, and punctuated by personality traits) everyone learns logic. Even animals do, though the complexity of their logic may be very limited in most cases.

I don't think I can put it eloquently enough, and frankly I wasn't prepared to get into the intelligence debate, though I do view it as something broader than most people do. For example, is a hagfish intelligent? - my answer is this: does it learn to survive? The answer is demonstrably 'yes', and so I consider a hagfish intelligent.

farq will not be coming back
[ Parent ]

I think (none / 0) (#171)
by smurf975 on Tue Jul 13, 2004 at 06:43:27 AM EST

I think you should get into an intelligence debate, as you need to establish what intelligence is to be able to build an intelligent device. For me, everything or anything alive that can survive is intelligent. Not in the human/mammal sense, but as a species they are intelligent. People develop vaccines against diseases, and viruses and bacteria adapt to this as a species. Maybe they can't learn to develop a counter-vaccine, but they surely adapt to new influences in their environment.

So I think basically you can say that something able to adapt to unforeseen changes in its environment is intelligent; the faster (better) it does it, the smarter it is. Adapting is like learning to me, though some species learn within their lifetimes and others over generations, storing the lessons learned in genetic code. For example, if you put the wrong data into a computer program it crashes. A form of intelligence would be to handle a malfunctioning software routine, establish the cause of the problem, and do something to stop it. Simple problem-solving routines, I guess.

A real-world example would be the problem that I have with my NVIDIA drivers. If I run seti@home in screensaver mode for a couple of hours, 80% of the time Windows blue-screens due to an NVIDIA driver. I think the OS should not crash in such situations. It should unload the driver, load a generic driver, and continue.

So at this point I would say that my computer would have intelligence if it had or could do:

  1. Pattern recognition
  2. Problem solving
By pattern recognition I mean seeing that a recurring event could be wanted or not. For example, take a slashdotting or a DDoS attack. An AI should be able to see a pattern to determine whether your network is under attack or just getting a lot of traffic, and then solve the problem by serving only static pages, or by denying the offending IP addresses in the case of a DDoS.

Basically do what a sysadmin/networkadmin does.
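Purely as an illustration of that sysadmin-style pattern matching (the thresholds and names below are made up, not any real tool), a crude rule might look at how concentrated a traffic spike is per source IP: an attack tends to come from a few sources hammering away, a slashdotting from many distinct visitors each making a few requests.

    from collections import Counter

    def classify_window(request_ips, spike_threshold=1000, concentration=0.5):
        """Label one window of source IPs as 'normal', 'slashdotting' or 'ddos'."""
        total = len(request_ips)
        if total < spike_threshold:
            return "normal", []
        top10 = Counter(request_ips).most_common(10)
        # Attack traffic tends to be concentrated in a few sources; a slashdotting
        # is many distinct visitors each making only a handful of requests.
        if sum(n for _, n in top10) / total >= concentration:
            return "ddos", [ip for ip, _ in top10]   # candidates for denial
        return "slashdotting", []                    # shed load: serve static pages only

    # Example: 2000 requests, most of them from three hosts -> looks like an attack.
    window = (["10.0.0.1"] * 900 + ["10.0.0.2"] * 600 + ["10.0.0.3"] * 300
              + ["198.51.100.%d" % i for i in range(200)])
    print(classify_window(window))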

However, I think the AI goal of mimicking human intelligence is stupid and overrated. I think there is no need for such devices, only for devices that can understand human language, take commands and execute them, or maintain themselves. Basically I'm talking about something like the AI that you see in Star Trek: The Next Generation, which is all that I would need.

Want to talk to someone? Then log into an IRC channel. Need something more intimate? Then call an escort service and just talk. I understand the geek appeal it has, but practically it's not needed.


[ Parent ]

nitpick (none / 1) (#117)
by DDS3 on Mon Jul 12, 2004 at 10:56:41 AM EST

Humans went for bad sensors and good information storage and it seems to work.

It's not that humans went for "bad sensors"; rather, we simply have "good enough", because our intelligence allows us to augment them, via technology, as needed.  Not to mention that, based on experience, our brain is able to form pictures and opinions from incomplete data.  That makes "good enough" better, because it allows us to minimize the brain space devoted to sensory input and maximize higher-level abilities, even when faced with lesser detail.

I think I'm pretty well in line with Farq Q. Fenderson's reply.  So, I'll not drive down the same road.

[ Parent ]

Right. (none / 0) (#168)
by basj on Tue Jul 13, 2004 at 05:53:23 AM EST

Do you seriously want to argue the word `intelligence' is meaningless?

How about `smart', `stupid', `clever', `sharp', `genius', `dumb', etc. etc.?
--
Complete the Three Year Plan in five years!
[ Parent ]

No I don't (none / 0) (#172)
by smurf975 on Tue Jul 13, 2004 at 06:53:04 AM EST

But what I am saying is that what you call intelligence in humans is in some way true for any lifeform, and perhaps even for the matter that makes up the universe.

[ Parent ]
Okay (none / 0) (#175)
by bugmaster on Tue Jul 13, 2004 at 08:00:17 AM EST

Show me a bacterium that can solve algebraic equations in its head... er... in its vacuole ?
>|<*:=
[ Parent ]
Pick 10 random people (none / 0) (#220)
by smurf975 on Wed Jul 14, 2004 at 04:09:17 AM EST

Pick 10 random people in a shopping mall who can do that.

But I'm saying that humans and mammals have one way of surviving, and that is by being fast learners; other lifeforms have other ways of surviving.

Can't you say that the genetic code of a bacterium is its memory of what it learned during the past billion years on Earth? And that it learns new stuff by trying out random mutations and seeing if they work?

However, you may argue that it's not aware of its actions.

[ Parent ]

Key Difference (none / 0) (#223)
by bugmaster on Wed Jul 14, 2004 at 06:19:50 AM EST

Humans are capable of abstract thought, and of thought, period. This means that they can figure out solutions to problems during the lifetime of the individual, not over hundreds of thousands of generations. It also means that humans can think up all kinds of crazy things that do not immediately address the problem of their survival -- stuff like society, science, art, and whatnot. So yeah, learning and abstract thought, not to mention thought, period... That's one of the many, many things that make us different from bacteria.

You may be right in saying that bacteria may be more capable of survival; however, I am not convinced that survival == intelligence. By your logic, rocks are the smartest things of all -- because they'll be here even after all the bacteria are gone.
>|<*:=
[ Parent ]

Well... (none / 0) (#226)
by smurf975 on Wed Jul 14, 2004 at 08:58:20 AM EST

I did say that humans and mammals are fast learners compared to other life forms.

What I'm saying is that bacteria are not interested in your intellectual abilities; they don't need awareness and all the other human abilities, and technically they have more knowledge, as they know more about chemistry and nano-manufacturing. Yes, human cells also know this, but I'm just saying that if you compare the knowledge encoded in their genes with human knowledge in libraries, they are not that stupid.

society, science, art, and whatnot
I read once that women fancy creative people more than, say, great hunters. So sexual selection played a big role in this ability and still does. So you can argue that it is part of survival - that is, if you want to mate.

BTW: I also saw a documentary that mentioned that in most life forms sexual selection, not survival directly, is the biggest force behind evolution. So you can be the biggest and strongest peacock, but if your smaller opponent has a nicer tail and a better mating dance, he will have a bigger chance of mating.

So yeah, learning and abstract thought, not to mention thought, period... That's one of the many, many things that make us different from bacteria.

This is not a purely human ability. If you have a pet dog or cat you will see that when they sleep they, like humans, have a REM period. So I would think that if they can dream, they should also have thoughts? Dolphins can have abstract thoughts.

Well, anyway, what I really wanted to say is that to me it looks like lower life forms don't have individual thoughts but thoughts as a group, like ants. It seems like chaos, but if you zoom out you will see patterns. It's like being in a city where everything looks chaotic, but if you look at satellite pictures you will see a pattern: a pattern like a bacterial colony or an ant colony.

[ Parent ]

Abstract (none / 0) (#229)
by bugmaster on Wed Jul 14, 2004 at 04:52:56 PM EST

What I'm saying is that bacteria are not interested in your intellectual abilities; they don't need awareness and all the other human abilities...
It sounds like we agree.
...and technically they have more knowledge, as they know more about chemistry and nano-manufacturing. Yes, human cells also know this, but I'm just saying that if you compare the knowledge encoded in their genes with human knowledge in libraries, they are not that stupid.
Or not. I think you have convinced me: rocks are the smartest things on the planet. They know all about crystallization, friction, and even gravity, which we haven't really understood even today.

You're right about sexual selection, but, again, it takes a very different form in humans than in, say, bacteria: humans end up performing a lot of totally unnecessary (from the reproductive point of view) tasks, such as doing your taxes, which require massive amounts of information processing; most animals just have to smell good.

REM sleep is not abstract thought; abstract thought is, as I see it, the ability to generalize concepts to the point where they don't refer to anything you see immediately next to you, or any specific thing at all for that matter. For example, when you are thinking "2+2=4", you aren't thinking of two monkeys or two apples; you're thinking of "2" in general. Dolphins might be able to do that; dogs almost certainly can't.

The jury is still out on how intelligent ant colonies are; it's an interesting idea that has been brewing in sci-fi for a while, but I haven't seen any actual evidence. Bacterial colonies on petri dishes are still pretty basic, though, nowhere near even to a dog's cognitive level.
>|<*:=
[ Parent ]

Make up your mind. (none / 0) (#178)
by basj on Tue Jul 13, 2004 at 08:22:31 AM EST

I don't believe in intelligence in any being from human to virus.

But what I am saying is that what you call intelligence in humans is in some way true for any lifeform, and perhaps even for the matter that makes up the universe.

So is intelligence everywhere or nowhere?

And if you say that's the same, my objection will still stand.
--
Complete the Three Year Plan in five years!
[ Parent ]
Both. (none / 0) (#197)
by Gluke on Tue Jul 13, 2004 at 03:39:14 PM EST

So is intelligence everywhere or nowhere?

It's everywhere and nowhere, always and never. It's nonlocal.



[ Parent ]
Nah what I mean (none / 0) (#218)
by smurf975 on Wed Jul 14, 2004 at 04:03:00 AM EST

What I meant with that statement is this: if you don't have a clear definition of what intelligence is, you can basically say that matter is intelligent, as it's aware of things happening at the other side of the universe (at least it reacts to them) and it adapts to new situations. It even has some kind of memory of a preferred state.

[ Parent ]
Which means nothing. (none / 0) (#222)
by basj on Wed Jul 14, 2004 at 06:14:30 AM EST

So basically you are saying "without a definition of the word `intelligence', you can take it to mean anything"?

That's saying nothing of course. That goes for every word.

But if you say:

It's all about action and reaction based on information

You seem to be giving a definition for intelligence yourself! Intelligence, you say, is all about action and reaction based on information. One can agree or disagree with that, but it's still some sort of definition.

What puzzles me is that before that definition, you argue there is no such thing as intelligence. And then that, if no clear definition is available (even though you just provided one), you can take a word to mean anything.

Strange, no?

You seem to be adhering to (at least) two different definitions of intelligence. First, your action-reaction definition, and second some `common' definition of the word.

This common definition, you seem to be saying, is void: there is no such thing as intelligence thus understood, you say.

But your own definition, you agree, is very very broad, so almost everything can be called intelligent by that definition.

Is that not a huge clue that if we, in ordinary language, use the word `intelligence' we do not mean `action and reaction based on information'? That is to say, your definition is flawed?

And if our common definition is not meaningless (as you agreed), it is not useless, and there really are things that are, by that definition, intelligent?
--
Complete the Three Year Plan in five years!
[ Parent ]

You are right (none / 0) (#227)
by smurf975 on Wed Jul 14, 2004 at 09:18:57 AM EST

I'm not really clear on my views. This is because I haven't expressed them before and they are not fully developed.

But what I meant is: who says that humans are intelligent? Aren't their doings just action and reaction based on information?

OK, human individuals are capable of processing complex information, and other animals are more limited.

Isn't most science really just a means of gathering more information about your environment, with the end goal of better controlling it? Also, art is really logical if you dissect it: just another way of communicating your feelings to your kind.

[ Parent ]

Bad sensors? (none / 0) (#200)
by Peaker on Tue Jul 13, 2004 at 04:54:11 PM EST

On any scale, humans have pretty good sensors, surpassed by few animals.

Artificially, we cannot build sensors as good or process the sensed information into usefulness as quickly or as well.

As a side note, I don't believe in money, I just believe in trading tokens marked with numbers in place of valuables.

[ Parent ]

what are you thinking? (none / 1) (#123)
by hswerdfe on Mon Jul 12, 2004 at 12:04:07 PM EST

Are you a troll? No, really, this idea is crazy! Replacing the Turing test with an attempt to mimic animals? I agree the Turing test is a little bit odd, and not worth the attention it receives. But your idea is crazy, totally and completely! Anyway... yeah.
--- meh ---
Hasn't drduck already passed the Turing test? nt (none / 0) (#125)
by Bill Melater on Mon Jul 12, 2004 at 01:17:16 PM EST



*splutter* (none / 0) (#150)
by Ta bu shi da yu on Mon Jul 12, 2004 at 11:28:02 PM EST

Hardly.

---
AdTI - "the think tank that didn't".
[ Parent ]
Definition of intelligence (none / 0) (#126)
by jdoeii on Mon Jul 12, 2004 at 01:49:23 PM EST

Actually, it's not that difficult. Intelligence is an ability to distinguish similar experiences. For example, humans can easily distinguish similarly-sounding words, while apes can't. Human-level intelligence is the added ability to influence one thought process by another thought process. For example, a dog can control the movements of its limbs. Humans can control not just their limbs, but the thought process which leads to the movement of the limbs. It's an overlay.

doesn't work (none / 0) (#194)
by jbuck on Tue Jul 13, 2004 at 02:33:41 PM EST

First off, humans are apes, otherwise "ape" is an incoherent category. Humans and chimpanzees are far more closely related to each other than either is to orangutans or gibbons, with 95 to 98% of DNA in common, depending on how you measure (the lower figure is if you include the "junk DNA" that does not code for proteins).

Many animals besides humans would meet your definition of intelligence, and many animals can distinguish stimuli that humans can't (dolphin sonar, for example). And animals other than humans have thought processes.

No non-human animal appears to have even rudimentary grammar, so only humans have complex language. Dolphins, whales, and non-human apes communicate, but their communications don't have the complex structure that characterizes human language. Grammar might well have been the breakthrough that allowed human beings to become sentient.

But if grammar was the breakthrough, that suggests that the Turing test might well be the right test after all.

[ Parent ]

For sentience. (none / 0) (#196)
by Farq Q. Fenderson on Tue Jul 13, 2004 at 02:51:25 PM EST

Not intelligence.

I don't find it such a rude test for sentience. It's not perfect, but with the proper rigor in testing, it would be a damn good heuristic.

farq will not be coming back
[ Parent ]

You missed the point (none / 0) (#215)
by jdoeii on Wed Jul 14, 2004 at 01:24:47 AM EST

First off, humans are apes, otherwise "ape" is an incoherent category

Ape was just an example. If you don't like it, substitute "apes" with "ants" or "snails". The point is still valid. Besides, humans also differ among themselves - a 1-year-old is not the same as a 30-year-old or someone with dementia.

Many animals besides humans would meet your definition of intelligence

That's the point. They are intelligent to a degree. Pattern-separation power is a measure of intelligence. Animals meet that criterion. Pattern-separation power in humans is much greater than in any animal due to the brain's design.

can distinguish stimuli that humans can't (dolphin sonar, for example)

That's irrelevant. You are talking about sensory stimulation. I am talking about neuronal firing patterns. Humans can distinguish similar patterns better than dolphins.

No non-human animal appears to have even rudimentary grammar, so only humans have complex language.

Verbal communication is just a manifestation of a degree of intelligence. So, humans have a much greater degree of intelligence. We know that already.

What if an intelligent entity is much smarter than humans, so that we can't comprehend it? According to the test it would not be considered intelligent.

But if grammar was the breakthrough, that suggests that the Turing test might well be the right test after all

The Turing test is a really poor, recursive test. Basically, take what you believe to be an intelligent entity and get an opinion from it about the intelligence of the other entity.

The Turing test is a test for human-like communication ability. It's conjectured that if something communicates like a human, then it must be intelligent. The conjecture is not proven. I personally believe it's wrong.



[ Parent ]
Turing Test (none / 1) (#140)
by jefu on Mon Jul 12, 2004 at 05:08:54 PM EST

Over the years I've seen a number of criticisms of the Turing Test, and mostly they seemed to be arguing that the Turing Test was imperfect because it didn't do (insert favorite AI thingummy here). (I'd put the Chinese Room into this category as it seems to argue that AI is impossible on this level because only natural brains can be intelligent and the Chinese Room is not a natural brain so...)

These can mostly be classified as (somehow) theoretical criticisms - the Turing Test does not match the particular theory being held up and is thus flawed. The Turing Test, on the other hand, looks like an operational definition - advancing no particular theory of what intelligence is - just saying that something that behaves like a human - that is, like something we know to be intelligent (as that's pretty much how we define intelligence) - is likely to be intelligent. While the behavior is seriously limited in scope (verbal communication), it generalizes well, as the human side of the process can pose problems, ask questions that extrapolate from common sense, and so on.

The Turing test is not perfect. It's possible to imagine intelligent beings that could not pass it, and to imagine non-intelligent beings that might pass it (very sophisticated chatterbots, perhaps). But nothing has passed the test yet.

I agree that we should try to find other tests for intelligence and think seriously about the topic, but until we have better reasons to do so, the Turing test is at least a good first step.

I certainly think we have good reasons. (none / 0) (#152)
by Farq Q. Fenderson on Mon Jul 12, 2004 at 11:48:50 PM EST

First off, why have the Loebner Prize at all?

It's quite simple: it's Loebner's money and he can do what he likes with it.

The Loebner Prize might not be good incentive to get people working on true intelligence, but it does get them working. What's more, there's a degree of popularity that comes with it.

I think these combined are good enough reason to have a more appropriate test. I can't think of a better one than what I've proposed. Part of my confidence in it is that I know there exists software that would do well in it, and for good reasons.

It does have its own limitations, yes. To begin with, it's a measure of behaviour and nothing more, but I have to ask something - have you yourself ever known anyone beyond their behaviour? It all has to go through that interface, though much is read into it.

Personally, I feel that an alternative is a good idea, right now. If competitions like this were being held, I'd be participating, and I'm sure others would be too. Besides, think of all of the people who might get into writing automata because of the competition.

farq will not be coming back
[ Parent ]

ELIZA (none / 0) (#207)
by suquux on Tue Jul 13, 2004 at 10:31:02 PM EST

The program was written at the Massachusetts Institute of Technology (MIT). The programmer, Joseph Weizenbaum, named the program "Eliza", to honor Eliza Doolittle -- the woman in My Fair Lady and Pygmalion, who learned to speak English and have a good conversation.

Some people think Weizenbaum's program shows that computers can communicate as well as psychotherapists. But Weizenbaum himself holds the opposite view; he believes the program shows that psychotherapists communicate as poorly as computers.


loc. cit.

If I recall correctly, it was psychiatrists who evaluated it positively, but as a psychologist I might be biased.
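For what it's worth, the trick behind ELIZA was little more than keyword spotting and canned templates. A toy sketch in that spirit - illustrative only, not Weizenbaum's code:

    import re

    # A few keyword rules in ELIZA's style: spot a phrase, echo part of it back.
    RULES = [
        (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"\bi feel (.+)", re.I), "Tell me more about feeling {0}."),
        (re.compile(r"\bmy (\w+)", re.I), "Your {0} seems important to you."),
    ]
    FALLBACK = "Please go on."

    def reply(line):
        for pattern, template in RULES:
            match = pattern.search(line)
            if match:
                return template.format(*match.groups())
        return FALLBACK

    print(reply("I am worried about the Turing test"))
    # -> Why do you say you are worried about the Turing test?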

CC.
All that we C or Scheme ...
[ Parent ]
Turing test not a "metric" (none / 1) (#142)
by tgibbs on Mon Jul 12, 2004 at 06:25:33 PM EST

The Turing test is not a metric. Passing the Turing test is a sufficient condition for intelligence, not a necessary one. Depending upon how you define intelligence, a program might well fail to pass the Turing test and still be considered intelligent in some sense. For that matter, there are probably some humans who would fail the Turing test. The point of the Turing test is simply that, if a computer could behave in a way that is (in blind, unrestricted, communication) indistinguishable from a human, then (assuming that we judge other human beings as intelligent based upon our communication with them) we would have no choice but to accept it as intelligent as well.

As such, it is a philosophical argument for the proposition that a machine could be intelligent rather than a practical criterion. Prizes for passing the Turing test may be of value in encouraging certain types of AI development, but there are doubtless intermediate goals of greater practical value.

However, emulation of animal behavior seems to me to have little relevance other than in attempting to model and understand the behavior of real animals.

Yes (none / 0) (#162)
by Zabe on Tue Jul 13, 2004 at 03:34:52 AM EST

Parent is right.  If a machine could converse in such a way that it is indistinguishable from human communication then it would be intelligent.

Chatterbots are just the first step towards this.  No doubt better designs will be entered into the contest eventually.
Badassed Hotrod


[ Parent ]
Machines cannot think, just like boats cannot swim (none / 0) (#166)
by Chakotay on Tue Jul 13, 2004 at 04:59:40 AM EST

and just like airplanes don't flap their wings to fly, and cars don't walk.

But that doesn't mean machines will never be able to EMULATE thought, and arrive at the same results as human thought, through different methods.

At least, that's the way I see it...

--
Linux like wigwam. No windows, no gates, Apache inside.

Right. (none / 0) (#167)
by basj on Tue Jul 13, 2004 at 05:48:40 AM EST

Do you consider cars to `emulate' walking and airplanes to `emulate' flapping their wings?
--
Complete the Three Year Plan in five years!
[ Parent ]
Well, maybe my wording was off. (none / 0) (#176)
by Chakotay on Tue Jul 13, 2004 at 08:06:27 AM EST

I mean more like, emulating the effect.

The effect of walking is that you change your position across the earth's surface. A car does that too - and in many respects even better than walking. But in some respects a car is less able to do that (stairs, for example...).

The effect of flying is to move through the air using aerodynamic lift. Birds do that by flapping their wings - airplanes by using another form of thrust.

The effect of swimming is to move across the surface of water (or under the surface of water). Various animals have various methods to do so. The way boats (and submarines) do it doesn't resemble the way animals do it, but it has the same effect.

And thus, machines could achieve the same effects that humans achieve using thought, but it would not be true thought. But then again, if it has the same result as thought - who cares?

Btw, that's exactly what the Turing Test measures: the ability to achieve, with whatever means possible, the same result as humans with thought.

--
Linux like wigwam. No windows, no gates, Apache inside.

[ Parent ]

But. (none / 0) (#177)
by basj on Tue Jul 13, 2004 at 08:17:20 AM EST

If it is about the effect, there is no emulating.

If the effect of both wheels and legs is locomotion, wheels do not emulate the effect of legs, since both wheels and legs simply have the effect of locomotion.

And the same goes for thought.
--
Complete the Three Year Plan in five years!
[ Parent ]
You're right (none / 0) (#179)
by Chakotay on Tue Jul 13, 2004 at 09:49:37 AM EST

You're right. But hey, I'm not a native English-speaker, so I have the right to scr*w up a word definition once in a while, don't I? (insert angel smiley)

--
Linux like wigwam. No windows, no gates, Apache inside.

[ Parent ]
No problem. (none / 0) (#182)
by basj on Tue Jul 13, 2004 at 10:26:33 AM EST

But it really isn't about semantics anyway. I think you made a -- quite common actually -- conceptual mistake.

Because you simply can't say a thinking computer emulates thought, for it thinks (ex hypothesi).

And furthermore, you can't say a `thought-emulating' computer doesn't think if it `only' shows the effects of thought. Simply because effects are, well, not emulated, but just shown.

Therefore, emulation is totally irrelevant in AI;  a point some AI-critics miss.

(How's the weather in Esperanto-land, by the way?)
--
Complete the Three Year Plan in five years!
[ Parent ]

Point taken. (none / 0) (#221)
by Chakotay on Wed Jul 14, 2004 at 04:44:48 AM EST

You say exactly what I wanted to say but I didn't find the right words for it, basically :)

(The weather here in Nantes is cloudy - so, normal.)

(Do you speak Esperanto too?)

--
Linux like wigwam. No windows, no gates, Apache inside.

[ Parent ]

Machines will emulate thought ... (none / 1) (#191)
by WorkingEmail on Tue Jul 13, 2004 at 02:12:11 PM EST

Just as well as humans emulate thought.


[ Parent ]
Dreyfus revisited (none / 0) (#206)
by suquux on Tue Jul 13, 2004 at 10:14:02 PM EST

Too easy.

From the review of K.M. Ford, C. Glymour, P.J. Hayes, Android Epistemology, AAAI Press, Menlo Park, CA 1995, ISBN 0 262 06184 8, $25.00, 334 pp.
The conclusion was provided by Marvin Minsky. With great recalcitrant pleasure, he describes a fictitious dialogue between two extraterrestrials. Clearly equipped with a form of intelligence superior to our own, these creatures discuss our shoddy cognitive constitution. Minsky shows what, to me, is one of the attractions of research into artificial intelligence: an excellent medicine for man's inflated ego. As the compilers already mentioned in their foreword, Minsky's contribution in itself is reason enough to buy this book.
My guess is that the time will come when machines question human intelligence. Hmm, I recall exactly the situation when I was discussing whether machines could play chess at a grandmaster's level. This was 1970, and I was the smug one.

CC.
All that we C or Scheme ...
[ Parent ]
Meta-Turing test (none / 0) (#181)
by codejack on Tue Jul 13, 2004 at 10:25:08 AM EST

A machine is intelligent if it produces tests to see if something is intelligent.

Yes, I stole this, and I don't know where from :P


Please read before posting.

Intelligence, who cares? (none / 0) (#216)
by SlashDread on Wed Jul 14, 2004 at 03:47:24 AM EST

Just give me a -conscious- program.

"/Dread"

Consciousness, who cares ? (none / 0) (#224)
by bugmaster on Wed Jul 14, 2004 at 06:21:15 AM EST

What do you mean by "conscious"? Please explain this in terms that do not require me to have telepathy to understand.
>|<*:=
[ Parent ]
Consciousness (none / 0) (#236)
by SlashDread on Fri Jul 16, 2004 at 06:19:35 AM EST

Awareness of one's self.

"/Dread"

[ Parent ]

Try again (none / 0) (#237)
by bugmaster on Fri Jul 16, 2004 at 08:05:00 AM EST

How would I know that you have this "awareness of one's self", without being able to read your mind ?
>|<*:=
[ Parent ]
The Turing test tests little in a practical sense (none / 0) (#247)
by k24anson on Tue Jul 20, 2004 at 11:37:42 AM EST

When I first read what the Turing test was about, I had to remind myself it was devised in the 1950s. Even back then, while some good ideas came from the sages of this neonatal field of artificial intelligence, what were they designing to measure? A heap of software and electrical components thrown together that can, what, supposedly mimic intelligence? Intelligence wasn't even clearly defined, much less reduced to some shallow presentation that people could watch and blandly critique. The Turing test is child's play, and as time goes on it becomes only more irrelevant and obsolete as an exercise in realizing a computer that mimics a living cognitive process.

I believe the most successful endeavors in this regard will come from individuals who mimic certain qualities of already existing forms of life. These successful people of the future will have "carefully" taken the time to soberly play God and imagine how God made the paramecium do what it does when seen under the microscope. Or they will look at the life-sized plastic model, or at the textbook picture of the naked spinal cord extending out from the spherical human brain (see those eyeballs too, and those blue lines of nerve fiber extending all over the place?), and take the time to just stare at that nervous system and ponder its design, what is going on. It won't hurt or kill anyone to think some "thing" made it like that, though I know some people hate even the smell of rationales that entertain the concept of a Creator, a God. These are just a few examples to give anyone reading this an impetus to ask themselves, "What is going on?", and to ask it from the perspective that some "thing" made it that way, for a reason. You could spend a lifetime uncovering and unraveling things pertaining to AI with this perspective in use, I think. And I think the most productive and rewarding moments in AI will come from those who have taken the view that "something made it that way, for a reason." Just my opinion.

Trying to devise something that measures some shallow aspect like intelligence is such a premature task to begin with. Good luck.
KLH
NYC

Stay focused. Go slow. Keep it simple.

Why Turing Did It That Way (none / 2) (#254)
by czolgosz on Fri Jul 23, 2004 at 07:21:08 PM EST

I thought that Turing's point was to break out of the sterile debate in which each faction tried to force machine intelligence into a particular implementation model. So he proposed a "black-box," operational definition: "If it quacks like a duck..."

This gets you away from the pointless "But it's not REALLY intelligent" arguments. It's intelligent if its behavior is indistinguishable from the behavior of a creature that is, by convention, accepted to be intelligent. End of story.

The proposal in the article doesn't really improve on this feature of Turing's test. Instead, it adds complexity and introduces a number of highly debatable restrictions. And I'm highly suspicious of the position that some kinds of chatterbots nearly pass the Turing test, but that it's some kind of problem that they do so via a simple-minded algorithm. That misses Turing's whole point: infer intelligence from behavior. What happens inside is irrelevant. Effective, compact algorithms are no more "faking it" than some AI theorist's newest, most convoluted brainfart, or perhaps the behavior of a person. "Real" and "fake" are irrelevant distinctions if the outcome is the same.


Why should I let the toad work squat on my life? --Larkin