Kuro5hin.org: technology and culture, from the trenches

Has the Turing Test been passed?

By Blarney in Technology
Sun Jan 21, 2001 at 11:20:14 AM EST
Tags: etc (all tags)

As I'm sure most of you already know, the great scientist Alan Turing once devised a test by which a machine could be deemed to have human intelligence. In this test, a human judge would communicate with some entity, either the machine or another human, by typewritten messages. He would be asked to guess whether he was speaking to a human or to a computer. If a computer was created that the judge could not distinguish from a human, it would be considered to be intelligent.

It is generally assumed that no such intelligent machine has yet been created. However, we spend a lot of time - some would say far too much - communicating by email, form letters, and voice mail systems. We assume that we know when we are talking to a human or to a computer. But do we really know? Has Turing's Test been passed already?

A strange thing happened to me at school today - actually, it wasn't a very rare sort of event, but it seemed strange nonetheless.

I was attempting to register for a seminar course, and had already met with the instructor. The registration system required that I have the instructor's "consent" to enroll. I emailed the instructor about this, and he instructed me to email his department's secretary, who would grant me permission to enroll through her computer system.

I forwarded his reply to her, adding that I would, indeed, like to be enrolled in this course and informing her of my student number. Student numbers in my school are printed on our ID cards in the format "1234 5678" with an underline underneath (I would render that in HTML, but I don't seem to have the tag available). I mailed the number to her in the format 1234-5678.

She replied later, with the following message:

I'm very overworked right now. Please check your student ID# and replace the "-"

I showed this reply to a few of my classmates, who found it quite humorous. They attributed it to a cranky bureaucrat - an incompetent, lazy woman who'd rather bang out a snippy reply than remove one single "-" from a student number. However, it occurred to me that were the style of the message altered slightly, while leaving the content intact, they would have a totally different hypothesis as to the message's origin:

Device or resource busy. Syntax error. Invalid character "-"

If they saw this message, they would assume that it had been sent by a computer. Yet the difference between this and the actual message is only superficial!

I don't seriously think that I was dealing with a computer. After all, computers don't usually have names like "Wilma Jean" on their email accounts. Besides, the instructor of the course would probably not refer to a computer as a "secretary" - he'd call it a "system" or something.

This is a strange situation. I'm assuming that there was a human at the other end, but only because of the choice of words in the message and the name of my correspondent. Still, I don't really know if it was a human or not, which makes me wonder if our machines have passed Turing's Test already. Am I the only one who feels this way?

Perhaps Turing's Test can be passed a different way than Turing himself thought it would be. He might have intended machines to improve in language parsing and logical processing thereof until they could appear to be perfectly normal people over a text connection. But maybe - we've grown so accustomed, so subservient to our computer systems, so used to "Push 1 on your telephone ... now! beep" - that we've become like them. Maybe, the more we use these things, the more we become incapable of communicating - or behaving! - like the humans we would like to be. And eventually, we can't tell whether we're talking to other people or to computers - or we just won't care.

After all, the hypothetical woman "Wilma Jean" was only doing her job. She tried to input the number, the computer spat out an error, and she relayed her frustration with the machine back to me. She may think of the computer as her enemy, she may resent people like me forcing her to use it, but the more she fights against it, the more she becomes indistinguishable from it.

When I can't tell Wilma's computer from Wilma herself, it has passed Turing's Test and become intelligent.




Has the Turing Test been passed? | 44 comments (39 topical, 5 editorial, 0 hidden)
back to front (4.75 / 8) (#1)
by danny on Sat Jan 20, 2001 at 02:06:08 AM EST

When I can't tell Wilma's computer from Wilma herself, it has passed Turing's Test and become intelligent.

Surely that should be

When I can't tell Wilma from her computer, Wilma has clearly passed the Gnirut Test and become an automaton.

(I hope Wilma isn't her real name, btw!)

[900 book reviews and other stuff]

Rules of the test (4.60 / 5) (#2)
by tftp on Sat Jan 20, 2001 at 02:56:17 AM EST

I think the test will be passed only when any number of judges cannot tell the difference, regardless of how much time they spend trying. You cannot prove a theory (or a theorem) by finding a single case where it is true. You had better prove that all cases will be true, whereas you can disprove it by finding a single case (within the scope of the theory) where it fails.

The test outlined in the article is way too simple. You could describe a credit card validation service where you tell them the c/c number, name and expiration date and they tell you if the card is valid. You can't determine if those are computers or people either. It does not make that service an AI.

Re: Rules of the test (4.00 / 1) (#3)
by Blarney on Sat Jan 20, 2001 at 03:00:11 AM EST

Ah ha! But we can't prove it your way either, unless we have an infinite amount of time.

[ Parent ]

One of many definitions of the test (4.50 / 2) (#5)
by tftp on Sat Jan 20, 2001 at 04:23:39 AM EST

Ok, here is the description of the test taken from here:

What is the Turing Test? The Turing Test was developed during the 1950's by a man by the name of Alan Turing. Basically, it is a test for artificial intelligence. Turing concluded that a machine could be seen as being intelligent if it could "fool" a human into believing it was human.

The original Turing Test involved a human interrogator using a computer terminal, which was in turn connected to two additional, and unseen, terminals. At one of the "unseen" terminals is a human; at the other is a piece of computer software or hardware written to act and respond as if it were human.

The interrogator would converse with both human and computer. If, after a certain amount of time (Turing proposed five minutes, but the exact amount of time is generally considered irrelevant), the interrogator cannot decide which candidate is the machine and which the human, the machine is said to be intelligent.

This test has been broadened over time, and generally a machine is said to have passed the Turing Test if it can convince the interrogator into believing it is human, without the need for a second, human, candidate.

The test does not really need to be infinitely long, at the very least because the number of possible questions is finite. In reality, some "sufficiently long time" should be allowed, and some sufficiently complete set of questions has to be asked. The test is done when you, as a judge, are convinced, and every judge who tries the test may have a different threshold. We are not talking about a binary value; the matter in question is how closely the test subject resembles a human intellect as judges know it. After the AI gets "close enough" it passes the test. The margin of "closeness" depends on the judge.

In any case, the AI must have some internal desire to fool the judge; if asked directly it must lie; if asked indirectly it must concoct an answer similar to what a human would provide. This requires a lot more than an ability to send a preformatted string over email.

[ Parent ]

An idea for a harder Turing Test. (none / 0) (#35)
by Andreas Bombe on Sun Jan 21, 2001 at 08:00:39 PM EST

So the original test involved one judge with two separate connections to another human and the AI program, and the judge would converse with both to find out which is human and is bot.

However, this leaves a lot of room for programs that just keep around massive amounts of prefabricated phrases and dialogues - even more so with increasing memory sizes. We have only one isolated judge to fool.

If however this would be expanded to a three way chat so that the test human could converse with the test AI and the judge at the same time, it would get a lot harder for the AI since it would also be attacked and ridiculed by the human. Now the AI also has to convince the judge that it is the human and that the real human is in fact a poorly programmed AI.

The ultimate test would be to put a number of judges, a number of human test subjects, and a number of AIs together into a chat channel, without anyone (not even the AIs) knowing who is what. The judges can rule out channel members from being human by a secret vote, kicking them out if enough votes have been cast against them, with votes against judges being ignored. If an AI stays in the channel with only other judges left, it wins.

And let that be known as Turing Test Deathmatch.
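The kick rule described above is simple enough to sketch as code. This is a purely illustrative toy simulation; the member names, vote threshold, and data shapes are all invented here, not part of the proposal:

```python
def run_votes(members, judges, ballots, threshold):
    """members: set of names; judges: subset of members; ballots: list of
    (voter, target) pairs. Returns the set of members left standing."""
    tally = {}
    for voter, target in ballots:
        # Only judges may vote, and votes cast against judges are ignored.
        if voter not in judges or target in judges:
            continue
        tally[target] = tally.get(target, 0) + 1
    kicked = {name for name, votes in tally.items() if votes >= threshold}
    return members - kicked

members = {"judge1", "judge2", "human1", "bot1"}
judges = {"judge1", "judge2"}
ballots = [("judge1", "human1"), ("judge2", "human1"), ("judge1", "judge2")]
remaining = run_votes(members, judges, ballots, threshold=2)
print(remaining)  # the human is voted out; bot1 survives alongside the judges
```

Under this rule, an AI "wins" the deathmatch once `remaining` contains only judges and itself.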

[ Parent ]

Real life example of the Test (4.50 / 2) (#12)
by seb on Sat Jan 20, 2001 at 09:39:23 AM EST

There's a $100,000 prize for the first computer to pass the 'Turing Test' as set out by Turing (although exactly what this was is up for debate).

[ Parent ]
I don't think that counts (4.33 / 3) (#6)
by skim123 on Sat Jan 20, 2001 at 04:23:59 AM EST

Asking a single question and getting back a single answer is different from holding a dialog with a computer and not being able to determine if it is a computer or human. Chances are you've experienced the converse: talked to a human but thought it was a computer. This is easy to do, just call the technical support of some large corporation! :-) You'll swear that you're talking to a person, but their approach will follow a set algorithm regardless of what you say.

Money is in some respects like fire; it is a very excellent servant but a terrible master.
PT Barnum

Reminds me of AOLiza (4.27 / 11) (#7)
by rusty on Sat Jan 20, 2001 at 05:55:08 AM EST

See AOLiza. The short version is, someone wrote a version of Eliza (that old chestnut from the BASIC days) that talks over AOL. The conversations it has with random AOLers are hilarious. What I want to know is, does this prove that the machine is smart, or that we're just dumb? That is, the Turing test might need to specify "an *intelligent* listener"... but then how do you determine if your human is intelligent? Argh! Recursion!
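For anyone who hasn't seen Eliza's trick, the whole engine is little more than regex patterns, canned response templates, and pronoun "reflection". Here is a minimal sketch of the technique - the patterns below are invented for illustration, not AOLiza's actual rule set:

```python
import random
import re

# Swap first- and second-person words so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# (pattern, response templates); {0} is filled with the reflected capture.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"(.*)", re.I),  # catch-all keeps the conversation moving
     ["Please tell me more.", "I see. Go on."]),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(line):
    for pattern, templates in RULES:
        match = pattern.match(line)
        if match:
            reflected = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*reflected)

print(respond("I need a vacation"))
```

That a bag of rules this shallow holds up conversations at all says as much about the AOLers as about the program.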

Not the real rusty
Wow! (none / 0) (#13)
by joto on Sat Jan 20, 2001 at 09:59:47 AM EST

Great link.

[ Parent ]
Donations (none / 0) (#14)
by Potsy on Sat Jan 20, 2001 at 10:19:59 AM EST

I love the little link he has so that people can donate 50 cents to his grad school education via PayPal.

[ Parent ]
I even considered clicking it... (none / 0) (#36)
by nstenz on Mon Jan 22, 2001 at 12:33:23 AM EST

I figure I could spare 50 cents. However, I'm too lazy to create a PayPal account right now. =)

Why isn't there a character for 'cents' on a standard U.S. keyboard? That always annoys me when I'm talking about < $1.00... *grumble*

[ Parent ]
These examples are much more interesting (5.00 / 1) (#17)
by turtleshadow on Sat Jan 20, 2001 at 11:35:55 AM EST

Back during Christmas there was a story from skim123 about kids asking the Internet questions regarding Santa.

As a lark I went to a few other chatterbot sites and did the same.

When Eliza asked if I really wanted to be bad, the session turned a little risque.
Perhaps the Turing test is missing an element: the tester needs to be of a particular quality. Little kids and a few adults are easily duped. Perhaps there's the double-blind Turing test, where computers will decide if they are talking to a human or not?

[ Parent ]
I think its possible (2.60 / 5) (#10)
by unstable on Sat Jan 20, 2001 at 08:43:16 AM EST

With a BIG database of replies/comments... a fast DB engine... and a load of processing power, I think it's possible to fool a lot of people... unless they know what to ask.

A scene from Blade Runner comes to mind....
"a tortoise?"

There are some replies that are learned from human experience, and it would be really tough to get a computer to respond right to them all.

Reverend Unstable
all praise the almighty Bob
and be filled with slack

Another test (4.33 / 6) (#11)
by ContinuousPark on Sat Jan 20, 2001 at 08:43:37 AM EST

Forget for a second about the Turing test; we've been working on that for a long time. We should try for a while what could be called the "Thelonious Test" (more info on Thelonious Monk here): can computers play jazz? I wonder which test will be passed first?

wilma... (3.66 / 3) (#15)
by lucid on Sat Jan 20, 2001 at 10:29:47 AM EST

Are you sure it wasn't Wilma herself who passed the Turing test?

Seriously, though. I haven't read anything about Turing's Test, so this is my first encounter with it. If it is that easy to pass his test, perhaps we should question the validity of it. It would seem to me that an actual conversation would have to transpire before your judge would make his or her conclusion. This was only one exchange.

It seems to me that Turing's Test would either be somewhat simple, or impossible to pass. Consider this: one of the easiest ways for you to find out whether or not Wilma is Secretary Wilma or wilma.school.edu is to try to offend her. Instead of the cut-and-dried "I would like to enroll in Foo 101", say "Hey Sweet Cakes, you better enroll me in Foo 101 now, or else." For a human the response would generally have angry tones. I think it would be impossible for a computer to mimic the anger, or even detect the offensive nature of the original message. However, if you stick to polite chitter-chat, it would probably be much easier to write a program to converse with a human. Polite discourse doesn't really involve large amounts of emotion, so I think it would be possible for a program to converse with a human with few or no awkward spots, depending on the devotion of the programmer.

Unfortunately, you based your entire thesis on the similarity of the two-sentence response to a computer error message. Imagine this. I'm a judge at the 53rd annual Turing Test Days. I ask both contestants, "How are you?" Both answer, "I'm fine." Holy shit! According to your methodology, the one that's the machine is now intelligent. I can't tell which one it was.

It sounds to me like this is an interesting subject, but the essay is fatally flawed by the foundation of your case, the E-Mail Exchange.

Bingo (3.00 / 1) (#21)
by B'Trey on Sat Jan 20, 2001 at 01:55:58 PM EST

The whole point of the Turing test is that, in order to be considered intelligent, a computer has to be able to demonstrate all aspects of a human personality. That includes perceiving insults and reacting appropriately, getting and making jokes, etc. It isn't necessarily impossible, but it is quite difficult. Designing and/or programming a truly intelligent computer is a difficult task. If it were easy, we'd have done it long ago.

As an aside, note that among other things the Turing test requires the computer to be able to lie. It should be able to answer "No" to the question "Are you a computer?" It should also be able to realize that there are certain things that very few people can do (multiply large numbers rapidly, for example, or demonstrate perfect recall of a past conversation) and simulate the inability to do those things as well.

[ Parent ]

Turing Test (3.00 / 1) (#22)
by Matrix on Sat Jan 20, 2001 at 02:07:07 PM EST

As other posters may have mentioned, the Turing Test consists of a human engaging in conversation through a computer interface with another party. The other party is either a computer or a person, and it's up to the tester to determine which. Ideally, IIRC, there are a number of testers and a number of "other parties", with a random mixture of both human and computer. Some of the human "other parties" may also be testers.[1] If a certain percentage of testers are convinced by their conversation with computers that the computer is really a human, then the computer has passed the test.

No matter how simple it sounds, this test is surprisingly hard to pass in practice. Very few computers can converse in a convincingly human fashion for long enough to convince a perceptive tester. I also can't remember if it's been passed or not - but if it has, and I'm not misremembering this, the programs that did so were very specialized, designed specifically to pass the Turing test. As you said, it's quite hard to get a program to pick out the subtleties of human communication in English - I'd think it would be much harder to write one to correctly converse in Japanese or some other equally complex language.

[1] - This may be slightly inaccurate. It's from memory, and I haven't read a proper description in a couple of years.

"...Pulling together is the aim of despotism and tyranny. Free men pull in all kinds of directions. It's the only way to make progress."
- Lord Vetinari, pg 312 of the Truth, a Discworld novel by Terry Pratchett
[ Parent ]

You miss the point of the turing test.... (4.33 / 3) (#16)
by delmoi on Sat Jan 20, 2001 at 10:51:43 AM EST

A Turing test is like any other computer benchmark. The question is along the lines of: "Can this thing display 100 billion polygons?" "Can it process 800 transactions per second?"

And secondly, a Turing test can't be done with just one line of text. Obviously if you talked to this person more she would sound much less like a computer. One "computer-like" response doesn't mean anything.
"'argumentation' is not a word, idiot." -- thelizman
The difference is it's not completely specified (none / 0) (#24)
by goonie on Sat Jan 20, 2001 at 05:52:01 PM EST

Most benchmarks are completely specified and produce hard, repeatable numbers - mostly they can be *performed* by the same computer that you are benchmarking, sometimes they are performed by a separate machine or machines. The Turing test relies on the performance and subjective judgement of a human. Giving the human a script to work from, and providing him/her a list of criteria to make the judgement with, might remove most of the ambiguity of the test, but would render it useless.

So, therefore, the same computer program could, theoretically, "pass" the Turing test with one or two people (who were distracted or particularly credulous), but fail with many more.

[ Parent ]

specs shouldn't have to specify repeatability (3.00 / 1) (#26)
by _peter on Sun Jan 21, 2001 at 01:51:59 AM EST

So, therefore, the same computer program could, theoretically, "pass" the Turing test with one or two people (who were distracted or particularly credulous), but fail with many more.
The way the Turing Test is administered now -- in the annual contest which is held I-forget-where -- multiple machines talk to multiple humans. The humans are not experts in AI, and they are routinely fooled by the more rational human participants (who are trying to be natural, not computerish). Sometimes one of the human judges does get fooled by one of the entered programs as well. However, no program has been able to fool close to a majority of the judges, which, IIRC, is the criterion used by the contest organizers.

In summary, the Turing Test cannot be passed by getting lucky and fooling one or two people. In what matter pertaining to human judgment is the opinion of one or two non-expert people ever used to justify reasonableness? It's already a bad test; this suggestion would make it utterly useless.

[ Parent ]

You misinterpret my comment (3.00 / 1) (#27)
by goonie on Sun Jan 21, 2001 at 03:02:18 AM EST

The Turing Test as described *in Turing's paper*, which was really a thought experiment rather than a serious methodology, discussed no such controls. Yes, the actual contests use controls to avoid the "one stupid person" situation.

However, my real point was that the Turing test was not just another repeatable numerical benchmark - the results rely on the interpretations of human judgements, and making definitive statements about a system based on the results of a Turing-test contest (if such a system passed a full contest, for instance) is risky at best.

[ Parent ]

Turing test is bad for relying on judgment, agreed (4.00 / 2) (#28)
by _peter on Sun Jan 21, 2001 at 03:22:39 AM EST

I agree with your statements above. It was the original statement
[a] computer program could, theoretically, "pass" the Turing test with one or two people, but fail with many more.
that I was reacting to. I don't split the theoretical idea of to pass up from the practical in such a way that the 'theoretical' aspect is useless. And it bugs me when other people do it. Sorry for nitpicking.

[ Parent ]
There's a great point here... (3.50 / 2) (#20)
by Grimmtooth on Sat Jan 20, 2001 at 01:27:32 PM EST

There have been more than a few simplistic comments regarding one's ignorance of the Turing test and / or its validity ... typical geek discussion, the POINT gets lost in the clutter. :-)

Whether the writer is an expert in AI or not is irrelevant. The question still stands:

Are there AI systems out there right now, totally fooling everyone? How would you know? And wouldn't this be the ideal environment for such things?

Is the author of this article an AI? Am I? Is rusty? :-) ::black helicopter mode:: If not, prove it! :-)

I've always suspected my sysadmin, to be honest ....

// Worst. Comment. Ever.

The Turing test and logic (4.66 / 3) (#23)
by kmon on Sat Jan 20, 2001 at 05:46:01 PM EST

"When I can't tell Wilma's computer from Wilma herself, it has passed Turing's Test and become intelligent."

I think the reverse of your statement is true. If a person can't tell me from my computer, it is not because my computer got smarter. It is because I'm acting like a fool. Computers have gotten easier and easier to interface with over the past forty or fifty years, and as such, I think you're observing the midpoint, where a person who is unable to handle even the slightest variation from routine, can only communicate slightly better than a computer can.

There are bots for chat clients that can fool people, but it is due to the human being's disorientation more than the computer's superior intellect ;). We tend to see what we expect to see, until an event occurs that makes us realize what's actually happening. When we expect a human on the other side of the email or IM or whatever, our minds will "fill in the blanks" to make the computer seem more human. Only after something happens to trigger our awareness that we're talking to a bot (perhaps something they say or a certain pattern of words) do we realize.

Perhaps you've taken note of dream logic in the past. In a dream, it may seem perfectly normal to have a talking alligator in a top hat. You may feel like something is slightly awry, but you can't quite place it. The same thing happens when talking to bots. Something doesn't seem right, but when they say something that allows you to classify them as a bot, you're aware all of a sudden of what's going on.

Anyways, you do have a good point, but I don't think it really has much to do with the Turing test. More or less, shows that some people are becoming indistinguishable from computers, but not vice-versa.
ad hoc, ad hominem, ad infinitum!
Nicolai (4.66 / 9) (#25)
by kmself on Sat Jan 20, 2001 at 06:16:57 PM EST

In the essay "A Coffeehouse Conversation on the Turing Test" (first published in Scientific American, republished in Metamagical Themas), Douglas Hofstadter describes a similar encounter with a "collective machine intelligence" -- really a group of undergraduate students from the University of Kansas, Lawrence. Communicating over dedicated lines -- essentially an IRC connection -- a human could converse with the "program". Behind the scenes, the members composing the responses would collectively decide on an appropriate response.

Some examples:

DH: What are feet?
N: Feet are 12.0 inches


DH: What are arms?
N: That information is classified.


DH: What does "to screw" mean?
N: In what context?

DH: In bed
N: "To screw in bed" means "to have sex".

DH: What is having sex all about?
N: I can recommend an excellent book on that subject.

DH: Tell me, tell me!
N: The book is called "Godel, Escher, Bach".

At this point, Hofstadter (whose earlier book is, in fact, titled "Godel, Escher, Bach") begins to get very suspicious -- he includes a running commentary on his own internal dialog over the course of the exchange.

DH: I like to call it "GEB", myself.
N: That is very nice, Doug, What else do you call "GEB"?

...at which point the jig is up, the room bursts into laughter, and the hoax is revealed.

Hofstadter notes:

The marvelous thing about this game is that it was, in many ways, a Turing Test in reverse: a group of human beings masquerading as a program, trying to act mechanical enough that I would believe it really was one. Hugh Kenner has written a book called The Counterfeiters about the perennial human fascination with such compounded role flips. A typical example is Delibes' ballet Coppélia, in which human dancers imitate life-sized dolls stiffly imitating people. What is amusing is how Nicolai's occasional crudeness was just enough to keep me convinced that it was mechanical.


In retrospect, I am quite amazed at how much genuine intelligence I was willing to accept as somehow having been implanted in the program. I had been sucked into the notion that there really must be a serious natural-language effort going on at Fort Leavenworth, and that there had been a very large data base developed, including all sorts of random information: a dictionary, a catalogue containing names of miscellaneous people, some jokes, lots of canned phrases to use in difficult situations, some self-knowledge, a crude ability to use key words in a phrase when it can't parse it exactly, some heuristics for deciding when nonsense is being foisted on it, some deductive capabilities, and on and on. In hindsight, it is clear that I was willing to accept a huge amount of fluidity as achievable in this day and age simply by putting together a large bag of isolated tricks--kludges and hacks, as they say.

Elsewhere, Hofstadter notes that computers are reasonably good at imitating somewhat crazy people -- brain damaged (the Turing Test of a comatose patient is largely attained), paranoid, neurotic, psychotic, etc. More recently, various trollbots on weblogs, IRC, and discussion lists show that within the confines of a specific topic, a passable imitation of human intelligence is largely attainable.

Karsten M. Self
SCO -- backgrounder on Caldera/SCO vs IBM
Support the EFF!!
There is no K5 cabal.

The Turing Test from the horse's mouth . . . (5.00 / 2) (#29)
by goonie on Sun Jan 21, 2001 at 07:50:09 AM EST

Seeing we're all blithely discussing the Turing Test here, it might be appropriate to point to Turing's original paper.

While it's not exactly in an easy-to-read form, it might well be worth printing it out and perusing. Find out what Turing actually said, rather than the Chinese-whispers interpretations of it many seem to hold.

Mind you, I'm not saying I agree with everything Turing says in it (after all, I have the benefit of the AOLiza experience), but seeing the paper is concise, well-written, and extremely accessible, give Turing the courtesy of reading his paper before blithely discussing the "Turing Test".

in my opinion (2.33 / 3) (#30)
by boxed on Sun Jan 21, 2001 at 11:25:09 AM EST

...the secretary failed the Turing Test. I have many a time seen people failing the Turing Test over IRC, not because computers are getting much better at these things but because of the regularity of humans. What I mean by that is that many humans are more predictable than a properly written expert system, leading some people to the conclusion that they are not humans but computers. The Turing Test is not a test of AI but of linguistic response, imho.

Just thought I'd note... (none / 0) (#37)
by nstenz on Mon Jan 22, 2001 at 12:50:20 AM EST

I gave this a 5 because of the last line. Sometimes you don't have to say a lot to say something intelligent. =)

[ Parent ]
Turing Test was passed in Regina in 1997 (5.00 / 2) (#31)
by psychonaut on Sun Jan 21, 2001 at 01:20:17 PM EST

That's right -- the Turing Test was passed four years ago in an experiment at the University of Regina (not too far from Saskatoon, where Kuro5hin is based). See The Saga of Roter Hutmann for complete details.

Sniff..Sniff..Nostalgia (none / 0) (#42)
by Mantrid on Mon Jan 22, 2001 at 02:14:40 PM EST

Now you have me thinking about my good old days at UofR...I was trying to figure out if I was there then, and whether I knew this guy hehe. I didn't realize that K5 had such close links to good 'ol Saskatchewan! hehe

[ Parent ]
Babelfish (none / 0) (#32)
by lgwb on Sun Jan 21, 2001 at 01:28:50 PM EST

If we ever really get something close to what is being described in the Turing test, we will probably see real evidence of it first in the form of a Babelfish machine: the ability to intelligently translate a message from one language to another, such that it has the same meaning to anyone who speaks both languages. Now that to me would require, and be a strong indication of, an "intelligent machine." As for the reply received, I concur that this is just an example of a lazy, dumb response. Certainly not an example of true intelligence!

Language differences (none / 0) (#38)
by drhyde on Mon Jan 22, 2001 at 05:29:12 AM EST

> The ability to intelligently translate a message from one language to another, that has the same meaning to anyone who speaks both languages.

Not even the most fluently bi-lingual human translators with countless hours of experience of both languages right from the cradle can manage this, so I find it unlikely that an early AI will. It's a simple fact that there are words and concepts which can not easily be translated. Even two languages as closely related as English and German suffer from this - the German word 'doch' has no equivalent in English.

As I understand it - and I'm not anywhere near fluent in German - doch is a way to give an unambiguous positive answer to a question like 'are you not happy?'. In English, we could say "yes" meaning "I am happy" or "yes" meaning "I am indeed not happy". Doch, as an answer to that particular question, would always mean "I am happy". Now, of course we can translate 'doch' to 'I am happy', but there are fine shades of meaning attached to words which do not translate at all well.

[ Parent ]
a little MLP for everyone (none / 0) (#33)
by SEAL on Sun Jan 21, 2001 at 02:15:18 PM EST


It's a good archive of various bots that attempt to fool people in this manner. I think the first time I came across one of these was on a BBS where it was handling the "chat with sysop" duties ;)


It's only after we've lost everything that we're free to do anything.

is that important to you? (none / 0) (#34)
by cryon on Sun Jan 21, 2001 at 03:33:24 PM EST

What do you think? What do you mean by that exactly? I ask myself the very same question. Do YOU think it has?

Shell script (none / 0) (#39)
by Koo on Mon Jan 22, 2001 at 10:15:08 AM EST

hmmm. Maybe Wilma was replaced by a small shell script ? :)

Exactly.. (none / 0) (#44)
by jdtux on Thu Jan 25, 2001 at 05:52:25 PM EST

.. what I was thinking.

[ Parent ]
The Turing Test is not a test of intelligence... (none / 0) (#40)
by Trencher on Mon Jan 22, 2001 at 11:18:35 AM EST

...as an earlier poster commented, but rather a test of the ability of the machine to communicate. An example given in class from my college daze:

Our goal is to construct software that will pass the Turing Test. Take a guy, call him Greg. Presumably, Greg will pass the Turing Test. If we create a database containing all phrases Greg could ever possibly utter, linked with a rules engine defining which phrases Greg would actually use in response to any possible phrase spoken to him, we have a piece of software capable of generating any conversation Greg could have.
For example, if you said to the Greg-emulator, "Hi there", the rules engine would be capable of finding every phrase that Greg himself might respond with, and then picking one. Based on your next statement, another phrase would be chosen from the database. This continues as long as you like. By the definition of the software, you have generated a conversation that is within the set of all conversations Greg could possibly have.
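The Greg-emulator amounts to a giant lookup table keyed on the whole conversation so far. A minimal sketch of the idea, with all phrases and names invented for illustration (a real table would of course be astronomically large):

```python
# Hypothetical sketch of the "Greg-emulator": replies are looked up
# purely from a table keyed on the conversation history. No reasoning
# happens anywhere -- just retrieval plus a random choice.
import random

# Maps the conversation so far (a tuple of utterances, alternating
# speakers) to the set of replies "Greg" might actually give next.
RESPONSES = {
    ("Hi there",): ["Hey!", "Oh, hi."],
    ("Hi there", "Hey!", "How are you?"): ["Can't complain.", "Tired, honestly."],
}

def greg_reply(history):
    """Pick one reply the emulated person could give, given
    everything said so far; None if the table has no entry."""
    options = RESPONSES.get(tuple(history))
    if not options:
        # A complete build would cover every possible history.
        return None
    return random.choice(options)

print(greg_reply(["Hi there"]))  # one of "Hey!" or "Oh, hi."
```

Every exchange this produces is, by construction, a conversation Greg could actually have had, yet the program never does anything but index into a table.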

While this conceptual software construct could not be created with today's technology, it would not be possible for an outside viewer to differentiate between Greg and the Greg-emulator. However, the Greg-emulator is merely pulling text strings from a database based on all previous text strings in the conversation. The Greg-emulator is not capable of creating an independent thought, so how can it be called intelligent?

"Arguing online is like the Special Olympics. It doesn't matter if you win or lose, you're still a retard." RWR
Re: The Turing Test is not a test of intelligence (none / 0) (#41)
by Koo on Mon Jan 22, 2001 at 12:54:09 PM EST

The Turing test is indeed a test of the ability of the machine to communicate, but to pass the test the machine must also exhibit some intelligence. What you describe is more of an Eliza-like program. Its answers would be rather 'dumb'. In order to fool you into thinking it is human, it would need to put up a coherent answer, follow the context of the conversation, or maybe even throw in a joke. Of course intelligence does not have to mean self awareness.

[ Parent ]
Communication does not require intelligence (none / 0) (#43)
by Trencher on Mon Jan 22, 2001 at 03:29:26 PM EST

If the system I described were built, then the only responses given to your comments would be meaningful, in context. The system must know the current state, i.e. what comments have been made by each participant of the conversation, and choose the response to the next comment from a list of the possible responses that a given person would actually make to that comment in the current context. With this design, every conversation you could have with the software agent would be exactly one conversation that you could have with the person being emulated.
You make a comment, and the system chooses one of the many responses that the emulated person would actually use were you talking to them. Your next sentence is used in combination with the previous statements to choose another response, again a comment that the person being emulated would actually make in that context. This continues, and each time you made a comment the software would choose a valid and meaningful comment from the set of possible comments the emulated person would make.
Of course, for the software to function properly it must emulate a "sane" person, one who would in fact respond to your comments in a meaningful fashion.
I do not believe that simply spitting out phrases which are meaningful in the context of the conversation constitutes intelligence. As a purely hypothetical exercise, this shows that passing the Turing Test does not require intelligence.

"Arguing online is like the Special Olympics. It doesn't matter if you win or lose, you're still a retard." RWR
[ Parent ]
Has the Turing Test been passed? | 44 comments (39 topical, 5 editorial, 0 hidden)
All trademarks and copyrights on this page are owned by their respective companies. The Rest © 2000 - Present Kuro5hin.org Inc.
See our legalese page for copyright policies. Please also read our Privacy Policy.
Kuro5hin.org is powered by Free Software, including Apache, Perl, and Linux, The Scoop Engine that runs this site is freely available, under the terms of the GPL.
Need some help? Email help@kuro5hin.org.