AI progress and the Turing Test

By jmzero in Technology
Mon Jul 08, 2002 at 11:06:13 PM EST

The Turing Test has been a focal point of AI discussion since its introduction in 1950.  It gives an appealing definition of an intelligent machine, but does it provide a useful benchmark for today's researchers?


What is the Turing Test?

In a typical Turing Test, an interrogator converses with both machines and humans (through some sort of mediator, perhaps e-mail).  A machine is said to pass the Turing Test if the interrogator believes that machine is human.  The original test Turing described is somewhat different in execution, and indeed many kinds of "human imitation" tests have been devised.  For our purposes here, we'll think of a Turing Test as testing how well a computer can imitate a human in natural language conversation.

How have machines done?

Jason Hutchens has a good history of conversational programs here.  Jason has participated in a few Loebner Prize competitions (and won the 1996 contest), and has some good insight into the nature and history of the Turing Test.  Computers have passed several instantiations of the Turing Test - including the most famous, the above-mentioned Loebner Prize.  In the case of the Loebner Prize, the only "win" occurred in the first year of the competition (1991), when rules strictly controlled the topic of conversation and the investigators were not specifically "computer people".

So, when can I buy my talking robot companion?

Despite success in fooling some investigators, current conversational software is terribly unimpressive.  The 2001 Loebner competition was won by Alice.  Alice received a Bronze medal, meaning that the program was the best entered in the competition, but did not "pass" the Turing Test per se.

Here is some sample Alice dialog:

HUMAN : Your hair is cool.

ALICE : I'll tell my botmaster you said I am hair is cool.

This is admittedly an unfair example, but it gives some insight into how Alice works.  Alice relies on a database of templates and responses.  To see why the above template gives such an odd answer, you need only imagine this conversation:

HUMAN : Your really not very smart.

ALICE : I'll tell my botmaster you said I am really not very smart.
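To make the trick concrete, here is a minimal sketch of this kind of template matching in Python (illustrative only - Alice's actual AIML rule set is far larger and more sophisticated): the pattern blindly captures everything after "Your" and pastes it into a canned reply, which is exactly what produces both of the odd responses above.

    # A hypothetical "Your X" template, as described above.  The capture
    # is substituted into the reply with no understanding of grammar,
    # so "Your hair is cool" becomes "... I am hair is cool".
    import re

    TEMPLATES = [
        (re.compile(r"^your (.*)$", re.I),
         "I'll tell my botmaster you said I am {0}."),
    ]

    def reply(line):
        line = line.strip().rstrip('.')
        for pattern, response in TEMPLATES:
            m = pattern.match(line)
            if m:
                return response.format(m.group(1))
        return "Tell me more."

    print(reply("Your hair is cool."))
    # -> I'll tell my botmaster you said I am hair is cool.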

You can read a longer interview with an Alice-esque bot at the BBC, and more commentary at ZDNet.  With work and imagination, templates can be very satisfying.  Imagine judging the following interchange:

HUMAN : I wish Linux had more powerful image manipulation tools.

COMP  : Linux is my favorite OS.  Love that Tux!

Unless you realize the computer (or "other-site" reader) has simply keyed on the word Linux, you may see this as a perfectly human response. Another program might look up the human's sentence on Google, and pull a sentence containing some key words.  Giving that a try (and using a little imagination), I came up with this:

HUMAN : I wish Linux had more powerful image manipulation tools.

COMP  : If you are after an office suite that can do presentations and handle image manipulation in a very satisfactory manner, Hancom Office 2.01 Suite For Linux is the office suite for you.

Again, this is fairly satisfactory, even though the computer has no idea what's going on.  Imagine the difficulty of writing a program that could actually understand the sentence, put it in meaningful context, and write a reply that sounded in any way human.  Each of these three problems is tremendous.  Combined, they are insurmountable - at least for today's generation of computers.  The disparity between how well this sort of program performs and how well the "trick" programs perform means that enterprises like the Loebner Prize effectively discourage "legitimate" attempts at intelligence.
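Both tricks are easy to sketch.  Here is a hypothetical Python version (the canned lines and stored sentences are invented for illustration): it keys on a single word for a stock reply, and otherwise picks the stored sentence sharing the most words with the input - a stand-in for the Google trick, which would pull its candidate sentences from the web instead.

    # Keyword keying plus crude sentence retrieval, as described above.
    CANNED = {
        "linux": "Linux is my favorite OS.  Love that Tux!",
    }

    CORPUS = [
        "Hancom Office is an office suite for Linux that can handle "
        "image manipulation in a very satisfactory manner.",
        "The weather here has been lovely all week.",
    ]

    def reply(line):
        words = set(line.lower().strip('.?!').split())
        for key, response in CANNED.items():
            if key in words:
                return response
        # Otherwise, fake the "search engine" trick: return the stored
        # sentence with the largest keyword overlap.
        return max(CORPUS, key=lambda s: len(words & set(s.lower().split())))

Neither function understands anything; both just key on surface features, which is the point of the examples above.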

How can we get there from here?

We need to demand less from artificial intelligence, at least for now.  In reality, AI is progressing by leaps and bounds. Sometimes, though, even serious researchers seem to be biting off more than their computers can really chew.  As with any science, it is usually a combination of many small advances that will lead to real progress.  The Turing Test is a great goal for computer intelligence, but not just yet.

If you are interested in testing out some of the current "chatterbots", you'll find a list at botspot.

Won't God be mad if computers start thinking?

There are certainly more philosophical questions that could be addressed here - but I'm no expert on these matters.  For a map of some of these questions, I'll refer you here.

AI progress and the Turing Test | 162 comments (153 topical, 9 editorial, 0 hidden)
Comment (3.60 / 5) (#2)
by qpt on Mon Jul 08, 2002 at 05:31:57 PM EST

Your closing comments seem to suggest that you think that a computer passing the Turing test would be indicative of it thinking. However, Searle's Chinese room argument seems to show otherwise.

Do you disagree with Searle's conclusion, or could a non-thinking computer pass the Turing test?

Domine Deus, creator coeli et terrae respice humilitatem nostram.

Good question (4.00 / 1) (#7)
by jmzero on Mon Jul 08, 2002 at 05:48:34 PM EST

The nice folk at macrovu (whoever they are) have a map of that debate too.  I haven't read it, and really I haven't thought about the issue much.  

From the hip, then:

My answer to whether the Chinese Room system is intelligent depends on how it works.  If the "instructions" used are simply a list of every possible question and every possible response, then the system is not intelligent - it is an echo of the intelligence of its creator.  

If the "manual" solves problems by moving through states corresponding to parsing the sentence and formulating a reply, then I think we could say that system is indeed intelligent.

.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

Intuitive, but wrong (none / 0) (#14)
by greenrd on Mon Jul 08, 2002 at 07:02:37 PM EST

It's trivial to show that every algorithm for a finite machine can in principle be formulated as a finite lookup table (mapping complete input histories to outputs).

So if you think that software for a finitised Turing machine could be as intelligent as a person, you have to accept that a lookup table could be, too.


"Capitalism is the absurd belief that the worst of men, for the worst of reasons, will somehow work for the benefit of us all." -- John Maynard Keynes
[ Parent ]

However (none / 0) (#15)
by greenrd on Mon Jul 08, 2002 at 07:04:40 PM EST

Actually though, if the lookup table is a representation of an algorithm that learns just like a human baby learns, that isn't quite so far-fetched.


"Capitalism is the absurd belief that the worst of men, for the worst of reasons, will somehow work for the benefit of us all." -- John Maynard Keynes
[ Parent ]

Hmmm. (none / 0) (#23)
by jmzero on Mon Jul 08, 2002 at 07:26:25 PM EST

It's trivial to show that every algorithm for a finite machine can in principle be formulated as a finite lookup table (mapping complete input histories to outputs).

Is this the case?  I can write a finite program to generate n prime numbers.  This program would respond to arbitrarily high n's and produce the correct output.

However, I cannot write a finite list that would do the same.

Likely I'm misunderstanding what you're saying.  Is our Chinese Room only supposed to respond to n's up to a certain value?

.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

Ooops. (none / 0) (#26)
by jmzero on Mon Jul 08, 2002 at 07:36:48 PM EST

I suppose that wouldn't actually work, as my prime number machine would need an infinitely long tape.  I suppose I've got to cheat to come up with an example then.  

How about a room that tells whether a given integer is odd or even?  It would ignore all the numbers until the last one, then process that and spit it out.

(I told you I was shooting from the hip here...:)
.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

Heh (none / 0) (#39)
by greenrd on Mon Jul 08, 2002 at 08:45:35 PM EST

2 bit lookup table:


0 0
1 1

Takes the LSb of the last number and looks it up :)

OK, that's cheating a little. But your example doesn't disprove my theorem, anyway, because the input can be any size, so your room can only be approximated by a finite machine.

And I don't see how algorithms that can operate on infinite amounts of data would help you to create intelligence.


"Capitalism is the absurd belief that the worst of men, for the worst of reasons, will somehow work for the benefit of us all." -- John Maynard Keynes
[ Parent ]

Heehee (5.00 / 1) (#67)
by jmzero on Tue Jul 09, 2002 at 10:27:38 AM EST

It's not that I can't think, it's just that I'm out of practice...
.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]
Detail (none / 0) (#47)
by ma luen on Tue Jul 09, 2002 at 01:32:32 AM EST

The original comment said algorithm for a "finite machine", not "finite program". So the statement "It's trivial to show that every algorithm for a finite machine can in principle be formulated as a finite lookup table (mapping complete input histories to outputs)." is true if we are talking about algorithms for computable functions (that is, the answer is finite, among other things). There is a finite amount of space in any given finite machine, say n chars, and say each char is binary (just for example; any finite values work). So each input is of max length n. Thus there are 2^n possible inputs, and each output is of finite size, say the largest is of length k. Then we just need a machine that can handle a table of size n*k*(2^n) chars.
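For concreteness, a quick worked instance of that bound in Python (the numbers are made up for illustration):

    # n = 3 binary chars of input space -> 2^3 = 8 possible inputs.
    # If the longest output is k = 2 chars, the comment's bound says
    # a table of n*k*(2^n) chars is enough to hold everything.
    n, k = 3, 2
    print(2**n)          # 8 possible inputs
    print(n * k * 2**n)  # 48 chars bounds the whole table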

[ Parent ]
I disagree. The original poster was right. (2.00 / 1) (#83)
by acronos on Tue Jul 09, 2002 at 01:40:47 PM EST

If a lookup table can add new entries (learn), then it is using states. If it cannot add new entries (learn), then it is not as intelligent or capable as a human. I agree with the original poster. Using a pure, unchanging lookup table could not generate human-caliber intelligence, because it could not learn. Using a state machine and parser in the lookup table, one that could add new rules, could possibly generate human-caliber intelligence. The ability to add new entries to the table radically affects my view of the machine's intelligence.

[ Parent ]
Searle was a numbskull (n much t) (3.50 / 2) (#33)
by _cbj on Mon Jul 08, 2002 at 08:14:11 PM EST

Search k5, big threads in past.

[ Parent ]
Thanks! (5.00 / 1) (#37)
by qpt on Mon Jul 08, 2002 at 08:34:32 PM EST

That is just the sort of helpful comment that keeps me coming back to K5.

Domine Deus, creator coeli et terrae respice humilitatem nostram.
[ Parent ]

My inestimable pleasure! (2.00 / 1) (#38)
by _cbj on Mon Jul 08, 2002 at 08:38:17 PM EST

I knew that.

[ Parent ]
Point not even remotely established (none / 0) (#63)
by Simon Kinahan on Tue Jul 09, 2002 at 08:28:30 AM EST

Search K5. Big threads in the past.

Simon

If you disagree, post, don't moderate
[ Parent ]
Do you want me to establish on yo ass? (nt) (2.00 / 1) (#72)
by _cbj on Tue Jul 09, 2002 at 10:46:33 AM EST



[ Parent ]
Come and have a go ... (none / 0) (#93)
by Simon Kinahan on Tue Jul 09, 2002 at 03:50:57 PM EST

... if you think you're hard enough.

:-)

Simon

If you disagree, post, don't moderate
[ Parent ]

Put 'em up, put 'em up! (4.50 / 2) (#97)
by _cbj on Tue Jul 09, 2002 at 04:54:27 PM EST

What were we talking about?

Oh, right.  That.  Okay...

So, IIRC (it's been a while), Searle wants to attack "Strong AI", and he attempts this by trying to show that semantics can't come from syntax, using the method of contradiction: the Chinese-speaking room is fluent yet non-conscious, and the cleanest test for consciousness, due to Turing, is conversing with one.  The room works by only syntactic rules, it converses yet isn't conscious, so blammo to the idea of semantics from syntax and therefore blammo to Strong AI.

Is that the backstory?  Please amend to taste.

The mistake is one any non-scientist, non-child could make.  No matter how you rewrite the Chinese Room to make ever more ludicrous the entity that it is absurd to suppose is conscious (ya follow?), it is beyond the power of Searle to judge the room non-conscious.  If he wants the room's consciousness or otherwise to logically imply something, it is simply not his choice whether the conscious room is absurd.  If we counter Searle with three people who believe the room is conscious, does that mean the proposition "syntax can't spawn semantics" is 25% true?  

No.  Because matters of taste are inadmissible.  The Chinese Room proves nothing except Searle's lack of imagination.

[ Parent ]

Aha, but ... (3.00 / 1) (#98)
by Simon Kinahan on Tue Jul 09, 2002 at 05:41:29 PM EST

It may well be beyond our powers to judge the room non-conscious. However, it is also beyond our powers to judge it or anything else to be conscious. We can only make an approximate call, and by and large the only thing we judge to be conscious with any likelihood of being right is other humans.

Why ? Because consciousness by definition is an experience only available from a first person perspective. We have, at present, absolutely no idea how consciousness arises from non-conscious systems, or how to distinguish a non-conscious from a conscious system, apart from ourselves, of course.

The only conscious system we know of is the human brain (or possibly the whole human). We have a very rough idea how the human brain works. We know it is absolutely nothing like the Chinese room. Brains don't manipulate symbols by means of rules, or at least that is not their primitive mode of operation, and quite a lot of what we think of as essential to our consciousness has nothing to do with symbol manipulation at all. There is no evidence at all that syntactic manipulation is sufficient for consciousness, and several reasons to suspect that it cannot be.

Now, you'll note I haven't exactly reproduced Searle's argument. He thinks the whole idea computers can be conscious is absurd. I think we know so little about the subject, that the idea that even rocks are conscious should be taken seriously. It doesn't make much difference to the argument. The point is that the whole idea at the base of "Classic Strong AI", that computer programs can be conscious if they just manipulate the right symbols, is completely baseless.


Simon

If you disagree, post, don't moderate
[ Parent ]

Yes, quite (4.00 / 1) (#100)
by _cbj on Tue Jul 09, 2002 at 06:12:25 PM EST

That's about the size of it.  Searle a nonsense-monger and the only sane position a non-committed one either way, until actual science has come galumphing home with results.

What reasons have you to suspect syntactic manipulation is insufficient, though?

[ Parent ]

Syntactic Manipulation (none / 0) (#114)
by Simon Kinahan on Wed Jul 10, 2002 at 10:43:55 AM EST

I suspect it is insufficient for three reasons:

1. Physically, the human brain does not look like a device for manipulating symbols. If it were, it would probably look more like a digital computer. As it is, it is very analogue and messy. It looks more like what, evolutionarily, you would expect: a device for learning, triggering and coordinating complex sequences of behaviour.

2. Introspectively, and from experiences with others, humans are not very good at manipulating symbols. We suck at maths. We get probabilities wrong instinctively. We find some problems (such as the Wason selection task) much easier if they're presented in terms of a life-relevant situation.

3. Philosophically, there is no syntax in nature. Searle actually grants too much to the "Strong AI" crowd here, I think. Syntax and symbols are all to do with patterns and with reference. Things in nature don't refer to one another. They may cause one another, but there is no reason, other than human preference, to think of this as reference.

Now, of course, it is quite possible that there are several routes to consciousness, or that consciousness is a property of the universe and possessed by all things, or that the human brain, at some intermediate level we don't see yet really is manipulating symbols, or that larger scale patterns really exist in nature, and not just in the mind, but there is no evidence for these things, and therefore the negatives above seem provisionally quite convincing.

Simon

If you disagree, post, don't moderate
[ Parent ]

Actually (none / 0) (#117)
by i on Wed Jul 10, 2002 at 01:42:23 PM EST

neurons are pretty much digital devices. They are either "on" or "off".

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
Not so (none / 0) (#125)
by Simon Kinahan on Wed Jul 10, 2002 at 03:34:31 PM EST

That is true for neural nets, I believe, but it is not true for real neurons. Neurons fire to different degrees, and other neurons respond differently to them.

Simon

If you disagree, post, don't moderate
[ Parent ]
Hm. (none / 0) (#130)
by i on Wed Jul 10, 2002 at 03:58:45 PM EST

I always thought that these different degrees are actually implemented as different numbers of fires in short succession. Am I wrong? I shall look it up when I go home.

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
Not sure (none / 0) (#131)
by Simon Kinahan on Wed Jul 10, 2002 at 03:59:48 PM EST

Let me know what you read.

Simon

If you disagree, post, don't moderate
[ Parent ]
Books I have seem to confirm (none / 0) (#148)
by i on Thu Jul 11, 2002 at 09:07:55 AM EST

this idea, but then they are light pop-sci books. I'll have to search further for something more serious.

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
Why would you do that, guy? (none / 0) (#119)
by _cbj on Wed Jul 10, 2002 at 01:56:08 PM EST

You do understand that those objections are really very weak?  There is no evidence in favour because the science of artificial intelligence is still very primitive.  Which is also why there is no evidence to the contrary.  The questions haven't been properly formulated, never mind answered, so why would you even adopt a position?

Personally, I hope very much that Strong AI is correct.  I could create artificial women and have power over them and they would love me, always.  Therefore I am optimistic.  Why are you pessimistic when you don't even believe the Chinese Room?

[ Parent ]

Well, ... (none / 0) (#126)
by Simon Kinahan on Wed Jul 10, 2002 at 03:38:45 PM EST

I wouldn't say they are weak. I would say they are not airtight, but I'm only arguing for a balance of the probabilities against a digital computer being able to become conscious the way a human is. Having such arguments is better than the position the strong AI "true believers" are in, which is to have no arguments.

As I think I said, I believe the Chinese room argument supports my position, but is not strong enough to support Searle's. There is no reason to believe such a system could be aware, and it seems more likely that it could not.

Simon

If you disagree, post, don't moderate
[ Parent ]

To clarify... (none / 0) (#127)
by _cbj on Wed Jul 10, 2002 at 03:49:54 PM EST

Your position is that a collection of circumstantial evidence sways you (fair enough) without being conclusive, and the Chinese Room supports that by proving nothing.  Is that it?  

What value remains in Searle's Chinese Room when its door is off the hinges and local neds have put graffiti all over it?

[ Parent ]

The chinese room supports it .. (none / 0) (#129)
by Simon Kinahan on Wed Jul 10, 2002 at 03:57:21 PM EST

... by showing that there is no reason to believe, and that, indeed, it is counterintuitive to believe, that a computer can be conscious.

Simon

If you disagree, post, don't moderate
[ Parent ]
My sweaty arse... (none / 0) (#133)
by _cbj on Wed Jul 10, 2002 at 04:13:42 PM EST

...shows no reason to believe that a computer can be conscious.  Kiss that.

Now that was my good retort, but I have a dull one too.  The Chinese Room shows not that there is no reason to believe, but rather shows no reason to believe.  Like my arse it comments not a jot on any of the reasons to believe.  If you're left with intuition, that's fine by me.  I equate that with lack of imagination when we're talking about conscious rooms and rocks.  I find them perfectly plausible.

Don't get me wrong.  I don't rule out the possibility of interesting and convincing rhetorical arguments against Strong AI.  I do rule out Searle having invented one of them.  So can we forget about that latterday Goethe-without-the-talent-in-other-areas?  Can we call him a numbskull?  Do I win the cow?  

[ Parent ]

Nearly ... (none / 0) (#135)
by Simon Kinahan on Wed Jul 10, 2002 at 04:37:09 PM EST

The Chinese room argument would merely show no reason to believe in the possibility of syntactic manipulations producing consciousness, if there were any argument that showed that it could. There is none, or at least none I know of, so it illustrates that fact quite nicely. I'll agree with you that it doesn't prove what Searle thinks it proves. Is that bovinely sufficient ?

I'm quite happy with the idea of conscious rooms and rocks. I just don't see any reason to suppose these really exist. Yet.

Simon

If you disagree, post, don't moderate
[ Parent ]

I'll leave it there. Good morning (nt) (none / 0) (#145)
by _cbj on Wed Jul 10, 2002 at 07:12:53 PM EST



[ Parent ]
The chinese room and Turing test are both flawed (3.00 / 1) (#35)
by squigly on Mon Jul 08, 2002 at 08:29:43 PM EST

The Chinese room example (it would have been helpful to include a summary) is slightly flawed itself.  Can you actually produce a decent lookup table that will give sensible responses to Chinese questions?  However, it is pretty much a direct analogy for the problem with Alice and Eliza type programs.  These are not particularly intelligent, but have managed occasionally to convince an interrogator that they are human.

But this isn't the only criticism of the Turing Test.  Certain data mining applications and neural networks can arguably be said to be intelligent, but they are too specialised to pass the Turing Test.  We also have the uncanny capability of humans to act like machines, which adds more problems for the accuracy of the test.

Personally, I feel that while Turing has undoubtedly provided Computer Science with a lot of valuable concepts and ideas, and the Turing Test is an interesting basis for a thought experiment, we should not rely on it for testing intelligence.  We should be wary of treating everything that Turing said as Gospel.  Turing was speculating before computers existed, and long before programs of the complexity of ALICE were around.  His views are going to rely on certain assumptions that may have turned out to be invalid.

[ Parent ]

Missing the point. (4.00 / 1) (#36)
by qpt on Mon Jul 08, 2002 at 08:34:06 PM EST

The point of Searle's argument is not that we can produce the required lookup table, but that the existence of such a table is apparently possible. Moreover, if such a table existed, and were employed by a machine to engage in a conversation, we would not consider the machine to be intelligent.

Domine Deus, creator coeli et terrae respice humilitatem nostram.
[ Parent ]

That was indeed his little nub (3.00 / 1) (#43)
by _cbj on Mon Jul 08, 2002 at 10:21:54 PM EST

...if such a table existed, and were employed by a machine to engage in a conversation, we would not consider the machine to be intelligent.
And as you haven't stated whether you agree with him or are merely rousing interest, I'll credit you with the perception to see the mistake in his overarching pronouns. Truly Searle wishes, wishes so very hard, that he is not alone. Childhood issues, unquestionably.

[ Parent ]
It's the "royal we". (none / 0) (#57)
by squigly on Tue Jul 09, 2002 at 06:56:57 AM EST

Very useful for this sort of philosophy.  It focuses the opponent's views on the concepts rather than on the speaker.

[ Parent ]
Very useful for misdirection, more like (3.00 / 1) (#74)
by _cbj on Tue Jul 09, 2002 at 10:52:31 AM EST

Searle needs that "we" to mean everyone for his reductio ad absurdum to be accepted, and even then it wouldn't attain rightness, as he's well outside any formal system where such games are allowed.

[ Parent ]
Can it be done with something so simple? (none / 0) (#56)
by squigly on Tue Jul 09, 2002 at 06:55:07 AM EST

My point is that the Chinese room may not be able to convince a Chinese speaker that the other person is Chinese if it is just a lookup table.  

Such a system would need to be very complex, and involve a certain amount of storage of facts and some degree of induction.  For example, if I said "My name is Fred", and some time later asked it what my name is, then I would expect it to remember that.  If I said that A is B and B is C, then asked whether A was C, I would also expect it to be able to deduce that.  Given that computers can store this sort of information, I see no reason that the Chinese room shouldn't be able to do this.

As a set of rules, this would not be too hard to implement, given enough time to produce a decent set of answers, and a large set of writeable tables to handle implications.  A Chinese person who understood the workings of the room could go into the room, look at the symbols, and find out what has been learned.  Then he could continue the conversation, and understand it.

This is a variant on the systems reply.  The entire room would be intelligent enough to speak Chinese.  
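A minimal Python sketch of that "writeable tables" idea (hypothetical, invented for illustration): alongside a fixed question-and-answer table, the room keeps a store of facts it has been told, so "My name is Fred" can be recalled later.

    # Fixed rules plus a writeable fact store, as described above.
    FIXED = {
        "how are you": "I am fine, thank you.",
    }

    facts = {}  # the writeable part of the room's rulebook

    def room(line):
        line = line.strip().lower().rstrip('.?!')
        if line.startswith("my name is "):
            facts["name"] = line[len("my name is "):]
            return "Nice to meet you, %s." % facts["name"]
        if line == "what is my name":
            return "Your name is %s." % facts.get("name", "unknown to me")
        return FIXED.get(line, "Please go on.")

    print(room("My name is Fred."))   # stores the fact
    print(room("What is my name?"))   # -> Your name is fred.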

[ Parent ]

Well ... (4.00 / 1) (#64)
by Simon Kinahan on Tue Jul 09, 2002 at 08:31:33 AM EST

If it cannot be done with a lookup table, it cannot be done with any finite computing device, because any finite state machine can be reduced to a lookup table. Of course, the table gets really, really big, but if we're playing philosophy and not computer engineering, that doesn't matter.

Simon

If you disagree, post, don't moderate
[ Parent ]
Computers can do more than a simple lookup table (none / 0) (#85)
by acronos on Tue Jul 09, 2002 at 02:05:56 PM EST

What most people envision when we say "lookup table" is a big list of rules: for any input, give this output. There is no allowance for rules that affect internal states (add new rules). Computers contain an element not listed in such a lookup table: they contain memory. This enables them to create new states. They can use a rule to add new rules. I once made a Coke machine money changer out of an EEPROM. It used internal states, but it included several outputs connected directly to inputs. While it is possible to construct such a memory out of a table with a feedback loop, most people don't know this and are not thinking about it when they envision a lookup table. A standard lookup table would not have these feedback circuits. Because computers do have such feedback circuits, they can do things that a standard lookup table can't.
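To make the feedback idea concrete, here is a minimal Python sketch (hypothetical, and nothing like the actual EEPROM design): the table maps (state, input) to (next state, output), and feeding the next state back in as part of the following lookup is exactly the loop that turns a static table into a machine with memory.

    # A coin-changer flavoured state machine built from a lookup table.
    # state = cents deposited so far; vend at 50 cents and reset.
    TABLE = {
        (0, 25): (25, "need more"),
        (25, 25): (0, "vend"),
    }

    def machine(coins):
        state = 0
        for coin in coins:
            # Feedback: the looked-up next state becomes part of the
            # next lookup's address, alongside the new input.
            state, output = TABLE[(state, coin)]
            print(output)

    machine([25, 25])  # -> need more, vend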

[ Parent ]
No (5.00 / 1) (#94)
by Simon Kinahan on Tue Jul 09, 2002 at 04:00:30 PM EST

In terms of what you can make it do, there is no difference between a computer and a lookup table. Consider the following steps:

1. A computer with a fixed amount of storage is equivalent to a finite state machine, because it only has a finite amount of state. The computer transforms itself into a new state according to its inputs, and its current state.

2. For any given, finite stream of inputs, the result the computer will give is perfectly determined by its initial state.

3. Therefore, for any given starting state, and stream of inputs of a given length, we can make a table of the outputs for every input. It is a huge table, but it is still a table.

4. Since there is only a finite number, m, of possible states, streams of more than m inputs will eventually return the computer to a state it has occupied before. Therefore, if we create a table from (start state, input stream) pairs for all m possible states, and all streams of length m or less, we have all the computer's possible responses.

QED. If you still don't believe me, go take a class in computability theory.
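The construction is mechanical enough to sketch in a few lines of Python (a toy instance, with a two-state parity "computer" standing in for the machine; any finite-state step function could be dropped in):

    from itertools import product

    STATES = [0, 1]  # all possible internal states (step 1)
    INPUTS = [0, 1]  # all possible per-step inputs

    def run(start, stream):
        # Deterministic given start state plus inputs (step 2):
        # here, just track the running parity of the bits seen.
        state = start
        for bit in stream:
            state = state ^ bit
        return state

    # Tabulate every (start state, input stream) pair up to the
    # bound m = number of states (steps 3 and 4).
    m = len(STATES)
    table = {}
    for start in STATES:
        for n in range(1, m + 1):
            for stream in product(INPUTS, repeat=n):
                table[(start, stream)] = run(start, stream)

    print(table[(0, (1, 0))])  # 1 - the same answer the machine gives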

Simon

If you disagree, post, don't moderate
[ Parent ]

No need for classes (2.00 / 1) (#102)
by acronos on Tue Jul 09, 2002 at 08:15:54 PM EST

I have already taken plenty of classes. I have a degree in computer engineering. Your lookup table cannot learn. A computer can. If you do not understand this then you do not understand computers.

If every possibility is already known then you have an omniscient computer that knows everything. Here in the real world the computer has the ability to learn and your lookup table does not. Computers can use outside clocks and noise circuits to create random numbers. Computers have very flexible access to memory. Computers can easily rewrite their own code. None of this is possible in a pure lookup table.

I am very aware that any logic design can be represented in a lookup table. However, a lookup table cannot be programmed to learn without feedback. All states must be mapped out at design time. If you are going to add feedback, memory, registers, etc. then you are not thinking of a lookup table, you are thinking of a computer. While this can be done, it is deceptive to discuss it this way. When I talk about a lookup table I am talking about a static design. A design that cannot learn. Such a design would be incapable of remembering the previous things said in a conversation, but a computer could remember.

One of the interesting things about computers is that you can often make something out of something else. "Nand" gates can be turned into an "and", an "or", and an "inverter". "Nand" gates can even be wired with feedback on themselves to create memory. A complete computer can be built out of this one simple component if it is wired up correctly. You can make a "nand" gate out of a lookup table, but a "nand" gate is not capable of everything that a computer is.

Lookup tables could be designed in the hardware so that they feed back on themselves. Such a design would be a computer made out of lookup tables. It would have to be done in the hardware. A pure lookup table is not designed this way. To use a lookup table, one looks at a table on the left side (address lines) to find the answer on the right (data lines). There is no way to make changes to the "rules" in such a design. A lookup table cannot learn. A computer can make changes to its "rules." A computer can learn new rules, so it can do more than a lookup table.

[ Parent ]

"Learning" (5.00 / 1) (#113)
by Simon Kinahan on Wed Jul 10, 2002 at 10:08:51 AM EST

I don't really understand in what sense you think "learning" makes any difference. Are you saying you *cannot* make an equivalent lookup table for any computer (in which case you are practically correct but uninteresting, and theoretically wrong), or are you saying that the lookup table is fundamentally different from the computer even though it can do all the same things ?

If you're arguing the former, you really need to take some computability theory classes. The equivalence from a computer, to a finite state machine, and then on to a lookup table, is well established. Modifying code makes no difference: the code is just part of the *finite* state. Clocks and random noise make no difference: those are just inputs.

If you're arguing that there's something different about the processes that go on inside a computer that means the computer can be conscious but the lookup table not, then you *might* be right. We know so little about the topic, just about any explanation is somewhat credible.

The question then is, if you accept the potential existence of such a table: if you think the computer can be conscious, but the lookup table not, then why ? What exactly is it about the process of modifying internal state that leads to consciousness ? My car modifies its internal state, but has yet to show signs of intelligence.

Simon

If you disagree, post, don't moderate
[ Parent ]

I do not accept the existence of such a table. (none / 0) (#123)
by acronos on Wed Jul 10, 2002 at 03:10:44 PM EST

Yes, a computer is fundamentally different from a lookup table.  I agree that if we had a full knowledge of every possible computer state, and infinite memory, it is possible to build a lookup table that could emulate any computer.  To do so would require knowledge of every possible future input and every possible internal state.  For simple situations, where the inputs are limited and the internal states are static, this is possible.  For more complicated situations, where the inputs are infinite and the internal states are unlimited, it is impossible for us to build a finite lookup table.  Since we do not have perfect knowledge of the future (creating infinite possibilities) and computers are capable of adding or altering their internal states (creating infinite possibilities), some computer software cannot be modeled with a finite lookup table.  The distinction is that a computer can add new states.  A computer can learn.  A lookup table cannot.

People innately know that the full diversity of human intelligence is not possible with a static lookup table.  When computer scientists, who they trust, tell them that a computer is just a static lookup table, they come to the correct conclusion that computers will never be capable of human level intelligence.  However, computer scientists were not fully honest with people.  The truth is that while theoretically it is possible to model any situation a computer could encounter in a lookup table - practically speaking this is impossible.  Computers are not synonymous with static lookup tables.  The static finite rulebook being impossible is just one among many of the reasons that the Chinese room argument fails.

[ Parent ]

The possibilities are *not* infinite (none / 0) (#128)
by Simon Kinahan on Wed Jul 10, 2002 at 03:55:38 PM EST

Look, the number of possible states, and the range of possible inputs, of a digital computer can be determined in advance, and is *finite*, because that is what being digital means. Get it ? *Finite* ! You can keep going on about infinite possibilities until the cows come home, and of course the universe does contain infinite possibilities, but inside a computer, everything is reduced to bits, and you only have so many bits of state, and so many bits of input, and time, too, is quantised by the clock. We actually go to great efforts, in designing digital circuits, to make the number of possibilities very firmly finite.

Think about it for a moment. Take all the registers, RAMs and configurable logic devices inside the machine, and stick all the bits in them together into one string. That is the machine's state. Now take all the values, on all the input lines, and do the same with them. That is the machine's input. Now, on any given clock step, the two together perfectly determine the machine's output. Append all the inputs over n steps together, and add the starting state, and that perfectly determines the output after n steps. In all your posts, and all your blether about infinities, you haven't presented an argument as to why this is not the case.

Now, unless you're actually going to present some kind of argument as to why you believe the possibilities are not finite but infinite, I'm giving up on this conversation.


Simon

If you disagree, post, don't moderate
[ Parent ]

Giving up (none / 0) (#134)
by acronos on Wed Jul 10, 2002 at 04:33:35 PM EST

It is the interaction with the outside world and the ability to internalize that interaction that makes it infinite.  Any given set of states can be modeled.  All possible states, including the outside world, cannot.  That is the difference between the machine that can learn and the one that cannot.  The one that can learn includes the interactions outside of itself changing internal states and messing up the rulebook.  Also, the one that can learn can do the same thing using VASTLY smaller resources.  Your rulebook is pointless because all it does is confuse people.

I am also giving up on this conversation.  It doesn't look like we are going to get anywhere.

[ Parent ]

One more try ... (none / 0) (#136)
by Simon Kinahan on Wed Jul 10, 2002 at 04:42:57 PM EST

1. The interactions between the computer and the outside world don't provide infinite possibilities, because the inputs have to be digitised for the computer to accept them. On any given clock step, the computer can only accept as many bits of input as it has digital input lines. Thus, no infinities.

2. Resources are irrelevant. This is a philosophical argument. I can turn several whole universes into a lookup table if I need to.

3. It may be confusing to look at a computer as a lookup table, but it gets us away from the terrible anthropomorphisms people, especially AI researchers, project onto computers. It's much harder to anthropomorphise a lookup table than a computer, even though they're doing the same thing.

Simon

If you disagree, post, don't moderate
[ Parent ]

There is more than one clock cycle involved (none / 0) (#137)
by acronos on Wed Jul 10, 2002 at 05:06:45 PM EST

A computer can process an object that is billions of bytes of data using only 64 data and address lines.  It doesn't only happen in the same cycle.  There are an infinite number of cycles available.

[ Parent ]
Yes (none / 0) (#138)
by Simon Kinahan on Wed Jul 10, 2002 at 05:14:14 PM EST

I covered that already. Now you're getting the idea, go back to my first post on this subject. There is only a finite amount of state available, therefore every 2^(bits of state) cycles the computer must repeat its state. Therefore: still a lookup table. That's the whole computer, incidentally, not just the CPU.

Simon

If you disagree, post, don't moderate
[ Parent ]
Design (none / 0) (#140)
by acronos on Wed Jul 10, 2002 at 05:23:15 PM EST

Alright, picture a computer connected to a random noise generator.  This computer uses the noise to shape a fractal design on the screen.  There is no state machine that you can envision that can predict the output of the fractal before it is finished.  Learning can create things that cannot be modeled in a pre-designed package.  Yes, your lookup table could manipulate the data in the same way that the computer did, but the lookup table fails in the next step.

Now envision taking the fractal and using it to generate your new algorithm.  The next fractal will use this algorithm and the random noise.  Now your lookup table is screwed but the computer keeps on chugging.


[ Parent ]

Nope, still OK (none / 0) (#141)
by Simon Kinahan on Wed Jul 10, 2002 at 05:31:23 PM EST

The number of possible states a computer can be in is vast. That is all your example shows. It is no harder for the computer to be in a state in which it is using a random algorithm and random inputs to make pictures than it is for it to be in a state where it is using a predetermined algorithm, and deterministic inputs to do the same thing. Indeed, each of these states, or actually groups of states, occupies only a tiny part of the lookup table. The lookup table contains every possible program, every possible state of every possible program, and every possible input, plus the vast majority of states that are just illegal, therefore generating programs and inputs by whatever means you please makes no difference.

The number of possible inputs, the number of possible programs and the number of possible state is still finite. Therefore: still a lookup table.  

Simon

If you disagree, post, don't moderate
[ Parent ]

Hmmm (none / 0) (#143)
by acronos on Wed Jul 10, 2002 at 05:58:22 PM EST

This discussion is going nowhere.  I am not conceding my point.  I am conceding my ability to convince you.  

Some physicists say that the universe is made of finite quanta or energy levels.  That space is quantized.  Using your example then we model a human being as all of the quantum bits.  Now we predict every single possible position and energy level for this human.  Now I have a human lookup table.  But wait, it's finite because it is made of bits.  Therefore, humans are not intelligent.

People are crazy when they are defending their position as the center of the universe. Computers are not lookup tables. Computers can be built out of lookup tables, although in your case practically infinite ones. Computers are different from lookup tables because they have memory and internal states. Today's computers are built out of transistors. But a computer is not a transistor either. Humans can be built out of dirt, but that does not make us only dirt. There is something more to us, and that something is our relationship with the outside world and our internal life (states). Enjoy your closet and blinders while they last. Maybe we will not create human level AI, but if we do, will you still be in denial?


[ Parent ]

I wish I had written that differently (5.00 / 1) (#144)
by acronos on Wed Jul 10, 2002 at 07:01:12 PM EST

I do concede the point.  A computer can be completely modeled with a finite lookup table.  That lookup table would have more states than there are atoms in the universe, but I don't guess that matters, it's finite.  I apologize for the venomous attack in my other post.  You were right.  I did not understand the sheer magnitude of the table you were considering.  I still don't think it is a fair analogy because no one can conceive of the sheer magnitude of possibilities such a table generates.  Yes I admit that a computer can be completely modeled so.  But, I also admit that a similar table could model the entire universe.  Even if the universe was truly analog, which I doubt, such a table could approximate it.  I think it is deceptive to say a lookup table can do everything a computer can, even if it is true, because almost no one is envisioning the implications of a lookup table that huge.  Sorry it took me so long to come around to what you were actually saying.

[ Parent ]
Finite, infinite, who cares (none / 0) (#154)
by bugmaster on Thu Jul 11, 2002 at 08:30:46 PM EST

I find the whole finite vs infinite discussion a bit pointless. Would it make any practical difference if the possible number of responses was finite, and set to about 10^9999999999 ? Probably not. I think the real question is, "is it reasonably probable that anyone will be able to store the lookup table on a computer smaller than the Universe in size ?". In this case, the answer is probably still "no"; but it doesn't really matter -- I doubt anyone is seriously considering implementing any computer as a giant lookup table. Of course, learning stateful algorithms and such are a different story altogether.
>|<*:=
[ Parent ]
The point ... (none / 0) (#155)
by Simon Kinahan on Fri Jul 12, 2002 at 05:33:40 AM EST

... is a philosophical one. If people contend that a computer program can be conscious, the question is, do they think the same program implemented as a lookup table would be conscious ?

"Yes" is, I think, a pretty difficult answer to sustain. "No" implies there is something about the physical processes inside a von Neuman machine that makes it capable of sustaining consciousness, which makes the quesion "what ? and why, then, are they not conscious all the time ?".

"Such a table is practically impossible" is a bit of a cop-out, as an answer, unless you can can link that impossibility to the possibility of consciousness somehow. Someone - it might have been you - came up with a rather elegant argument somewhere in this thread along those lines.

Simon

If you disagree, post, don't moderate
[ Parent ]

Wasn't me (none / 0) (#156)
by bugmaster on Fri Jul 12, 2002 at 07:47:36 AM EST

Sorry, elegant arguments aren't my forte; I am really more of a troll :-)

Regarding the von Neumann machine, I would argue that in principle, an infinite, read-only lookup table would be equivalent to the machine. However, the machine has the ability to learn; that is, to change its own rules. Thus, a finite lookup table will probably not be able to approximate it, since it may turn out that the space of all possible state paths in the machine is infinite (due to the unpredictable input and learning ability). Note that I don't think I know enough about physics to actually defend the statement "the space of all possible inputs is infinite"; but it seems a safer assumption to make.

In summary, a finite lookup table would not be identical to the learning algorithm in the von Neumann machine, but an infinite one would be.
>|<*:=
[ Parent ]

Universal constraints (3.00 / 1) (#110)
by jig on Wed Jul 10, 2002 at 07:53:15 AM EST

You're right, in terms of the output produced all finite state machines have an equivalent look-up table, in an infinite universe.

But, if the universe is finite - as is believed - and it has k possible states, then any machine with enough memory that its look-up table equivalent would be bigger than k can have no such equivalent look-up table within this universe. Thus, not all computers will have look-up table equivalents.

-----
And none of you stand so tall
Pink moon gonna get ye all

[ Parent ]

True (5.00 / 1) (#112)
by Simon Kinahan on Wed Jul 10, 2002 at 09:13:48 AM EST

But then the question arises: Why does it matter ? Suppose we had a computer program capable of consciousness, and the corresponding lookup table was too big for the universe to hold: we could hypothesise a universe just the same as ours, but big enough to hold the lookup table. The question is: would that lookup table then be conscious ?

If you contend that such a universe is actually impossible, then the question is, why ? and what is the connection between its impossibility and the problem of building a conscious machine ?

Simon

If you disagree, post, don't moderate
[ Parent ]

Why does it matter? (none / 0) (#139)
by jig on Wed Jul 10, 2002 at 05:17:48 PM EST

But then the question arises: Why does it matter ?

That's a very good question. I asked that very same question once in a philosophy class concerning the knowledge of other minds. (People should ask it more often in philosophy classes.) Everyone laughed, but the question remained.

And the answer is, of course, that it doesn't matter. My position is pretty much existentialist, in that I hold the actual constitution of the AI (or other minds) to be irrelevant. It is its actions that matter. I don't particularly care if it 'actually' had thoughts or 'actually' felt pain, so long as it does things in a manner that is consistent with whatever I define to be 'intelligent'.

I wasn't trying to make any points in particular with my reply to your post. I was only arguing against the theoretical look-up tables for fun. You used the practical constraints of memory to argue for the possible existence of equivalent look-up tables in all cases. I used the practical constraints of the universe to argue against it.

-----
And none of you stand so tall
Pink moon gonna get ye all

[ Parent ]

Searle's lookup table is bigger than he thinks. (5.00 / 1) (#118)
by Boronx on Wed Jul 10, 2002 at 01:52:46 PM EST

Searle envisions a pattern such as

Entry:

Response:

Entry:

Response:

Where each entry is an input, and each response the output of his box. But to show a train of thought, a real Turing machine will have to modify itself dynamically between inputs (learning).

Therefore, the input to the machine is not a *single entry* but the entire history of entries for that instance of a program. In other words, if you talk to a learning program for 100 years, non-stop, the whole history of conversation is a *single* input.

You can't look that up in any look up table, no matter how big your universe is.

Here's another argument: If we make the universe big enough to encompass a lookup table, we can make a computer big enough to surpass that lookup table.
Subspace
[ Parent ]

Ah ... (none / 0) (#142)
by Simon Kinahan on Wed Jul 10, 2002 at 05:49:48 PM EST

Now that I like. A very interesting argument. I'm not sure you're right, but it's good nonetheless. I take my hat off to you, sir.

Simon

If you disagree, post, don't moderate
[ Parent ]
While I admit I was wrong in the previous post (none / 0) (#149)
by acronos on Thu Jul 11, 2002 at 01:14:29 PM EST

There is more to the output of a computer than a simple lookup table. Yes, you are correct that a computer can be completely modeled internally by a lookup table. I was wrong, but I want to address two features of this lookup table. The first, and lesser, concern is the size of the lookup table. The second, and far greater, concern is that table's interaction with the outside world. My purpose is not to argue with you, because I have no knowledge of whether you disagree with me on this or not. My purpose is to make people who read that a computer is only a lookup table aware that things may not be as simple as they first appear.

First issue: The number of atoms in the universe is 4e79, or approximately 2^265. That is a 4 with 79 zeros after it.

To get all the possible states for a 100GB hard drive you would need approximately 2^800,000,000,000 rules. 100GB x 8 bits/byte = 800x10^9 bits per state, generating 2^800,000,000,000 states. I think the implications of a table that large would make it hard to use real world human intuition to determine its properties.

Second issue: However, I think there is a deeper problem with this type of thinking. Digging deeper, the processor itself can be modeled with only 8 32-bit registers. (Let's exclude SSE for simplicity.) Let's only allow one 32-bit input/output data line and a 32-bit address line. To do this we just need 8x32 + 1x32 + 1x32 = 320 bits per state modeled, so 2^320 states. That is still a few more states (entries in the lookup table) than there are atoms in the universe, but let's not worry about that now.

Let's take the above computer and use it to draw a picture of a flower on the screen. Let's make the picture 600x400 pixels with 8-bit color depth. This generates 240,000 bytes, or 1,920,000 bits per state, and 2^1,920,000 possible states. How can a computer with only 2^320 states completely control a device with 2^1,920,000 states? Or said another way, how can a computer with only 10 32-bit values to manipulate control a screen with 240,000 bytes? The answer is that the "lookup table" used data outside of itself to generate a result outside of itself. It read the data from the hard drive or camera to generate the output on the screen. This vastly expands the possibilities of what such a "lookup table" can do.

Can you model the exact internal behavior of any computer using a lookup table? Not really, but theoretically yes. But, unless you are going to isolate that computer from the outside world, you cannot use that table to predict the behavior of that computer, because the outside world influences the computer. If you are standing on the outside of the computer, its behavior is vastly more complicated than any of its individual parts or any of its entries in a lookup table. So long as a computer can process data outside of itself, you cannot model the complete output of any computer unless you include the universe in the model. The fact that you can completely model the internal states does not give you the output. Because this is a difficult point, let me say it yet another way. There is no way you could have guessed the picture was a flower using only the internal lookup table before the outside world data was presented. The resulting output was far more complicated than that lookup table. Yes, if you included all the inputs in the lookup table then you could have made the prediction. But, including all the inputs in the lookup table means including the universe in the lookup table.
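A quick sanity check of the arithmetic above, in Python - the counts themselves are far too large to print, so we compare exponents instead:

    from math import log10

    # 4e79 atoms vs 2^265: both are about 10^79.x
    print(79 + log10(4))    # ~79.6
    print(265 * log10(2))   # ~79.8

    # 100GB disk: bits of state, hence the exponent in 2^(bits)
    print(100 * 10**9 * 8)  # 800,000,000,000

    # 600x400 screen at 8-bit depth: bits per frame
    print(600 * 400 * 8)    # 1,920,000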

[ Parent ]

Chinese Room (4.66 / 3) (#45)
by bugmaster on Tue Jul 09, 2002 at 12:16:44 AM EST

As far as I understand, the Chinese Room argument concludes that the person inside the room doesn't really speak Chinese. This is totally true, but also totally irrelevant. The person is just a component of the system. While the person does not speak Chinese, the system as a whole does. Similarly, the DAC in your DVD player can't play DVDs; but the entire DVD player can.

Also, AFAIK there are 2 versions of the Chinese Room argument. In one version, the rulebook is just a lookup table. This is, of course, doomed to failure. However, in the other version, the rulebook has state; it contains rules for modifying the rules. In this case, it is not quite so clear that the room will never be able to speak Chinese. What if the rulebook contains a program for learning the language ?

Some people have also claimed that human brains have random quantum fluctuations that the room, being fully deterministic, will never be able to emulate. This is still easily remedied, however - just point the room at random.org, and you're done.

All these objections have been raised before, of course, and eventually Searle replies that, while all this may be true, human brains have certain "semantic contexts" that computers can never have for some reason. In other words, humans have souls, computers don't, and that's that. I suppose this is quite persuasive, if you are into dualism... I never really subscribed to dualism, though.
>|<*:=
[ Parent ]

Dichotomy (4.00 / 1) (#79)
by Khedak on Tue Jul 09, 2002 at 12:54:45 PM EST

If the room as a whole can read Chinese, then you're saying the room can think. How are you not dualist? What separates the Chinese Room from any other room, giving it the ability to think? Does it have a soul? Is it the fact that it's carrying out a language algorithm? If you say the algorithm is independent of the hardware, isn't that no different than saying the soul is independent of the body?

[ Parent ]
My take. (5.00 / 1) (#107)
by i on Wed Jul 10, 2002 at 02:48:56 AM EST

The language algorithm carries intelligence much like the algorithm that's executing inside my brain carries intelligence. You can imagine that it's possible to capture that latter algorithm and transfer it to another physical brain, or even to an electronic computer. Is that what you mean by "soul"?

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
Capture? Transfer? (none / 0) (#116)
by Khedak on Wed Jul 10, 2002 at 01:41:40 PM EST

The language algorithm carries intelligence much like the algorithm that's executing inside my brain carries intelligence. You can imagine that it's possible to capture that latter algorithm and transfer it to another physical brain, or even to an electronic computer. Is that what you mean by "soul"?

Well, what is the algorithm that you're talking about? I mean, what makes up the algorithm, and how do you separate that from the hardware? The example I was speaking of dealt with a room with a man in it. The man in the room did not know Chinese, but the "System as a Whole" did. So, where is the intelligence? My point is that an algorithm is an abstraction, so how can you say something that exists only in an abstract sense is intelligence, unless you're pulling the ultimate abstraction and saying that it has a soul? If you claim that intelligence is sufficiently abstract that it doesn't depend on the physical components on which it is run, what's the difference empirically between that and dualism?

Another interesting question: Algorithms are being carried out all the time in the universe. Your cells are carrying them out. A set of dominoes, while collapsing, is carrying out an algorithm. A snowflake forming is carrying out an algorithm. Which algorithms carry intelligence, and how do you know? Is it a matter of teleology, of the intent of the algorithm?

All I'm saying is that the original author viewed "dualism" with repugnance, but I'm showing his point of view doesn't really solve any more problems than dualism and in many ways is actually equivalent. The only real difference is in the "immortal soul" idea, but since everything dealing with the afterlife is non-falsifiable, this is a philosophical difference and not an empirical difference.

[ Parent ]
The algorithm. (none / 0) (#121)
by i on Wed Jul 10, 2002 at 02:19:20 PM EST

If I can map all the neurons and their states and interconnections in my brain, then presumably I can duplicate this information in either hardware or software. That would be the algorithm. It is known how each individual neuron works, so it would not be hard (conceptually) to simulate the whole shebang.
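To illustrate "conceptually simple": the standard cartoon of a single neuron is something like a leaky integrate-and-fire unit, a few lines of Python. This is a sketch of the textbook model, not a claim about biological fidelity:

def lif_step(v, input_current, dt=1.0, leak=0.1, threshold=1.0):
    # Membrane potential leaks toward rest and accumulates input;
    # crossing the threshold emits a spike and resets the potential.
    v = v + dt * (input_current - leak * v)
    if v >= threshold:
        return 0.0, True   # spike
    return v, False        # no spike

The conceptual simplicity is real; the catch, as the replies below point out, is scale: wiring up something like 10^11 of these with 10^14 synapses.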

Intelligent algorithms are those that manifest intelligence. That is, when executed on actual hardware, they may pass the Turing test.

I don't think I fully understand this whole dualism thing. Replace "can speak Chinese" with "can play checkers". What exactly changes? Why does speaking Chinese require a soul while playing checkers doesn't? Is that because we can right now build a machine that does the latter but not the former?

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]

The question is... (none / 0) (#122)
by Khedak on Wed Jul 10, 2002 at 02:56:35 PM EST

If I can map all the neurons and their states and interconnections in my brain, then presumably I can duplicate this information in either hardware or software. That would be the algorithm. It is known how each individual neuron works, so it would not be hard (conceptually) to simulate the whole shebang.

We're quite a bit away from being able to do this, even in principle. The brain is designed to sit inside a human body and talk to muscles, organs, and other tissues through the nervous system. Unless you plan to build a computer the same size as a human infant's brain and place it inside a live human infant, there's no reason to expect you could even "grow a brain" this way. It's just way too complicated to even imagine at this point; you've got hundreds of millions of years of neurological innovation to simulate. There is simply no evidence linking the physical process of neurons firing to actual thought processes, because that kind of link is impossible to observe. You can't ask someone to think of a subject and then scan the state of the hundred billion or so neurons in their brain. To claim we can is like claiming we can replicate food in principle, like on Star Trek, molecule by molecule. To be sure, there's nothing there that directly violates the laws of physics, but it's still science fiction.

It's tempting to think that the brain works as a neural net, but remember that the neural theory of thought cannot at this time be supported by empirical evidence, and just because it happens to match nicely with the new (and limited) field of computer science doesn't make it the correct theory.

As for replacing "speak Chinese" with "play checkers": that takes intelligence out of the equation. There are lots of strong checkers-playing programs, yet nobody would suggest that these are "intelligent" in the manner a human is. This is the ultimate goal of AI: to generate intelligence that can deal with problems the way a human can, to understand and reason abstractly. This is where the dualism comes in: simply assuming that the algorithm can be extracted from the hardware, without even knowing what that algorithm is, seems nonsensical. Searle objected that it would mean he could sit in the Chinese room, obey an algorithm enabling him to apparently converse in Chinese, and yet not know Chinese. His opponents say this is okay because Searle isn't the one who knows Chinese; it's "The System as a Whole." So if we can have an intelligent human sitting within a room that is, itself, intelligent, doesn't that seem odd? Searle, sitting in a room, not speaking Chinese but being inside a room that can, in a strong sense of the word? Is that room as conscious as an actual Chinese person? According to Searle's opponents, yes it is. This seems like an awkward and difficult position to take, for the reasons I've outlined in previous messages.

[ Parent ]
Hm. (none / 0) (#124)
by i on Wed Jul 10, 2002 at 03:26:35 PM EST

The brain is designed to do that, yes. But what about the minimal amount of such talk? A brain can function in a completely paralysed body; that is, its interaction with tissues and muscles is pretty minimal. Why can't that be simulated?

We can and do observe electrochemical activity in the brain. I think we can confidently link this activity to thought processes on the one hand, and to neurons firing on the other. Of course we can't map individual neurons at our current level of technology, but I don't see why we can't map big lumps of them and say "this lump is more active when I think about X and that lump is more active when I think about Y".

We can have an intelligent human sitting within a room that is, itself, intelligent. It's not odd in the least. We can have a man who can play checkers executing a checkers-playing algorithm written by somebody else. Why is intelligence different? Is it because we can't write an intelligence algorithm yet?

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]

Um, well, I think... (none / 0) (#132)
by Khedak on Wed Jul 10, 2002 at 04:08:42 PM EST

The brain is designed to do that, yes. But what about the minimal amount of such talk? A brain can function in a completely paralysed body; that is, its interaction with tissues and muscles is pretty minimal. Why can't that be simulated?

The autonomic nervous system is still very active in persons in a coma, interacting with their various internal organs, regulating them and keeping the person alive. This is why people in a coma are not dead. I don't know exactly how complex this operation is, but I don't think simulating it is currently within anyone's reach, by a very large margin. If you disagree, I challenge you to try.

We can and do observe electrochemical activity in the brain. I think we can confidently link this activity to thought processes on the one hand, and to neurons firing on the other. Of course we can't map individual neurons at our current level of technology, but I don't see why we can't map big lumps of them and say "this lump is more active when I think about X and that lump is more active when I think about Y".

Observing activity in lumps of grey matter is a far cry from concluding that the human brain, on a neural level, functions like a neural network. It's not only possible but likely that as we develop the ability to observe living brains in greater detail, we will find inconsistencies with our current understanding of how the brain works. These could include anything from an understanding of the processes that guide the growth of new axons, to an understanding of the role of cytoskeletal computation in the individual neurons of the brain. Or maybe something completely different. Some people believe that each individual neuron is a quantum computer, using the cytoskeleton of the neuron. If this is so, it would make whole minds fundamentally unobservable (and unpredictable, hence free will). But you could still build one from scratch, I suppose. Anyway, I digress.

We can have an intelligent human sitting within a room that is, itself, intelligent. It's not odd in the least. We can have a man who can play checkers executing a checkers-playing algorithm written by somebody else. Why is intelligence different? Is it because we can't write an intelligence algorithm yet?

So, if the man leaves the room, is the room still intelligent? Is it not? What if he comes back? Is the room aware that something has happened? Why or why not? You can see that this situation is at least unusual. Why not, at least for the moment, accept that some other explanation, like the self-determination of a quantum computational system, is just as likely given our current lack of knowledge?

[ Parent ]
Hm. (none / 0) (#147)
by i on Thu Jul 11, 2002 at 09:03:27 AM EST

If we instantly freeze a human, is he still intelligent? Is he not? What if we manage to thaw him instantly such that his brain is undamaged? And so on. These are academic questions, of course.

Oh, and if each neuron is a many-qubit quantum computer, then of course only another many-qubit quantum computer can realistically simulate it, but then one might wonder how neurons manage to avoid decoherence. Single qubits are not a problem, since entanglement cannot be communicated by classical synapses.

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]

Late reply, sorry (none / 0) (#150)
by bugmaster on Thu Jul 11, 2002 at 05:16:00 PM EST

Sorry for the late reply; I was pretty busy at work. Moving right along:
If the room as a whole can read Chinese, then you're saying the room can think. How are you not dualist? What separates the Chinese Room from any other room, giving it the ability to think? Does it have a soul? Is it the fact that it's carrying out a language algorithm? If you say the algorithm is independent of the hardware, isn't that no different than saying the soul is independent of the body?
This question has been plaguing philosophers since ancient Greek times (at least), in a broader sense. What are numbers, and logical concepts? Are they independent dualistic entities, as Plato would say? Are the laws of arithmetic discovered, or just invented? The same question applies to logical rules like modus ponens, and to algorithms, which are after all made up of symbols, rules and numbers.

The jury is still out on that question, however. Personally, I tend to view numbers and algorithms as convenient abstractions that humans made up in order to manipulate their knowledge of the world. Saying "this program implements the bubble-sort algorithm" is just a convenient way of saying, "when I punch these lines into the computer, it can manipulate its electrons... (long physics description)... and eventually orders this array in O(N^2) time". Note that "array", "O(N^2)", etc. are also abstractions, just like the algorithm. Note, however, that many world-class philosophers have argued back and forth on this issue for centuries; I do not presume to know more than they do.
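To make the example concrete, the abstraction in question is only a few lines of Python:

def bubble_sort(array):
    # Repeatedly swap adjacent out-of-order elements; after each pass
    # the largest remaining element has "bubbled" to the end.
    n = len(array)
    for i in range(n):
        for j in range(n - 1 - i):
            if array[j] > array[j + 1]:
                array[j], array[j + 1] = array[j + 1], array[j]
    return array  # O(N^2) comparisons in the worst case

Nothing in those lines refers to electrons, yet "the computer runs bubble sort" and the long physics description are supposed to be the same fact at two levels of abstraction.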

In the broader view, the algorithm is really not important. The ultimate assertion of the Turing Test is that a system that behaves as though it was intelligent is, in fact, intelligent. That is, if you cannot distinguish the way the system behaves from the way a human behaves, then the system is human for all intents and purposes. We can then use terms like "algorithm" to explain why the system behaves the way it does, but they would all be abstractions.

Searle, however, denies Turing's claim. He claims that even if the system behaves as if it was human (mentally, I mean), it is still not human, because it's missing something or other that is equivalent to a soul. This is the viewpoint that I originally objected to; Occam's Razor seems to slice it away.
>|<*:=
[ Parent ]

Searle's Objection (none / 0) (#151)
by Khedak on Thu Jul 11, 2002 at 06:08:36 PM EST

Searle, however, denies Turing's claim. He claims that even if the system behaves as if it was human (mentally, I mean), it is still not human, because it's missing something or other that is equivalent to a soul. This is the viewpoint that I originally objected to; Occam's Razor seems to slice it away.

Actually I think Searle's objection lies with the assumption and not with the conclusion. Hence the Chinese Room example: the assumption that just executing an algorithm is enough implies the Chinese Room is as good a vehicle for a Turing-test-passing intelligent machine as any other. Searle counters this by trying to show that the Chinese Room is ridiculous. If the Chinese Room doesn't work, then the assumption (that performing an algorithm is sufficient to make up an intelligence) must be false. He's saying it's not possible to make a Chinese Room, because where would the intelligence be? He objects to notions that it arises from the holistic system as dualist, and hence the assumption that you can build such a machine in the first place is probably wrong. Searle doesn't say that if you build something that acts exactly human but has no brain it isn't human; he's saying it's impossible to have something that acts exactly human but isn't.

[ Parent ]
Re: Searle's Objection (none / 0) (#152)
by bugmaster on Thu Jul 11, 2002 at 07:16:18 PM EST

Actually I think Searle's objection lies with the assumption and not with the conclusion. Hence the Chinese Room example: the assumption that just executing an algorithm is enough implies the Chinese Room is as good a vehicle for a Turing-test-passing intelligent machine as any other.
Well, in that case, the "argument" seems to reduce to basic faith, just as with many "does God exist"-type arguments. Searle's main point was a straw man: demonstrating that the guy inside does not speak Chinese. This is true, but irrelevant. However, if we look at the room as a whole, it is not obvious (at least, not to me) that the room will never be able to speak Chinese. It all boils down to Turing's assertion vs. Searle's.

Personally, I like Turing's assertion (naturalism/behaviorism) better, because dualism has internal consistency problems (see the other threads). Furthermore, it seems intuitively cleaner: after all, I can't peek inside of other people's heads -- all I can see is their behavior. This is especially true on the Internet. Consider the problem of determining the age of some k5 user. Usually, someone might say, "bugmaster posts like he's a 13-year old brat" -- and this opinion will persist even if I claim that I am a 900-year old Tibetan guru. All that matters is my behavior. It should be easy to see how the general Turing Test can be extrapolated from this -- even though it would still require some faith, seeing as analogies are not logically valid arguments.
>|<*:=
[ Parent ]

Chinese Box (4.66 / 3) (#55)
by jig on Tue Jul 09, 2002 at 05:59:02 AM EST

Here's the formulation of the argument from the Internet Encyclopedia of Philosophy:

(A1) Programs are formal (syntactic).
(A2) Minds have mental contents (semantics).
(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.

From these three axioms, Searle draws the conclusion that:

(C1) Programs are neither constitutive of nor sufficient for minds.

The problem with this argument is that, while it is valid (i.e. the conclusion logically follows if the axioms are true), it is still easily refuted. All you have to do is disagree with one or more of his three axioms. I disagree with (A3) strongly, and I'm not so fond of (A2) either. It may be intuitively true that 'semantics' is more than just 'syntax', but it requires more than intuition to show that there actually is a difference. Searle hasn't done that, and so to me he's proved nothing.

-----
And none of you stand so tall
Pink moon gonna get ye all

[ Parent ]

The Flawed Chinese Room (none / 0) (#161)
by jasonhutchens on Tue Jul 16, 2002 at 12:19:16 AM EST

I disagree with Searle's conclusion. Why? Because the Chinese Room "intuition pump", as Dennett would have it, is merely a compelling thought experiment that panders to our intuitions. Unfortunately, it is full of rather large holes.

For example, what if the person in the room was able to communicate in Chinese? What if they could understand the questions and answers? Would Searle have concluded that the Chinese Room was intelligent?

Or what if the algorithm didn't converse in Chinese, but merely added two integers together? And what if the integers were disguised from the person in the room by being encoded in some strange symbolic language, so that the person was entirely unaware that they were working to add two integers together? Would Searle have concluded that computers can't really add integers together, that they merely simulate the process?
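That variant is easy to make concrete. A sketch in Python (the encoding is hypothetical): integers arrive in unary as runs of a meaningless squiggle, and the person in the room follows one purely syntactic rule.

def encode(n):
    return "@" * n          # the "strange symbolic language": unary squiggles

def decode(tape):
    return len(tape)

def room_rule(tape_a, tape_b):
    # The only instruction in the rulebook: copy the squiggles of the
    # first tape, then the squiggles of the second tape.
    return tape_a + tape_b

assert decode(room_rule(encode(2), encode(3))) == 5

From outside, the room adds integers; from inside, nobody is aware that any adding is going on.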

I personally believe that simulating intelligent behaviour is perfectly reasonable. Simulated intelligence would have plenty of applications. I also believe that, over the next decade, the greatest advances in AI will come from the electronic entertainment industry. The PlayStation2, GameCube and Xbox will succeed where MIT has failed.


- Jason L Hutchens www.amristar.com
[ Parent ]
cool (4.74 / 27) (#3)
by tps12 on Mon Jul 08, 2002 at 05:35:24 PM EST

I am working on such a bot, known as TACO. Here are some recent trial dialogues, with commentary:

HUMAN: Without help from web designers, using browsers like Mozilla and Opera will effectively cut off our ability to view web sites 'correctly.'

TACO: My pet peeve is when sites hype and announce new-and-improved sites, and then they come out and they are simply a gigantic flash application.

In this example, TACO identifies the keywords "Mozilla" and "Opera," and (correctly) categorizes the human's statement as a browser rant. His response is randomly chosen from a small database of similar bitchy complaints, and nearly appears human.

HUMAN: Here is an article about eBay purchasing PayPal.

TACO: The article is mostly numbers and money related stuff, but it also briefly mentions some of the controversy surrounding eBay.

Since TACO would have to comprehend the article to truly respond, he scans the article for keywords, identifying enough financial terms to label it "numbers and money related stuff" (while staying in character), then tacks on a vague reference to some controversy. Only if someone actually read the article could they tell that TACO's response is forged.

HUMAN: Talk about control, Apple has now decided it can block users from receiving media passes at MacWorld Expo. It blacklists these users by deciding if they run "rumor promoting" publications. This includes the webmasters of sites which have little to do with rumors or speculation such as Graphicpower.com.

TACO: Probably just bitter cuz Steve's thunder seems to get stolen at every show, and their lawyers can't seem to stop it ;)

Another random response that appears nearly human. This is mostly due to the smiley, granted, but that input was a lot to grok for a Perl script.
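For the curious, the whole trick fits in a screenful. A sketch of the keyword-and-canned-response approach, in Python rather than Perl (hypothetical, since TACO's actual source isn't shown here):

import random

# Each category pairs trigger keywords with interchangeable canned replies.
CATEGORIES = [
    ({"mozilla", "opera", "browser"},
     ["My pet peeve is sites that relaunch as one gigantic Flash app.",
      "If they'd just follow the standards, this wouldn't happen."]),
    ({"ebay", "paypal", "article"},
     ["The article is mostly numbers and money related stuff.",
      "Interesting, but it glosses over the controversy."]),
]
FALLBACK = ["Probably just bitter ;)", "Heh. Typical."]

def respond(message):
    words = set(message.lower().split())
    for keywords, replies in CATEGORIES:
        if words & keywords:               # any trigger word present?
            return random.choice(replies)  # stay in character
    return random.choice(FALLBACK)

Note there is no comprehension anywhere: categorization is set intersection, and "personality" is the reply database.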

In any case, progress continues. It's an exciting time for the AI field.

Latent semantic analysis (none / 0) (#11)
by nusuth on Mon Jul 08, 2002 at 06:15:15 PM EST

LSA may help you select the most relevant sentences within a document, and the most relevant documents within a corpus. I intend to get my hands dirty with it soon.
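The core of LSA is just a truncated SVD of the term-document matrix. A sketch with numpy, assuming the matrix (rows are terms, columns are documents) is already built:

import numpy as np

def lsa_doc_vectors(term_doc, k=2):
    # Keep only the k largest singular values/vectors; each document
    # becomes a k-dimensional point in "latent semantic" space.
    u, s, vt = np.linalg.svd(term_doc, full_matrices=False)
    return vt[:k].T * s[:k]     # one row per document

def rank_by_similarity(i, doc_vecs):
    # Rank all documents by cosine similarity to document i.
    q = doc_vecs[i]
    norms = np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9
    return np.argsort(-(doc_vecs @ q) / norms)

The same trick works at the sentence level by treating each sentence as a tiny document.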

[ Parent ]
Mac Discussion Sites (4.00 / 1) (#20)
by isaac_akira on Mon Jul 08, 2002 at 07:13:02 PM EST

Ah! So it's YOUR BOT that's been posting to all those Mac discussion boards. Could you please stop now?

[ Parent ]
Taco, eh? (3.50 / 2) (#32)
by J'raxis on Mon Jul 08, 2002 at 08:03:19 PM EST

Taco, eh? I always knew Slashdot was run entirely by a script.

— The Raxis

[ J’raxis·Com | Liberty in your lifetime ]
[ Parent ]

Nah (4.50 / 2) (#40)
by greenrd on Mon Jul 08, 2002 at 08:48:40 PM EST

This Taco spells half-decently.


"Capitalism is the absurd belief that the worst of men, for the worst of reasons, will somehow work for the benefit of us all." -- John Maynard Keynes
[ Parent ]

True (5.00 / 1) (#42)
by J'raxis on Mon Jul 08, 2002 at 09:53:18 PM EST

It can probably fare better on a Turing test, too!

— The Raxis

[ J’raxis·Com | Liberty in your lifetime ]
[ Parent ]

surprisingly (none / 0) (#61)
by tps12 on Tue Jul 09, 2002 at 07:54:17 AM EST

Those are all direct quotes from Sunday and Monday (and most of the HUMAN stuff is real, too). Maybe he got a spellchecker.

[ Parent ]
Kudos (none / 0) (#101)
by p3d0 on Tue Jul 09, 2002 at 08:02:32 PM EST

You got the joke. Good work.
--
Patrick Doyle
My comments do not reflect the opinions of my employer.
[ Parent ]
Hook it up to the web like AOLiza... (none / 0) (#58)
by gusnz on Tue Jul 09, 2002 at 07:05:35 AM EST

AOLiza is a page detailing a Mac user's attempts to contrast the intelligence of your average AIM user and the ELIZA chatbot, which results in some pretty funny conversation at times.

So seriously, try wiring this bot up to an "other site" account, and posting smartass replies to stories as they appear. All bets on whether it karma caps within a week are off...

(Or perhaps you already have done this, and none of us have noticed the difference :)




[ JavaScript / DHTML menu, popup tooltip, scrollbar scripts... ]

[ Parent ]
response (4.30 / 10) (#4)
by dipierro on Mon Jul 08, 2002 at 05:38:49 PM EST

I'll tell my botmaster the Turing Test has been a focal point of AI discussion since its introduction in 1950.

Turing was a clever guy (4.50 / 4) (#10)
by Perianwyr on Mon Jul 08, 2002 at 06:00:44 PM EST

The fact that he was the sort of fellow he was leads me to believe that he never really intended the "Turing test" to be a real definition of AI. His point seems to have been that what humans look for in AI is unlikely to actually be any real indication of intelligence.

Generating and detecting intelligence seems to be a halting-problem sort of situation. Since it's arguable whether we can really understand what intelligence is, can we sit down and say that something is intelligent and self-aware? No, we'll always be looking for the smile on a dog.

It is my belief that a smile on a dog is exactly what we're looking for in our machines.

A creature with full reasoning intellect and self-awareness but no freedom of action beyond a set list of tasks and no recourse to change is also known as a slave. Machinery that can sort data and perform extremely rapid interpretation seems to be a matter of training on the part of the operators, and good understanding of the task at hand from the programmers.

Determining the existence of real artificial intellect seems to be a good example of Gödel's theorem: either the intellect will be so alien we cannot understand it, or we have insufficient understanding of ourselves to attempt the problem.

Turing Test > * (4.66 / 3) (#21)
by greenrd on Mon Jul 08, 2002 at 07:15:14 PM EST

His point seems to have been that what humans look for in AI is unlikely to actually be any real indication of intelligence.

But whatever his original point was, the fact remains that the Turing Test (as that term is understood today) has never been discredited by a false positive. No bot has ever come even close to passing a strong Turing test, i.e. one with unrestricted conversation and expert judges.

I believe that the Turing Test is a useful tool for detecting human-level intelligence. It could have false negatives, but not false positives. If a piece of software can understand complex human concepts without making stupid mistakes where its ignorance shines through, I would say it's intelligent. And the number of questions you could ask it is so vast, there is literally no hope of faking things with an Eliza-style lookup table.


"Capitalism is the absurd belief that the worst of men, for the worst of reasons, will somehow work for the benefit of us all." -- John Maynard Keynes
[ Parent ]

Advanced intellect is curiosity and free action (4.00 / 1) (#29)
by Perianwyr on Mon Jul 08, 2002 at 07:57:09 PM EST

Task-based evaluation is likely to result in things that are very good at particular tasks. You get what you ask for.

On the other hand, something that is truly capable of insight based on individual experience and also possesses a drive to create truly new things is only marginally a machine, and has undefinable parameters. It is also something which, once we've created it, we have a responsibility to. It's a child, essentially.

A contest that's designed to create better Eliza bots is likely to result in very good expert systems that can work with what a human is expecting to hear. The ability to create truly new and unexpected things is not measured by such a test (in fact, the bias of the test seems to be toward the expected, as we're working with an expected result.)

So, we're back to the circular question: how to define the truly novel? That feels a lot like a halting problem to me.

[ Parent ]

The Turing Test (4.00 / 2) (#62)
by Simon Kinahan on Tue Jul 09, 2002 at 08:24:46 AM EST

Is somewhat better than a simple task-based test. As greenrd says, no one has even come close to passing the test as Turing proposed it, after many years of trying to create better and better "Eliza bots".

I think you (and many researchers) underestimate the difficulty of holding a conversation. Even a very stupid human can handle an extraordinary range of topics, and introduce new topics themselves, respond to challenges, and so on. A bot that tries to limit the scope of the conversation to what it has in its lookup table is going to become obvious pretty fast.

I would agree with you that ultimately, curiosity and free action are what we're looking for (I would also add consciousness, but I don't want to get sidetracked onto that one), but since we can't test for those things easily, the Turing test at least lets us probe for them somewhat.

Simon

If you disagree, post, don't moderate
[ Parent ]

Oh, my (4.66 / 3) (#78)
by miller on Tue Jul 09, 2002 at 12:43:42 PM EST

A bot that tries to limit the scope of the conversation to what it has in its lookup table is going to become obvious pretty fast.

My manager is a bot?

--
It's too bad I don't take drugs, I think it would be even better. -- Lagged2Death
[ Parent ]

Yes. (4.00 / 3) (#81)
by DavidTC on Tue Jul 09, 2002 at 01:09:22 PM EST

It's very very hard to fake a human being, even with insanely large lookup tables. What happens when you start talking in pig latin, or ask it to multiply all numbers it gives you by two, or whatnot?

A Turing test isn't one of those little games that gets run every year or so, or those chatbots that fool people. With knowledge that I may be talking to a machine, and freedom to ask and say anything, I can tell whether someone's 'real' or not, and will be able to do so for quite a few years. Well before anything comes close to passing a real Turing test, we'll have real, functional, natural language interfaces where I can type things like 'Find my cousin a pizza place nearby.' and it will do what I'm expecting it to do, like some all-knowing secretary. Until then, and probably even sometimes after then (after all, a computer can go 'I don't understand, please rephrase.'), nothing will ever pass a real Turing test, because being able to parse language is the cornerstone of conversation, and anything else is just a gimmick.

-David T. C.
Yes, my email address is real.
[ Parent ]

That's interesting (4.00 / 4) (#28)
by _cbj on Mon Jul 08, 2002 at 07:55:18 PM EST

Turing believed in the power of his imagined game, and his special, idiot-smart insight was to spot that from the set of all intelligences, the one most likely to be recognised as such by a human is a human intelligence, and so a test to satisfy a human may as well be thoroughly biased in that respect.  He was very pragmatic, after all.

I can't recall in detail what he supposed of alien intelligences (I've an inkling that he speculated about conversing with them), and I didn't get past Hofstadter's fawning 20-page introduction to the biography.

[ Parent ]

A smile on a dog? (3.00 / 1) (#65)
by ethereal on Tue Jul 09, 2002 at 09:19:54 AM EST

IIRC, it's religion that's a smile on a dog. But it's been a while since I've heard that song.

--

Stand up for your right to not believe: Americans United for Separation of Church and State
[ Parent ]

Actually may be easier if we reevaluate test (3.20 / 5) (#12)
by MickLinux on Mon Jul 08, 2002 at 06:31:53 PM EST

I think that it may actually be easier to pass the Turing Test if we try to compare the computer to a 2-year-old or 3-year-old kid.

All of these samples involve specialized knowledge of one kind or another, as well as specialized language skills. Yet, if you look for intelligence in a kid, you aren't looking for specialized knowledge; you are looking for an ability to learn, and an ability to apply that learning generally.

As such, it makes the task seem abnormally difficult when you try to test the computers at an adult or older-juvenile level.  

Much better would be to build something that you could not tell apart from a little kid. "Read me Fluffy the Logic Chip again!"

Then, once you had passed that test, it would be easier to make the next step to a more specialized command of the language, or a more specialized knowledge database.

[Caveat:  IANAH.  Anything I say may be only so much random ranting.]

I make a call to grace, for the alternative is more broken than you can imagine.

correct (4.00 / 1) (#24)
by SocratesGhost on Mon Jul 08, 2002 at 07:30:02 PM EST

all these do is prove that we are still stuck in the Chinese Room.

-Soc
I drank what?


[ Parent ]
Humans are so arrogant! (4.00 / 4) (#13)
by jabber on Mon Jul 08, 2002 at 06:52:22 PM EST

The Turing Test is too anthropocentric in that it implicitly defines "intelligence" as "human ability to communicate". Some of the most intelligent humans are lacking in the simple social skill of communication, for one reason or another. For example, some truly brilliant scientists regard social interaction with absolute disdain, and, due to their great intellect, have pursued skills other than effective, convincing, inter-personal communication. And then there is Stephen Hawking.

The Turing Test is far from a fair measure of true intelligence, where intelligence is the ability to solve problems by synthesizing adequate information from incomplete data. Until we have a fair definition of "intelligence", one that takes into account that computers are great at math and lousy at human grammar in any particular language, we will not be able to determine if we have given rise to "intelligence". It may not be like ours, but if it manages to get the job done, it's certainly present.

IMHO, the best we've managed to recognize (and I use that word intentionally) is the digital analog (sorry, bad pun) of reflex, and little more than rote regurgitation of facts. The Cyc project is quite intriguing, since it's rote memorization with a heuristic paradox-elimination backdrop, but even Cyc is little more than a self-adjusting knowledge base.

Humans, of the vanilla variety and AI researchers alike, need to take a step back and see themselves as a point on a continuum, not the epitome of intellectual evolution.

As the old feminist quip goes, a woman who seeks to be equal to a man, lacks ambition. Similarly, a Computer Scientist working on Artificial Intelligence lacks ambition if (s)he tries to create a "being" which communicates, or even reasons, as a human.

[TINK5C] |"Is K5 my kapusta intellectual teddy bear?"| "Yes"

Test for human-ness (none / 0) (#16)
by bugmaster on Mon Jul 08, 2002 at 07:07:55 PM EST

AFAIK, the Turing Test doesn't really test for intelligence. There are other IQ-specific tests, like the SAT test for instance. Anyway, the goal of the Turing Test is to specifically find out whether the entity in question is human, mentally speaking. In other words, if the computer can "talk" (IM, email, whatever) exactly like a human being, then, for all intents and purposes, it's a human being. IQ-wise, it may be a genius, or an idiot, but that's really not relevant.

Actually, k5 and other discussion sites are a great example of this. All we know of other community members is what they post. Mrgoat (or any other k5 celebrity) may be a meat-based person, or he could be a giant IBM mainframe sitting in some basement somewhere. For the purposes of k5, it's totally irrelevant, and we will probably never find out for sure. However, for convenience's sake, we might as well assume that mrgoat is human, since that seems to be the most probable scenario.

Of course, many (if not most) people believe that behavior alone does not define a human being. These people would say that, in addition to behavior, a being needs a soul (also known as chi, spirit, semantic properties, or whatever), and that only meat-based humans can have one. I always found this viewpoint a bit difficult to defend, and I will leave it to the real believers to do so.
>|<*:=
[ Parent ]

It is difficult to defend (none / 0) (#22)
by greenrd on Mon Jul 08, 2002 at 07:22:39 PM EST

However, it is easy to show that people must have a soul. Consider the experience of seeing something blue, as opposed to red. You can analyse that as photons hitting your retina, and then neurons firing in your brain. But that all misses out on something. The actual blueness of blue. Why does it look like this, as opposed to how red looks?

Any description of the purely materialistic properties of experiencing blue will miss this most crucial bit out. Philosophers call it qualia. But clearly, there must be something in us all that experiences qualia, and it can't be purely material, because that wouldn't make sense.

The missing ingredient, one can call "soul" or something else, but that's irrelevant. My point is really that whatever you call it, something beyond the material exists in all of us. I'm not joking, I'm completely serious.

I also think that plants and computers do not - and can not - have such a thing, although I can't prove that.


"Capitalism is the absurd belief that the worst of men, for the worst of reasons, will somehow work for the benefit of us all." -- John Maynard Keynes
[ Parent ]

Re: It is difficult to defend (none / 0) (#27)
by bugmaster on Mon Jul 08, 2002 at 07:53:36 PM EST

Well, now that you're defending, I can attack :-)
Consider the experience of seeing something blue, as opposed to red... [snip] ...Any description of the purely materialistic properties of experiencing blue will miss this most crucial bit out. Philosophers call it qualia. But clearly, there must be something in us all that experiences qualia, and it can't be purely material, because that wouldn't make sense.
Sorry, it's not all that clear to me. As far as I see it, my brain is a big organic neural network (loosely speaking; I am not a neurobiologist). When those blue photons hit my retina, the electric impulse travels through a bunch of dendrites, bounces around in my head for a bit, and eventually puts part of my brain in the state that I call "seeing blue". Fortunately, most people's brains develop in a similar way -- due to the fact that our brains are coded by similar DNA, and that all human babies grow up in similar environments -- and so, they probably experience similar sensations of blueness. Thus, I can communicate with them.

Note, however, that some unfortunate people (who are colorblind or totally blind) cannot experience this feeling of blue. Once again, this can be relatively easily explained through purely mechanistic means -- the "circuitry" in their brains that is responsible for sensing colors is damaged in some way, and thus they cannot perceive blue or red or whatever, regardless of how many photons hit their retinas.

I don't see why "something beyond the material" is needed to explain that people can see blue colors. Furthermore, the dualistic worldview (woo! I always wanted to use that phrase) introduces more questions than it answers. For example, how come colorblind people don't have the blue "qualia"? How is the soul, if it is totally immaterial, able to affect the material world at all? How does a biological human get a soul to begin with, and why is it that a computer cannot get this soul? What if we took the soul of some dead human and put it in the computer, would that work? How does one "take" a soul anyway?

I could go on, but you see my point. While it is clear and sensible to you that souls must exist, the notion seems somewhat strange to me.
>|<*:=
[ Parent ]

Qualia (5.00 / 1) (#44)
by swr on Tue Jul 09, 2002 at 12:07:53 AM EST

Fortunately, most people's brains develop in a similar way -- due to the fact that our brains are coded by similar DNA, and that all human babies grow up in similar environments -- and so, they probably experience similar sensations of blueness. Thus, I can communicate with them.

Why do they need to experience the qualia at all? In fact, how can you even know that they do? I know that I experience qualia, because that is fundamental to the way I experience everything. But I don't know that you experience qualia; I can only take your word for it. Just because we can both talk about blueness doesn't mean we experience it the same way; it just means we have a common external frame of reference. Qualia, on the other hand, are entirely internal. You certainly don't need them to communicate.

Note, however, that some unfortunate people (who are colorblind or totally blind) cannot experience this feeling of blue. Once again, this can be relatively easily explained through purely mechanistic means -- the "circuitry" in their brains that is responsible for sensing colors is damaged in some way, and thus they cannot perceive blue or red or whatever, regardless of how many photons hit their retinas.

Not quite. Just because their eyes are not capable of sending the nerve impulses does not mean that their consciousness is incapable of experiencing that particular qualia. It just never happens. Or if it does (say in a dream or other purely internal experience), they may simply have no association to the external stimulus and can't know to call it blue. To draw an analogy, consciousness is the canvas, qualia are the paint, and the firing of neurons is the paintbrush. But the paintbrush is not the painting.

I don't see why "something beyond the material" is needed to explain that people can see blue colors.

Because so far qualia seem to defy material explanation. Material explanation (presumably) can cover the input/output and the processes involved, but that does not include qualia. Qualia are totally unnecessary to the materialist theory. Applying Occam's razor, one would conclude that qualia don't exist. And yet, denying the existence of qualia while experiencing qualia first-hand would be the most extreme case of ignoring evidence in order to fit the theory.

How is the soul, if it is totally immaterial, able to affect the material world at all?

That is a very good question. The fact that we can discuss qualia at all seems to suggest something physical is going on, although it's hard to draw any conclusions without more to go on.

How does a biological human get a soul to begin with, and why is it that a computer cannot get this soul?

I'm not convinced that computers cannot experience qualia, although others probably disagree. My best guess is that all things, including inanimate objects, experience qualia. Less complex things presumably experience proportionally less complex qualia.

<LEAP>It strikes me that the question of qualia - something extra that comes about for no apparent reason when a bunch of neurons talk to each other - is remarkably similar to the question of why the universe exists - something extra that comes about for no apparent reason when ???. Presumably, if someone discovers four lines of Mathematica that describe all behaviour in the physical universe, that won't explain why the universe exists in a concrete form beyond that abstract model. Likewise, if someone explains in complete detail the function of the human brain, there is no indication that that will explain the ineffable blueness of blue. They are both questions about why there is something where there does not need to be anything at all. In both cases it seems that there is something fundamental about being within a system that cannot be "had" by examining that system objectively.</LEAP>



[ Parent ]
Pointless neural nets? (none / 0) (#53)
by Jel on Tue Jul 09, 2002 at 05:05:07 AM EST

Not quite. Just because their eyes are not capable of sending the nerve impulses does not mean that their consciousness is incapable of experiencing that particular qualia. It just never happens. Or if it does...

As I understand it, neurons develop into networks capable of understanding precisely because stimuli come from somewhere, and are relevant and needed elsewhere. If you never see anything which could eventually be understood as "blue", then you never need to spend time growing that part of your brain which would understand it. I could be wrong here, not being an expert in this, but I believe this is all a widely accepted and understood part of neuroscience.
...lend your voices only to sounds of freedom. No longer lend your strength to that which you wish to be free from. Fill your lives with love and bravery, and we shall lead a life uncommon
- Jewel, Life Uncommon
[ Parent ]

Unfortunately not (none / 0) (#106)
by bugmaster on Wed Jul 10, 2002 at 01:29:50 AM EST

Well, unfortunately neurons are totally irrelevant to his argument. Presumably, blue qualia (which are completely nonphysical) would exist in my head (or elsewhere... it's confusing) regardless of what kind of neurons I have in it.
>|<*:=
[ Parent ]
Re: Qualia (5.00 / 1) (#105)
by bugmaster on Wed Jul 10, 2002 at 01:28:14 AM EST

Why do they need to experience the qualia at all? In fact, how can you even know that they do? I know that I experience qualia, because that is fundamental to the way I experience everything. But I don't know that you experience qualia; I can only take your word for it.
Well, in that case, qualia seem to be irrelevant. If you define qualia in such a way that they are unique to your subjective experience, and cannot be communicated, then it seems that any kind of discussion about them is impossible by definition.
[1] Qualia are totally unnecessary to the materialist theory. Applying Occam's razor, one would conclude that qualia don't exist. [2] And yet, denying the existence of qualia while experiencing qualia first-hand would be the most extreme case of ignoring evidence in order to fit the theory.
While I agree with [1], I disagree with [2]. What is the "theory" that the data is being tailored to? The statement "qualia exist" is really just an assertion, not a theory or even a hypothesis. It seems that the naturalistic explanation of the way the human visual cortex works (photons hit retina, yada yada, part of your brain transitions to the "seeing blue" state) is much simpler than the qualia-based explanation (photons hit retina, ???, blue qualia do something). I guess it all boils down to how you view yourself. If you view yourself as basically a very complex, gooey machine, then the question of your subjective perception of the color blue can be stated as, "what happens inside my brain when I feel as though I am seeing blue things?" Currently, we lack the tools to answer this question adequately, but it is not unanswerable by definition. On the other hand, if you view yourself as a vessel for an immaterial soul, then qualia become relevant. However, it seems that the choice of worldview depends on faith alone, and thus cannot be resolved one way or the other.
That [how do immaterial things affect the material world] is a very good question. The fact that we can discuss qualia at all seems to suggest something physical is going on, although it's hard to draw any conclusions without more to go on.
But this question is crucial. You have built the following statement:
  1. The soul (qualia, etc.) is a totally nonphysical entity. It cannot be detected by physical means directly or indirectly.
  2. The soul is able to affect the physical world. For example, when my soul wants me to lift my finger, I do so. When blue photons hit my retina, the blue qualia activates.
But statements 1 and 2 contradict each other. If the soul cannot be detected by physical means, then it cannot affect the physical world. Because if it could, then we could detect it. Note that this contradiction does not depend on our current level of spiritual/technological progress; statement 1 states that the soul is undetectable in principle. Until you can resolve the contradiction, your statements are akin to saying "a square circle" -- i.e., meaningless.
>|<*:=
[ Parent ]
Qualia and Souls (none / 0) (#95)
by Simon Kinahan on Tue Jul 09, 2002 at 04:21:28 PM EST

There are philosophical problems with the idea of qualia as such, so I prefer not to use the term. Consider, for instance, the first time you tasted beer. Horrible, wasn't it? But (probably) you now quite like beer. Is the "taste of beer" quale you experience now the same one you experienced the first time? Or not? How could you tell? Dennett wrote a very interesting essay on this topic.

I don't agree with greenrd that we have souls, but I do think the fact that human beings have subjective experience is very important. We don't know how that comes about. It seems fundamental to our intelligence. Now, I believe that there must be some physical, material process that gives rise to consciousness, but I don't believe we have any idea what it might be, or even how we could work out what it is. Nonetheless, until we find out something about that, no attempt to build an intelligent machine is going to work.

Simon

If you disagree, post, don't moderate
[ Parent ]

you assume much (none / 0) (#31)
by SocratesGhost on Mon Jul 08, 2002 at 08:00:07 PM EST

Currently, our understanding of how the mind works hasn't been exhausted. Science may eventually explain away your concerns. The soul may merely be the currently unexplained parts of our epistemological existence that a completed neuroscience could tell us about. Of course, I'm theorizing as much as you are, but at the rate we're going, we can arguably anticipate that this may be the case. The soul keeps finding fewer and fewer places to hide: first the liver, then the stomach, then connected to the pineal gland, and now a mind located halo-like around the brain. To me, this seems the harder conclusion to draw, given our current understanding and our rate of progress.

By the way, I know you are serious. I got my degree in philosophy and there are many people that also have a problem with naturalized epistemology. As a Catholic, I pondered these questions for long periods of time, reconciling my faith to the fact that I may be nothing more than just meat.

-Soc
I drank what?


[ Parent ]
Souls (none / 0) (#96)
by Simon Kinahan on Tue Jul 09, 2002 at 04:28:07 PM EST

I'm not convinced the existence of subjective experience is evidence for the existence of souls, although I do think it is a credible position to take.

Why can't subjective experience be generated by some material process? Personally, this is what I think probably happens. The problem with a dualist position - which seems to be roughly what you're advocating - is figuring out how the non-material soul interacts with our material brain processes.

Simon

If you disagree, post, don't moderate
[ Parent ]

Testing.. (none / 0) (#17)
by jmzero on Mon Jul 08, 2002 at 07:08:56 PM EST

I agree that something needn't be able to pass the Turing Test to be intelligent.  I do think, though, that anything that would pass the Turing Test would be intelligent.

That said, certainly there must be a better way to define and test intelligence in a cross-being sort of way.  Perhaps someone else has some idea what that test or definition would look like...
.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

Testing (none / 0) (#49)
by jabber on Tue Jul 09, 2002 at 01:49:16 AM EST

See, I disagree that anything that passes the Turing Test is necessarily intelligent. It's splitting hairs, really, but all it takes to fool an unsuspecting human is some creative programming.

I'll grant that any set of algorithms capable of passing the Turing provides a reasonable facsimile of human communicative intellect, but nothing more.

This spirals into a semantic mess, wherein we claim that a true human is just a set of conditioned responses, effectively no different than a complex state machine, but bear with me.

Can you not imagine a rote stimulus-response system which "understands" grammar, which has access to an extensive fact database that is adequate to add context sensitivity to I/O, and which can "creatively" misunderstand or joke with the interviewer?

I can. I can't code one, but I've been to enough parties to know that it takes virtually none of what I consider "intelligence" to have a conversation with the average human.

Now, if we raise the bar to that of an interviewer who is deliberately trying to trip the AI, to essentially psycho-analyse the algorithm, then maybe genuine intellect could come into play - but we would have to allow for conversation about the nature of the interviewee's consciousness, and that's too much of a stretch at this point in the research.

Ultimately, what I think we'll need is a more finely tuned definition of what intelligence is. If we define it as the ability to perform one's assigned task effectively in ambiguous situations, then a whole lot of truly simple-minded devices, like ATM's, suddenly qualify.

These devices exhibit what I would call rote reflex that is no different than the light-seeking behavior of a moth, so they're hardly intelligent by OUR standards. But what about their standards? They're certainly intelligent enough for their environment, no?

As I said, what we need is either a non-prejudicial definition of intelligence, so that we would be able to acknowledge intelligence other than our own as such; or we need to get off our high horse and admit that passing the Turing demonstrates nothing more than the ability to convincingly resemble a human-like communicative intelligence.

To wit, passing the Turing may be our goal in creating intelligent systems, but for an emergent intelligent system, passing our ego-centric little test may be a means to recognition on our terms of something that is not like us at all.

We could, for all our boastfulness, be socially engineered by something we created without ever knowing.

[TINK5C] |"Is K5 my kapusta intellectual teddy bear?"| "Yes"
[ Parent ]

Turing (4.00 / 2) (#70)
by jmzero on Tue Jul 09, 2002 at 10:38:25 AM EST

Now, if we raise the bar to that of an interviewer who is deliberately trying to trip the AI, to essentially psycho-analyse the algorithm, then maybe genuine intellect could come into play - but we would have to allow for conversation about the nature of the interviewee's consciousness, and that's too much of a stretch at this point in the research.

I agree that it wouldn't be very impressive to fool the unsuspecting participants of a random chat room. I think the only real Turing test is one in which the interviewer has knowledge about AI and is seeking specifically to challenge the machine. In this case, the examination is really going to cover not just communication skills but overall cognitive ability. A good interviewer would test the machine's capacity for learning new words and ideas.

Certainly this is different than Turing's original vision, but I don't know that Turing would have imagined bots like Alice.

As such, I see the Turing test not as an "imitation game" so much as a flat test. Humans are not being aped, but measured against.
.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

This has drawbacks though... (none / 0) (#80)
by Mul Triha on Tue Jul 09, 2002 at 12:57:56 PM EST

When the interviewer knows they may be conversing with an AI and tries to trip it up, the conversation often is such that it would trip up some people too. This is why, on occasion, when subjecting an AI system to the Turing Test, human beings fail it.
QUACK!
[ Parent ]
Oh, I seeeee (none / 0) (#73)
by Rogerborg on Tue Jul 09, 2002 at 10:51:23 AM EST

    It's splitting hairs, really, but all it takes to fool an unsuspecting human is some creative programming.

Indeed. Here you are, typing English words in a sensible consecutive order, and yet there's obviously no intellect behind them.

Or rather, the level of intellect that assumes that anything it doesn't understand must be easy. Sigh. If this were so simple, can you explain why Alice is such a dunce?


"Exterminate all rational thought." - W.S. Burroughs
[ Parent ]

Alice (none / 0) (#104)
by jabber on Tue Jul 09, 2002 at 11:43:01 PM EST

Alice is a dunce due to excessive time spent at Adequacy.org.
Thank you for your kind words.

[TINK5C] |"Is K5 my kapusta intellectual teddy bear?"| "Yes"
[ Parent ]

tell me more about your mother (3.00 / 1) (#111)
by macpeep on Wed Jul 10, 2002 at 08:11:51 AM EST

I disagree that you can say an application is intelligent if it passes the Turing Test. A friend of mine wrote an IRC bot that joined random IRC channels, lurked around, and picked up stuff from the conversation to build a database of phrases and opinions. It would then use this knowledge on other IRC channels to pose as a 15-year-old "chick". It would get hit on a LOT by guys, and it would then carry on conversations in private messages; it would often take up to 30 minutes before the guys figured out something was wrong.

Now this script was not really complex at all. It would simply try to pick up what the topic was about and then launch some random comments that it had heard earlier about the same kind of topic. It would slightly modify them so it wasn't too apparent, as well as change the talking style to be more suitable to a 15-year-old girl.
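Mechanically, that's just a topic-keyed phrase memory. Something like this sketch (hypothetical, and in Python; the real script was Perl and I never saw it):

import random
from collections import defaultdict

memory = defaultdict(list)   # topic word -> lines overheard about it
STOP = {"the", "a", "is", "i", "you", "and", "it", "to", "of", "that"}

def overhear(line):
    # Lurk mode: file each line under every contentful word in it.
    for word in set(line.lower().split()) - STOP:
        memory[word].append(line)

def chat(line):
    # Pose mode: find a topic word we've heard before and replay a
    # random remark about it (the real bot also reworded it slightly).
    for word in set(line.lower().split()) - STOP:
        if memory.get(word):
            return random.choice(memory[word])
    return "lol"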

The script gained added credibility by having some "special skills", such as being able to read the TV guide to know what was about to start on TV. At random times when a show "it liked" was about to start, it would go "Oh! Oh! X-Files is about to start!! BBL!" or something. This apparently helped a lot.

If a script like this can pass (for all practical purposes) for a human then I'd say it's pretty clear that a Turing Test isn't really a good measure for intelligence. I for one think intelligence is a very different thing than the ability to parse text and come up with decent comments based on some kind of grasp of context.

Now of course this was IRC where the conversation isn't really on the highest level to begin with, but nevertheless - it was people listening to a bot talk and not realizing it.

As an amusing side note, my friend would log the conversations and put them on a web site. :) There was some pretty embarrassing stuff up there, let me tell you! Quite often, it would seem like the bot was more intelligent than the people it was talking to!

[ Parent ]

Big difference (5.00 / 1) (#115)
by Simon Kinahan on Wed Jul 10, 2002 at 11:01:14 AM EST

The big difference between these kinds of "bot stories" - and they've been going on pretty much since people started using the net to talk to one another - and the true Turing test is that in the test the tester is given the explicit task of discovering whether the correspondent is a computer or a human.

A script like the one you describe would fail pretty fast. When we're in conversation with other humans, we try to make what they say make sense, and this goes double if certain hormonal responses are going. In a true Turing test, the tester would be trying to trip up the AI, and watching for errors.

Simon

If you disagree, post, don't moderate
[ Parent ]

A Circular Definition (none / 0) (#160)
by jasonhutchens on Tue Jul 16, 2002 at 12:06:44 AM EST

Once you have accepted that intelligence lies "in the eye of the beholder", the circular definition of intelligence as "behaviour that is deemed to be intelligent by an intelligent observer", coupled with the fact that such a decision can only be communicated via language, results in the unavoidable conclusion that any definition of intelligence is necessarily anthropocentric - even if the behaviour in question is non-lingual!


- Jason L Hutchens www.amristar.com
[ Parent ]
Alternate definition (4.50 / 4) (#34)
by eann on Mon Jul 08, 2002 at 08:16:49 PM EST

AI, it has been said, is anything we can't do yet. Once we do it, we look at it and say, "that's not intelligence; that's just a pile of simple computations arranged in a new way."

Remember when neural nets were the rage? Remember fuzzy logic? These things are still around, and they have contributed to our understanding of what kinds of cool things we can do with computers, but we don't expect them to be able to make the hurdle to "intelligence" (however it's defined) any more.


Our scientific power has outrun our spiritual power. We have guided missiles and misguided men. —MLK

$email =~ s/0/o/; # The K5 cabal is out to get you.


Exactly! (4.00 / 1) (#84)
by codemonkey_uk on Tue Jul 09, 2002 at 01:46:44 PM EST

Working as a games programmer specialising in AI, I say this all the time. AI is not about creating "intelligence"; it is about creating the illusion of intelligence.
---
Thad
"The most savage controversies are those about matters as to which there is no good evidence either way." - Bertrand Russell
[ Parent ]
Game AI... (4.00 / 1) (#88)
by jmzero on Tue Jul 09, 2002 at 03:03:42 PM EST

When I tell my party to go somewhere in Baldur's Gate 2, sometimes half will make it and half won't.  They seem to do about the same whether I have pathfinding set to 8000 nodes or pathfinding set to 27000 nodes.  My computer is faster than any they would have been testing on back when the game was written.

Is it really that hard to find a way from point A to point B (especially with static levels)?  I'm not saying they need to come up with an optimal path - but even that seems like it would be fairly possible.  Most of the maps would only need 20 or 30 intersections.

As a contrasting example, the AI in Starcraft was quite good.  It's fairly rare that it blunders, and often comes up with effective strategic responses to novel strategies.

Why is game AI so splotchy?  
.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

Depends (3.00 / 1) (#99)
by X3nocide on Tue Jul 09, 2002 at 06:05:24 PM EST

Game AI especially falls on the "illusion" side of intelligence rather than the fact of it. In static levels especially, I find it depressing that the pathfinding is bad. This sort of thing is PRECOMPUTABLE, using things like waypoints. But generally speaking, pathfinding is a tough problem to solve. Sometimes you have to go farther away to get closer. Techniques like alpha-beta pruning have been established to find promising branches, in hopes of a faster algorithm. But how far down the tree you search is a huge tradeoff.

Anyways, AI itself is really a joke. Is it machines interfacing with reality, like face recognition, robots and the like? Or is it finding a way to win a game of chess? Or is it the process of machine learning?

Often it seems to be presented as a novel algorithm for solving problems, but isn't quicksort a novel solution to array sorting? In that light, programming is artificial intelligence and vice versa.

pwnguin.net
[ Parent ]

alpha-beta? (none / 0) (#108)
by codemonkey_uk on Wed Jul 10, 2002 at 05:10:43 AM EST

Alpha-beta pruning is a min-max algorithm optimisation. Min-max is very specifically a zero-sum game state search, and not applicable to route finding at all. Typically, games use (or should use), at the very least, an A* search for path finding.

The real problem isn't actually the algorithm for searching the "graph", but creating an appropriate graph for the game world. Not too hard for tile-based games (XCOM, et al), but exponentially harder for arbitrary 3D space (Quake, etc).
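
To make that concrete, here's a minimal sketch of A* over a 4-connected tile grid, in Python (assuming unit move costs and a Manhattan-distance heuristic; a real game would build its graph from waypoints or a navigation mesh rather than raw tiles):

    import heapq
    import itertools

    def astar(grid, start, goal):
        # Minimal A* on a 4-connected tile grid. grid[y][x] is True for
        # walkable tiles; start and goal are (x, y) tuples.
        def h(p):  # Manhattan distance: admissible on a unit-cost grid
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        tie = itertools.count()  # tie-breaker so the heap never compares nodes
        frontier = [(h(start), next(tie), 0, start, None)]
        parent = {}              # expanded node -> predecessor (doubles as closed set)
        while frontier:
            _, _, g, node, prev = heapq.heappop(frontier)
            if node in parent:
                continue         # already expanded via a path at least as cheap
            parent[node] = prev
            if node == goal:     # rebuild the path by walking predecessors
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            x, y = node
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nxt[1] < len(grid) and 0 <= nxt[0] < len(grid[0])
                        and grid[nxt[1]][nxt[0]] and nxt not in parent):
                    heapq.heappush(frontier, (g + 1 + h(nxt), next(tie), g + 1, nxt, node))
        return None              # goal unreachable

    # True = walkable, False = wall; the route goes around the wall row.
    level = [[True, True, True],
             [False, False, True],
             [True, True, True]]
    print(astar(level, (0, 0), (0, 2)))

The search itself is the easy half; building the graph (the waypoints, or the navmesh polygons and their adjacency) is where the real work hides, as the comment says.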
---
Thad
"The most savage controversies are those about matters as to which there is no good evidence either way." - Bertrand Russell
[ Parent ]

Why is game AI so splotchy? (none / 0) (#109)
by codemonkey_uk on Wed Jul 10, 2002 at 05:17:15 AM EST

Because of the nature of the industry. Ideally games are designed, then implemented. In practice games are designed as they are implemented, and under very tight deadlines.

The problem with this is that you cannot (practically) design an AI for a game that is not defined. It would be like playing a game of chess whilst, all the while, an independent observer changes the rules. So AI gets implemented in parallel to the development of the game design, and is always in a state of "catch up". By the time the design is "final" there is very little time for AI tweaking, and "functional" tends to be "good enough" for the publishers.

Memory and CPU time constraints also figure - with graphics often taking priority.
---
Thad
"The most savage controversies are those about matters as to which there is no good evidence either way." - Bertrand Russell
[ Parent ]

well now (none / 0) (#120)
by dr k on Wed Jul 10, 2002 at 02:09:09 PM EST

If game AI had any relationship at all to proper AI research, then your parallel development problem wouldn't really be a problem, because the AI would be able to observe changes in the gameplay. But let's face it: most game AI is just a fat list of event triggers, written by the one CS guy who actually knows Lisp. And for games that are actually playtested, particularly mission-based games, this is enough to get the game selling.
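
As a caricature, that "fat list of event triggers" style might look like the following sketch (the events and actions are invented purely for illustration):

    # A flat list of (condition, action) pairs scanned every tick:
    # no search, no learning, just scripted reactions.
    TRIGGERS = [
        (lambda s: s["player_hp"] < 20, "taunt"),
        (lambda s: s["enemy_seen"],     "attack"),
        (lambda s: s["ammo"] == 0,      "flee"),
    ]

    def tick(state):
        # Return every scripted action whose trigger condition holds.
        return [action for condition, action in TRIGGERS if condition(state)]

    print(tick({"player_hp": 15, "enemy_seen": True, "ammo": 0}))
    # -> ['taunt', 'attack', 'flee']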


Destroy all trusted users!
[ Parent ]

"proper" AI (none / 0) (#146)
by codemonkey_uk on Thu Jul 11, 2002 at 05:11:03 AM EST

You are both ignoring the economics of the situation, and dismissing "game AI" out of hand.

Game AI has to co-exist with other resource intensive operations (graphics!) in a limited memory and CPU environment, has to perform well "out of the box", and often without long-term storage (for learning). Yes - rule-based systems are popular. They are also effective, and CPU- and memory-friendly. But you are wrong to suggest they are the only tool in the box (fuzzy systems and genetic algorithms are also popular) or that they are not a "proper" form of AI.

Remember, game AI developers are producing results, in difficult situations, under tight deadlines using limited resources. Much more than can be said for most "academic" AI researchers.
---
Thad
"The most savage controversies are those about matters as to which there is no good evidence either way." - Bertrand Russell
[ Parent ]

game algebra (none / 0) (#158)
by dr k on Fri Jul 12, 2002 at 07:00:46 PM EST

They should call it Game Algebra instead. They're just solving equations; game developers aren't making systems that learn, or solve problems. Sure, the programmers are actually applying some AI methods in new ways, but a lot of their research is sloppy - they tend to re-implement systems that have already gone through a few generations of improvements.

On the other hand, academic research has long been bogged down by the latest fads. And now all the researchers are interested in game AI - the same game AI that was based on twenty year old AI research to begin with. So I guess I've got a beef against the game industry.


Destroy all trusted users!
[ Parent ]

Aha! (none / 0) (#159)
by codemonkey_uk on Mon Jul 15, 2002 at 05:28:27 AM EST

Now you see, we're back to my original point! You don't think "game AI" is "AI" because it's "just solving equations" and not "making systems that learn, or solve problems". Well, guess what: game AI learns (X-COM, Magic & Mayhem) and solves problems (what, driving a virtual car around a race course isn't a "problem"?) - you just don't accept that it's AI because it's using "systems that have already gone through a few generations". Well, guess what, mate: implementing well-understood algorithms is good engineering practice. Games programmers aren't (always) researchers. Games programmers are salaried employees, creating a product to a deadline.

Just because it's not white-coats-in-a-lab doesn't mean it's not AI, because if it looks smart, it is smart. And that's all there is to it.
---
Thad
"The most savage controversies are those about matters as to which there is no good evidence either way." - Bertrand Russell
[ Parent ]

competitions... (3.75 / 4) (#41)
by jeffy124 on Mon Jul 08, 2002 at 09:19:25 PM EST

You mention the 2001 Loebner Competition.  Interesting (humorous?) fact from that session: a human failed the Turing test.
--
You're the straw that broke the camel's back!
woops :) (3.00 / 2) (#66)
by ethereal on Tue Jul 09, 2002 at 09:28:42 AM EST

That, more than anything, signifies that the Turing test might need some reworking. If we're making the test difficult enough to weed out the machines, but we start falsely catching people too, then maybe it's time to admit that for the purposes of the test (natural language communication) the machines are almost good enough.

Really, this result doesn't surprise me - there are people who lead "lives of quiet desperation" where 99% of their days could be performed by a machine instead.

--

Stand up for your right to not believe: Americans United for Separation of Church and State
[ Parent ]

Paranoid (3.00 / 2) (#77)
by codemonkey_uk on Tue Jul 09, 2002 at 12:19:10 PM EST

A better Turing test would be one where the testers do not know they are testing the subjects. The false negative (or was it a false positive?) in this case was down to paranoia. The tester thought they were being tricked somehow, and didn't want to be made to look a fool by declaring a "robot" human.

Would the same conversation have triggered a "this is an AI" response if the tester had not been primed into thinking it might be? Would anyone notice if a good (by current standards) conversation bot were modified to post replies to comments here, or on Slashdot? If not, how far off are we from "passing the Turing test"? Not far, I'd say.

Perhaps the K5-Turing test would be: "Can an AI gain trusted status within the K5 system?". I'd like to see that! :)
---
Thad
"The most savage controversies are those about matters as to which there is no good evidence either way." - Bertrand Russell
[ Parent ]

Turing test, and an alternative definition (3.25 / 4) (#46)
by hengist on Tue Jul 09, 2002 at 12:31:39 AM EST

I covered this very thing in a lecture yesterday.

In short, the Turing test is fairly useless. If a machine that can carry out a conversation as well as a human is intelligent, then so is Deep Blue, because that can play chess as well as a human.

In my lecture, I quoted David Fogel's comments on the (original) Turing test:

"The Turing Test is no more a test for intelligence than it is a test for femininity... A man doesn't become a woman because he can fool you into thinking that he's a woman. By the same token, a machine doesn't become...an intelligent machine, just because it can fool you into thinking that it's thinking"
-- David B. Fogel, Blondie24: Playing at the Edge of AI, pg 11, Morgan Kaufmann, 2001

Fogel then presents an alternative definition of intelligence, which I feel is more useful:

"Intelligence is the capability of a decision-making system to adapt its behavior to meet its goals in a range of environments" pg 14

Alan Turing was a brilliant man, and was far ahead of his time in many ways. But when it comes to AI research, we are far more likely to see lots of small, highly specialised AI systems, which may or may not communicate with each other, than we are to see a single AI that can function as a human mind. There is just no need for it: the human brain is plentiful and easily manufactured, while an AI is hard to create for even simple tasks. Better to create specialised systems that can communicate with others, in my opinion.

Disclaimer: I'm currently working on a PhD in AI.

There can be no Pax Americana

modern and future AI (4.00 / 1) (#48)
by zephc on Tue Jul 09, 2002 at 01:35:26 AM EST

What are your opinions on Ray Kurzweil, computer systems, etc.? Your website seems to indicate you are from the connectionist school of AI, but what about the opinion that pure connectionism and pure symbolicism are dead ends, and that some combination of the two will yield better long-term results? Frankly, I think far too much TALK is being done in the AI world, and not enough CODING.

[ Parent ]
This is correct... and therefore... (none / 0) (#50)
by MickLinux on Tue Jul 09, 2002 at 02:50:27 AM EST

What you say seems to me to be correct.  Therefore, although AI might make use of tools such as measurement sensors that use a specific program, AI cannot be written like a program is normally written today.

Rather, AI must be designed as a series of filters that operate on any set of conditions, and auto-update themselves.  

Essentially, you have to take discrete analysis and transform it into an equivalent of Fourier analysis -- valid over an infinite range of inputs, but of limited validity depending on the extensiveness of the programming.

Such programs will be intelligent, but will allow for stupidity.  Also, as sections of the intelligence are wiped out, stupidity (or insanity) will ensue, but it won't be any less intelligent when compared to a computer.

Even a person with Alzheimer's has a response for each situation... he never goes into system shutdown until the system is unable to correctly run the coughing mechanism (thus bringing pneumonia, and a hardware failure).

I make a call to grace, for the alternative is more broken than you can imagine.
[ Parent ]

Fogel (1.50 / 2) (#51)
by dr k on Tue Jul 09, 2002 at 02:59:46 AM EST

I guess Fogel doesn't get laid a lot, because there is a fairly obvious physical difference between men and women. There is not, however, an obvious physical manifestation of intelligence. Fogel proves nothing to me by using such a weak analogy.

It is good to know that today's students are being lectured by such an open-minded researcher as yourself. Heaven knows, the last thing the AI community wants to do is actually solve the problem of intelligence - thereby putting themselves out of business.


Destroy all trusted users!
[ Parent ]

Looks like a troll (none / 0) (#69)
by hengist on Tue Jul 09, 2002 at 10:37:30 AM EST

but, it's late and my judgement is clouded.

there is a fairly obvious physical difference between men and women.

Exactly, therefore a man fooling someone into thinking they are a woman doesn't change the fact that they are a man.

There is not, however, an obvious physical manifestation of intelligence

The original Turing test, which I referred to in my original post, eliminates all such physical cues, relying entirely on remote communication.

It is good to know that today's students are being lectured by such an open-minded researcher as yourself

And I'm close-minded how?

Heaven knows, the last thing the AI community wants to do is actually solve the problem of intelligence - thereby putting themselves out of business.

The problem is, no one can really define what intelligence is - but an openminded person would at least entertain the idea that the Turing test is not adequate.

There can be no Pax Americana
[ Parent ]

can't define it? (none / 0) (#86)
by dr k on Tue Jul 09, 2002 at 02:17:01 PM EST

Oh, intelligence can be defined, it is just difficult to get people to agree upon a particular definition. Perhaps intelligence is the ability to define things?

Intelligence is not a trait the way sex is a trait. Sure, we say things like: "She is a very intelligent woman," or "I wrote a program to calculate primes, but it isn't very intelligent." This kind of intelligence - a kind of inherent characteristic of an object - isn't useful for defining what "intelligence" is, and isn't what the Turing test is trying to determine. The Turing test is looking for intelligence in action, intelligence as a behavior if you will. Unlike a man behaving like a woman, it doesn't make sense to talk about an unintelligent object behaving like an intelligent object - since the observed actions are intelligent, it displays intelligence. Of course you'd want to run a battery of tests to make sure.


Destroy all trusted users!
[ Parent ]

No, the Turing test is not going away (4.50 / 4) (#52)
by arvindn on Tue Jul 09, 2002 at 03:00:22 AM EST

In short, the Turing test is fairly useless. If a machine that can carry out a conversation as well as a human is intelligent, then so is Deep Blue, because that can play chess as well as a human.
Dumb analogy. The Turing test is not about carrying out conversation "as well as" but "indistinguishable from" a human. Now, Deep Blue certainly doesn't play chess indistinguishable from a human; its understanding of positional play is truly pathetic, though its tactical play is far superior.
What about "my pocket calculator is intelligent because it can multiply numbers as well as a human"?

"Intelligence is the capability of a decision-making system to adapt its behavior to meet its goals in a range of environments"
It is precisely the consummate vagueness of this statement that makes it far inferior to the Turing test as a criterion for evaluating AI. What it lacks is objectivity, and that is crucial. Just think about it - if we accepted the above definition, every other programmer who's ever written a bot could claim that his bot is the most intelligent.

a machine doesn't become...an intelligent machine, just because it can fool you into thinking that it's thinking
This point has been extensively debated. For instance, look at the FAQs of the relevant newsgroups. Briefly, there are two opposing viewpoints: strong AI, which holds that a program must be truly self-aware to be intelligent, and behavioural AI, which says that it is enough for the program's behaviour to be indistinguishable from intelligent behaviour. But then, since it is often impossible to decide whether or not an entity is intelligent in the strong sense (for instance, if I were to assert that I am not really self-aware but am merely imitating human behaviour by observing those around me, could you prove me wrong?), behavioural AI is likely to be applied in practice for a long time to come.

But, when it comes to AI research, we are far more likely to see lots of small, highly specialised AI systems,
Years of failure have taught us that such things simply don't exist. In the early 80's, "expert systems" were the rage in AI (I was in my cradle at that time, but I had this from my AI prof). However, a real stumbling block preventing these systems from rising above the level of a DBMS into the realm of AI was that they did not have "world knowledge", which is the antithesis of specialization.

So you think your vocabulary's good?
[ Parent ]
Gradients (4.66 / 3) (#54)
by Znork on Tue Jul 09, 2002 at 05:17:34 AM EST

That analogy is severely flawed. A man doesn't become a woman because he can fool you into thinking he's a woman? Under what circumstances?

Where is the line drawn? At what point does a man become a woman? If they can fool the casual passerby? Surgery? Hormones? For most intents and purposes you eventually have to establish a working hypothesis that someone is a man or a woman, and you frankly can't go around with portable genetic analysis equipment and test out everyone you meet before deciding. And as far as you're concerned they are what they appear to be until proven otherwise.

If you had a Turing test setup with the additional factor that anyone you decided was not an intelligent being would be turned off (power cut, shot, etc), would you turn off what appears to be a sentient being (with the additional risk of getting a human killed)?

I disagree with Fogel's definition of intelligence. Animals most certainly have a decision-making system that adapts behaviour to a range of environments. So do insects. And plants. Even bacteria do, to a certain extent.

That's not intelligence, that's fitness of purpose.

I agree we're going to see computer systems geared at certain purposes. But that isn't intelligence, it's just smart programming. Far more practical, but not intelligence.

Of course, a PhD in AI sounds more 'cool' than a PhD in 'smart programming', so I can understand if you like Fogel's definition better :).

[ Parent ]

The Original Turing Test (5.00 / 1) (#68)
by hengist on Tue Jul 09, 2002 at 10:37:16 AM EST

I should probably attach this to the parent story, but it fits here as well.

The original Turing test is as follows: suppose there are two rooms. In one room are a man and a woman. In the other room is an interrogator, who can communicate with the man and woman via a teletype. The man's purpose is to convince the interrogator that he is a woman. He may communicate anything over the teletype to the interrogator, even lies. The woman's purpose is to help the interrogator. After a certain amount of time, the interrogator must choose which correspondent is the woman and which is the man.

Now, replace the man with a machine. Will the interrogator guess correctly as often as before?

Fogel's position, which I happen to agree with, is that the above version of the Turing test is not an adequate test of intelligence. It only measures one thing - the ability to communicate over a specific medium, the teletype - in a manner that can fool an interrogator. It is based entirely on smoke and mirrors, in that it boils intelligence down to the ability to fool an interrogator.

It should also be borne in mind that Turing formulated this test in response to the question "Can machines think?", a question that he regarded as absurd.

For most intents and purposes you eventually have to establish a working hypothesis that someone is a man or a woman, and you frankly can't go around with portable genetic analysis equipment and test out everyone you meet before deciding. And as far as you're concerned they are what they appear to be until proven otherwise.

Exactly. But that you believe them to be so does not mean that they are so. That is an important point of Fogel's argument.

If you had a Turing test setup with the additional factor that anyone you decided was not an intelligent being would be turned off (power cut, shot, etc), would you turn off what appears to be a sentient being (with the additional risk of getting a human killed)?

I think that for most people (other than sociopaths) this condition would just cause a bias towards rating all entities as intelligent, simply to avoid the danger of killing a human being.

I agree we're going to see computer systems geared at certain purposes. But that isn't intelligence, it's just smart programming. Far more practical, but not intelligence.

But even highly specialised applications will need to adapt, for some problem domains. Can we draw a line between smart programming and AI? I don't think we can.

Of course, a PhD in AI sounds more 'cool' than a PhD in 'smart programming', so I can understand if you like Fogel's definition better :).

:-)

There can be no Pax Americana
[ Parent ]

A variation. (none / 0) (#75)
by i on Tue Jul 09, 2002 at 11:16:57 AM EST

Consider the experiment with a man and a woman. Only this time, the interrogator is a machine (that successfully passed the Turing test :)

Suppose that the interrogator incorrectly decides that the man is correspondent No. 1. How can you convince "it" that "it" has been mistaken? And why is that important?

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]

Indeed. (none / 0) (#76)
by Znork on Tue Jul 09, 2002 at 11:50:32 AM EST

It only measures one thing - the ability to communicate over a specific medium, the teletype - in a manner that can fool an interrogator. It is based entirely on smoke and mirrors, in that it boils intelligence down to the ability to fool an interrogator.

The problem as I see it is that for a lot of forms of communication that's all we have to go on (ranging from kuro5hin to mail to phones). I give most people communicating with me the benefit of the doubt; they appear and act as if they were sentient beings who have minds working in a similar fashion to my own, but in the end I have only their ability to express their intelligence to base my opinion on. I could be fooled. I cannot be entirely certain. I'll have to take their word for it.

Exactly. But that you believe them to be so does not mean that they are so. That is an important point of Fogel's argument.

But just because I believe people communicating with me are intelligent human beings does not mean they necessarily are so either (nor that they are male or female). It's a question of how well it's faked, and if it's done well enough to be indistinguishable from the real thing, I cannot reasonably decide it isn't the real thing without questioning everyone and everything.

Of course, I don't believe we'll be able to create AIs that are indistinguishable from the real thing within a reasonable time frame, nor can I see any real point to it.

But if we do so eventually, I'd personally consider a machine able to fool an interrogator for long enough to be sentient. If I can't tell the difference, I have to afford it the benefit of the doubt.

But even highly specialised applications will need to adapt, for some problem domains. Can we draw a line between smart programming and AI? I don't think we can.

Mainly it's a question of what values you put into the word intelligence. I tend to associate intelligence with sentience, which is not necessarily correct in all contexts. Likewise I use the term AI for certain types of software, which has less to do with actual 'intelligence' than with the software's ability to make decisions and adapt to certain criteria in its tasks. Of course, such software is rarely more intelligent or complex than an ant, which is not something we usually define as 'intelligent'.

I consider the Turing test to be geared towards defining sentience-intelligence, while Fogel's definition fits smart-programming intelligence better. It does fit within the AI field, but the definition of AI has grown more diffuse in its use of the word 'intelligence', IMO.

[ Parent ]

I disagree (5.00 / 3) (#59)
by Simon Kinahan on Tue Jul 09, 2002 at 07:35:26 AM EST

If a machine that can carry out a conversation as well as a human is intelligent, then so is Deep Blue, because that can play chess as well as a human.

Well, no. You could as easily say "a computer will be intelligent when it can add as well as a human". Since Turing would never have said anything so stupid, your reduction from holding a conversation to playing chess has removed some aspect of conversation that was important to the test.

After all, in everyday life, we humans judge one another by our conversations, and much less commonly by our chess or arithmetic skills. Conversation is a very sophisticated process, in which one must be prepared to handle an arbitrary number of possibilities, and learn as you go along.

As to your Fogel quote: femininity and masculinity are indicated by physical properties and social cues (which don't necessarily all point the same way, mind you). No physical signs or social cues can indicate machine intelligence. Thus we need an alternative, and Turing's is still much better than anything anyone else has come up with.

"Intelligence is the capability of a decision-making system to adapt its behavior to meet its goals in a range of environments"

No it is not. This is just another self serving attempt by an AI researcher to move the goal-posts so they can claim to have made progress.

This so-called definition is vague and woolly. There is no decision procedure suggested for determining intelligence, and no method for measuring it either (if it is a continuum). Nor is there any way you could get one from such vagaries as "a range of environments".

Even if it weren't woollier than my granny's knitting, it is fatally flawed by the philosophical naivety and confusion so common among AI researchers. In exactly what sense can a machine be said to have "goals"? Possessing a goal requires intentionality, intentionality is a property of conscious systems, and consciousness is closely linked to intelligence - so we're already ascribing the very property we're trying to test for to the system we're trying to test. Pah.

Simon

If you disagree, post, don't moderate
[ Parent ]

Nice theory (4.50 / 2) (#71)
by Rogerborg on Tue Jul 09, 2002 at 10:43:30 AM EST

    "Intelligence is the capability of a decision-making system to adapt its behavior to meet its goals in a range of environments" pg 14

Fine, but given that we're not omniscient and have to rely on frail human senses, then from our point of view, that becomes:

Intelligence can be inferred from the apparent capability of an apparently decision-making system to appear to adapt its behavior to meet what we perceive to be its goals in what we consider to be a range of environments.

Sorry for nearly wearing out my bold key, but it's worth making the point that we apply an impossible standard to AI. When we place it in an entirely controlled and artificial experiment where we do know all the variables, we tend to discount it as a toy. If however it's acting in an uncertain environment, we discount it as being impossible to prove intelligent.


"Exterminate all rational thought." - W.S. Burroughs
[ Parent ]

turing test arguably sufficient, *not* necessary (5.00 / 1) (#103)
by emile on Tue Jul 09, 2002 at 09:15:34 PM EST

Turing never argued that the test was necessary to show intelligence, simply that it is sufficient. The huge benefit that it has over other definitions is that it is straightforward and operational in nature. While it is certainly true that the useful output of AI research is likely to be, as you say, "highly specialized AI systems, that may or may not communicate with each other", such systems don't do very much to illuminate the nature of this amazing capacity we have to think!

The TT makes a strong claim: anything that can interact with me as though it's a conscious, intelligent being really is one. It is directly addressing the thorny and central question of what it is that makes you and I different from, say, an ant. Because the test is straightforward it is nicely out of the way so that we can all get down to arguing about whether the claim is true; does it really take a mind to act like a mind? If we take as our operational definition

Intelligence is the capability of a decision-making system to adapt its behavior to meet its goals in a range of environments.
then we aren't any closer to really talking about the meat of the problem. While it may be true, as far as it goes, it doesn't really help us. Any actual execution of the Turing test is almost certainly flawed and messy, but at least it lets us argue about the interesting questions.

[ Parent ]
Humanity != Intelligence (none / 0) (#153)
by bugmaster on Thu Jul 11, 2002 at 08:07:54 PM EST

As I mentioned in another comment, the Turing Test explicitly tests the computer for humanity -- i.e., mental equivalence to a human being. It does not test for intelligence in the sense of IQ; there are other tests for that, such as the SAT. I think what happened is that the term "intelligence" got overloaded -- it means both "ability to solve problems" and "human-like behavior".

However, I would argue that a certain measure of intelligence would be required in order to pass the Turing Test. The computer doesn't have to be able to beat humans at chess (heck, I can't even do that), but it should be able to solve basic problems, such as "if my conversation partner says he remembers voting for FDR, and FDR was president a long time ago, then my conversation partner is very old".
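
As a toy illustration of that inference (the rule and the "very old" threshold are invented for the example; the dates are the easy part - FDR's last election was 1944, and the US voting age was 21 at the time):

    # One-rule forward inference of the kind a Turing Test entrant needs.
    CURRENT_YEAR = 2002
    FDR_LAST_ELECTION = 1944
    VOTING_AGE_THEN = 21

    def minimum_age(claims):
        # Derive a lower bound on the speaker's age from what they claim.
        if "I voted for FDR" in claims:
            return (CURRENT_YEAR - FDR_LAST_ELECTION) + VOTING_AGE_THEN
        return 0

    if minimum_age(["I voted for FDR"]) >= 75:
        print("My conversation partner is very old.")  # fires: at least 79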
>|<*:=
[ Parent ]

hal (3.00 / 1) (#60)
by chia on Tue Jul 09, 2002 at 07:47:46 AM EST

I just had a look at Ai Research, and it seems like Hal is back in business after being shut down for a while, and you can talk to it online.


Most people are other people. Their thoughts are someone else's opinions, their lives a mimicry, their passions a quotation. O Wilde
Google as AI (4.75 / 8) (#82)
by mumble on Tue Jul 09, 2002 at 01:21:17 PM EST

Google!

In my not-so-humble opinion, google is the closest thing we have to human-level artificial intelligence. Ask it a question (though in a somewhat abbreviated form), and *at least* 1/10 of the time it gives you a meaningful and correct answer. Not to mention the sheer breadth of knowledge that google has, which very few, if any, single humans could hope to compete with.

And if the front page isn't good enough at providing an answer, then try the backup of groups.google.com, labs.google.com/glossary, or if all else fails, a real human at answers.google.com.

Playing around, you can even get google to make value judgments:
"Cats are better than dogs" about 268.
"Dogs are better than cats" about 359
"Dogs are better than sheep" 0
"Cats are smarter than dogs" about 1,620
"Dogs are smarter than cats" about 66
"Dogs are smarter than sheep" 1
"I love cats" about 29,800
"I love dogs" about 17,300
"Osama bin Laden is evil" about 61
"Osama bin Laden is a saint" 0
"Osama bin Laden is not evil" 1
"microsoft is evil" about 1,710
"microsoft is not evil" about 52
"microsoft is a saint" about 2
"bill gates is a saint" about 9
"linux is evil" about 44
"linux is not evil" 0
"linux is a saint" 0
"linus is a saint" 0
(I take 0 to mean no opinion)
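
The "value judgment" trick is easy to script. Here's a sketch; hit_count() is a hypothetical stand-in for asking a search engine how many pages match an exact phrase (I'm not assuming any particular Google API):

    def hit_count(phrase):
        # Hypothetical: plug in your favourite search engine here.
        raise NotImplementedError

    def opinion(subject, adjective):
        # Compare hits for '<subject> is <adj>' vs '<subject> is not <adj>'.
        pro = hit_count('"%s is %s"' % (subject, adjective))
        con = hit_count('"%s is not %s"' % (subject, adjective))
        if pro == 0 and con == 0:
            return "no opinion"
        return "yes" if pro > con else "no"

    # With the counts quoted above, opinion("microsoft", "evil") -> "yes".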

It can even do basic math, like "two plus two equals". But it must have had a schizophrenic maths teacher, because sometimes the answer is four, but other times it is five, seventy, ninety, eight, one, and so on.
Here are the numbers:
"two plus two equals four" about 1,640
"two plus two equals five" about 554
"two plus two equals eight" about 16
"two plus two equals seventy" about 6
"two plus two equals ninety" about 5

With "three plus five equals" it gets it right most of the time with eight, but a seven and a ten sneak in there.
"three plus five equals seven" 1
"three plus five equals eight" about 12
"three plus five equals ten" 1

While "five plus eleven equals" gives the single and correct result, sixteen.

I just checked, and it can even count, at least as well as a 2-year-old.
"one two three" usually results in four, five, but sometimes noise creeps in.
"one two three four" gives the correct answer more often, but still gives the occasional interesting result, e.g. one two three four many (which is not wrong, but not the answer we were seeking).

Oh, and it can spell. Most often it spells with an American accent, but sometimes it reverts to English spellings:
"color" about 31,400,000
"colour" about 5,840,000

And it can correct spelling too:
"colr" gives "Did you mean: color"
Again clearly showing google has an American accent.
Or, "corrct" gives "Did you mean: correct"

I think the above is good evidence that google is well on the way to becoming the planet's first truly intelligent artificial intelligence. And for the sake and future of lowly humans, let's just hope it doesn't crawl .mil too often. ;)

-----
stats for a better tomorrow
bitcoin: 1GsfkeggHSqbcVGS3GSJnwaCu6FYwF73fR
"They must know I'm here. The half and half jug is missing" - MDC.
"I've grown weary of googling the solutions to my many problems" - MDC.

you forgot to try... (4.50 / 2) (#90)
by elderogue on Tue Jul 09, 2002 at 03:15:59 PM EST

"two plus two equals free porn"

this will revolutionize mathematics =)


-e
[ Parent ]

Not a bad point (none / 0) (#157)
by Jel on Fri Jul 12, 2002 at 08:42:11 AM EST

I haven't quite pinned down the exact steps, but I can feel myself following a repetitive pattern when I use google (which is often).  And that pattern often gives me exactly what I'm looking for.

For example, I usually imagine the longest phrase which people would commonly write when discussing the subject I'm interested in finding out about.  Then I'll look at each result, and ignore quite a few based on very specific criteria. Is it more of an advert than a document?  Is it asking the same question, rather than answering it?

In particular, I'll occasionally find lots of irrelevant results, because I've asked for, say, "rattlesnakes" and "toy rattlesnakes" happens to be a more popular discussion topic.  A relatively simple analysis of discovered phrases in the results could probably figure out that the search needs to be performed again with "-toy".
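
A sketch of that refinement loop (search() and its canned snippets are hypothetical stand-ins, not a real Google API; the heuristic is just the one described above):

    def search(query):
        # Hypothetical stand-in returning result snippets.
        if "-toy" in query:
            return ["rattlesnakes in the wild", "desert rattlesnakes"]
        return ["cheap toy rattlesnakes", "toy rattlesnakes for sale",
                "toy rattlesnakes", "rattlesnakes in the wild"]

    def refine(term):
        # If one modifier (e.g. "toy") dominates the phrases around the
        # term, re-run the search excluding that modifier.
        snippets = search(term)
        counts = {}
        for snippet in snippets:
            words = snippet.lower().split()
            for i in range(1, len(words)):
                if words[i] == term:  # count the word just before the term
                    counts[words[i - 1]] = counts.get(words[i - 1], 0) + 1
        if counts:
            modifier, n = max(counts.items(), key=lambda kv: kv[1])
            if n > len(snippets) // 2:  # one modifier dominates the results
                return search('%s -%s' % (term, modifier))
        return snippets

    print(refine("rattlesnakes"))
    # -> ['rattlesnakes in the wild', 'desert rattlesnakes']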

My point?  Just that you're right... google is very close to a usable knowledge base, despite being full of junk too.  It's quite amazing, considering that google is basically just phrase extraction, without real analysis.
...lend your voices only to sounds of freedom. No longer lend your strength to that which you wish to be free from. Fill your lives with love and bravery, and we shall lead a life uncommon
- Jewel, Life Uncommon
[ Parent ]

We don't need no stinkin' Turing Test (2.66 / 3) (#87)
by elderogue on Tue Jul 09, 2002 at 02:39:53 PM EST

In my opinion, the Turing Test has outstayed its welcome. AI research has long since progressed to the point where we need to move beyond such a simplistic definition of intelligence. Intelligence is a fuzzy, subjective concept.

The Turing Test is just something for the hopeless commercial GOFAI projects to dream about.

All these little bots (conversational software - haha) are just cheap hacks. I wouldn't even consider them part of the field of AI research. They're more akin to video game "AI". Their creators are just aiming for a shallow illusion. Personally, I think that we'll see artificial consciousness* before any sort of natural language processing that can compete with a human. Connectionism is where it's at.

*Unfortunately, I doubt many people will believe me when I emerge from my lab (100 years from now) and claim that I have created artificial consciousness. Heh. What would it take? What would my robot have to do to convince you?


-e

What would convince me (3.00 / 2) (#89)
by _cbj on Tue Jul 09, 2002 at 03:08:32 PM EST

If your program could interact with me, in a kind of dialogue perhaps, maybe via a terminal or something so there's no bias, and I couldn't reliably tell it apart from a human. That is probably the minimum that would convince me, and in the form of a remarkably simple test.

[ Parent ]
i'm curious... (4.00 / 1) (#91)
by elderogue on Tue Jul 09, 2002 at 03:29:19 PM EST

Do you believe that any other animals are conscious?
Do you believe consciousness is an all-or-nothing thing, or can there be varying degrees?
-e
[ Parent ]
Yes, you are (4.50 / 4) (#92)
by _cbj on Tue Jul 09, 2002 at 03:50:18 PM EST

I said what would convince me. Not being able to definitively know whether anything is conscious except me, I can only go with my instincts. Accepting my innate biases, those instincts will probably only be useful in judging other humanlike intelligences. Turing saw that, probably without even thinking about it, and devised his beautifully simple and practical test without having to worry about any abstract notion of 'consciousness'.

And yes, no, yes.

[ Parent ]

How long have you... (none / 0) (#162)
by Sesquipundalian on Tue Jul 23, 2002 at 10:55:24 PM EST

..suspected that the Turing Test has been a focal point of AI discussion since its introduction in 1950?

An interrogator converses with both a typical machines and humans, you think I do?

Can you think of a specific example of Linux? I've always wanted to be a botmaster you said I am!

Again, this is fairly satisfactory, even though Jason Hutchens has no idea what's going on.


Did you know that gullible is not actually an english word?