Yes he does know Chinese. How can you prove he doesn't? If he can speak Chinese fluently, how can you say he doesn't know Chinese?
Because he doesn't. He's emulating speaking Chinese. He's not coming up with any Chinese himself at all, he's just following a set of rules.
This is where we differ. You believe we're just a set of rules, a Turing machine in essence. I don't, because I see a difference between following rules and understanding. I can follow a set of well-defined rules to solve, say, Maxwell's equations for electromagnetism, but it doesn't necessarily mean I understand what those equations mean at all.
You are not picturing the right type of rules.
If X then Y. That's the only kind of rule we're talking about here. That's what is specified by the formal system that the Chinese room represents. If you want to use other rules, well then that's not what we've been talking about...
You are not understanding that for a computer to understand the word "moon" it has to understand much, much more than a bundle of dirt in the sky.
No, what I'm saying is that a computer does not even understand that the word "moon" represents a big rock in the sky. To a computer, "moon" is just a symbol, nothing else.
If someone builds a set of rules, and a program is just a set of rules, that can speak Chinese fluently then it has already made these associations.
No, no, no. The computer has such a set of rules, but it did not create the rules. Associations have been made, but not by the computer. We would assume in the Chinese room that the book has been written by someone with these semantic associations; that's obvious. But the book is provided to the Chinese room as is.
Each association is a new rule. It is a new connection.
But again, how does having a set of rules make a new association? How can it decide on a new rule?
By your way of thinking you must not understand English either because there is no way that your neurons understand it. They are connected into a set of rules so that you can emulate speaking English.
And how do you know this? Have you made advances in neuroscience that show how we think and remember things? Please, do tell of this astounding breakthrough!
We don't know how we think. Saying we have a set of rules in our brain is a complete assumption with, as of yet, no hard evidence. And no, there's no hard evidence to suggest otherwise either that I know of, but I'm willing to trust in my own experience that I understand English.
Your neurons have a rule: if they receive a certain amount of input from other neurons, they fire. Otherwise they do not. Your entire intelligence is built on this ONE SIMPLE RULE. It is the connections between these neurons that create your intelligence. These connections are what create the program or rules of your brain and your memory. Which neurons each neuron sends to is dependent on how that neuron has been "programmed" by the connections involved.
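The firing rule described above can be sketched in a few lines as a McCulloch-Pitts-style threshold unit. This is only an illustration of the "one simple rule" claim; the function names, weights, and threshold are all chosen for the example, not drawn from any real model:

```python
def fires(inputs, weights, threshold):
    """A toy neuron: fire if the weighted sum of inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

def and_neuron(a, b):
    """An AND-like unit: fires only when both inputs are active."""
    return fires([a, b], [1, 1], threshold=2)
```

Wiring many such identical units together with different weights is what gives different behaviour; the rule itself never changes, only the connections.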
It's a little bit more complicated than that, but even so you still can't prove that the mind is a Turing machine. If you can, well then I'm wrong, but if you can prove otherwise then you're wrong. Again, we need a better understanding of cognition and consciousness first.
If it can fool a human then it can do algebra, solve a rubix cube, and develop new theories on how the universe was created.
Now that's a ridiculous argument. The Chinese room has been programmed to speak Chinese, not to do any of these things! And in each case, you'd need to give it a new set of rules before it could deal with such problems.
I don't believe today's computers can do it either. We need much bigger and faster processors and memory.
Nope, that won't cut it. Better and better emulations sure, but no strong AI at all. What we need is a new method of computing that isn't just a Turing machine.
How far away do you think we are from atomic computers? If you are thinking the next 20 to 100 years for strong AI, then we are in the same time frame. We only differ in whether it can be done with digital computers.
I think you're perhaps being optimistic. I'd say 75-200 years myself. And 15-20 years from atomic computers that are able to fully utilise such power in an efficient manner.
Something else that must be considered: completely specified behavior can grow and learn.
It can have emergent behavior in the way you mean.
No, because it's not really emergent behaviour, it's a different thing. Syntax, no matter how good, cannot give rise to semantics. But then, you disagree there...
Every move is not spelled out in the above example. Actually, no move is spelled out in the above example. Yet a perfect game of tic-tac-toe emerges. You say, "but tic-tac-toe is a simple game." I say everything becomes simpler with enough processing power.
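The point that perfect play can emerge from rules that spell out no move at all is exactly what a minimax search does. A sketch, with the board representation and names chosen purely for illustration: the board is a list of nine cells holding 'X', 'O', or ' ', and no move is ever written into the rules, only "if X then Y" recursion over outcomes:

```python
# The eight winning lines of tic-tac-toe, by cell index 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = (-2, None)
    other = 'O' if player == 'X' else 'X'
    for m in moves:
        board[m] = player
        score, _ = minimax(board, other)  # opponent's best reply
        board[m] = ' '
        if -score > best[0]:              # their loss is our gain
            best = (-score, m)
    return best
```

From the empty board, two copies of this program always play to a draw, which is the known perfect-play result, even though no position or move appears anywhere in the code.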
Everything that's possible to compute using an algorithm, sure! But what about the halting problem? There are classes of programs for which a Turing machine cannot decide whether they will ever stop or run on forever, and yet we are able to recognise which are which. There's a good one for theory...
In the above game example the computer is deciding where to place its piece. It had several choices. The program makes its decision based on evidence collected as to which choice achieved the program's goal (winning the game). The only way you can say that the program didn't decide is to say that the definition of "decide" requires a human.
Nope, decision is a conscious choice between alternatives. The computer merely evaluates moves and acts according to the relative values it has placed upon them. It cannot choose a move that will not at some point lead towards victory, whereas a person can play to lose - they have that choice.
Not only does it decide, but it understands.
No it doesn't, it follows an algorithm. That's even more anthropomorphic than assigning the Chinese room understanding. It's only slightly better than saying evolution acts for the good of the species.
It is programmed to hate losing and love winning. It hates by negative numbers. It loves by positive ones. Just as a fly loves the light, my program loves to win.
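The "loves by positive numbers, hates by negative ones" idea is just an evaluation function: score each outcome and pick the highest. A minimal sketch, where the scores and move names are illustrative assumptions, not part of any real program discussed above:

```python
# "Love" and "hate" as numbers: an outcome scoring table.
SCORES = {'win': +1, 'draw': 0, 'loss': -1}

def pick_move(move_outcomes):
    """Choose the move whose predicted outcome scores highest."""
    return max(move_outcomes, key=lambda m: SCORES[move_outcomes[m]])
```

Given `{'a': 'loss', 'b': 'draw', 'c': 'win'}`, the function picks `'c'`. Whether that counts as loving to win or merely maximising a number is, of course, the whole disagreement in this thread.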
Oh dear. See above.
Even more could be accomplished by adding a learning method to the above AI.
Adjusting a few weightings. Not a qualitative difference at all.
How is this not emergent behavior?
It's not, it's just rule-following. You've really picked a bad example here, because chess programs are about as far from AI as you can get! I've honestly never seen anyone consider chess programs as intelligent before...
A neural network (in the programming sense of the word) is an option also. All of this can be implemented in a rule book. It would just be a very complicated rule book.
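The claim that a fixed network is "a very complicated rule book" can be made concrete: any network over finite inputs can, in principle, be tabulated as explicit input-to-output rules. A toy sketch with a hand-wired XOR network of threshold units; the weights and names are illustrative only:

```python
import itertools

def net(x, y):
    """A tiny hand-wired network computing XOR from threshold units."""
    h1 = int(x + y >= 1)        # hidden unit: OR
    h2 = int(x + y >= 2)        # hidden unit: AND
    return int(h1 - h2 >= 1)    # output: OR and not AND, i.e. XOR

# The equivalent "rule book": every input/output pair written down.
rule_book = {(x, y): net(x, y)
             for x, y in itertools.product([0, 1], repeat=2)}
```

For binary inputs the rule book has four entries; for anything interesting it would be astronomically large, which is presumably what "very complicated" means here, but the network and the book compute the same function.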
Indeed, and you've gone from the Chinese room to the Chess room. It's the same argument, except my position is perhaps more clear :)
You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey