
Man is a machine

By delmoi in Op-Ed
Wed May 16, 2001 at 02:12:47 PM EST
Tags: Technology

Electronic digital computers, as we think of them today, will never be 'alive'. They will never be conscious and they will never be cognizant. They will never understand. They will never do these things because they cannot, because they understand code no more than a light switch understands the warm glow of a light bulb, or the smooth touch of a beautiful woman's soft, tapered fingers.

Software, on the other hand, is an entirely different matter.


The purpose of this paper is to explain why I believe that software, considered separately (conceptually) from the computer it runs on, can be sentient, can be conscious. Many people don't believe that, for various reasons, but for me the logic is simple.
  • a human is a machine
  • human minds are conscious
  • therefore, machines can harbor consciousness
I don't know of any good refutation of point number one, and so I find my logic sound. But there are obviously people who don't believe it; thus the need for this little diatribe. First of all, let's define some terms.

When I say computer, I mean a computer in the sense of a Turing machine. I mean it in the sense of the machine you have sitting on your desk, or your floor, or your lap. I mean a PC and I mean a supercomputer, but mostly I mean one powered by a conventional CPU. There are many other senses of the word, and the assertion that computers cannot be conscious would have come as quite a shock to the many women employed as 'computers' during World War Two.

I used to be in awe of computers, their inner workings a mystery to me even after I had learned assembly language, but before I learned how they really work at the fundamental level, from the logic gates and transistors up. And if I had actually gone to class, I would know how to build one.

What struck me, though, was how mundane it all was: you build an adder circuit, then a multiplier circuit, and you put in a multiplexor that sends the signals to the right one. The multiplexor is controlled by one of the bits in the instruction. There is never any 'thinking'; there is never any 'looking'. The computer doesn't 'interpret', it doesn't 'understand'; it just blindly runs. It is no more a mind than a light switch or the engine of a car.
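
To see just how mundane, here is a toy sketch of that arrangement in Python (the function names are mine, invented for illustration; a real CPU does this in silicon, not software):

    # A toy ALU: two combinational circuits and a multiplexor, as described
    # above. Everything here is blind mechanism; nothing interprets anything.

    def add(a, b):
        # Ripple-carry addition built from bitwise operations alone.
        while b:
            carry = (a & b) << 1
            a = a ^ b
            b = carry
        return a

    def multiply(a, b):
        # Shift-and-add multiplication; again, pure mechanism.
        result = 0
        while b:
            if b & 1:
                result = add(result, a)
            a <<= 1
            b >>= 1
        return result

    def alu(opcode_bit, a, b):
        # The multiplexor: one instruction bit routes the operands to a circuit.
        return multiply(a, b) if opcode_bit else add(a, b)

    print(alu(0, 3, 4))  # 7  -- the bit selected the adder
    print(alu(1, 3, 4))  # 12 -- the bit selected the multiplier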

Conscious and consciousness are much harder terms to define. Fortunately, for this argument I don't really need to, other than to state that they are a property of the human mind. Whatever other cockamamie stipulations and connotations people attach to the word are irrelevant to this discussion. I will add that consciousness is not derived from the complete physical specification of a human being. That is, your arms are not a part of your consciousness. A quadriplegic is still conscious, as is Stephen Hawking.

By software, I mean computer programs that act upon a CPU.

I honestly think that most of the arguments against machine intelligence (aside from bio-machine intelligence) rest on a rather obvious double standard. There was an article here on kuro5hin a while ago claiming that if a person could come up with a number a computer couldn't, it would prove the existence of a soul. The threads were full of comments proposing systems to figure out 'non-computable' numbers, regardless of the fact that 'figure out' is a synonym for 'compute'.

One post by streetlawyer claimed that humans could 'know' numbers that computers couldn't because no computer knew pi. He said no computer knew pi because none had ever produced all the digits of it. Yet, nowhere in history has there ever been a person who knew all the digits!

Other people point to the determinism of the human mind, or the lack thereof. I don't believe that the human mind is non-deterministic (in fact, if you were to study psychology you would note that people are very predictable), just too complex to fully simulate. Programming a computer to behave non-deterministically, on the other hand, is relatively trivial: just plug it into a Geiger counter and a radiological source.
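
A minimal sketch of how trivial, in Python, with the operating system's entropy pool standing in for the Geiger counter (os.urandom mixes in physical noise on most systems; the radiological version is left as hardware shopping):

    import os

    def nondeterministic_bit():
        # One bit drawn from the OS entropy pool -- a software stand-in for
        # the Geiger counter; no seed, no reproducibility.
        return os.urandom(1)[0] & 1

    # Two runs of this will not, in general, agree.
    print([nondeterministic_bit() for _ in range(16)])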

Others point to certain events, things done by humans that cannot be proven to be doable by a machine. Of course, they never bother to prove that the feat really was a product of the man's ability, and not just a lucky break.

I'd really like to change the question in this debate, and ask this one instead: if you don't believe a machine can harbor consciousness, explain to us why a human isn't a machine.

Man is a machine | 574 comments (553 topical, 21 editorial, 0 hidden)
a number of comments (4.50 / 8) (#1)
by streetlawyer on Tue May 15, 2001 at 07:48:45 AM EST

First, obviously needing to respond to:

One post by streetlawyer claimed that humans could 'know' numbers that computers couldn't because no computer knew pi. He said no computer knew pi because none had ever produced all the digits of it. Yet, nowhere in history has there ever been a person who knew all the digits!

That's not quite what I meant (I'd like to say "that's not quite what I said", but interpretations differ). My point was that a human mathematician's understanding of pi was significantly different from a computer's, because the human mind could grasp the concept of pi "all at once", whereas I don't think it's valid to say that the computer can be said in any sense to be working with more digits of pi than it has actually computed. Those unfortunate people who are forced to care about roundoff and truncation error in numerical computing probably have more of an intuition for what I mean when I say that when a human mathematician divides through by pi, or by some other transcendental number, he is doing something fundamentally different from when a computer tries to carry out the same process using floating point arithmetic.
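
To see the roundoff point concretely, in Python (a sketch; the same holds in any language): the "pi" a program divides through by is not pi at all, but a nearby rational number, the closest 64-bit float.

    import math
    from fractions import Fraction

    # math.pi is not pi: it is the nearest 64-bit float, an exact rational.
    # Dividing through by it is division by a (very good) impostor.
    print(Fraction(math.pi))             # 884279719003555/281474976710656
    print(math.pi == 3.141592653589793)  # True -- sixteen digits, then nothing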

...

Second, I don't think that this article establishes its point that consciousness is independent of physical structure. It's true that a tetraplegic amputee is conscious, as is Stephen Hawking. But a cucumber isn't, a cucumber sandwich still less so, and a mural of a cucumber even less so. There are certainly *some* physical requirements which seem to matter for consciousness.

Third, the reason that we can't count computer programs as conscious is that the interpretation of their output is dependent on us. Thus, a computer can't even really produce the first significant digit of pi. It can turn certain switches on and off, which in turn have an effect on a cathode ray tube or teletype, but the only way in which this output can be considered to have the content "3" is if a conscious observer is able to interpret it in that way.
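
To put the switch-flipping on the table, here is that mechanical digit production spelled out (a sketch in Python using Machin's formula, fixed-point integer arithmetic throughout). Whether the emitted characters have the content "3.14159..." without an interpreter is exactly the point in dispute.

    def arctan_inv(x, scale):
        # arctan(1/x) * scale, summed with integer arithmetic (Gregory series).
        power = scale // x
        total, n, sign = power, 3, -1
        while power:
            power //= x * x
            total += sign * (power // n)
            n += 2
            sign = -sign
        return total

    def pi_digits(digits):
        guard = 10  # extra digits to absorb truncation error
        scale = 10 ** (digits + guard)
        # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
        scaled = (16 * arctan_inv(5, scale)
                  - 4 * arctan_inv(239, scale)) // 10 ** guard
        s = str(scaled)
        return s[0] + "." + s[1:]

    print(pi_digits(30))  # 3.141592653589793238462643383279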

And finally, because computer output doesn't have content unless interpreted, that means that computers can't have *private* states; they can't have any analogue to the sensation you feel when you pinch your arm (NB: I am referring to the *sensation* itself, not to any of its physical (neural) or functional (body-damage-indicating) concomitants). And if they can't have private sensation-states, it's hard to see how they can be regarded as conscious.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever

The concept of pi (3.75 / 4) (#4)
by DesiredUsername on Tue May 15, 2001 at 08:10:32 AM EST

"My point was that a human mathematician's understanding of pi was significantly different from a computer's, because the human mind could grasp the concept of pi "all at once", whereas I don't think it's valid to say that the computer can be said in any sense to be working with more digits of pi than it has actually computed."

I'm not sure what you consider "the concept of pi" to be, in this sentence. When talking about the humans, you seem to be referring to things like "ratio of circumference to diameter" and "randomly distributed digits". But then when talking about computers you say "working with more digits". Either way, I have a refutation:

1) If what you mean is "computers can only work with data, not 'concepts'", I suggest you read Hofstadter's "Fluid Concepts and Creative Analogies". The descriptions of the Seek-Whence and TableTop programs do a good job of convincing me that, even at today's primitive level, computers CAN manipulate simple but abstract concepts.

2) If what you mean is "human mathematicians can compute with all the digits of pi without knowing them" then I say show me the money.

OT: "Libertarian -- someone who despises the initiation of force in the present or future, while enjoying a lifestyle dependent on initiation of force in the past. A hypocrite; one who lives in a boat while denouncing shipbuilders."

I was going to respond saying this was a good point. But then it occurred to me that you could just as easily have written this: "Abolitionist -- someone who despises the use of slavery in the present or future, while enjoying a lifestyle dependent on the use of slavery in the past." *Anyone* who wants to initiate change has been a recipient of benefits from a system he dislikes.

Play 囲碁
the meaning of "pi" (5.00 / 1) (#7)
by streetlawyer on Tue May 15, 2001 at 08:29:06 AM EST

By "the concept of pi" I mean the referent of the word "pi" or the greek character pi in mathematical context; the transcendental number named by "pi", which happens to describe various ratios. Pi does not have digits; the decimal expansion of a series which converges on pi has digits, but that is very definitely a different entity. If you like, for "pi", substitute "phi", another transcendental number, but one with no interesting mathematical properties, and one which cannot be referred to in any way other than "phi".

If you are convinced by Hofstadter's book, then well done; I have read it and am not. The computers he describes manipulate symbols, not concepts; they manipulate syntactically, not semantically. Indeed, their manipulation can only be taken as even having syntax in the context of an external interpreter.

And the sense in which a human mathematician uses pi has very little to do with computation; do you mean "compute" when you say "compute"?

With regard to your comment on my new .sig, I don't see your point; someone who thinks that the work of abolitionism is finished when the slaves are free, without committing to a further program to undo the injustices of slavery is every bit as much of a hypocrite as a libertarian.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever

qualitative differences, and compensation (5.00 / 1) (#11)
by sayke on Tue May 15, 2001 at 08:41:31 AM EST

how could you tell if the computers were manipulating concepts instead of symbols? i think your notions of "concept" and "semantics" smell like chimeras that go away with reverse-engineering; like a god of the gaps.

ever played with lex and yacc? computers deal with both syntax and semantics... or what's the difference between a parser and a compiler? a matter of degree, you say? well, that's what i say is the difference between lex, yacc, and you.
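
a sketch of what i mean, in python (toy grammar invented for the occasion, not real lex/yacc): the recursive walk below is pure syntax; the only "semantics" is the handful of actions bolted onto the rules - and that's all a compiler adds to a parser.

    import re

    def evaluate(src):
        toks = re.findall(r"\d+|[+*]", src)  # "lex": carve text into tokens
        pos = 0

        def expr():                          # expr := term ('+' term)*
            nonlocal pos
            value = term()
            while pos < len(toks) and toks[pos] == "+":
                pos += 1
                value = value + term()       # semantic action for '+'
            return value

        def term():                          # term := number ('*' number)*
            nonlocal pos
            value = number()
            while pos < len(toks) and toks[pos] == "*":
                pos += 1
                value = value * number()     # semantic action for '*'
            return value

        def number():
            nonlocal pos
            tok = toks[pos]
            pos += 1
            return int(tok)                  # semantic action for numbers

        return expr()

    print(evaluate("2+3*4"))  # 14 -- precedence fell out of the grammar itself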

and with respect to the comments on your .sig; someone who thinks that "undoing injustice" is anything more than a useful slogan with which to rally the masses is every bit as short-sighted as a maoist communist.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

syntax and semantics (5.00 / 1) (#13)
by streetlawyer on Tue May 15, 2001 at 08:48:32 AM EST

how could you tell if the computers were manipulating concepts instead of symbols?

They're not even manipulating symbols. They are turning switches on and off, and we interpret those switches symbolically. Lex and yacc don't play with either syntax or semantics, in the same sense; they turn switches on and off. For something to be a symbol depends on its being in some way interpreted, by something. I don't understand what you mean by "goes away with reverse engineering". Conscious experience is intrinsically a first-person phenomenon, and eliminating it isn't the same as explaining it; similarly, simulating its effects isn't the same as having it.

I don't see how someone can claim any moral force for "doing justice" without also recognising the claim of "undoing injustice". If you want to benefit from past injustice, you lose your right to complain about future injustice.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever

and "all" you do is turn neurons on and (5.00 / 1) (#19)
by sayke on Tue May 15, 2001 at 09:01:17 AM EST

so? the emergent properties are the interesting part... and you seem to be simulating the effects of conscious experience quite effectively to me.

i don't claim any moral force for "doing justice"; everyone benefits from past injustice to some extent. so, i don't plan to complain about future injustice. i just plan to kick your ass if you crimp on my style.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

how can you claim that? (4.33 / 3) (#27)
by streetlawyer on Tue May 15, 2001 at 09:28:53 AM EST

How can you claim that "all I do is switch neurons on and off"? I see colours, feel pain and pleasure and have opinions, all from a first-person perspective, but nonetheless objective facts for all that. These may be (almost certainly are) "emergent properties", but they're emergent properties of the physical substrate of my brain, not of any syntactic role. Indeed, it's logically incoherent to suppose they are emergent from the syntactic properties of my neurons, because my neurons don't *have* any syntactic properties without someone (me) around to interpret them symbolically. Silicon chips could quite easily have these emergent properties, but Turing machines can't.

And to prove yourself a crude amoral thug is a step backward from being a hypocrite, not forward.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever

assert all you like, but i just see you switching (5.00 / 1) (#33)
by sayke on Tue May 15, 2001 at 09:44:13 AM EST

your neurons on and off, on and off...

what's the qualitative difference between the emergent properties of the physical substrate of your operating system, and the emergent properties of the physical substrate of you?

i'd call it logically incoherent to suppose that your operating system emerges from the syntactic properties of your computer, because your computer has no syntactic properties without someone around to interpret them symbolically.

i call syntax the map, and hardware the territory. i need maps because the territory is epistemologically beyond my grasp, but that's ok, because my maps work pretty well, and i seem to be able to improve em to good effect. cue philosophy of science textbook...

but, of course, thermometers have maps too.

and if crude amoral thuggery is the price of honesty and internal consistency, then so be it. at least i don't think there's some cosmic scale out there somewhere, just waiting for me to balance it.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

What are you waffling about? (5.00 / 1) (#37)
by spiralx on Tue May 15, 2001 at 09:50:41 AM EST

Where did all this nonsense about operating systems and hardware come from? Are we talking about people, or Linux? And what's Linux got to do with intelligence?

Try and stick with the subject at hand eh? Or stop trying to wriggle out of a losing argument by changing what you're talking about. Whichever you prefer.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey

same difference. software happens; substrate... (5.00 / 1) (#39)
by sayke on Tue May 15, 2001 at 09:58:16 AM EST

is beside the point.

i talked about minds, which i class as software. funky environment-modeling software, sure, but nothing terribly special about em, in the grandly mystical sense of the term "special".

i invited streetlawyer to tell me some qualitative differences between the software that runs on his body and the software that runs on his computer. you're welcome to come up with some, too.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

Differences (5.00 / 1) (#46)
by spiralx on Tue May 15, 2001 at 10:12:09 AM EST

i invited streetlawyer to tell me some qualitative differences between the software that runs on his body and the software that runs on his computer. you're welcome to come up with some, too.

The OS is another version of the lookup table/cards in the Chinese room - it has a predefined output based on a given input. There's no emergent behaviour there at all. There's your key difference...
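
For concreteness, the lookup table made literal (a toy in Python; the entries are invented and romanized):

    RULE_BOOK = {
        "ni hao": "ni hao",
        "ni hui zhongwen ma?": "hui, yidianr.",
    }

    def chinese_room(slip_of_paper):
        # The man in the room matches symbols against his cards; no
        # understanding is required -- or available -- at any step.
        return RULE_BOOK.get(slip_of_paper, "ting bu dong.")

    print(chinese_room("ni hao"))         # the room "answers" by pure lookup
    print(chinese_room("anything else"))  # and has nothing beyond its cards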

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey

input and output...? baaah. (5.00 / 1) (#57)
by sayke on Tue May 15, 2001 at 10:47:18 AM EST

cellular automata don't need input or output, but stay staunchly deterministic... and i think that's what this is about, not input and output.

ya got two options here, man: total determinism or total randomness. i treat binary oppositions with suspicion, but i've stared at this one for a while and decided to put it into my "useful" box... of course, if you've got a useful third option for me, let me know, but i don't think you do.

so ya know, emergent behavior happens in lookup tables all the time. i really like cellular automata as an example of this. all kinds of wacky stuff emerges, given the right rule and initial conditions...
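
for instance (a sketch in python): an elementary cellular automaton. the entire "physics" is one eight-entry lookup table - rule 110 here - and structured, interacting patterns emerge from a single filled cell.

    RULE = 110                # the whole universe: one 8-entry lookup table
    WIDTH, STEPS = 64, 32

    row = [0] * WIDTH
    row[WIDTH // 2] = 1       # initial condition: a single filled cell

    for _ in range(STEPS):
        print("".join("#" if c else "." for c in row))
        row = [(RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2
                         + row[(i + 1) % WIDTH])) & 1
               for i in range(WIDTH)]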


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

And again, eh? (5.00 / 1) (#62)
by spiralx on Tue May 15, 2001 at 10:57:57 AM EST

cellular automata don't need input or output, but stay staunchly deterministic... and i think that's what this is about, not input and output.

Cellular automata have an initial state as an input, and each cell takes as its input the number of filled squares around it, producing an output of either a filled or unfilled square... no external output, sure.

Besides, what has input and output got to do with determinism? How do you see them as being connected?

ya got two options here, man: total determinism or total randomness. i treat binary oppositions with suspicion, but i've stared at this one for a while and decided to put it into my "useful" box... of course, if you've got a useful third option for me, let me know, but i don't think you do.

Where does total randomness come from? Where have I implied that total randomness enters into the system? Output is a product somehow of external input and internal state, not the throw of a dice...

so ya know, emergent behavior happens in lookup tables all the time. i really like cellular automata as an example of this. all kinds of wacky stuff emerges, given the right rule and initial conditions...

But it's still deterministic as you've said.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey

so call the big bang your initial input... (5.00 / 1) (#79)
by sayke on Tue May 15, 2001 at 12:05:16 PM EST

i'm not used to calling the initial state of a cellular automaton "input", but i see why people would do that.

a couple of posts ago you said: The OS is another version of the lookup table/cards in the Chinese room - it has a predefined output based on a given input. There's no emergent behaviour there at all.

you brought input and output into it - moreover, you declared that having input and output excludes the existence of emergent behavior! so, i said i/o wasn't really the issue then; determinism was, and gave CAs as an example of deterministic systems with emergent properties aplenty. that's how i saw them as connected.

Where have I implied that total randomness enters into the system?

you declared that where there was input and output, there was no emergent behavior, and i thought you meant that where there is determinism, there is no emergent behavior. as the only alternative to determinism is total randomness, i inferred that you thought humans were totally random because they displayed emergent behavior... which struck me as absurd.

do you see humans as deterministic? if so, what do we disagree about, again?

i'm getting tired.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

Excluded middle (5.00 / 1) (#86)
by spiralx on Tue May 15, 2001 at 12:24:13 PM EST

you brought input and output into it - moreover, you declared that having input and output excludes the existence of emergent behavior! so, i said i/o wasn't really the issue then; determinism was, and gave CAs as an example of deterministic systems with emergent properties aplenty. that's how i saw them as connected.

Unfortunately there's a logical error there - just because I said there was no emergent behaviour in this case doesn't mean that there is none in any case at all. But still, I'll leave the point about determinism and emergent behaviour for now, I'm not sure who's right in this case :)

you declared that where there was input and output, there was no emergent behavior, and i thought you meant that where there is determinism, there is no emergent behavior.

Again, only in that case, not in all cases.

as the only alternative to determinism is total randomness, i inferred that you thought humans were totally random because they displayed emergent behavior... which struck me as absurd.

Not really. Quantum systems aren't deterministic, but exhibit emergent deterministic behaviours. There's a fine line here, but I'm not so sure where it is...

do you see humans as deterministic? if so, what do we disagree about, again?

I think so, insofar as anything is deterministic. Whether that has anything to do with intelligence though...

i'm getting tired.

Heh. I'm going home in a minute. Probably carry on arguing later though :)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey

well that's your first mistake (5.00 / 1) (#48)
by streetlawyer on Tue May 15, 2001 at 10:22:57 AM EST

i talked about minds, which i class as software.

In other words, you make an assumption which is extremely close to your conclusion as your first step. There is no very good reason for regarding minds as software; in any case, this should be argued for rather than assumed.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever

show me the money! (5.00 / 1) (#55)
by sayke on Tue May 15, 2001 at 10:39:28 AM EST

tell me some qualitative differences between the software that runs on your body and the software that runs on your computer. please? i don't see any. they look identical to me. based on this apparent identity, i class minds as specialized software.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

free will (5.00 / 1) (#116)
by eLuddite on Tue May 15, 2001 at 01:12:59 PM EST

I class minds as software capable of free will.

I've always understood determinism as the ability to predict all action and thought according to antecedent information. Philosophically, determinism is nothing more than the belief that everything has a prior cause. When software can mimic free will it will become non-deterministic. My understanding of your position is that you would make free will an illusion, something we confuse with the complexity of the machine instead of its ghost. Ignoring for the moment philosophic objections to epistemic or logical determinism, where is there evidence in physics for strict causal determinism?

So, if someone can come up with an event that occurred without prior cause, you'd have an example of "qualitative difference," right?

Truth.

If you only relied on facts which were pertinent to the case before you, how would you assign moral praise and blame? Murder would be exonerable by determined, efficient cause.

So, explain morality using dice and I'll jump aboard your wagon. You really wont be able to do so without resorting to your own peculiar brand of machine mysticism. Why not just admit you have no more cause or evidence to dismiss the mind as software than your opposition does to insist your insistence isnt evidence?

This is tricky stuff; no amount of faith is going to get you through this.

---
God hates human rights.

Free will (5.00 / 1) (#182)
by spiralx on Tue May 15, 2001 at 03:42:26 PM EST

I class minds as software capable of free will.

A tricky definition indeed...

First off, I think we'd need some proof of some kind that free will exists and humans possess it in some form other than as a "feeling". I'm sure you know the body reacts before the brain becomes aware of stimuli, and there are also parts of the brain which if damaged cause symptoms much like a loss of "free will" for the recipient.

Ignoring for the moment philosophic objections to epistemic or logical determinism, where is there evidence in physics for strict causal determinism?

There isn't and you know it :) But equally so, where is the evidence that the brain is non-deterministic? The only physically plausible source of non-determinism in the brain known so far is quantum effects, and unless we make new discoveries about the brain's physical structure, all the evidence so far indicates that such effects are too weak and too short-lived to influence the physical behaviour of the brain.

Basically, all evidence so far supports the position that the brain is deterministic. But that's an issue aside from free will anyway.

Personally I feel taking a hard stance is premature at the moment...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey

In that case, God isnt one to judge. (5.00 / 1) (#196)
by eLuddite on Tue May 15, 2001 at 04:32:13 PM EST

there are also parts of the brain which if damaged cause symptoms much like a loss of "free will" for the recipient.

Wow, is this true? If so, can it not be explained as a loss of faculty to make any choice, period, instead of discriminating decisions between right and wrong? If we have no free will, Adam and Eve had a poor lawyer.

(I share your position against a premature hard stance, of course. I dont particularly expect a resolution anytime before the sun goes red on us, actually.)

---
God hates human rights.

True, but hazy memory :) (5.00 / 1) (#200)
by spiralx on Tue May 15, 2001 at 04:41:59 PM EST

It's true, but the thing is I can't remember the details of exactly how the victim was affected, and it's too broad a scope to search for more details. Sorry :(

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey

free will (5.00 / 1) (#191)
by speek on Tue May 15, 2001 at 04:16:33 PM EST

let's see you not die. Let's see you jump off the empire state building and not die. Let's see you drink 50 beers and not get drunk. Let's see you stop sleeping. stop eating, stop thinking about sex, etc, etc, etc. you don't really think you have free will, do you?

--
al queda is kicking themsleves for not knowing about the levees

First I must decide to jump. (5.00 / 1) (#199)
by eLuddite on Tue May 15, 2001 at 04:40:33 PM EST

That would be an exercise of free will. The death part -- that's gravity. And let me tell you, 50 beers is not enough. If it were enough, well, what can I say? Thanks for the beers and the opportunity to exercise my free will to prove you right by lying to my advantage.

---
God hates human rights.

free will, watchers from another world (5.00 / 1) (#274)
by speek on Wed May 16, 2001 at 08:15:14 AM EST

The death part -- that's gravity

The nature of free will is that it is necessarily not of this world - ie, not bound by the laws of physics. If it were, it would have causal predecessors which would contradict any claim to freeness. That being the case, I'm not sure why you would acknowledge the power of gravity to end your life. It's a physical law that would have no say over your free will.

If you argue for a separation between the physical and free will domains, and you can't help the ending of your physical life, then I would point out that that's simply an argument for the existence of a "watcher" that can have no impact on the physical world, but simply rides along, watching and experiencing. In which case, I can't argue against it (just as I can't argue against the existence of god), but I can ignore it completely, since it has no predictive or explanatory power, and is therefore utterly useless.

And, to get back to computer consciousness, if a watcher from another world can attach to a human, I see no reason why one wouldn't attach to a computer.

--
al queda is kicking themsleves for not knowing about the levees

other world? (5.00 / 1) (#330)
by eLuddite on Wed May 16, 2001 at 02:32:48 PM EST

And, to get back to computer consciousness, if a watcher from another world can attach to a human, I see no reason why one wouldn't attach to a computer.

Which is altogether different from an emergent property in a suitably complex formal system. And it doesnt have to be another world, it just has to be something that cannot be understood through the method of science. Your faith in science is endearing but it isnt good engineering. Forgive my ignorance but I always thought AI meant to engineer an artificial intelligence. If it doesnt have free will it will be unable to ascertain semantic truth.

---
God hates human rights.

irrelevant (5.00 / 1) (#334)
by speek on Wed May 16, 2001 at 03:17:09 PM EST

If it doesnt have free will it will be unable to ascertain semantic truth

Well, I wouldn't expect it to be capable of the impossible.

--
al queda is kicking themsleves for not knowing about the levees

nothing impossible about it (5.00 / 1) (#345)
by eLuddite on Wed May 16, 2001 at 04:25:57 PM EST

We do it all the time.

I never said truth was absolute. I said it cannot be grokked or manipulated without an appeal to its semantic meaning, whatever that meaning is to your machine. It remains an article of your faith that syntax is semantics and by giving the example of Truth, I have attempted to show that emerging properties not only require syntax, they also require "cognition" and monitoring of the emergent process under which semantics emerge. Why? Because truth is NOT an ideogram or any number of ideograms. There is no Truth lookup table. Repeat: Not only do you have to account for vague syntax, you have to identify it by monitoring the syntax engine. The engine can only do this if it has a semantic model of observation.

---
God hates human rights.

consistency of belief (5.00 / 1) (#402)
by speek on Thu May 17, 2001 at 10:24:35 AM EST

I mostly want people to be consistent with their beliefs, and to understand the logical implications of what they say. The Chinese room argument is not a consistent argument. On the one hand, it argues that syntax alone can never yield semantics, or meaning, but, it postulates the existence of an entity that succeeds in communicating effectively with presumably semantic beings, using nothing but syntactical rules and manipulation. I see a contradiction. you either have to say that purely mechanical processes could never succeed in such activity, or that they can, and that when they do, they are no different than us in their "consciousness", and their ability to comprehend. I don't see a middle ground.

Regarding faith - it goes both ways. Your appeal to semantics is as unfounded as my belief otherwise. An appeal to semantics is an appeal to a third party truth engine, otherwise, semantics is nothing more than reference, which makes hashtables the primary tools of intelligence. Indeed, human thought isn't rational thought so much as it is associative thought. We don't think - we associate.

I personally believe in a pragmatic model of truth - truth is what works. What gets us through the day. To not survive is to have your incorrectness corrected, and to survive is to be proven correct. The final appeal to truth is the universe itself - it can't be wrong, because it exists. There's all sorts of problems with it - the biggest one being what if the universe isn't entirely mechanical/deterministic the way I assume? I have some answers to that, but it would get long.

--
al queda is kicking themsleves for not knowing about the levees

free will, randomness, determinism, morality, etc (5.00 / 1) (#232)
by sayke on Tue May 15, 2001 at 11:03:54 PM EST

how's that for a subject line? ;)

please tell me some differences between "free will" and randomness. i sure as hell don't see any.

yes, i make an illusion of free will. i call it a useless metaphor, and say it's high time we got over it and moved on. the bare fact that i observe continuity between events is ample evidence for strict causal determinism (as any other kind smells incoherent to the point of uselessness). combine that with the fact that the only other alternative is total randomness (unless you want to show me a third option), and take into account the fact that the world doesn't look very random at all, and i think a very strong phenomenological case for determinism has been made.

sure, we could be on an *incredible* run of luck, and it's possible that all our correct predictions were nothing more than flukes... but would you call it useful to think that?

so you know, i don't assign moral praise and blame. i don't specify moral imperatives, or commands which all people are obliged to follow. i may come up with a code of conduct for achieving some particular goal, where i think that if the code is not followed the goal will not be achieved - but nobody becomes meaningfully blameworthy by not following the code. they might not achieve the goal, of course, but if they don't care about that particular goal, then they have no cause for self-reproach.

and neurons can be implemented in software, no? minds can be implemented in neurons, no? that looks to me like more than enough cause to class minds as a kind of software.
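
to be concrete about "neurons can be implemented in software": a crude leaky integrate-and-fire sketch in python. the constants are invented and real neurons are far messier - the claim is only that some level of description is mechanizable.

    def run_neuron(input_currents, threshold=1.0, leak=0.9):
        # Accumulate charge with leak; fire and reset at threshold.
        potential, spikes = 0.0, []
        for current in input_currents:
            potential = potential * leak + current
            if potential >= threshold:
                spikes.append(1)
                potential = 0.0
            else:
                spikes.append(0)
        return spikes

    print(run_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # [0, 0, 0, 1, 0, 0, 1]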


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

free will vs randomness (5.00 / 1) (#236)
by eLuddite on Tue May 15, 2001 at 11:45:28 PM EST

You're not going to like my answer but the difference between spontaneous acts and voluntary acts is -- you guessed it -- consciousness. :-) More accurately, personal interference in your phenomenal determinism. As Aquinas put it, "Shall I acquiesce or shall I resist?" Rational software must always choose good over evil and if it does sometimes acquiesce, it should feel the need to atone for "random behavior" by revisiting the scene of the crime with resistance. That's one fscked algorithm.

Back to square one.

Instead of asking all the hard questions of me, let me ask you why I should accept randomness in the absence of any hard evidence for free will? Cause, you know, free will is whole lot more attractive and, paradoxically, intuitive when seen in the glow of AI's miserable results on almost all other fronts.

---
God hates human rights.

I was randomized, officer! (5.00 / 1) (#237)
by eLuddite on Tue May 15, 2001 at 11:53:59 PM EST

why I should accept randomness in the absence of any hard evidence for free will?

Let me revisit the scene of the crime with a correction: in the absence of any hard evidence against free will.

---
God hates human rights.

what a f00kin morass (5.00 / 1) (#288)
by sayke on Wed May 16, 2001 at 10:12:36 AM EST

i agree in that i think the difference between spontaneous acts and voluntary acts is consciousness, but i see them both as perfectly deterministic, nonetheless. however, i have no idea what More accurately, personal interference in your phenomenal determinism means. what in hades is "personal interference"? do you contrast it to "impersonal interference"? and logic gates acquiesce and resist all the time, just like neurons...

i say all software chooses good over evil, all the time, although 1) it may later decide it was slightly mistaken about the goodness of certain past choices, and 2) you may disagree with it about the goodness of certain choices. this follows from the way that i think of goodness as an observer-dependent property, like greenness.

let me ask you why I should accept randomness in the absence of any hard evidence against free will?

errr, what? i hold that free will and randomness are indistinguishable. you seem to have misunderstood my position.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

exactly (5.00 / 1) (#329)
by eLuddite on Wed May 16, 2001 at 02:08:47 PM EST

errr, what? i hold that free will and randomness are indistinguishable. you seem to have misunderstood my position.

What evidence do you have that they are not distinguishable? You merely assume there isnt any difference; that assumption is the basis of *your* mysticism. Proof isnt an absence of evidence.

---
God hates human rights.

didn't notice your post =) (1.00 / 1) (#573)
by sayke on Wed Jun 20, 2001 at 03:07:56 AM EST

but i'll reply now that i did notice...

i can't distinguish em. i don't see any differences. would you like to point some out for me? because man, i haven't noticed any, and i've looked.

but i don't think you'll be able to tell me any - and not just because of the lateness of this reply, either ;)

but if you do read this, and if you would like to let me in on the differences between free will and randomness, hey, clue me in.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

Easy (5.00 / 1) (#138)
by Simon Kinahan on Tue May 15, 2001 at 01:48:57 PM EST

I can speak coherent English (I even know when to use capital letters). Computers can't. I can play Go at about a 1 kyu level. Computers can't. I can reliably recognise human faces. Computers can't. Want me to go on?

You're making a massive assumption that all these problems will turn out to be computable in the Turing Machine sense, which is almost as bad as your assumption that minds are the same kind of thing as software.

Simon

If you disagree, post, don't moderate
i said qualitative! sheesh. (5.00 / 1) (#229)
by sayke on Tue May 15, 2001 at 10:34:47 PM EST

my k6-2 500 can do lots of things that my ti-82 can't. that does not make them qualitatively different; only quantitatively different.

because neurons can be implemented on turing machines (no?), and minds can be implemented on neurons (no?), i think calling minds computable in a turing-machine sense looks more like a conclusion than an assumption.

my second conclusion (which you'd call my second assumption) follows from the first.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

Your mind maybe (5.00 / 1) (#264)
by spiralx on Wed May 16, 2001 at 06:12:54 AM EST

because neurons can be implemented on turing machines (no?), and minds can be implemented on neurons (no?), i think calling minds computable in a turing-machine sense looks more like a conclusion than an assumption.

But your mind isn't conscious as we've discussed earlier. Your mind calculates but doesn't understand what "whiteness" is, it just acts according to a Chinese room-style lookup table or equivalent.

So it may follow from your assumptions, but you're arguing using totally different definitions from everybody else. Which is at best pointless and at worst dishonest.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey

i call this conscious thing my mind (5.00 / 1) (#284)
by sayke on Wed May 16, 2001 at 09:53:23 AM EST

i say the understanding of what "whiteness" is follows from the way my mind acts like a chinese room-style lookup table or equivalent. please do not misstate my position in the future! arr! righteous anger! ;)

the definitions i use aren't terribly uncommon - i've heard them used often enough in the past, although i can't tell you exactly where i first came across em. regardless, as my definitions are at least somewhat precise, they work a lot better than the vague ones so many other people seem fond of.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

But (5.00 / 1) (#296)
by spiralx on Wed May 16, 2001 at 10:41:33 AM EST

i say the understanding of what "whiteness" is follows from the way my mind acts like a chinese room-style lookup table or equivalent. please do not misstate my position in the future! arr! righteous anger! ;)

You still haven't answered how you think the Chinese room understands Chinese! Or even how the Chinese room could cope with an entirely new concept unrelated to anything in its lookup table.

the definitions i use aren't terribly uncommon - i've heard them used often enough in the past, although i can't tell you exactly where i first came across em. regardless, as my definitions are at least somewhat precise, they work a lot better than the vague ones so many other people seem fond of.

But we haven't got vague definitions because scientists and philosophers like them, we've got vague definitions because they're describing vague things we don't understand fully! If we did, we wouldn't be having this argument after all...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey

what form would you like that answer in? (5.00 / 1) (#302)
by sayke on Wed May 16, 2001 at 10:56:14 AM EST

You still haven't answered how you think the Chinese room understands Chinese!

what kind of answer are you looking for? a provable algorithm would be nice, but i don't have one for you. sorry. remember, i've adopted this position because i think it does quite a bit better against my epistemic criteria list than any other position i know of, not because it does perfectly against it.

Or even how the Chinese room could cope with an entirely new concept unrelated to anything in its lookup table.

that's simpler. people have done neural network research in which cells are dynamically added to the network. i remember going to a lecture that talked about this. i don't have any pointers to the research on hand, but it could probably be hunted down again. while reducing the neural network abstraction to a turing machine abstraction to a lookup table abstraction is probably beyond my abilities, surely you see how it would, in principle, follow...

But we haven't got vague definitions because scientists and philosophers like them, we've got vague definitions because they're describing vague things we don't understand fully! If we did, we wouldn't be having this argument after all...

true. so let's adopt more specific definitions! like mine! ;)


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

Neural nets (5.00 / 1) (#308)
by spiralx on Wed May 16, 2001 at 11:13:42 AM EST

that's simpler. people have done neural network research in which cells are dynamically added to the network. i remember going to a lecture that talked about this. i don't have any pointers to the research on hand, but it could probably be hunted down again.

Hmmm... are neural nets Turing-complete? I assume so... But anyway, we're talking about the Chinese room right? Which doesn't have any such facility, yet you're claiming we work in that kind of way. So explain how we can deal with new concepts, whereas the Chinese room can't...

true. so let's adopt more specific definitions! like mine! ;)

But your definition of mind includes things like thermometers, rocks and atoms! And your definition of consciousness (modelling of modeller as well) only covers one thing whilst excluding other aspects of what are generally considered consciousness e.g. imagination.

So they may technically be more "scientific" but they're not defining what we're talking about...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey

equivalence (5.00 / 1) (#310)
by sayke on Wed May 16, 2001 at 11:26:06 AM EST

neural nets are turing complete, right? chinese rooms are turing complete, right? so what are you complaining about?

But your definition of mind includes things like thermometers, rocks and atoms!

baha... true. so think of "mind" like "heat" - all things not frozen in time have a little itty bitty bit, but some things have vastly more than others.

And your definition of consciousness (modelling of modeller as well) only covers one thing whilst excluding other aspects of what are generally considered consciousness e.g. imagination.

imagination occurs when your scenario-modelers start getting modeled.

but your bug-hunting is appreciated =)


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

Qualitative vs Quantitative (5.00 / 1) (#323)
by Simon Kinahan on Wed May 16, 2001 at 12:24:25 PM EST

Let's get a little more rigorous about this. Algorithms all differ from one another in qualitative respects, unless they are just the same algorithm with some values tweaked. This is true analytically, from the meanings of the terms.

Problems that might have algorithmic solutions divide into classes, which similarly differ by their qualities, most notably whether a computable solution exists, and whether the solution will necessarily halt when run. There are real problems which can be shown not to be soluble by any solution that actually halts, and others which cannot be shown to be soluble at all. These are not quantitative differences.

We have no evidence that any of the problems I named are computable. We have no evidence they are incomputable either, mind you, but they are qualitatively different from the things we have algorithms for, in that the possibility exists that they might not be.

Regarding neurons: I answered this elsewhere. We have no evidence that neurons can be implemented on Turing machines. We can't even completely characterise the behaviour of real neurons yet.



Simon

If you disagree, post, don't moderate
insofar (5.00 / 1) (#40)
by streetlawyer on Tue May 15, 2001 at 10:02:01 AM EST

as I can tell what the heck you're on about, which is not necessarily very far...

i'd call it logically incoherent to suppose that your operating system emerges from the syntactic properties of your computer, because your computer has no syntactic properties without someone around to interpret them symbolically.

You're confused. The operating system of my computer *is* a syntactic property (albeit a mindbogglingly complicated one) of the physical arrangement of switches inside its rather attractive translucent case. It's not an emergent property of any kind.

and if crude amoral thuggery is the price of honesty and internal consistency, then so be it.

Well well well, we meet again. All I'll say is that the human race has fought numerous wars on this issue since Machiavelli, and do you know what? Your side has lost almost all of them.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever

please tell me the difference between... (5.00 / 1) (#50)
by sayke on Tue May 15, 2001 at 10:30:56 AM EST

syntactic properties and emergent properties, and, more specifically, why emergent properties can't include syntactic ones.

my map/territory point was that i call "syntax" the order i perceptually impose on the universe in an attempt to make working models of it.

oddly enough, i think you're a syntactic property (albeit a mindbogglingly complicated one) of the physical arrangement of neurons (and maybe some other stuff, like nitric oxide, i guess) inside your rather peculiar meat case. of course, i call you an emergent property of the interactions between that neural structure, as well, but i call my operating system an emergent property of the interactions between switches, so...

and yea, your side has won every war, but it always fights against itself, so that doesn't say much. hell, everybody claims that god is on their side; all revolutions are people's revolutions; in all wars justice prevails, if you ask the winners... comes with the territory, man. i call it all a cheap rhetorical trick, and i throw it all out the window. the common good? your god's blessing? the scales of justice? tinkle, ding, clatter, clink...

to the extent that your goals appear to coincide with my own, i will cooperate with you. i don't need your towering legal absurdities, or your ad-hoc appeals to the mystical imperatives you pulled out of your ass, or your convoluted moral-highground rhetoric. to the extent that your goals appear to coincide with my own, i will cooperate with you; forever, and ever, amen.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

I'm afraid I'm going to have to be rude (5.00 / 1) (#54)
by streetlawyer on Tue May 15, 2001 at 10:39:17 AM EST

I'm sorry; you don't understand what syntactical properties are, I don't have time to teach you, and I'm already stretched to the limit in this thread arguing with people who do understand. So this thread has to end here. All I will say is that it is not logically coherent to suggest that human beings are in general "syntactic properties", because there is no syntax except in the context of an interpretation, and an interpretation requires an interpreter. Our goals do not coincide, for which I am profoundly glad.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
You're losing your edge, honey (5.00 / 1) (#60)
by Anonymous 242 on Tue May 15, 2001 at 10:53:33 AM EST

John Saul, John Saul, John Saul. Where art thou, my John Saul? Far from being rude, you're almost polite in your rebuttal. Where's the angst? Where's the rage? Where's the belittling of intelligence through four-letter verbs used as adverbs and adjectives?

What have you done with my precious streetlawyer?

Is he still in there?


i appreciate the civility of your rudeness (5.00 / 1) (#61)
by sayke on Tue May 15, 2001 at 10:55:13 AM EST

you appear to me as a syntactical property. i expect that i appear the same to you, and have no problem with that. i also expect that you do not appear as a syntactic property to yourself; attempting to model oneself tends to not work out so well. you may be able to interpret me, but you can't interpret yourself for reasons godel hinted at. i fail to see the difficulty of this.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

Comments (5.00 / 1) (#76)
by ucblockhead on Tue May 15, 2001 at 11:58:46 AM EST

Silicon chips could quite easily have these emergent properties, but Turing machines can't.

Mathematically speaking, silicon chips are Turing Machines, unless someone screwed up.

Indeed, it's logically incoherent to suppose they are emergent from the syntactic properties of my neurons, because my neurons don't *have* any syntactic properties without someone (me) around to interpret them symbolically.

But that begs the question of what is the you that is around to interpret them?

The computer I am sitting at is really just a bunch of switches, as you say. The same goes for the computer that hosts Kuro5hin. However, my computer does not see the Kuro5hin computer as a bunch of switches. It sees the Kuro5hin computer as an HTTP server. Everything about Kuro5hin is unknown to my computer except for its HTTP responses. For all my computer knows, Kuro5hin could just be Rusty typing real fast on a dumb terminal.

Even individual computers are made up of components that communicate using a protocol. My keyboard has a chip in it that is just a bunch of switches. The CPU is just a bunch of switches. But the keyboard communicates with the CPU using a syntactic protocol.

There is no compelling reason to assume that the brain doesn't work exactly like this. There is no compelling reason that the syntactic and semantic experiences that go on in the head are not just higher-level protocols in the brain, sitting on top of the lower-level protocol of firing neurons. (There's also no compelling proof that this is true, either, which is why I think that anyone who claims to be sure on this issue is letting their personal biases get in the way of the evidence.)
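
For concreteness, here is my computer's entire view of Kuro5hin, sketched in Python with a raw socket (the host is the one under discussion; whether it still answers is beside the point):

    import socket

    # At this layer the other machine "exists" only as bytes obeying the
    # HTTP protocol on port 80. Switches, or Rusty typing very fast --
    # nothing behind the protocol surface is visible from here.
    with socket.create_connection(("www.kuro5hin.org", 80)) as sock:
        sock.sendall(b"GET / HTTP/1.0\r\nHost: www.kuro5hin.org\r\n\r\n")
        reply = sock.recv(4096)

    print(reply.split(b"\r\n")[0])  # e.g. b'HTTP/1.1 200 OK'
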
-----------------------
This is k5. We're all tools - duxup

absolutely not (5.00 / 1) (#104)
by streetlawyer on Tue May 15, 2001 at 12:58:08 PM EST

Mathematically speaking, silicon chips are Turing Machines, unless someone screwed up.

Turing Machines are abstract entities, unless Alan Turing screwed up. Some silicon chips implement Turing machines. Some implement finite state automata which are not Turing Machines. Some silicon chips are merely amplifiers.
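
To be concrete about what "implements a Turing machine" means, here is a minimal sketch in Python, with a small invented machine that increments a binary number. A tape, a head, a state, a transition table, and nothing else:

    def run(tape, program, state="scan", head=0):
        # Pure mechanism: look up (state, symbol), write, move, repeat.
        while state != "halt":
            symbol = tape.get(head, " ")
            state, write, move = program[(state, symbol)]
            tape[head] = write
            head += move
        return tape

    INCREMENT = {
        # scan right to the end of the number...
        ("scan", "0"): ("scan", "0", +1),
        ("scan", "1"): ("scan", "1", +1),
        ("scan", " "): ("carry", " ", -1),
        # ...then add one, propagating the carry leftward.
        ("carry", "1"): ("carry", "0", -1),
        ("carry", "0"): ("halt", "1", 0),
        ("carry", " "): ("halt", "1", 0),
    }

    tape = dict(enumerate("1011"))  # 11 in binary
    run(tape, INCREMENT)
    print("".join(tape[i] for i in sorted(tape)).strip())  # 1100, i.e. 12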

But that begs the question of what is the you that is around to interpret them?

Luckily, the question of "what is me" does not arise for me, although it does for you. The fact that I exist is one of the few things that is definitely, and certainly given to me to assume as of right by the universe. The general question of what sort of thing I am, I leave to the biologists, giving them the occasional philosophical tip about what kind of question to ask.

However, my computer does not see the Kuro5hin computer as a bunch of switches. It sees the Kuro5hin computer as an HTTP server.

Your computer doesn't "see" anything at all; it switches its own switches on and off according to the signals it receives on port 80. You choose to interpret things given your own semantic knowledge, but nothing which is not a person does any interpretation at all, which includes "seeing".

But the keyboard communicates with the CPU using a syntactic protocol.

The keyboard doesn't communicate and doesn't use any protocol at all. The syntactic role of the bundle of switches on your desk is entirely imposed on it by your understanding. We need to be very careful about being thoroughly consistent in our use of language here.

There is no compelling reason to assume that the brain doesn't work exactly like this. There is no compelling reason that the syntactic and semantic experiences that go on in the head are not just higher level protocols in the brain sitting on top of the lower level protocol of firing neurons.

I'm sure that the brain does work like this. What I'm denying is that anything else does. Nothing else has syntax; or at least, it doesn't make sense to talk of "syntax" in a context where it is not interpreted. Pebbles washed up on a beach interact with each other in systematic ways, but they don't interact syntactically.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever

Absolutely not not (5.00 / 2) (#128)
by ucblockhead on Tue May 15, 2001 at 01:35:12 PM EST

Turing Machines are abstract entities, unless Alan Turing screwed up. Some silicon chips implement Turing machines. Some implement finite state automata which are not Turing Machines. Some silicon chips are merely amplifiers.
All of them can be implemented as Turing Machines, which is the point. The root of the question here is whether or not the human brain can be implemented as a Turing Machine, and thus, as any other sort of computing machine.

Luckily, the question of "what is me" does not arise for me, although it does for you.
Not quite. You know you exist. You do not know the mechanisms whereby you exist, and are aware of your existence. That was what I was trying to get at. The Greeks thought consciousness was located in the heart. Obviously proving existence is not the same as proving the "how" of existence, and that's really the question here.

Your computer doesn't "see" anything at all; it switches its own switches on and off according to the signals is receives on port 80.
Yes, and the vision center of the brain doesn't "see" anything at all; it receives neural impulses according to the light that hits the retina.

I'm sure that the brain does work like this. What I'm denying is that anything else does.
But what you've not done is shown how the two things are different. You are saying that the computer is just switches, and that higher level protocols only have meaning for the builder, but you've not really shown that brains are any different. How are brains anything but a bunch of neurons firing?

I know your answer, which is that you know this because you have a brain, and you have a consciousness. But here's the trouble, which you've as much as admitted in your post. You know that you exist and have consciousness. That is provable. You do not know whether or not I am, or anyone else is, or whether or not anything is. You make assumptions, but they are not provable. You think your neighbor is conscious because he looks and acts like you, and you are conscious. You think your chair is not, because it neither looks nor acts like you. But these are assumptions, they are not provable things. So you've got the trouble that you are drawing a line with one data point.

Which brings us back to machines. You cannot prove that a PC is not conscious. (Yeah, I don't think it is either, but we're talking rigid proof, here.) So what you do is look at how it works, and compare it to how the brain works. If you can show that it doesn't work like your brain, then you've got a very good reason to assume that it isn't conscious. (Still not proof, though.)

But to do that, you've got to dig a lot deeper than just thought experiments and subjective prejudices like the Chinese Room. You've got to essentially show something in the brain that is fundamentally different from the way a machine works.

In order to truly come up with a real answer to this question, you've either got to be able to point to something in the brain that is definitely not mechanistic, or you've got to come up with a complete mechanistic description of the brain. Until then, it is just belief and navel gazing.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

your standard of proof is based on an error (5.00 / 1) (#137)
by streetlawyer on Tue May 15, 2001 at 01:48:08 PM EST

Why do I have to do all these things? I'm not the one saying something wildly counterintuitive! You go and prove it the other way!

More formally: you're demanding objective, third-person proof of an objective, but first-person phenomenon. It is entirely possible that a computer (and indeed, a cucumber sandwich) has an inner life, but if it does, it does so independently of its interpretation as a finite state automaton. It is not possible that the syntactic manipulations of an implemented Turing machine constitute an inner life, because that would imply that the switchings on and off had content independent of their interpretation, which would imply the metaphysical existence of meanings of switch-sequences, which are far more ridiculous entities to postulate than simple conscious experiences.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Your mistake (5.00 / 2) (#146)
by ucblockhead on Tue May 15, 2001 at 02:00:54 PM EST

You go and prove it the other way!
Your mistake is assuming that I am arguing for the contrary position, when in fact, I am arguing that the question is currently undecidable. The question is not provable either way, and neither way is, in my mind, more or less counterintuitive. The other side is actually the simpler theory because it does not posit some unexplainable thing.

The one side says that there is this thing, "consciousness", that somehow exists separate from mechanistic processes, despite being unable to point to that thing, or show how it works in any meaningful way.

The other side says that there is this property, "consciousness", that somehow emerges from a combination of simple, mechanistic processes, despite being unable to show how such a thing would work.

Both contentions are, in my mind, equally lame. Both are equally unsubstantiated, and the reason that people discuss it with such heat is that human beings are almost invariably incapable of saying "I don't know".
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

the second is correct (5.00 / 1) (#151)
by streetlawyer on Tue May 15, 2001 at 02:08:55 PM EST

Consciousness arises from brains, which, as far as we know, are deterministic (mechanistic), and nothing Roger Penrose has written has convinced me otherwise. What people like John Searle (and me) are denying is that certain other types of mechanistic process (Turing machines) are capable of entering into the right kind of causal relations which could possibly give their states semantic content.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
But that's the same damn thing! (5.00 / 2) (#161)
by ucblockhead on Tue May 15, 2001 at 02:34:10 PM EST

But if you can't find some weird-ass quantum effect like Penrose was trying to do, or give up on materialism entirely, you've lost the argument.

If the process is deterministic in the sense that X always leads to Y, then a Turing Machine can simulate it exactly. And if a Turing Machine can simulate it exactly, then consciousness becomes a sort of ghost in the machine, with nothing to attach itself to, and no reason for existence other than our own experience of it. More important to this particular argument, if a Turing Machine can simulate it exactly, then it is theoretically possible to build a machine that simulates it exactly. Give such a machine a mouth, and it will claim it is conscious just the way you or I do, and we end up being unable to give any reason for it not being conscious other than the fact that it isn't made of neurons.


-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Not at all (5.00 / 1) (#164)
by streetlawyer on Tue May 15, 2001 at 02:41:17 PM EST

A simulation ain't the real thing. A Turing machine can simulate a hurricane, but it can't blow the doors off. We can claim it ain't conscious for the simple reason that it's a Turing machine, and that if we weren't around to interpret its mouth squawking, it would just be a diaphragm flapping in a complicated way.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
Well, you see.... (5.00 / 3) (#170)
by ucblockhead on Tue May 15, 2001 at 02:57:51 PM EST

The second you wire up the "simulation" to a body with ears, eyes and a mouth, you've ceased to have a simulation, and instead have something that walks, talks and acts just like a human being. It ceases to be a simulation, and becomes a different implementation. If it kicks you in the ass, it hurts just as if a human being kicks you in the ass.

Yeah, sure, you can say that there's nothing behind the eyes, and that it is just a mechanism causing a diaphragm to flap. But then, you can also say your neighbor is just a bunch of meat, with nothing behind the eyes. Yeah, that seems ridiculous, until you watch people discuss whether or not chimpanzees, dogs, or rats have consciousness.

So your claim becomes that this thing, which acts just like a human being, does not possess an attribute that you claim that all human beings possess.

I only see two ways that can work. One is to assume what Penrose did, and claim that it is impossible to do such a thing in the first place. In other words, claim that you cannot simulate a human brain with a Turing Machine. The only real way to do this is to assume a process that has a random element, which is why he talked about quantum effects. That's the only apparent randomness in nature.

The other is to say that this attribute is somehow separate from the functionality of the thing, and thus has no effect on what the thing does. But that does not seem to me to be very satisfying, because suddenly consciousness loses any sort of purpose whatsoever. It becomes something that is just along for the ride.


-----------------------
This is k5. We're all tools - duxup
[ Parent ]

zombies (5.00 / 1) (#252)
by streetlawyer on Wed May 16, 2001 at 02:38:36 AM EST

and instead have something that walks, talks and acts just like a human being.

She takes just like a woman, yes, she does
She makes love just like a woman, yes, she does
And she aches just like a woman
But she breaks just like a little girl.

Only the first two lines of this would be true about a robot woman of the kind you describe.

It ceases to be a simulation, and becomes a different implementation. If it kicks you in the ass, it hurts just as if a human being kicks you in the ass.

But not vice versa. Simulated pains don't hurt.

However lifelike the simulation, the only meaning in the utterances of such a Turing machine would be that which we attribute to them. To slowly crush such a creature until it stopped functioning would be a waste of money, but not morally bad.

I have perfectly good evidence that human neurons have the causal power of semantic representation, because that's how they work in me. Pending huge advances in neuroscience, I have no reason to believe that silicon switches have any such power. My neurons can't have representative power by virtue of any functional or syntactic properties of theirs, because they don't _have_ any syntactic powers unless there is someone (me) to interpret their physical changes. Therefore, they must have this power by virtue of some other property which we don't understand, and which we therefore have no reason to attribute to anything other than that of which we have first-hand evidence.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

So then... (5.00 / 2) (#294)
by ucblockhead on Wed May 16, 2001 at 10:37:41 AM EST

What about a chimpanzee? A rat? A cockroach? They've all got neurons... Which of them are conscious, and how can you tell?

But the more important question is, if your neurons can't have representative power of their own, but only have such power because you are there to interpret their meaning, then what are you?

Unless you propose the existence of a mind that is somehow physically differentiated from the brain, the thing doing the interpreting of the neurons is just more neurons. It is just a group of neurons that gives meaning to the patterns of other groups of neurons.

There's really no other conclusion you can reach without moving the "mind" out of the physical brain.

And if semantic meaning is just one bunch of neurons interpreting patterns in another bunch of neurons, it becomes far less clear what the difference is with one hunk of silicon interpreting patterns in another hunk of silicon. So then you are left with the same old thing, that all human brains must be conscious, and all non-brains must be unconscious, because you have a brain and are conscious. But what you are doing there is extrapolating from one data point. You only have data about the state of consciousness of one object in the universe. All the rest is extrapolation and intuition.


-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Circular argument (5.00 / 1) (#318)
by ucblockhead on Wed May 16, 2001 at 12:08:33 PM EST

But not vice versa. Simulated pains don't hurt.
I realized while at the gym how clearly circular this argument is. It doesn't feel pain because it isn't conscious. It isn't conscious because it doesn't really feel pain...

Now I know your answer, which is that you know the manufactured brain is just switches and things. It is a mechanical thing that you can understand.

But isn't a brain just that as well? What is magic about ions and neurotransmitters travelling in axons as opposed to electrons travelling in wires?

You claim to be a materialist, but you sure don't act like one. You've just managed to hide your elan vital in the terra incognita of neurochemistry.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Ah, consciousness (5.00 / 1) (#169)
by leviathan on Tue May 15, 2001 at 02:52:25 PM EST

Axioms are fun. I think we're dealing with axioms here, because I've been thinking about these things for a long time and haven't been able to prove or break down any of them, except in terms of other potential axioms.

Both your contentions (and unless someone disproves one, they may as well be axioms in some system) depend on the definition of consciousness. The definition of it I've seen most often is that it's the ghost in the machine; the difference between us and a zombie version of us which appears exactly like us, but is simply a machine. This definition is in favour of your first contention and assumes that we aren't that zombie. We must be different and therefore man isn't a machine - because a zombie (which no-one has ever seen) isn't conscious.

The opposite is to say that we are that zombie, and since consciousness has no discernible effect, it either doesn't exist or arises from what makes us act as us (and consequently what makes the zombie the same). That is to say, man is a machine.

Of course, all this is flawed because consciousness can be proved to no one other than yourself. It has no discernible effect upon anyone but yourself. For many hundreds of years people have been trying to define it, and it never works until you bring religion into it.

If you're determined to be either agnostic or atheistic (sp?) about it, you cannot define consciousness so you cannot prove whether there is a difference between a zombie (clone?) of us and us, so you cannot prove (or disprove) the contention of this entire discussion. I think you're precisely right, but unfortunately I suspect people are liable to argue about it for hundreds of years more.

--
I wish everyone was peaceful. Then I could take over the planet with a butter knife.
- Dogbert
[ Parent ]

Perl? (5.00 / 1) (#134)
by priestess on Tue May 15, 2001 at 01:42:58 PM EST

Nothing else has syntax; or at least, it doesn't make sense to talk of "syntax" in a context where it is not interpreted
But surely syntax is just rules about which symbols are allowed to follow which other symbols. Syntax is often written in BNF, yeah?

The Perl executable interprets a Perl program, which has syntax. If you screw the syntax up you even get a Syntax Error. All of this is reduced to switches and voltage levels underneath, sure, but you have not shown me anything that you can do that isn't reducible to those same things. Indeed you probably can't, since all I see of you are pixels on a screen anyway.
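A minimal sketch of that point in Python (the grammar and tokens are invented for illustration): a checker that accepts or rejects strings purely on form, the way a parser flags a Syntax Error, with no meaning attached to any symbol.

    # A BNF-style grammar checked mechanically. Nothing here "means" anything;
    # the checker only tests which symbols may follow which others.
    #
    #   <sentence> ::= <noun> "wants" <noun>
    #   <noun>     ::= "polly" | "cracker"

    NOUNS = {"polly", "cracker"}

    def is_sentence(tokens):
        """Purely syntactic test: noun 'wants' noun."""
        return (len(tokens) == 3
                and tokens[0] in NOUNS
                and tokens[1] == "wants"
                and tokens[2] in NOUNS)

    print(is_sentence("polly wants cracker".split()))  # True
    print(is_sentence("cracker wants polly".split()))  # True -- well-formed, meaningless
    print(is_sentence("wants polly cracker".split()))  # False -- a "syntax error"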

Maple uses syntactic rules to manipulate equations (with a few heuristic guesses thrown in there to figure out which rules to apply), and unless you somehow do maths in a different way to me, it treats pi in the same way we do: as a symbol which can be moved around from one place in an equation to another without affecting the logical truth of that equation.

Mathematicians don't bother to calculate X when X is higher than a few hundred, they just abstract it out, and computer code like Maple does the same thing.

Pre.......

----
My Mobile Phone Comic-books business
Robots!
[ Parent ]
symbols (5.00 / 1) (#147)
by streetlawyer on Tue May 15, 2001 at 02:03:23 PM EST

But surely syntax is just rules about which symbols are allowed to follow which other symbols.

A symbol isn't a symbol unless it symbolises something. An instrument which turns electrical switches on and off does not have either symbols or syntax. Syntax is a guide to the *interpretation* of those switches, but this can only be done by something which is capable of interpreting. A machine can't do this interpretation, because it doesn't have any syntax or semantics itself unless it is itself interpreted by something with this ability.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Syntax/Semantics (5.00 / 1) (#160)
by priestess on Tue May 15, 2001 at 02:32:25 PM EST

No, I contend that a symbol can still be a symbol if it represents nothing, everything, or itself. Maybe there's a different word for a symbol when it's just being itself and represents nothing at all. If I draw a curve in the sand and don't tell you what it represents, you can still manipulate it.

Syntax is a guide to which of these symbols are allowed to appear following which others, in the usual case, and simply how they may be manipulated in the general.

The 'guide to interpretation' is surely semantics, much harder to write down than BNF is. That's what I was always taught anyway. Maybe this is different in English Comp Sci to American Comp Sci, or they changed the definitions since I graduated, though I doubt it.

My CPU is capable of manipulating the meaningless switches in its memory according to rules, but it's incapable of understanding them, agreed.

The difference seems to be that you contend that it's IMPOSSIBLE to make a Turing complete machine (with no extra parts) which will be able to not only manipulate syntax but also comprehend semantics. I don't know, though I suspect that it is possible, since it looks to me very much like we will be able to take a brain apart and see how it manipulates its meaningless switches or neuron firing rates or whatever. When we've done that and have a machine which is reacting like a human and passes the Turing Test, you will still be insisting it doesn't really understand because it's just manipulating these switches, and I'll certainly reply that it appears it's you who doesn't understand this. Probably because you're just manipulating neuron firing rates in that sponge of yours.

Pre.........

----
My Mobile Phone Comic-books business
Robots!
[ Parent ]
not all that helpful (5.00 / 1) (#509)
by Kaki Nix Sain on Fri May 18, 2001 at 06:37:42 PM EST

"A symbol isn't a symbol unless it symbolises something."

An X isn't an X unless it X's something? Gee, that doesn't seem all that helpful. Come on streetlawyer, I know you can do better than defining something in terms of its selfness or selfing.

Well, at least you could before you ran off to your new userid to avoid talking to us all in a straightforward manner. Or maybe the history got to be a burden (understandable). Still, our histories are what make us who we are.



[ Parent ]

formal grammars (5.00 / 1) (#154)
by eLuddite on Tue May 15, 2001 at 02:17:11 PM EST

But surely syntax is just rules about which symbols are allowed to follow which other symbols. Syntax is often written in BNF, yeah?

That is formal syntax and its existence is imposed from without, not within.

---
God hates human rights.
[ Parent ]

But (5.00 / 1) (#163)
by priestess on Tue May 15, 2001 at 02:37:40 PM EST

its enforcement can still be done by purely mechanistic methods; when I was first learning to program, the Spectrum had a better idea which button I should press next than I did, thanks to its context-sensitive syntax checking.

All the understanding in your head is also imposed from without; you learn from your environment, you don't teach it.
Pre...........

----
My Mobile Phone Comic-books business
Robots!
[ Parent ]
syntax != understanding syntax = ordered structure (5.00 / 1) (#189)
by eLuddite on Tue May 15, 2001 at 04:00:04 PM EST

The Spectrum was able to second guess you because the syntax in dispute was formal and because the programmer programmed it to mimic his own understanding. Rather fortuitously -- otherwise the computers would be quite useless -- formal grammars can have only a single understanding. Natural language, to understate the challenge, is not so easy to second guess from its syntax.

All the understanding in your head is also imposed from without; you learn from your environment, you don't teach it.

How can your environment teach you understanding? I assume you are smarter than the objective facts of your environment in order to make sense of them, no? Right from the get-go, where does knowledge of I, the mind's eye, come from? Transcendental number? Aleph? Logic? How about things we do not understand, like black hole singularities or God?

Parrots don't understand that "Polly wants a cracker" means the noun Polly meaning me, verb wants, object a hunger thingy not a thirst thingy, do they? (They might. The point is there isn't enough information in syntax to figure any of this out.) If two parrots became accidental neighbors and one of them squawked "Polly wants a black hole singularity," would the other protest "I am Polly, not you, and I want a cracker, because a black hole singularity is rather inedible and therefore a poor metaphor for a cracker, if indeed you were trying to make a metaphor out of an empty gullet"?

---
God hates human rights.
[ Parent ]

Semantics (5.00 / 1) (#202)
by priestess on Tue May 15, 2001 at 04:48:30 PM EST

Okay, you seem to be using the word Syntax where I'd just use the word Semantics since as I understand Syntax it's precisely devoid of all MEANING, it's just about symbols which can be mechanically manipulated.

Probably Comp Sci claimed the word from Philosophy and redefined it, but I think the Comp Sci definition is more useful and precise so I prefer that.

Polly has no idea what a cracker is, and nor does Microsoft Word, but Word's understanding is a level above Polly's: it does understand the syntax and can (to some degree; it always tells me my sentences are too long) check the grammar. At least it knows Cracker is a noun.

Well, it appears to anyway; actually it's just manipulating switches, but the point is that this is all PEOPLE do too: we have neurons which fire, or not, depending on a sum of the inputs (more or less).

The environment teaches understanding the same way the scientific method learns to model reality increasingly well. You start with a random hypothesis, and refine it when it turns out to be wrong.

where does knowledge of I, the mind's eye, come from?
We're heading out into philosophical fancy, and we're way out of the grounds of science here, but my guess, for which I can offer almost no evidence and which is obviously hard to word, goes like this. When a person says they grok something, they're talking about some internal state of neurons which causes a restructuring of the rest of the brain; this is wired (thanks to evolution, it's USEFUL) to feel pleasant, reinforcing whatever it is they grok in a very Skinneresque way.

So knowledge of 'I' exists because it's useful, but it's probably not a great deal more than, and is similar to, Microsoft Word's 'knowledge' that Cracker is a noun. We happen to have also developed ways to talk about our own functions.
A mind basically consists of a model of reality, and if there is such a thing as 'I', it'll have predictive power; when it helps us predict, it reinforces that part of the model, makes that particular neurological path more likely.
Transcendental number? Aleph?
My philosophy went as far as reading most of Bertrand Russell's History Of, and I'm not sure I do have a concept of transcendental number or Aleph. What are they?

Logic? How about things we do not understand like black hole singularity or God?
This is when the model building towers up like a house of cards; our brains are so big they tend to masturbate a lot (like I am now) and we get things like this, but the state of grokking something is SO useful that it, rightly, feels real good. Maybe a black hole singularity will turn out to have some value, heck, maybe even God will, but the fact we can't really experiment on them gives us no way to knock those cards down, and they feel good, damnit.

Everyone is asking "Can a computer be conscious?", but I'm not convinced that even Human consciousness, even MY consciousness, is anything more than a useful illusion that's very deeply ingrained, probably in software AND wetware.

But I won't claim to know any of this; if someone manages to figure out what the brain does that a Turing machine can't, I'll be happy as hell. It's just that I get my strongest grok chemical reaction when I think there's nothing there but syntactic manipulation on an INCREDIBLE scale. Brains are damn huge.
Pre...........

----
My Mobile Phone Comic-books business
Robots!
[ Parent ]
Briefly (5.00 / 1) (#208)
by spiralx on Tue May 15, 2001 at 05:19:30 PM EST

Probably Comp Sci claimed the word from Philosophy and redefined it, but I think the Comp Sci definition is more useful and precise so I prefer that.

But we're using the philosophical definitions here, so you using Computer Science ones isn't useful, it doesn't add to the debate :)

Polly has no idea what a cracker is, and nor does Microsoft Word, but Word's understanding is a level above Polly's: it does understand the syntax and can (to some degree; it always tells me my sentences are too long) check the grammar. At least it knows Cracker is a noun.

No, Word has no superior understanding of a cracker, it has no understanding of a cracker because such is contained in the semantics. It simply has it stored in a lookup table, and has a (programmed) ability to manipulate it syntactically. Well, I suppose streetlawyer would argue the latter, but I'm not as fussy...

My philosophy went as far as reading most of Bertrand Russell's History Of, and I'm not sure I do have a concept of transcendental number or Aleph. What are they?

Transfinite numbers are the infinities. Aleph null is the cardinality of the natural numbers; the cardinality of the set of all subsets of the naturals is two to the aleph null (the cardinality of the reals), which is aleph one if the continuum hypothesis holds, and so on. IIRC of course. A search for Hilbert's Hotel would give you more info probably, it's a good way to get a handle on the basic concepts...
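In standard notation (a sketch, assuming the usual set-theoretic definitions):

    \aleph_0 = |\mathbb{N}|, \qquad
    2^{\aleph_0} = |\mathcal{P}(\mathbb{N})| = |\mathbb{R}|, \qquad
    \text{CH: } 2^{\aleph_0} = \aleph_1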

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Metaphor (5.00 / 2) (#218)
by priestess on Tue May 15, 2001 at 06:17:12 PM EST

Of course, when I say that Word has an understanding that Polly is a Noun and Want is a verb and Cracker is a noun and it understands that Noun-Verb-Noun is a valid sentence, I'm using a metaphor for the complex symbol manipulation hidden inside the machine.

The thing is, I suspect that Human Understanding is entirely similar: a metaphor for the complex neural processes hidden in the wetware. People say this is rubbish because they feel the understanding, they experience it, but then people feel the presence of God, and they hallucinate, and they dream, and they experience deja vu, and they misunderstand and misinterpret data and god knows what else.

Sure, the process is many orders of magnitude more complex, and we don't understand exactly what that process is (the same as my mother doesn't understand how Word knows English grammar better than she does), but we should be very careful that our language, our house of metaphor cards, combined with our ignorance, isn't mistaken for God's own truth.

You see a similar thing when David Attenborough says things like "the animals were being eaten so they evolved a poison to deter their predators". People misunderstand and think that some animal DECIDED to alter its genes, but of course it's just a metaphor for the random mutation combined with natural selection which drives evolution.

Don't misunderstand: the metaphor still represents a real thing. There are instructions in Word's exe that represent this 'understanding', just as there are processes in the brain that we interpret as semantics, but at the bottom of it all we still just have neurons firing, hormones raging, patterns forming and collapsing, etc. The word 'Understand' itself is part of our language, and our language is nothing but metaphor, simile and symbols; how can we assume that just because we 'feel different' to a mega-computer, we know this subjective feeling is right? Especially when we don't know how a computer feels. It's not replicable, it's not explicable, it's not even observable; we're all just hallucinating!
Pre..........

----
My Mobile Phone Comic-books business
Robots!
[ Parent ]
*heh* (5.00 / 1) (#247)
by streetlawyer on Wed May 16, 2001 at 02:25:07 AM EST

Okay, you seem to be using the word Syntax where I'd just use the word Semantics since as I understand Syntax it's precisely devoid of all MEANING, it's just about symbols which can be mechanically manipulated.

Probably Comp Sci claimed the word from Philosophy and redefined it, but I think the Comp Sci definition is more useful and precise so I prefer that.

Errrrmmm ... if these concepts weren't available in philosophical logic already, how would Turing and Goedel ever have got off the ground?

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Semantic *vs* syntactic (5.00 / 1) (#26)
by DesiredUsername on Tue May 15, 2001 at 09:26:03 AM EST

"The computers he describes manipulate symbols, not concepts; they manipulate syntactically, not semantically."

What's the difference between a symbol and concept? What's the difference between syntax and semantics?

"And the sense in which a human mathematician uses pi has very little to do with computation..."

Which is exactly my point. Humans do "conceptual thinking" while we leave computers to work on the low-level stuff. Doing a comparison on a coincidental state of affairs is fallacious. Using the same logic I could prove that children learning long division can have no concept of pi--"they're just manipulating numbers". Yet I think you would agree that they could eventually learn it. Why can't a (sufficiently advanced and suitably programmed) computer?

"...someone who thinks that the work of abolitionism is finished when the slaves are free, without committing to a further program to undo the injustices of slavery is every bit as much of a hypocrite as a libertarian."

Someone who thinks either that libertarians have no concern for the past or that injustices can ever be "undone" (vs redressed at a cost to the innocent) is as hopelessly simplistic as John Searle.

Play 囲碁
[ Parent ]
"as hopelessly simplistic as Searle" (5.00 / 1) (#31)
by streetlawyer on Tue May 15, 2001 at 09:43:27 AM EST

I hope that there was some level of self-irony there; you cannot seriously imagine yourself as being in a position to call Searle "simplistic".

What's the difference between a symbol and concept? What's the difference between syntax and semantics?

Kuro5hin is a very poor place to learn philosophical logic, so you really need a textbook for this. It's the difference between a word and its referent; between the letters "Fido" and the dog Fido. Have another look in your Douglas Hofstadter book under "use and mention".

Using the same logic I could prove that children learning long division can have no concept of pi--"they're just manipulating numbers". Yet I think you would agree that they could eventually learn it. Why can't a (sufficiently advanced and suitably programmed) computer?

Depends what you mean by "computer". If you mean "artificial brain", no reason at all. If you mean "Turing machine or logical equivalent", the answer is - because all that it can do is syntactic manipulation. It lacks the causal power which children possess of being able to have first-person experiences with semantic content. That's something I'm prepared to regard as a fact about the universe, unless someone comes up with a simpler explanation which isn't logically contradictory.

Someone who thinks either that libertarians have no concern for the past or that injustices can ever be "undone" (vs redressed at a cost to the innocent) is as hopelessly simplistic as John Searle.

I like your exposition of the libertarian position on injustice and have amended my .sig accordingly.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

You misunderstand (5.00 / 1) (#43)
by DesiredUsername on Tue May 15, 2001 at 10:07:26 AM EST

"...you cannot seriously imagine yourself as being in a position to call Searle "simplistic"."

I consider his Chinese Room argument to be simplistic.

"It's the difference between a word and its referent; between the letters "Fido" and the dog Fido."

Yes, I know this. What I'm asking is "are they really as different as you think". Take another look in *your* copy of Hofstadter for examples of a fine gradation between the two.

"[A computer] lacks the causal power which children possess of being able to have first-person experiences with semantic content. That's something I'm prepared to regard as a fact about the universe, unless someone comes up with a simpler explanation which isn't logically contradictory"

Are you also prepared to explain what a "causal power...of being able to have first-person experiences" *means*? Or is this just another one of your unexamined blocks of philosophical jargon with no internal structure?

As for the sig: I don't think it's so much that libertarians claim to be innocent as that they figure "they can fend for themselves" therefore there is nothing to be innocent *of*. Something I don't necessarily agree with, at least when applied to the present.

Play 囲碁
[ Parent ]
again, disbelief (5.00 / 1) (#47)
by streetlawyer on Tue May 15, 2001 at 10:18:39 AM EST

I consider his Chinese Room argument to be simplistic.

I hope there's a degree of self-irony here; you cannot seriously regard yourself as being in a position to call the Chinese Room argument simplistic.

What I'm asking is "are they really as different as you think". Take another look in *your* copy of Hofstadter for examples of a fine gradation between the two.

Well your answers are, yes they are and I've looked; it isn't there. Hofstadter gives lots of examples where it's difficult to see the join between syntax and semantics, but that doesn't mean that there's a gradation there, any more than a shell game is one step along the road to teleportation. It's always possible, with sufficient care, to find the "join" -- the stage where purely physical phenomena are given syntactic roles, and where syntactic manipulations are interpreted semantically.

Are you also prepared to explain what a "causal power...of being able to have first-person experiences" *means*?

I'll leave that to people much cleverer than myself; in principle yes, but we don't know anything like enough about neuroscience to even ask the right questions.

Or is this just another one of your unexamined blocks of philosophical jargon with no internal structure?

Far, far better unexamined blocks of philosophy than *over*examined blocks of analysis of a question which is wrongly posed. Men in a cave seek enlightenment. But a fly trapped in a bottle will attempt to fly toward the light even though it keeps hitting its head on the glass. What it needs is *endarkenment*, the unasking of an impossible question. Then it can fall downward, out of the bottle and be free. (Ludwig Wittgenstein, Philosophical Investigations.)

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

So...no answers (4.50 / 4) (#53)
by DesiredUsername on Tue May 15, 2001 at 10:38:56 AM EST

You gave 4 responses. Number one was an argument based on "you cannot seriously regard...". Number two was "you can always tell". Number three was a non-sequitur (I asked what "causal power" means, you respond with comments about neuroscience). Number four responds with a poor analogy that, in any case, doesn't address the question.

If you didn't want to discuss the matter, why did you respond to the original article?

Play 囲碁
[ Parent ]
grow up (4.20 / 5) (#59)
by streetlawyer on Tue May 15, 2001 at 10:50:26 AM EST

Grow up.

Number one was a request to you not to be so bloody arrogant as to dismiss the towering achievement of a man's career, a piece of work which has attracted more than one hundred separate responses in published philosophical journals as "simplistic". You are simply not in a position to judge John Searle's philosophy.

Number two follows from the fact that I can tell in all of Hofstadter's examples, therefore it is possible to tell, and neither he nor you have given any inkling of how a case might be constructed in which I could not. This is a perfectly adequate response to your impudent suggestion that I reread a book which I understood perfectly first time. Indeed, you weren't making an argument at this point; just using your own lack of understanding as if it were a failing of mine.

Number three was not a non-sequitur, and if you'd thought about it you'd never have made a fool of yourself in this way. Whatever the causal power of the brain is which allows it to a) have consciousness and b) represent things semantically, it is either a neurological fact, or a non-physical one. We are agreed that it is not non-physical; therefore any explanation of it depends on neuroscience. We do not currently know the answer to the question; therefore either it will be found by future advances in neuroscience or it will never be found at all.

Number four was an attempt to expand your horizons, in response to a rather insulting jibe of yours. I am trying to suggest to you that not every question can be answered; some questions, including most philosophical ones, presuppose an entirely wrong representation of the problem and therefore need to be unasked.

Why don't you grow up and realise when someone is trying to discuss a subject like an adult, instead of mewling like a baby for someone to answer your question exactly as you asked it, and then squealing with childish pleasure when told it's impossible, like a seven year old who's discovered his first riddle?

For Pete's sake, and your own, grow up. Or alternatively, keep your earlier promise to stop discussing things with me. And take my freaking user information page out of your bookmarks. Pretend that killfiles have been implemented on k5, and put me in yours.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

*I* should grow up? (5.00 / 3) (#72)
by DesiredUsername on Tue May 15, 2001 at 11:46:35 AM EST

I note that "spiralx" has rated your comment +5. Talk about immature.

Play 囲碁
[ Parent ]
So? (5.00 / 1) (#74)
by spiralx on Tue May 15, 2001 at 11:54:11 AM EST

What's that got to do with the point he's making. You asked for answers, and you got them. Why are you still whinging? Either respond or shut up.

And why do I get quotes around my user name?

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Searle's towering achievement... (5.00 / 1) (#126)
by _cbj on Tue May 15, 2001 at 01:29:37 PM EST

I don't think Searle himself would consider his attempted attacks on strong AI anything more than a hobby. His "towering achievement", at around eight stories, was his philosophy of language work. In AI he's a fumbling drunkard, motivated by some deep personal fuckedupedness rather than the pursuit of truth (as we all are, but he's particularly spoilsportish about it). The reason the Chinese Room attracted so many counterarguments was because Searle drew wonderfully faulty conclusions from it on so many levels.

Look at his homepage. It's awfully hard to find anything about The Chinese Room. This is the man who once replied to David Chalmers with a line about how strong AI implied panpsychism, which, being "obviously absurd," refutes strong AI, so we're doing him a favour if we don't treat him as a God where he speaks as a layman.

(Agree about the Libertarians though. One of the most shortsighted philosophies around, loved by teenage Americans everywhere. "Go" "figure".)

[ Parent ]
"fumbling drunkard????" (5.00 / 1) (#132)
by streetlawyer on Tue May 15, 2001 at 01:39:01 PM EST

Perhaps "towering achievement was a bit much". But the Chinese Room is just an extension of the distinction between syntax and semantics, which was his major contribution, so it's doubly stupid to think of it as "simplistic".

Is your disagreement with his rejoinder to Chalmers that Strong AI *doesn't* imply panpsychism, or that panpsychism isn't absurd? Personally, I thought it was a quite clever and effective way of making the same point about syntax and semantics which drives the Chinese Room; that syntax is relative to an interpretation (and therefore, since a rock implements all finite state automata, there is an interpretation under which a rock is a mind).

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Oh, sorry... (5.00 / 1) (#204)
by _cbj on Tue May 15, 2001 at 05:00:32 PM EST

Had I known reading his doctoral thesis would have wallpapered over some of the cracks in the Chinese Room, I surely would have.

Regarding his "strong AI implies an absurdity" bit, he confused a "2+2=5" absurdity with a "benevolent bearded god created the universe" absurdity; the kind of untestable thing that you can't go about using formally. It was in the middle of one of many barely coherent, certainly incohesive rants I've seen from him on the subject. Yet for all those, he'll urge us to accept the findings of neuroscience, should they find anything. So why such dogma?

Maybe he believes AI to be an unworthy goal compared to understanding the human mind. I'm inclined to think the guy just wants really badly for AI not to be possible. He read the wrong kind of science fiction as a child, or caught his mother using a progenitor of the walking, talking vibrator. One with valves. (Still talks more sense than Penrose and Lucas though ;)

[ Parent ]

he's not confused (5.00 / 1) (#246)
by streetlawyer on Wed May 16, 2001 at 02:22:42 AM EST

I'm not sure what you mean about a "doctoral thesis"; AFAIK, all Searle's work on the subject is published in journals.

And he may be *wrong* on the subject, but is certainly not *confused* in the case you mention. Searle believes that strong AI implies panpsychism for strictly logical reasons (he believes that a rock implements every finite state automaton, a controversial but defensible proposition). He certainly does think that it implies this in the same way that 2+2=4.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Yeah, he is (5.00 / 1) (#271)
by _cbj on Wed May 16, 2001 at 07:09:32 AM EST

Searle believes that strong AI implies panpsychism for strictly logical reasons (he believes that a rock implements every finite state automaton, a controversial but defensible proposition). He certainly does think that it implies this in the same way that 2+2=4.

Yes, strong AI does imply panpsychism. It's thinking that panpsychism is absurd that's confused. It would seem to stem only from a failure to understand how free will v. determinism is a non-question. Or just speaking too casually.

I'm not sure what you mean about a "doctoral thesis"

His was about the philosophy of language, the foundation of his towering achievement, apparently required reading before picking at the Chinese Room. Obscurity of reference defeated by heightened sarcasm, sorry.

[ Parent ]

Sense and Reference (5.00 / 2) (#129)
by Simon Kinahan on Tue May 15, 2001 at 01:37:08 PM EST

What's the difference between a symbol and concept? What's the difference between syntax and semantics?

The difference between seeing the moon and seeing the word "moon". The best way to say it is: semantics is what happens in your mind when you see something that refers to something else. Syntax is just the rules for putting references together to create new meanings. You can add as much description about the moon as you like, even to the point of being able to approximately predict the moon's position, but your description will never be a bloody great ball of rock in the sky. In just the same way, your description never becomes the experience a human being has when looking into the sky and seeing the moon. To believe putting a syntactic description into electronic form and getting a computer to manipulate it somehow makes a difference to this is to make an enormous assumption.

You're making a basic logical error when you say Hofstadter's programs convinced you computers can manipulate symbols. They can't. We can just make them appear to, by getting them to arrange bits of phosphor or liquid crystal in the right way.

The final logical conclusion of the position you are taking is claiming not to be conscious. I can't convince you that you are, since we have no objective way of verifying consciousness by looking at the brain - though one day I assume we will - but you know you are, and we have no evidence at all that you can make a computer conscious by getting it to appear to manipulate symbols.

Simon

If you disagree, post, don't moderate
[ Parent ]

Uh... (none / 0) (#220)
by delmoi on Tue May 15, 2001 at 06:49:40 PM EST

You're makimg a basic logical error when you say Hofstadters programs convinced you computers can manipulate symbolc. They can't. We can just make them appear to, by getting them to arrange bits of phosphor or liquid crystal in the right way.

If that's the case, how is a human doing anything other than 'appearing' to be intelligent by making noises with a voice box?
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Its a reasonable assumption (5.00 / 1) (#319)
by Simon Kinahan on Wed May 16, 2001 at 12:09:54 PM EST

That other human beings are conscious because their behaviour and appearance are very like ours, and we know we are conscious. As yet, we do not have the basis for assuming the same of computers, since their behaviour is not very similar to ours, unless you restrict the problem domain in rather peculiar ways you would never do for a human being.

Simon

If you disagree, post, don't moderate
[ Parent ]
Oh, the humanity! (5.00 / 1) (#71)
by Office Girl the Magnificent on Tue May 15, 2001 at 11:46:26 AM EST

<humor>

I didn't understand any of this. Does that mean I'm a replicant?

</humor>

Moderation in everything. Including moderation.
-- Mark Twain

[ Parent ]

Mathematical assumptions... (5.00 / 1) (#238)
by Estanislao Martínez on Wed May 16, 2001 at 12:24:49 AM EST

By "the concept of pi" I mean the referent of the word "pi" or the greek character pi in mathematical context; the transcendental number named by "pi", which happens to describe various ratios.

Does one have to accept that such an object exists in order to accept your account?

And what if one only accepts constructive mathematics? For constructions are tightly associated with effective procedures...

And, even within classical mathematics, which is this object that is the referent of the word pi? Doesn't set theory allow for infinitely many objects which would meet all the requirements to be called pi? (This problem already arises for the natural numbers...)

--em
[ Parent ]

now we're getting complicated (5.00 / 1) (#245)
by streetlawyer on Wed May 16, 2001 at 02:19:23 AM EST

I happen to believe in an ontology which contains sets, therefore I believe that there is an ontological entity which is the referent of "pi". But even a constructivist would have to admit that "pi" refers -- specifically, it refers to pi. I don't think that anything turns on the ontology here; unless formalism is literally incoherent, doing mathematics does not carry any specific ontological commitment. Though it's a damn long while since I last read Quine, I suspect that this is a red herring.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
Classical assumptions, yep. (5.00 / 1) (#265)
by Estanislao Martínez on Wed May 16, 2001 at 06:13:39 AM EST

I happen to believe in an ontology which contains sets, therefore I believe that there is an ontological entity which is the referent of "pi".

You did not answer one of the traps I set for you. Which criteria must the set which is the referent of "pi" meet? Are there many sets which meet these criteria? If so, in what sense can we talk about "the" referent of "pi"?

As I said, this point arises already with the natural numbers; there are many families of sets and successor functions one could choose as the denotations of the natural numbers, all of which do equally well. This is a key motivation for philosophies like structuralism in mathematics. The best you'd be able to do is something like "given a set S with the structure of the real numbers, pi relative to that set is the element which satisfies the following properties: ..."

Structuralism and classical mathematics are completely compatible. Thus, even remaining within classical mathematics, there is a critique of your statement that

Pi does not have digits; the decimal expansion of a series which converges on pi has digits, but that is very definitely a different entity.

If a suitably defined set of decimal expansions or series of rational numbers has the structure of the real numbers, then, according to a structuralist, these objects would do every bit as well as whichever set you arbitrarily pick to be "pi".

But even a constructivist would have to admit that "pi" refers -- specifically, it refers to pi.

I don't see that a constructivist would require that there exists a referent for "pi". All a constructivist would require is that your proofs be constructive, i.e., that they be done in intuitionistic logic. Sure, Brouwer certainly seemed to believe that words like "pi" referred, but what if, just to piss him off, one decided to use an intuitionistic proof system, and nothing more powerful, as a purely empty formalism?

So yes, I do believe you are bringing some classical assumptions about mathematics into your argument.

I don't think that anything turns on the ontology here; unless formalism is literally incoherent, doing mathematics does not carry any specific ontological commitment. Though it's a damn long while since I last read Quine, I suspect that this is a red herring.

But is empty formalism informative at all about how mathematics relates to the real world? Don't you need some account of why mathematical statements can serve as premises to empirical arguments?

--em
[ Parent ]

ontological commitments (5.00 / 1) (#268)
by streetlawyer on Wed May 16, 2001 at 06:30:04 AM EST

But is empty formalism informative at all about how mathematics relates to the real world? Don't you need some account of why mathematical statements can serve as premises to empirical arguments?

Not if all I want to do is mathematics; my mathematical terms refer even if they don't refer to entities in "the real world" (and insofar as that is taken to mean anything other than "the world", it's not exactly unproblematic itself). And I'd tend to side with Wittgenstein in waving away the question of "why mathematical statements can serve as premises to empirical arguments". After all, you can substitute "why logical statements can serve ...." without materially changing the problem, and if you're going to ask why logic can be applied to empirical facts, then it's hard to see how you're going to avoid extreme scepticism of a sort even more radical than Hume's.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Not now, but later... (5.00 / 1) (#5)
by _cbj on Tue May 15, 2001 at 08:23:34 AM EST

I don't think anyone, since Newell and Simon, believes present day computer programs, of the kind that compute pi, say, are conscious. The aim now is to get programs wherein 'pi' spawns a rash of connotations of similar depth, breadth and usefulness as it does in us, though, as they'd be based on experience, obviously quite different.

[ Parent ]
but that's still GOFAI (4.50 / 2) (#9)
by streetlawyer on Tue May 15, 2001 at 08:35:50 AM EST

(= Good Old Fashioned Artificial Intelligence). Whatever "connotations" these things have, they're still performing syntactical manipulation at a fairly (conceptually) simple level. And Searle's Chinese Room tells us that syntactical manipulation isn't ever going to add up to consciousness. Neurone simulation looks like a more promising avenue to me, though I doubt that anything based on a Turing-computable algorithm is ever going to make the grade.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
the chinese room proves that intelligence can... (5.00 / 1) (#15)
by sayke on Tue May 15, 2001 at 08:55:05 AM EST

emerge from a lookup table. of course, the lookup table would have to have about the same bitwise complexity as a human brain (probably within an order of magnitude or so), but searle didn't mention how slow his chinese room would be. nope, he just went ahead and assumed the man in the room could flip through terabytes of information in time to formulate a cogent reply. of course, when you're doing it all in a massively, humongously parallel fashion, it becomes a hell of a lot easier, but searle didn't mention that. noooo sireee.

but let's try a different thought experiment: let's put something next to one of your neurons that analyses its function long enough to develop a working model of it (connections, firing thresholds, firing lags, rate of change in thresholds and lags, etc... the works). then let's replace that neuron with something that emulates its behavior closely enough for jazz, but dumps its state to a computer in realish-time. then let's do it to all the other neurons in your brain.

bwow. we've got a you running on a non-biological substrate, and we've got a saved you-state that we can refer to and stare at till we get bored.

this is called uploading, and it's just a matter of time.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Eh? (5.00 / 2) (#22)
by spiralx on Tue May 15, 2001 at 09:11:08 AM EST

The Chinese room experiment is a thought experiment designed to show that intelligence does not emerge from a lookup table. The point is that no matter how good the answers are, there is no intelligence behind it and no understanding of concepts. And such a system cannot handle new concepts it encounters until someone explicitly enters a new set of lookups - this is blatantly not the case with what we consider to be intelligence.

bwow. we've got a you running on a non-biological substrate, and we've got a saved you-state that we can refer to and stare at till we get bored.

bwow?

Assuming that neurons are the be all and the end all of course. There have been recent advances in neuroscience showing that chemicals like nitrous oxide may play a role in how the brain functions, acting as a neurotransmitter but not following neuronal pathways IIRC.

But of course this is a thought experiment, so that shouldn't matter.

So assuming it works as you say it will (a big assumption), so what? You've proved the human brain is intelligent. Wow. A human brain made out of a different material is still just a human brain...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

what's another name for a self-modifying... (5.00 / 1) (#24)
by sayke on Tue May 15, 2001 at 09:22:21 AM EST

lookup table, if not neural network?

you can have lookup tables that modify the way they look things up based on previous lookups, you know... in the same way, you can have cellular automata that modify their own rules.
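a minimal sketch of that in python (the table and rule here are invented for illustration): a lookup table whose own entries are rewritten as a side effect of earlier lookups, which is all "self-modifying" need mean.

    # A lookup table that extends itself based on its own lookup history.
    table = {"hello": "hi", "bye": "later"}
    history = []

    def lookup(key):
        """Answer from the table; every third query mutates the table itself."""
        history.append(key)
        reply = table.get(key, "?")
        if len(history) % 3 == 0:
            # past lookups feed back into the rules for future lookups
            table[key + "!"] = reply.upper()
        return reply

    for q in ["hello", "bye", "hello", "hello!"]:
        print(q, "->", lookup(q))  # the last query only works because of the third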

i hadn't heard about the nitrous oxide bit. got any pointers to papers or anything? that could make things quite a bit more complex... it just adds another layer of obfuscation, of course, but that's still damn interesting if true.

and you'd call a human brain running on a different material "still just" a human brain? this beastie has all kinds of nifty abilities; the ability to take advantage of faster hardware to run itself faster, and the ability to save state, experiment, and restore if things get too fucked up, most notably. sounds quite a bit more than human to me...

doing that would be quite nifty, methinks. bwow-worthy, in fact.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Well then stop talking about the Chinese room then (5.00 / 1) (#35)
by spiralx on Tue May 15, 2001 at 09:45:59 AM EST

you can have lookup tables that modify the way they look things up based on previous lookups, you know... in the same way, you can have cellular automata that modify their own rules.

Of course you can, but that's got nothing to do with the Chinese room experiment now has it? If you want to talk about neural networks, talk about neural networks.

i hadn't heard about the nitrous oxide bit. got any pointers to papers or anything? that could make things quite a bit more complex... it just adds another layer of obfuscation, of course, but that's still damn interesting if true.

Ugh, that took some tracking down, but have a look at this paper. A touch technical, but better than nothing. Oh, and my mistake, it's Nitric Oxide...

But the original article I read about NO as a neurotransmitter was in the context of neural nets anyway. Basically a group of scientists were using genetic techniques to design neural nets for various tasks, and one of them came across the information about NO and how, as a gas, it behaves differently to other neurotransmitters. When they incorporated a similar non-local effect into their neural net models, the resulting nets had far fewer nodes and fewer connections, but performed at least equally well...
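A toy sketch of that non-local effect in Python (everything here is invented for illustration; the actual models in that research are more involved): a "gas" emitted by one node scales the gain of others as a function of distance, with no wired connection between them.

    import math

    def gas_at(dist, strength=1.0, decay=0.5):
        """Gas concentration falls off exponentially with distance from the emitter."""
        return strength * math.exp(-decay * dist)

    def activate(x, gain):
        return math.tanh(gain * x)

    # Node 0 emits gas; nodes 1..3 sit at increasing distances from it and
    # receive the same wired input, but are modulated differently by the gas.
    inputs = [0.8, 0.8, 0.8]
    for i, dist in enumerate([1.0, 2.0, 3.0], start=1):
        gain = 1.0 + gas_at(dist)  # modulation, not a synaptic input
        print(f"node {i}: activation {activate(inputs[i - 1], gain):.3f}")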

and you'd call a human brain running on a different material "still just" a human brain? this beastie has all kinds of nifty abilities; the ability to take advantage of faster hardware to run itself faster, and the ability to save state, experiment, and restore if things get too fucked up, most notably. sounds quite a bit more than human to me...

Possibly. It all depends on how it ends up being implemented really.

bwow-worthy, in fact.

bwow?

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

lookup tables as neural networks as chinese rooms (5.00 / 1) (#41)
by sayke on Tue May 15, 2001 at 10:03:44 AM EST

i don't see any difference.

thanks for the pointer to the NO paper, btw. high ownage. four out of five bwows.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Well then... (5.00 / 1) (#45)
by spiralx on Tue May 15, 2001 at 10:09:28 AM EST

lookup tables as neural networks as chinese rooms... i don't see any difference

Then you need to go study what Searle said and the conclusions he drew a bit more, because you're missing the point entirely.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

i did. maybe you can explain what i missed. (5.00 / 1) (#51)
by sayke on Tue May 15, 2001 at 10:33:34 AM EST

or maybe you can't. ;)


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

The point as I see it (5.00 / 1) (#56)
by spiralx on Tue May 15, 2001 at 10:39:44 AM EST

From the outside it looks as though the Chinese room is intelligent - it is producing correct responses to the questions put to it, and would thus pass the Turing test.

But the man in the room does not know Chinese at all. He's just looking up the input and then copying out the output that the cards tell him to produce. So how is there any intelligence? He's just following syntactic rules, there's no understanding of concepts.

And additionally, if a question is posed that is outside the set of lookups on the cards, the man has no idea what output to produce, and the illusion of intelligence is shattered. Without a well-defined rule he can produce no meaningful output.

OTOH people adapt to new situations all the time. You don't have to be presented with something you've seen before to be able to come up with an appropriate behaviour, which means that the intelligence is of a different kind in that it understands semantics and meaning.

Or at least that's my take on it. Not having done any of this formally, I may have missed stuff...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Probably covered before (5.00 / 1) (#58)
by Anonymous 242 on Tue May 15, 2001 at 10:49:45 AM EST

I have a friend who would contend that the Chinese room as you paint it is no different from the human mind. The only difference is that the human mind has an algorithm to extend the table when it comes up against a question that isn't in the current lookup table.

Don't know that I agree with him, but it's something to consider.

[ Parent ]

Your friend is goofing on you (5.00 / 1) (#115)
by streetlawyer on Tue May 15, 2001 at 01:09:17 PM EST

... or possibly confused himself. If the algorithm is a Turing-computable one, then a lookup table plus an algorithm is just the equivalent of a bigger lookup table. Nothing is gained in logical (which is to say *syntactic*) terms by sweeping this one under the rug.
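
A toy sketch of the flattening (Python; it assumes what the thought experiment grants us - a finite set of possible inputs and a bounded conversation): the algorithm's answers are simply precomputed into one flat table, and the two rooms become observationally identical.

    from itertools import product

    def algorithmic_room(history):
        # stand-in for "table plus updating algorithm": the reply may
        # depend on the whole history, i.e. on every update so far
        return "reply %d: %s" % (len(history), history[-1][::-1])

    def flatten(inputs, max_len):
        # enumerate every possible conversation up to max_len turns
        # and record what the algorithmic room would have said
        table = {}
        for length in range(1, max_len + 1):
            for hist in product(inputs, repeat=length):
                table[hist] = algorithmic_room(list(hist))
        return table

    big_table = flatten(["ni hao", "zai jian"], max_len=3)
    # the flat table simulates the self-updating room exactly:
    assert big_table[("ni hao", "zai jian")] == algorithmic_room(["ni hao", "zai jian"])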

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
Please educate me. (5.00 / 1) (#122)
by Anonymous 242 on Tue May 15, 2001 at 01:19:58 PM EST

Go back and read what I wrote, then re-read what you wrote and explain to me in simple terms how what you wrote applies to what I wrote.

Thanks,

-l

[ Parent ]

Look at it this way (5.00 / 1) (#125)
by streetlawyer on Tue May 15, 2001 at 01:29:28 PM EST

Your friend thinks that Chinese Room A (with a lookup table, and an algorithm for updating it) is a mind.

Now consider a different arrangement, call it the Taiwanese Room. This is a room set up to perfectly simulate the output of Chinese Room A, but it has nothing inside other than a vastly bigger lookup table. It can be proved that if the updating algorithm of Chinese Room A is Turing-computable, then the Taiwanese Room can accurately simulate the output of Chinese Room A.

Therefore, your friend has to say one of the following:

1. The Taiwanese Room is a mind: in which case, he is still committed to the proposition that lookup tables can be minds, and has gained nothing from changing the problem; or

2. The Taiwanese Room is not a mind, but Chinese Room A is: in which case, he has the twin problem of explaining what the principled difference between Chinese Room A and the Taiwanese Room is, and then explaining why the same difference does not hold between Chinese Room A and a normal mind; or

3. The updating algorithm of Chinese Room A is not Turing-computable: in which case Searle has all that he wants out of the thought experiment, and no computer which is a Turing Machine can be a mind.



--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
My friend would take option number one (5.00 / 1) (#135)
by Anonymous 242 on Tue May 15, 2001 at 01:44:54 PM EST

Which would be his whole point, that minds are just machines made out of meat.

And like I said, I don't know that I agree with him.

[ Parent ]

fair enough (5.00 / 1) (#142)
by streetlawyer on Tue May 15, 2001 at 01:52:48 PM EST

But then he ought to dump this confusing "updating algorithm" and just take the bull by the horns.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
Because sooner or later, someone comes by and asks (5.00 / 1) (#162)
by Anonymous 242 on Tue May 15, 2001 at 02:35:28 PM EST

What happens if the information isn't on the lookup table?

[ Parent ]
that's a non-objection (5.00 / 1) (#165)
by streetlawyer on Tue May 15, 2001 at 02:43:40 PM EST

the only answer to that is "so what?"; it's a bad objection and one Searle never made. Assume that the information *is* on the lookup table; it's a gedankenexperiment, and nothing important turns on the fact.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
But people aren't like that (5.00 / 1) (#167)
by Anonymous 242 on Tue May 15, 2001 at 02:48:38 PM EST

We hit holes in our lookup table all the freaking time.

[ Parent ]
it's about process, ph00! (5.00 / 1) (#228)
by sayke on Tue May 15, 2001 at 10:14:36 PM EST

if we turn all your neurons off, you go off. if we turn the lookup table off (that is, stop looking things up with it), the mind that was implemented on the lookup table goes off. it goes to the same place that your OS goes when you turn off your computer.

in the same way that a frozen-in-time substance has no temperature, a frozen-in-time brain has no mind.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

So? (5.00 / 1) (#263)
by spiralx on Wed May 16, 2001 at 06:05:42 AM EST

if we turn all your neurons off, you go off. if we turn the lookup table off (that is, stop looking things up with it), the mind that was implemented on the lookup table goes off.

Yes. And?

it goes to the same place that your OS goes when you turn off your computer.

Where is this? The ether? The Magic Kingdom?

in the same way that a frozen-in-time substance has no temperature, a frozen-in-time brain has no mind.

Or you can say that for a frozen-in-time substance its temperature is irrelevant. The kinetic energy that is what we think of as "temperature" is still there; similarly, a frozen-in-time brain would still have the electrochemical properties it had before it was frozen. The mind is there, it's just frozen.

It's quite an odd thing to say really, and doesn't fit into the argument IMHO.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

the place your lap goes when you stand up (5.00 / 1) (#283)
by sayke on Wed May 16, 2001 at 09:47:24 AM EST

sheesh, you should know this. that "place" exists only because of the way we speak about things - it's more of a semantic artifact than anything. i used it for rhetorical effect.

Or you can say that for a frozen-in-time substance its temperature is irrelevant.

ok, then. in frozen-in-time brains, minds become irrelevant. see how it fits the argument now?


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

No, I don't (5.00 / 1) (#298)
by spiralx on Wed May 16, 2001 at 10:43:38 AM EST

ok, then. in frozen-in-time brains, minds become irrelevant. see how it fits the argument now?

I'm not sure where all this even came from. It's kind of obvious a mind is irrelevant for a frozen-in-time brain. As I've said before, I'm not advocating a soul or something separate from the physical brain...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

why were you complaining if it was so obvious? =) (5.00 / 1) (#311)
by sayke on Wed May 16, 2001 at 11:32:06 AM EST

a couple of posts ago, you said "The mind is there, it's just frozen", which i couldn't distinguish from saying "it's not there"... so it went.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

and your compiler doesn't know c. so? (5.00 / 1) (#65)
by sayke on Tue May 15, 2001 at 11:23:20 AM EST

the chinese room experiment says to me that speaking chinese is an emergent property of the interactions inside the lookup table. one man's syntax is another man's semantics.

i couldn't have said it better than lee's friend when he said that the human mind is like a lookup table that can add new elements and connections between elements. in that way, the lookup table can adapt to novel situations.

lookup table as neural network as chinese room, remember?


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Compiler (5.00 / 1) (#70)
by spiralx on Tue May 15, 2001 at 11:35:43 AM EST

and your compiler doesn't know c. so?

Which is why your compiler doesn't write software. It follows syntactic rules without any concepts, just as the Chinese room follows such rules without any understanding of Chinese.

i couldn't have said it better than lee's friend when he said that the human mind is like a lookup table that can add new elements and connections between elements. in that way, the lookup table can adapt to novel situations.

Perhaps, but then it seems to me that the key element is that in order to make new connections there must be understanding, in a semantic sense, of where these new connections go. So again, we have the same difference between syntax and semantics - we have it, the Chinese room doesn't.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

or evolutionary optimization... (5.00 / 1) (#75)
by sayke on Tue May 15, 2001 at 11:54:41 AM EST

or do you think our minds are carefully constructed, one neuron at a time, very carefully, by the cognitive craftsmen elves? is there understanding in the semantic sense of where the new connections in our brains go?


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Semantics again (5.00 / 1) (#77)
by spiralx on Tue May 15, 2001 at 12:01:28 PM EST

or do you think our minds are carefully constructed, one neuron at a time, very carefully, by the cognitive craftsmen elves? is there understanding in the semantic sense of where the new connections in our brains go?

No, you're looking at this at the wrong level. Your understanding of a new fact involves semantic connections as you fit it into your understanding of the world. This may involve neuronal changes, but that's a question beyond me, and I believe beyond our current understanding of neuroscience ;)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

funny how the semantic changes correlate to the... (5.00 / 1) (#85)
by sayke on Tue May 15, 2001 at 12:21:11 PM EST

structural/syntactic ones. i mean, gee, there's, like, a one to one correlation! holy nitric oxide, batman!

and what is this "believe beyond our current understanding of neuroscience" bit? do you have a vested interest? would some closely held axiom be challenged otherwise? do you really think you couldn't be run in emulation? do you really think you possess some magical elan vital that makes you special and impossible to reverse-engineer? heh. why do i have to keep asking this...


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

What axiom? What are you talking about? (5.00 / 1) (#92)
by spiralx on Tue May 15, 2001 at 12:31:58 PM EST

funny how the semantic changes correlate to the... structural/syntactic ones. i mean, gee, there's, like, a one to one correlation! holy nitric oxide, batman!

Where d'you get the idea there's a one-to-one correlation? Most concepts cover a whole range of syntactic objects, so it seems doubtful to me there'd be a one to one relationship.

and what is this "believe beyond our current understanding of neuroscience" bit?

Because we currently don't have an understanding of things like memory or consciousness, so we can't really say if or how semantic concepts relate to the underlying structure of the brain.

do you have a vested interest? would some closely held axiom be challenged otherwise?

Errm, no.

do you really think you couldn't be run in emulation? do you really think you possess some magical elan vital that makes you special and impossible to reverse-engineer?

Who knows? It is, *gasp*, beyond our current understanding of neuroscience.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

ok, whew. (5.00 / 1) (#103)
by sayke on Tue May 15, 2001 at 12:57:57 PM EST

the one to one correlation bit was part of the batman act. all i meant to imply is that there seems to be a very strong correlation between the syntactic and the semantic.

all the other stuff (axiom, magical elan vital, etc) was asked to see if you claimed some mystical property of mind; to see if you thought you had some soul or somesuch that you fervently believed in. does that make sense now?


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Correlation (5.00 / 1) (#176)
by spiralx on Tue May 15, 2001 at 03:19:06 PM EST

the one to one correlation bit was part of the batman act. all i meant to imply is that there seems to be a very strong correlation between the syntactic and the semantic.

Undoubtedly there's a correlation between semantics and syntax (although not vice versa), but there's no causal connection as far as I can tell.

And no, I don't believe in the tooth fairy, leprechauns, Santa Claus or the soul ;)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

drop some acid and get back to me ;) (5.00 / 1) (#225)
by sayke on Tue May 15, 2001 at 09:09:16 PM EST

more seriously, it looks fairly clear to me that structural (syntactic) changes result in semantic ones - indeed, they look inextricably intertwined. if you've ever partaken of any psychoactive substances, you'd know this intuitively. somehow, qualia follow from neural structure - somehow, semantics emerges from syntax, looks at itself, and goes "hey! i look semantic!"

why do you say there's no correlation between syntax and semantics? there looks to be an incredibly strong correlation to me. geesh. we talk about semantics in programming, and of course it can be reduced to syntax, but temperature can be reduced to molecules and even to atoms (and the syntax we use to model their behavior), so that's not saying much.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Limits of knowledge (5.00 / 1) (#258)
by spiralx on Wed May 16, 2001 at 04:37:16 AM EST

more seriously, it looks fairly clear to me that structural (syntactic) changes result in semantic ones - indeed, they look inextricably intertwined.

That's pure assumption given that we have no evidence for it at the moment on a physical basis. Neuroscience doesn't tell us, for instance, how memory is encoded or changed, so making arguments involving physical structure means you can't have any proof to back it up. For now, I'm taking the opinion I stated, but as our understanding of these things increases, I'm prepared to alter my opinion...

if you've ever partaken of any psychoactive substances, you'd know this intuitively.

Not for a couple of weeks now, but I've taken enough acid to know what you're getting at. And I'm not denying structure has an effect on how we understand things - I've already said I don't believe in a soul, but that's different from saying semantics arises from syntax. Different things, and different positions...

why do you say there's no correlation between syntax and semantics? there looks to be an incredibly strong correlation to me. geesh. we talk about semantics in programming, and of course it can be reduced to syntax, but temperature can be reduced to molecules and even to atoms (and the syntax we use to model their behavior), so that's not saying much.

I think these programming definitions are what's causing a lot of problems here, because it's the same terms used in different ways. Talking about semantics in the sense of computing is ridiculous; computers have no sense of semantics, they just manipulate data.

And temperature can't be reduced to molecules; it's a function of the average kinetic energy of a set of particles.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

let's examine this, man... (5.00 / 1) (#276)
by sayke on Wed May 16, 2001 at 09:03:29 AM EST

let's say we come across a novel, alien-made computing architecture. computers of this architecture are inert black blocks about 200mm square with some standard ports on one side. they're seemingly turing-complete (it quacks like a turing-complete system; hell, for the sake of example, let's say there's an alien-made netbsd port for this arch), but because of the minuscule scale on which this architecture is implemented, it's beyond our current ability to meaningfully reverse-engineer. in this architecture, we don't know how memory is encoded or changed, etc - does that mean we shouldn't think it's turing complete till it's reverse-engineered? would you call it "pure assumption" to do so?

what if we didn't have the netbsd port? what if all we knew is that it seemed to respond consistently to certain stimuli? would you then say we shouldn't think it's turing-complete? would you call it "pure assumption" to do so, as opposed to a good guess?

I've already said I don't believe in a soul, but that's different from saying semantics arises from syntax.

tell... me... some.... differences! heh.

Talking about semantics in the sense of computing is ridiculous; computers have no sense of semantics, they just manipulate data.

funny, that's what i thought neurons did ;) you see my point though, right?

And temperature can't be reduced to molecules; it's a function of the average kinetic energy of a set of particles.

if a molecule's definition includes its kinetic energy, then for every molecule we have a temperature... but i think i understand what you're saying - in order to measure temperature, we must treat it as a verb; something that interacting molecules do. as you probably know, dennett takes the position (as do i) that mind is something brains (interacting neurons) do... and that's where the analogy came from.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Reply (5.00 / 1) (#279)
by spiralx on Wed May 16, 2001 at 09:32:31 AM EST

in this architecture, we don't know how memory is encoded or changed, etc - does that mean we shouldn't think it's turing complete till it's reverse-engineered? would you call it "pure assumption" to do so?

It's just as big an assumption to believe it is Turing complete as to believe it isn't Turing complete without sufficient information. Yes, we can make a good guess, but that's all.

tell... me... some.... differences! heh.

Syntax is purely about symbols and how they relate - for instance a language's structure and grammar. Programming languages, for instance, are defined by a formal grammar which specifies what symbols there are and how they can be used.

Semantics has to do with what these symbols mean. A computer can deal with syntax - Word knows that a capital letter should follow a full stop for instance - but it knows nothing of the meaning of these. Word doesn't know that a full stop indicates the end of a sentence, and it doesn't know that the word "white" signifies a certain colour.

And yes, programs can be given this knowledge (see the Cyc project for the best example - it has over 1.5 million assertions so far) but again, that's just adding new rules for manipulating syntax.
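
To make the Word example concrete, here's a toy version (Python, purely illustrative): the rule fires identically on sense and nonsense, because it only ever sees symbols.

    import re

    def autocapitalize(text):
        # pure syntax: "a letter following '. ' becomes uppercase";
        # no notion of what a sentence, or anything else, means
        return re.sub(r"(\.\s+)([a-z])",
                      lambda m: m.group(1) + m.group(2).upper(),
                      text)

    print(autocapitalize("the envelope is white. it is on the desk."))
    print(autocapitalize("flurble gleep. wub wub snark."))   # just as happy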

funny, that's what i thought neurons did ;) you see my point though, right?

Well yes, but I'm assuming you know what I mean when I say "the envelope on my desk is white" - you know what envelope, desk and white mean in this context. You have semantic knowledge.

Again, how does a Turing machine deal with something totally new? If there's an updating algorithm, how does it deal with something that doesn't trigger any of the inputs in its lookup table?

if a molecule's definition includes its kinetic energy, then for every molecule we have a temperature... but i think i understand what you're saying - in order to measure temperature, we must treat it as a verb; something that interacting molecules do. as you probably know, dennett takes the position (as do i) that mind is something brains (interacting neurons) do... and that's where the analogy came from.

I don't think many here are taking a different position on the matter, not even streetlawyer :)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

then i say it sounds like a good guess (5.00 / 1) (#293)
by sayke on Wed May 16, 2001 at 10:37:36 AM EST

Semantics has to do with what these symbols mean.

you mean, like, how they relate to each other? teehee... remember, i say relatio ergo sum instead of cogito ergo sum. i seriously think meaning without context isn't meaning; indeed, meaning arises from context. what kind of context, exactly? not precisely sure. can i show you working code that exemplifies this? nope. sorry. but i think the position that semantics arises from syntax is a good guess - in fact, quite a bit better than any alternatives.

Word doesn't know that a full stop indicates the end of a sentence, and it doesn't know that the word "white" signifies a certain colour.

it may not have the massive library of context i do, but i think the way it knows it and the way i know it are merely (!) differences of number and not qualitative kind.

And yes, programs can be given this knowledge (see the Cyc project for the best example - it has over 1.5 million assertions so far) but again, that's just adding new rules for manipulating syntax.

that sounds like you're saying "dropping acid is just adding new rules for manipulating syntax". i agree in principle - but the "just" really irks me. it feels like the "mere" in "mere infinities"...

I'm assuming you know what I mean when I say "the envelope on my desk is white" - you know what envelope, desk and white mean in this context.

i associate all kinds of things with "envelope", "desk", and "white", sure, but beyond that, i have no idea what you mean. none.

You have semantic knowledge.

not without context i don't.

Again, how does a Turing machine deal with something totally new? If there's an updating algorithm, how does it deal with something that doesn't trigger any of the inputs in its lookup table?

the same way you deal with things that don't affect you in any way.

I don't think many here are taking a different position on the matter, not even streetlawyer :)

cool. how often does that happen? =)


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Ugh (5.00 / 1) (#301)
by spiralx on Wed May 16, 2001 at 10:54:58 AM EST

you mean, like, how they relate to each other?

No, that's syntax again. Which has nothing to do with meaning.

but i think the position that semantics arises from syntax is a good guess - in fact, quite a bit better than any alternatives.

But it's pure conjecture without evidence as far as I can tell. Whereas the position I'm holding has a fairly good argument in its favour in the Chinese room experiment, which, while not perfect, is better than your evidence by a long way :)

it may not have the massive library of context i do, but i think the way it knows it and the way i know it are merely (!) differences of number and not qualitative kind.

No, because you link the symbol "white" to an actual thing - the colour white. Word just has the symbol. You're really limiting yourself by claiming otherwise.

Besides, your "library of context" is fucking semantic knowledge.

i associate all kinds of things with "envelope", "desk", and "white", sure, but beyond that, i have no idea what you mean. none.

Exactly. You associate the symbol "envelope" with a certain kind of object and so on. This is the semantic knowledge you have, linking a symbol with what it represents.

not without context i don't.

Semantics is context.

the same way you deal with things that don't affect you in any way

I wasn't saying it didn't affect you, I was saying you have no symbols representing this phenomenon. Different things...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

guh. (5.00 / 1) (#309)
by sayke on Wed May 16, 2001 at 11:15:56 AM EST

if how things relate to each other has nothing to do with meaning, then i have no idea what you mean by meaning.

so what's the position you're holding, exactly, and how does the chinese room bit support it? i didn't know you were putting a position forth; i just thought you were attempting to disassemble mine.

you link the symbol "white" to an actual thing - the colour white.

really? fascinating. i have no idea what you mean by "actual thing", then. all i see is symbols upon symbols upon symbols, all the way down... =)

Besides, your "library of context" is fucking semantic knowledge.

sure. look at it one way, and it looks like syntax; look at it another way, and it looks like semantics. perspective and quantity, man, not quality...

You associate the symbol "envelope" with a certain kind of object and so on.

i'm sorry, but i can't tell the difference between symbols and the symbolized. perhaps you can help me out, said the spider to the fly... ;)

Semantics is context.

i don't like the verb "to be".

I wasn't saying it didn't affect you, I was saying you have no symbols representing this phenomenon. Different things...

different how?


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Wow, you can't see this thread without scrolling (5.00 / 1) (#313)
by spiralx on Wed May 16, 2001 at 11:41:13 AM EST

i'm sorry, but i can't tell the difference between symbols and the symbolized. perhaps you can help me out, said the spider to the fly... ;)

Well here's a hint - one of them, you can put things in! And another hint - it's not the symbol!

different how?

Well if a completely unknown thing is hurtling towards you at a very fast rate then it would be safe to assume it would affect you very soon, but you wouldn't know what it was...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

that's cuz we rule =) (5.00 / 1) (#320)
by sayke on Wed May 16, 2001 at 12:10:27 PM EST

Well here's a hint - one of them, you can put things in! And another hint - it's not the symbol!

maybe i'm missing a metaphor or something, but i have all kinds of set and container symbols in which to "put" other symbols...

Well if a completely unknown thing is hurtling towards you at a very fast rate then it would be safe to assume it would affect you very soon, but you wouldn't know what it was...

if it's completely unknown then i can't assume it'll affect me, can i? but i don't understand what any of this has to do with whether conscious minds can be implemented on turing-complete machines... maybe i'm just missing metaphors because i'm tired. oh well. time to cease to exist again, i think.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Ah, but (5.00 / 1) (#322)
by spiralx on Wed May 16, 2001 at 12:20:00 PM EST

You can't put physical objects in a symbol, eh? You can use symbols to represent such a move as you say, but you need the physical objects (the referents) to do what the symbols represent.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

wait a minute... (5.00 / 1) (#219)
by eLuddite on Tue May 15, 2001 at 06:29:06 PM EST

do you really think you possess some magical elan vital that makes you special and impossible to reverse-engineer?

No, it's the considerably less mysterious "emergent property." :-)

Since emergence is a process that makes explicit features that were previously only implicit, it must be defined relative to a semantic model of phenomena seen by an observer. Therefore any soft machine model of design emergence requires a means of observing both syntax and the process of design in order for it to monitor any potential changes in syntax (life happens). How does emergent design cope with syntax that is ill-defined? How does emergent design cope with novel syntax? How does emergent design *create* novel syntax?

---
God hates human rights.
[ Parent ]

The brain (5.00 / 1) (#80)
by ucblockhead on Tue May 15, 2001 at 12:09:19 PM EST

But the man in the room does not know Chinese at all.
Consider this: no neuroscientist can yet point to a part of the human brain and say "this is the part of the brain that knows Chinese".

The best they can do is say something like: "See, these neurons fire more often when the subject is translating Chinese".


-----------------------
This is k5. We're all tools - duxup
[ Parent ]

The Rule Book (5.00 / 1) (#367)
by acronos on Wed May 16, 2001 at 10:44:55 PM EST

But the man in the room does not know Chinese at all. He's just looking up the input and then copying out the output that the cards tell him to produce. So how is there any intelligence? He's just following syntactic rules, there's no understanding of concepts.

The intelligence is in the rule book. The Rule Book in the Chinese Room argument does know Chinese if it is sufficiently well designed. Combine it with a body and then you will have human level intelligence from the perspective outside the room. Understand that it would have to be a dynamic rule book that could change its rules and add new knowledge.

There is no way a human would be fast enough to sort through all the rules, though, so that particular example would not add up to strong AI. Add a sufficiently large supercomputer (likely bigger than anything we have so far) applying the rules in the room and then you have it.

The Chinese Room argument is just a restatement of the original problem. The person in the room is the hardware. The rule book is the software. And the slits are the input and output. How is this in any way different from a computer, except it limits the input and output, makes it hard to visualize complicated evolving rule books, and has little room for memory?

Computers on the other hand do have memory, can change their rules, and have much more developed input and output. The Chinese room is only a VERY stripped-down computer. How does it help us understand the problem? If you already have the presupposition that consciousness is some esoteric quantity, then it just restates what you already believe: that computers are inherently stupid. If you already understand how the whole can be much more than its parts, then it is obvious that the intelligence is in the whole of the parts, and from the exterior the machine is exhibiting intelligence.

Computers already can beat the human grandmaster in a game of chess. Try visualizing that with your Chinese room. If a program is able to emulate human intelligence and is every bit as flexible as a human being, then I don't care whether its intelligence is "real" or not. I want one to do my dishes.

[ Parent ]

The systems argument eh? (5.00 / 1) (#384)
by spiralx on Thu May 17, 2001 at 06:42:10 AM EST

The intelligence is in the rule book. The Rule Book in the Chinese Room argument does know Chinese if it is sufficiently well designed.

This is called the systems argument. Okay then, we'll do away with the rule book and just have our man inside with the rules in his head. Externally we have the same situation, but the man still doesn't know Chinese does he? He still has no understanding of what the questions put to him actually mean, he's just following rules to produce an output.

Combine it with a body and then you will have human level intelligence from the perspective outside the room.

Well yes, that's the point. The Chinese Room passes the Turing test in that sense, without having any understanding at all.

Understand that it would have to be a dynamic rule book that could change its rules and add new knowledge.

And how would it do that without understanding Chinese? It'd be interesting to see how such an algorithm could deal with new concepts... And even so, you've just got a bigger set of rules, you haven't changed the nature of the argument at all. He still doesn't understand Chinese.

There is no way a human would be fast enough to sort through all the rules, though, so that particular example would not add up to strong AI.

It's a thought experiment :)

Add a sufficiently large supercomputer (likely bigger than anything we have so far) applying the rules in the room and then you have it.

Speed is irrelevant. Computers do these kind of tasks quicker than humans today, but that doesn't make them have any kind of semantic understanding of what they process.

The Chinese Room argument is just a restatement of the original problem. The person in the room is the hardware. The rule book is the software.

Hardware/software, the distinction is irrelevant to the argument.

How is this in any way different from a computer, except it limits the input and output, makes it hard to visualize complicated evolving rule books, and has little room for memory?

It's not, that's the point. The point is that it's different from how we think.

The Chinese room is only a VERY stripped-down computer.

Remember - any Turing-complete system can emulate any other Turing-complete system. That's why there is no meaningful distinction between a computer and the Chinese room.
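
For what it's worth, here's that emulation point in miniature (a sketch; Python, itself Turing-complete, interpreting Brainfuck, a minimal Turing-complete language):

    def bf(code, inp=""):
        # one Turing-complete system running another
        tape, ptr, out, i, it = [0] * 30000, 0, [], 0, iter(inp)
        jumps, stack = {}, []
        for j, c in enumerate(code):          # match brackets up front
            if c == "[": stack.append(j)
            elif c == "]": jumps[j] = stack.pop(); jumps[jumps[j]] = j
        while i < len(code):
            c = code[i]
            if c == ">": ptr += 1
            elif c == "<": ptr -= 1
            elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
            elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
            elif c == ".": out.append(chr(tape[ptr]))
            elif c == ",": tape[ptr] = ord(next(it, "\0"))
            elif c == "[" and tape[ptr] == 0: i = jumps[i]
            elif c == "]" and tape[ptr] != 0: i = jumps[i]
            i += 1
        return "".join(out)

    print(bf("++++++++[>++++++++<-]>+."))   # prints "A"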

If you already understand how the whole can be much more than its parts, then it is obvious that the intelligence is in the whole of the parts, and from the exterior the machine is exhibiting intelligence.

How is the whole more than the sum of its parts? It's doing exactly what it's programmed to do, nothing more, nothing less.

Computers already can beat the human grandmaster in a game of chess.

So? Chess is a game with strict rules which can be analysed by various algorithms. Deep Blue was hardly conscious now was it?

If a program is able to emulate human intelligence and is every bit as flexible as a human being, then I don't care whether its intelligence is "real" or not. I want one to do my dishes.

That's the point of the Chinese room - the system emulates the output of a human, but it does not emulate a human. And if you don't care about the distinction, why are you even arguing this case?

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Ok.. (5.00 / 1) (#450)
by acronos on Thu May 17, 2001 at 05:21:25 PM EST

This is called the systems argument. Okay then, we'll do away with the rule book and just have our man inside with the rules in his head. Externally we have the same situation, but the man still doesn't know Chinese does he? He still has no understanding of what the questions put to him actually mean, he's just following rules to produce an output.

Then this time the man does know Chinese. The rules for Chinese are in his head.

>Understand that it would have to be a dynamic rule book that could change its rules and add new knowledge.

And how would it do that without understanding Chinese? It'd be interesting to see how such an algorithm could deal with new concepts... And even so, you've just got a bigger set of rules, you haven't changed the nature of the argument at all. He still doesn't understand Chinese.

In order for the rule book to respond fluently to Chinese, it has to understand Chinese. The rule book could then use the information that someone said in Chinese to generate new rules. Let me give an example. Say I have a computer that has relationships attached to 3 words. {dog,small,display} Dog is connected to a picture, small is connected to a resize smaller, and display places the picture on the screen. Now I use the word "cat" in speaking. The computer queries me for the definition of "cat". I say "cat=small dog" Now when I say "display cat" I get a picture of a shrunken dog. The computer has learned a new word. It is true that the computer has a very questionable understanding of cat. A more complete example would be far too long.
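
In code, the example might look something like this (a toy sketch of the example above; every name is invented for illustration):

    # toy vocabulary: each word maps to an operation
    lexicon = {
        "dog":     lambda: {"picture": "dog", "scale": 1.0},
        "small":   lambda img: {**img, "scale": img["scale"] * 0.5},
        "display": lambda img: print(f"showing {img['picture']} at {img['scale']:g}x"),
    }

    def define(word, definition):
        # "cat = small dog": compose existing entries into a new one
        ops = definition.split()              # e.g. ["small", "dog"]
        noun = lexicon[ops[-1]]
        mods = [lexicon[w] for w in ops[:-1]]
        def new_word():
            img = noun()
            for m in mods:
                img = m(img)
            return img
        lexicon[word] = new_word

    define("cat", "small dog")
    lexicon["display"](lexicon["cat"]())      # showing dog at 0.5x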

Speed is irrelevant. Computers do these kind of tasks quicker than humans today, but that doesn't make them have any kind of semantic understanding of what they process.

Speed is not irrelevant. Try playing quake on an 8088. The game is unplayable and essentially unrecognizable. There is probably no computer around today that could handle the size of the rule book and processes needed to speak Chinese fluently. The complexity of the human brain is extraordinary. You can hear the word "impressive" said just right and immediately sort through every movie you have ever seen and every word that had an impact on you and remember that that was in star wars and simultaneously make a half dozen other associations. There is no computer today that can cull through that much data fast enough to seem real. Your brain can. It is speed that is holding back AI, but the speed is coming. This matters to language as well. Our language is chock-full of associations. Until a computer is able to understand all these associations you can forget it speaking language as fluently as a human.

Remember - any Turing-complete system can emulate any other Turing-complete system. That's why there is no meaningful distinction between a computer and the Chinese room.

I have never agreed with the Turing argument. The Turing argument doesn't take into account speed. Speed matters.

How is the whole more than the sum of its parts? It's doing exactly what it's programmed to do, nothing more, nothing less.

Do you really believe a composition by Mozart is just a bunch of notes? Each note alone is not very impressive. It is when we combine them in the right combination that they become "impressive." No atom, cell, or neuron in your brain has any understanding of Chinese either. But, when they are working together, you can learn it and communicate if you choose. The whole is always more than its parts if for no other reason than that it is a "whole."

So? Chess is a game with strict rules, which can be analyzed by various algorithms. Deep Blue was hardly conscious now was it?

I was in no way implying that Deep Blue was conscious. I was just saying that the Chinese room argument is a very simplistic computer. It is very hard for me to visualize a rule book that could play chess. Deep Blue won because it considered billions of combinations and chose the one that accomplished the goal of its programming. It DID make decisions. It DID exhibit intelligence. It was NOT conscious. No one is arguing that any computer is conscious today. And the word "conscious" will have to be much better defined before anyone ever does so successfully.

That's the point of the Chinese room - the system emulates the output of a human, but it does not emulate a human. And if you don't care about the distinction, why are you even arguing this case?

Because I want a computer that will do my laundry, not slit my throat. It is very important to recognize the societal implications of AI. I cannot say with absolute certainty that we will ever achieve it. I can say that the evidence so far makes it extremely probable. And if there is even a chance of it happening then we need to be thinking about what it means for our existence, or we may lose our existence. I am arguing for it because most people are in denial. I don't care about the technicality of "is it human." Of course it isn't human. The implications of "can it do what a human can do" are more than enough for me. But what most people are arguing is "it is impossible for a computer to do what a human can do." They are using the Chinese room to back this up. They are wrong. There are HUGE holes in the Chinese room argument. It fails because it says that something cannot be more than the sum of its parts.

[ Parent ]

*sigh* (5.00 / 1) (#455)
by spiralx on Thu May 17, 2001 at 06:13:19 PM EST

Then this time the man does know Chinese. The rules for Chinese are in his head.

No he doesn't, he's just following rules. He doesn't understand what any of it means after all. So he may speak Chinese, and fool a Turing test, but he still doesn't understand it...

In order for the rule book to respond fluently to Chinese, it has to understand Chinese.

No it doesn't, it just has to have a big enough set of rules to cover all contingencies.

Say I have a computer that has relationships attached to 3 words. {dog,small,display} Dog is connected to a picture, small is connected to a resize smaller, and display places the picture on the screen. Now I use the word "cat" in speaking. The computer queries me for the definition of "cat". I say "cat=small dog" Now when I say "display cat" I get a picture of a shrunken dog. The computer has learned a new word.

No it hasn't, it just has a new rule. Syntactically it has a new symbol with a rule about it, but it still doesn't understand that "cat" is anything other than a symbol attached to certain rules.

Speed is not irrelevant. Try playing quake on an 8088.

This is a thought experiment, not an actual experiment :) Speed is totally irrelevant to the argument at hand. And in real life, speed may make a computer better at certain tasks, but as the Chinese room experiment shows, it doesn't imply understanding in any way.

I have never agreed with the Turing argument. The Turing argument doesn't take into account speed. Speed matters.

Since it's a thought experiment, let's let Searle move at 99.99999% of the speed of light. Sure, he may answer queries quicker, but it makes no difference to whether or not he understands Chinese.

Do you really believe a composition by Mozart is just a bunch of notes?

Yep. It's our interpretation of it that is more. You can't argue that along with the physical compression waves that are produced at a Mozart concerto there's some magical extra element. Still, that's a good argument in my favour - how does the Chinese room, by following rules about its input (assuming we give it aural input now as well, as anticipated in the Robot reply to Searle), come up with the appreciation for Mozart that we do?

The whole is always more than its parts if for no other reason than that it is a "whole."

Prove it's true in this case then. We have a formal system with completely specified behaviour. I don't disagree in principle with emergent behaviour, but for a system with such a simple formal description?

I was just saying that the Chinese room argument is a very simplistic computer. It is very hard for me to visualize a rule book that could play chess.

The book merely represents a lookup table, a set of rules for turning one set of symbols into another, the same as a computer does at its core.

It DID make decisions. It DID exhibit intelligence.

It does exhibit intelligence. As does the Chinese room. But it didn't make any decisions as such, because it followed strictly deterministic rules. It doesn't have any choice in the matter, it does as its program dictates. As does the Chinese room.

And the word "conscious" will have to be much better defined before anyone ever does so successfully.

Heh, no fucking shit :)

I cannot say with absolute certainty that we will ever achieve it. I can say that the evidence so far makes it extremely probable.

I agree, we'll manage it some day. Not soon though, because as I've been arguing, I don't think today's computing systems are up to the job. We'll need a new way of computing not based on formal systems and Turing machines.

But what most people are arguing is "it is impossible for a computer to do what a human can do." They are using the Chinese room to back this up. They are wrong. There are HUGE holes in the Chinese room argument. It fails because it says that something cannot be more than the sum of its parts.

Technically a Turing machine, which is what our current computers are. I don't know whether quantum computers are or aren't for instance, so I'm not arguing against AI in principle at all. In fact, I'm not so sure anyone is. The Chinese room is only an argument against Turing machines, as even Searle says, not machines as a whole.

And you've still failed to demonstrate he was wrong in my view. If you can show emergent behaviour in a deterministic formal system that has been completely specified, then we'll talk...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Yes He does (5.00 / 1) (#491)
by acronos on Fri May 18, 2001 at 11:29:24 AM EST

No he doesn't, he's just following rules. He doesn't understand what any of it means after all. So he may speak Chinese, and fool a Turing test, but he still doesn't understand it...

Yes he does know Chinese. How can you prove he doesn't? If he can speak Chinese fluently, how can you say he doesn't know Chinese?

The Chinese room just separates the understander from the actions that understanding generates. It is a stupid example because it doesn't mean anything. When you put the rules back into the man's head, you did what makes sense. It doesn't matter how those rules are encoded as long as they generate the ability to speak Chinese fluently.

You are not picturing the right type of rules. You are not understanding that for a computer to understand the word "moon" it has to understand much much more than a bundle of dirt in the sky. It has to understand the way the moon is used in love songs. It has to understand how dark the earth is when it is lit by just a full moon. There is no set of "rules" you can construct that can speak Chinese fluently without understanding these associations. If someone builds a set of rules, and a program is just a set of rules, that can speak Chinese fluently, then it has already made these associations. Each association is a new rule. It is a new connection. It has touched a flower and felt the softness. Otherwise it cannot understand the word "softness" well enough to speak fluently about it. It has made the associations that make up the language. And it "understands" them. Just as your sensations begin with electrical impulses, so do this machine's. From there it generates experiences based on these associations. The word "experience" carries all kinds of humanist baggage. I mean experience in the way a digital camera experiences the world. It sees the light and adjusts its internal mechanisms in order to give you the best picture it can. It experiences a stimulus and reacts to it.

By your way of thinking you must not understand English either because there is no way that your neurons understand it. They are connected into a set of rules so that you can emulate speaking English. It doesn't matter what or how the rules are created. Whether they are on paper, in computer memory, or in human brain matter. Your neurons have a rule: if they receive a certain amount of input from other neurons, they fire. Otherwise they do not. Your entire intelligence is built on this ONE SIMPLE RULE. It is the connections between these neurons that create your intelligence. These connections are what create the program or rules of your brain and your memory. Which neuron each neuron sends to is dependent on how this neuron has been "programmed" by the connections involved. Just as the old teletype computers were "programmed" by making a punch in a card that allowed a connection.
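
That one rule is short enough to write down (a McCulloch-Pitts-style sketch; the weights and threshold are invented for illustration):

    def neuron(inputs, weights, threshold):
        # the ONE SIMPLE RULE: fire iff the weighted input crosses threshold
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    # an AND gate, "programmed" purely by its connections
    assert neuron([1, 1], [0.6, 0.6], 1.0) == 1
    assert neuron([1, 0], [0.6, 0.6], 1.0) == 0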

Just because you cannot explain how something can be done using current technology doesn't mean it can't be done. By using the Chinese room argument you are implicitly saying we have already achieved how to make a machine speak Chinese fluently enough to fool a human. If it can fool a human then it can do algebra, solve a Rubik's cube, and develop new theories on how the universe was created. Otherwise a human who asked these questions would not be fooled. If it can do all these things then how has the AI failed to be intelligent? It is the whole that matters, not the pieces.

I agree, we'll manage it some day. Not soon though, because as I've been arguing, I don't think todays computing systems are up to the job.

I don't believe today's computers can do it either. We need much bigger and faster processors and memory.

How far do you think we are away from atomic computers? If you are thinking in the next 20 to 100 years for strong AI then we are in the same time frame. We only differ in whether it can be done with digital computers.

>The whole is always more than its parts if for no other reason than that it is a "whole."

Prove it's true in this case then. We have a formal system with completely specified behaviour. I don't disagree in principle with emergent behaviour, but for a system with such a simple formal description?

The whole in the example is able to speak Chinese. None of the parts can.

Something else that must be considered: completely specified behavior can grow and learn. It can have emergent behavior the way you are meaning. I can develop a machine that can learn to play a perfect game of tic tac toe. All I need to do is have it consider every move from here on and consider if it causes a win or a loss. All moves that give the most wins and least losses are chosen, taking into consideration the order of the wins and losses. This is a very simple rule. All the behavior the game exhibits is emergent. The game will play tic tac toe better than I could, and I wrote the rule.

Every move is not spelled out in the above example. Actually no move is spelled out in the above example. Yet a perfect game of tic tac toe emerges. You say, but tic tac toe is a simple game. I say everything becomes simpler with enough processing power.
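
To make that concrete, here is a minimal version of the consider-every-move rule (a sketch; the scoring details are illustrative):

    # brute-force tic tac toe: score every move to the end of the game
    WIN = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(b):
        for i, j, k in WIN:
            if b[i] and b[i] == b[j] == b[k]:
                return b[i]
        return None

    def best_move(b, me, opp):
        # returns (score, move): +1 win, 0 draw, -1 loss, from "me"'s view
        w = winner(b)
        if w:
            return (1 if w == me else -1), None
        if all(b):
            return 0, None
        best = (-2, None)
        for m in range(9):
            if not b[m]:
                b[m] = me
                score, _ = best_move(b, opp, me)   # opponent's best reply
                b[m] = None
                if -score > best[0]:
                    best = (-score, m)
        return best

    board = [None] * 9
    print(best_move(board, "X", "O"))   # (0, 0): perfect play is a draw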

But it didn't make any decisions as such, because it followed strictly deterministic rules.

In the above game example the computer is deciding where to place its piece. It had several choices. The program makes its decision based on evidence collected as to which choice achieved the program's goal (winning the game). The only way you can say that the program didn't decide is to say that the definition of decide requires a human. My definition of decide is "to pick one of multiple choices." What definition of decide can you give me that doesn't have a humanistic component that the above example doesn't meet? Not only does it decide, but it understands. It understands that if it places its piece here it will lose in the next round. It is programmed to hate losing and love winning. It hates by negative numbers. It loves by positive ones. Just as a fly loves the light, my program loves to win.

Even more could be accomplished by adding a learning method to the above AI. I cannot beat the game if it is considering just 3 levels of moves, but so what. (I built it to make sure I was right about how it worked before writing this.) If we made the game more complicated than current computers could handle in a brute force kind of way, then we could use learning to improve the machine's performance. I would just record positions that caused losses and discourage the machine from getting in those situations again. I would "discourage" by using numeric weights, which is how I did the brute force algorithm also. This would cause the machine to get better each time it played. How is this not emergent behavior? A neural network (the programming sense of the word) is an option also. All of this can be implemented in a rule book. It would just be a very complicated rule book.
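
That loss-discouragement scheme might look like this (a sketch; the weighting and names are invented for illustration):

    import random

    penalty = {}                         # position -> accumulated discouragement

    def result_of(board, move, piece="X"):
        b = list(board)
        b[move] = piece
        return tuple(b)                  # hashable position key

    def choose_move(board, moves):
        # prefer the move whose resulting position is least discouraged;
        # a tiny random tie-breaker keeps early games exploring
        return min(moves, key=lambda m: (penalty.get(result_of(board, m), 0),
                                         random.random()))

    def learn_from_loss(positions_seen):
        # after a lost game, weight every visited position downward
        for p in positions_seen:
            penalty[p] = penalty.get(p, 0) + 1

    board = (None,) * 9
    move = choose_move(board, [i for i in range(9) if board[i] is None])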

[ Parent ]

Urk (5.00 / 1) (#494)
by spiralx on Fri May 18, 2001 at 12:28:46 PM EST

Yes he does know Chinese. How can you prove he doesn't? If he can speak Chinese fluently, how can you say he doesn't know Chinese?

Because he doesn't. He's emulating speaking Chinese. He's not coming up with any Chinese himself at all, he's just following a set of rules.

This is where we differ. You believe we're just a set of rules, a Turing machine in essence. I don't, because I see a difference between following rules and understanding. I can follow a set of well-defined rules to solve, say, Maxwell's equations for electromagnetism, but it doesn't necessarily mean I understand what those equations mean at all.

You are not picturing the right type of rules.

If X then Y. That's the only kind of rules we're talking about here. That's what is specified by the formal system that the Chinese room represents. If you want to use other rules, well then that's not what we've been talking about....

You are not understanding that for a computer to understand the word "moon" it has to understand much much more than a bundle of dirt in the sky.

No what I'm saying is that a computer does not even understand that the word "moon" represents a big rock in the sky. To a computer, "moon" is a just a symbol, nothing else.

If someone builds a set of rules, and a program is just a set of rules, that can speak Chinese fluently, then it has already made these associations.

No, no, no. The computer has such a set of rules, it did not create the rules. Associations have been made, but not by the computer. We would assume in the Chinese room that the book has been written by someone with these semantic associations, that's obvious. But the book is provided to the Chinese room as is.

Each association is a new rule. It is a new connection.

But again, how does having a set of rules make a new association? How can it decide on a new rule?

By your way of thinking you must not understand English either because there is no way that your neurons understand it. They are connected into a set of rules so that you can emulate speaking English.

And how do you know this? Have you made advances in neuroscience that show how we think and remember things? Please, do tell of this astounding breakthrough!

We don't know how we think. Saying we have a set of rules in our brain is a complete assumption with, as of yet, no hard evidence. And no, there's no hard evidence to suggest otherwise either that I know of, but I'm willing to trust in my own experience that I understand English.

Your neurons have a rule: if they receive a certain amount of input from other neurons, they fire. Otherwise they do not. Your entire intelligence is built on this ONE SIMPLE RULE. It is the connections between these neurons that create your intelligence. These connections are what create the program or rules of your brain and your memory. Which neuron each neuron sends to is dependent on how this neuron has been "programmed" by the connections involved.

It's a little bit more complicated than that, but even so you still can't prove that the mind is a Turing machine. If you can, well then I'm wrong, but if you can prove otherwise then you're wrong. Again, we need a better understanding of cognition and consciousness first.

If it can fool a human then it can do algebra, solve a Rubik's cube, and develop new theories on how the universe was created.

Now that's a ridiculous argument. The Chinese room has been programmed to speak Chinese, not do any of these things! And in each case, to get it to do these things you'd need to give it a new set of rules before it could deal with problems.

I don't believe today's computers can do it either. We need much bigger and faster processors and memory.

Nope, that won't cut it. Better and better emulations sure, but no strong AI at all. What we need is a new method of computing that isn't just a Turing machine.

How far do you think we are away from atomic computers? If you are thinking in the next 20 to 100 years for strong AI then we are in the same time frame. We only differ in whether it can be done with digital computers.

I think you're perhaps being optimistic. I'd say 75-200 years myself. And 15-20 years from atomic computers that are able to fully utilise such power in an efficient manner.

Something else that must be considered: completely specified behavior can grow and learn.

Yes indeed.

It can have emergent behavior the way you are meaning.

No, because it's not really emergent behaviour, it's a different thing. Syntax, no matter how good, cannot give rise to semantics. But then, you disagree there...

Every move is not spelled out in the above example. Actually no move is spelled out in the above example. Yet a perfect game of tic tac toe emerges. You say, but tic tac toe is a simple game. I say everything becomes simpler with enough processing power.

Everything that's computable by an algorithm on a Turing machine, for sure! But what about the halting problem? There are classes of algorithms out there for which a Turing machine cannot decide whether they will ever stop or go on forever, and yet we are able to recognise which are which. There's a good one for theory...
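For readers who haven't met it, the halting problem argument runs roughly like the sketch below. halts() is a hypothetical oracle, unimplementable by construction; the names are invented for illustration:

def halts(program, data):
    # Hypothetical oracle: True iff program(data) eventually halts.
    # The construction below shows no correct implementation can exist.
    raise NotImplementedError

def contrary(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:
            pass  # loop forever
    # otherwise, halt immediately

# contrary(contrary) halts exactly when halts() says it doesn't -
# a contradiction, so no total, correct halts() is possible.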

In the above game example the computer is deciding where to place its piece. It had several choices. The program makes its decision based on evidence collected as to which choice achieved the program's goal (winning the game). The only way you can say that the program didn't decide is to say that the definition of decide requires a human.

Nope, decision is a conscious choice between alternatives. The computer merely evaluates moves and acts according to the relative values it has placed upon them. It cannot choose a move that will not at some point lead towards victory, whereas a person can play to lose - they have that choice.
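For concreteness, the evaluate-and-pick behaviour both posters are describing looks roughly like this minimal minimax sketch (an illustration, not anyone's actual program); wins score positive, losses negative:

WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    # Return 'X' or 'O' if a line is complete, else None.
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Score the position for `player`: +1 win, -1 loss, 0 draw.
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full, nobody won: a draw
    other = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score = -minimax(board, other)[0]  # opponent's best is our worst
        board[m] = ' '
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

print(minimax([' '] * 9, 'X'))  # (0, 0): perfect play from an empty board is a draw

No individual move is spelled out anywhere; perfect play falls out of exhaustive evaluation. Whether "decides" is the right word for the pick at the end is the very point under dispute.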

Not only does it decide, but it understands.

No it doesn't, it follows an algorithm. That's even more anthropomorphic than assigning the Chinese room understanding. It's only slightly better than saying evolution acts for the good of the species.

It is programmed to hate losing and love winning. It hates by negative numbers. It loves by positive ones. Just as a fly loves the light, my program loves to win.

Oh dear. See above.

Even more could be accomplished by combining a learning method with the above AI.

Adjusting a few weightings. Not a qualitative difference at all.

How is this not emergent behavior?

It's not, it's just rule-following. You've really picked a bad example here, because chess programs are about as far from AI as you can get! I've honestly never seen anyone consider chess programs as intelligent before...

A neural network (in the programming sense of the word) is an option also. All of this can be implemented in a rule book. It would just be a very complicated rule book.

Indeed, and you've gone from the Chinese room to the Chess room. It's the same argument, except my position is perhaps more clear :)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Obviously not convinced (5.00 / 1) (#496)
by acronos on Fri May 18, 2001 at 01:14:13 PM EST

Obviously your arguments are not convincing to me and my arguments are not convincing to you. We both see the fundamentals differently. My position is easier to prove because all we have to do is do it. Your position is much harder because you can never know for sure. In 20 to 100 years we will know if I am right. I guess we will have to wait.

[ Parent ]
illusory complexity (5.00 / 1) (#44)
by streetlawyer on Tue May 15, 2001 at 10:09:02 AM EST

Any "self-modifying" lookup table based on a Turing-computable algorithm is isomorphic to a lookup table of the simple kind, which happens to be a bit larger. So there is no gain from adding this layer of obfuscation.

And your hypothesised brain would not have the abilities you suggest; for example, you can't "save state and restore", because all you can save is the syntactic arrangement, and, as we're slowly establishing, syntax ain't semantics.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

that's right up there with illusory greenness (5.00 / 1) (#73)
by sayke on Tue May 15, 2001 at 11:50:24 AM EST

i was describing a process, not a static thing. as a gas's static saved state has no temperature, a lookup table's static saved state has no intelligence. i call intelligence and temperature emergent properties of the interactions between the components - of the process - and if you take away the process aspect then the interesting stuff goes away. much can indeed be gained from the process aspect, and i wouldn't call it obfuscation at all.

i see no difference between the syntactic arrangement and the saved state. your repeated assertion that syntax isn't semantics irks me. the way that you've established nothing, but repeated your conclusion often, irks me.

i think that semantics emerges from syntax, but i don't see any qualitative difference between them, either. i'd call it a quantitative and perspective difference, but decidedly not a qualitative one. of course, i encourage you to point out qualitative differences between syntax and semantics, but as you haven't so far, i doubt you will.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

all greenness is "illusory" (5.00 / 1) (#87)
by streetlawyer on Tue May 15, 2001 at 12:26:28 PM EST

in the sense that like consciousness and semantic content, it's an intrinsically first-person concept. That's the sort of thing I'm talking about. The distinction between syntax and semantics is not something I have to argue for; it comes from the meaning of the two terms. You seem to be arguing that there is no such thing as semantics; a defensible view, but not one which I find attractive because it leads to all sorts of strange results elsewhere in logic.

How are you going to describe the first-person subjective property of greenness in software terms? I don't think that any purely formal, syntactic description is going to do, because for any such description D, it is not a necessary property of D that it describes greenness, whereas this is a necessary property of "greenness".

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Process aspects (5.00 / 1) (#119)
by Simon Kinahan on Tue May 15, 2001 at 01:16:23 PM EST

If you're happy to accept that some aspect of a process may be what gives rise to consciousness, how do you know that this process is not a physical one, whose simulation would not give rise to the same result, depending, say, on some property of matter we don't yet understand? Similarly, even if the process is "informational" (if that is even meaningful), how do we know it is computable?

Unless you're a genius on some unprecedented scale, the only possible answer to either question is "we don't know".

Simon

If you disagree, post, don't moderate
[ Parent ]
Chinese room (5.00 / 1) (#78)
by ucblockhead on Tue May 15, 2001 at 12:05:15 PM EST

The problem with the Chinese room thought experiment is that it doesn't really prove anything. It assumes what it purports to prove.

And you can see exactly the trouble with it in this quote from your post: "This is blatantly not the case with what we consider to be intelligence".

What we consider to be intelligent. In other words, we think that the Chinese room proves something because we don't believe that the mind works in such a mechanistic fashion. But we certainly haven't proved it!
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

True (5.00 / 1) (#81)
by spiralx on Tue May 15, 2001 at 12:11:26 PM EST

But then again, I've no formal background in this kind of thing, so my definitions are somewhat sloppy :)

If I were being more precise I could talk about semantics I suppose, because the Chinese room has no concept of the meaning of the ideograms, they're just a set of symbols to be processed and could represent anything or nothing equally well.

Unfortunately this still sticks on defining us as intelligent as I see it, but finding a useful definition of intelligence is a tricky problem I'll leave for greater thinkers than I :)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

ideograms (5.00 / 1) (#90)
by ucblockhead on Tue May 15, 2001 at 12:28:27 PM EST

But again, you are assuming that the part of the brain that actually does the translation isn't similarly mechanistic.

The core of the problem, which really makes all of this argument utterly pointless, is that no one can point to an object in the world and say "that is conscious". Not even another human brain. Each of us knows that consciousness exists because we have one example, our own selves. We believe that it exists in other people, not because we have proof of such, but merely because of Occam's razor. It is the simplest assumption. But given that lack of proof, to point at anything else and say "that definitely has no consciousness" or "that definitely has consciousness" is to get beyond what can really be logically deduced.

A great way to show how utterly weak this question is: ask yourself, "Does a chimpanzee have a consciousness?" Prove that one way or another and you might have a start at answering this question.

So given that we have absolutely no way of really pointing to consciousness in the real world, there's really no way to say what it is and what causes it. And given that, arguing about it is like arguing about the number of angels on the head of a pin. Any answer is really meaningless, and just a reflection of what the answerer wants to believe.

This question, whether man is just a "machine" or not, is essentially unanswerable today. Maybe even forever.

Another thought experiment, which I think is more illuminating than the Chinese room: Imagine someone building a humanlike robot. This robot, in all important ways, responds exactly like a human. It does so because it is programmed that way. Using neural nets and in-depth training using human models, it is trained to react just as a human would, in every situation. Is it "conscious"?

Turing said "yes", in proposing the Turing Test. Searle would say no, because you can take it apart and see how it works. There's no understanding in there, just mathematical reactions to inputs. (In "The Emperor's New Mind", Penrose predicted that this would be impossible, but his line of reasoning is very different from Searle's, in that he knows that he is making an assumption and thus proposes some weird-ass "consciousness as quantum effect" thing in order to make up for it.)


-----------------------
This is k5. We're all tools - duxup
[ Parent ]

True (5.00 / 1) (#94)
by spiralx on Tue May 15, 2001 at 12:38:34 PM EST

Turing said "yes", in proposing the Turing Test. Searle would say no, because you can take it apart and see how it works. There's no understanding in there, just mathematical reactions to inputs. (In "The Emperor's New Mind", Penrose predicted that this would be impossible, but his line of reasoning is very different from Searle's, in that he knows that he is making an assumption and thus proposes some weird-ass "consciousness as quantum effect" thing in order to make up for it.)

Again, it's the different definitions that cause the problems. If we had any clue what consciousness was we might be able to come up with a workable definition, but as you say it's just something that we assume we have. Not very scientific :)

And IIRC Penrose's microtubule quantum-effect stuff has been disproven - quantum effects are too small and on too short a time scale to have any effect on even the finest-grained processes in the brain.

Unless something else is discovered, and I'm not betting against that ;)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Penrose (5.00 / 1) (#97)
by ucblockhead on Tue May 15, 2001 at 12:46:35 PM EST

Penrose was pretty careful to say that those quantum-effects were something that he thought might be the source. He was careful not to base the theory on it. However, without something concrete to point to, it loses much of its force.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]
it isn't circular (5.00 / 1) (#111)
by streetlawyer on Tue May 15, 2001 at 01:05:08 PM EST

It doesn't assume what it purports to prove. The assumptions come from the use of the words "mind", "conscious" etc, which are commonly used in ways which ensure that the Chinese Room does not speak Chinese.

Given this, it squarely puts the onus on anyone who wants to reform linguistic practice to explain why we should do so.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Yes, it does... (5.00 / 1) (#120)
by ucblockhead on Tue May 15, 2001 at 01:16:31 PM EST

Your second sentence shows that the assumption is there, whether Searle wants to admit it or not. The whole thing is dependent on the idea that the brain is not itself a simple rule-following mechanism just like the Chinese Room. But that is not proven, obviously, and it is exactly what Searle is claiming the Chinese Room argues against.

He builds up a thought experiment and says "See, that mechanistic model doesn't match my subjective bias of how I think the mind works, therefore the mind must work like I think it does".

And you can't use sloppy linguistic definitions to prove anything. Words like "mind" and "consciousness" are not rigidly defined enough to prove anything as they stand, linguistically speaking. It is like trying to build a machine to detect the color yellow. Everyone knows what "yellow" is, and will mostly agree on which objects are yellow. But if you are going to build a machine to prove something is yellow, you've got to specify what exact wavelengths of light define yellow. You can't pretend to prove something is yellow without doing this, even though a person can point to it and say "that is yellow". This has nothing to do with reforming linguistic practice. It is an artifact of the fact that linguistic definitions are not rigid in normal practice.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Introduction to Philosophy (5.00 / 1) (#127)
by streetlawyer on Tue May 15, 2001 at 01:34:31 PM EST

"you can't use sloppy linguistic definitions to prove anything" .... welcome to the last 70 years of philosophy.

Of course, it is not *logically* impossible for the Chinese Room to be a mind. However, the onus is decidedly not on Searle to show that this is true. What Searle does show is that there is *no reason to believe* that it is a mind. Which ought to be enough for anyone.

And how is a wavelength in hertz or Angstroms ever going to add up to a rigorous definition of yellow? Yellow is a first-person description; there is nothing in any wavelength which is incompatible with light of that wavelength being red.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

You've got it backwards. (5.00 / 1) (#136)
by ucblockhead on Tue May 15, 2001 at 01:47:28 PM EST

The point is that without a rigorous definition of yellow in hertz or Angstroms, you cannot "prove" something meets that definition.

In other words, you cannot prove that something is yellow, using the linguistic definition of yellow, because that linguistic definition of yellow is not rigorous enough. All you'll be able to say is "most people think that thing is yellow". If you want pure and utter proof that something is yellow, you've got to move beyond the normal definition of yellow, and rigorously define it in concrete terms.

The same goes with consciousness. You cannot use the normal definition to prove something is conscious because the normal definition is not concrete enough. You can only say "most people think that thing is conscious". If you want to get beyond that, you've got to have a more rigorous definition.

What Searle does show is that there is *no reason to believe* that it is a mind. Which ought to be enough for anyone.
But that hardly means much. Really, that's no more compelling than the Turing "if it quacks like a duck, it is a duck" Test. It was the same deal. If it acts conscious, there's no reason to think it's not. At best, Searle cast doubt on that, but that just leaves us back on square one, not knowing a damn thing.

Really, that's all the Chinese Room shows. In fact, the same argument means that I can't even be sure that you are conscious. You could just be the results of some mechanistic rule-based machinery. So could everyone. Maybe you are just all figments of my solipsistic imagination.

Which means we are left with the only logically valid position to take on this issue: "Fuck if I know".
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

the colour yellow (5.00 / 1) (#145)
by streetlawyer on Tue May 15, 2001 at 02:00:19 PM EST

The point is that without a rigorous definition of yellow in hertz or Angstroms, you cannot "prove" something meets that definition.

Even with such a definition, you can't prove that something is yellow. What if some light meets your rigorous definition, but isn't yellow?

Your belief that there can be no evidence other than "scientific", third-person evidence, or that there is no fact of the matter in cases where verification is impossible, is unwarranted, and leads to substantial complications elsewhere (particularly in the philosophy of mathematics). The argument that a Chinese Room doesn't speak Chinese is very good evidence that formal systems don't have semantic content.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

No, no, no, you are missing the point... (5.00 / 2) (#156)
by ucblockhead on Tue May 15, 2001 at 02:20:59 PM EST

I can't prove anything meets the linguistic definition of yellow because it is not rigorous enough. It is therefore impossible to prove that something meets the linguistic definition of yellow.

That's the point.

The closest we can come is to come up with a concrete definition of yellow, and prove that. But as you say, this is not the same thing. Close, though, and might satisfy people.

The same goes for "consciousness". We cannot prove anything using the linguistic definition of it. It is not concrete enough. So you are left with two choices: either you can give up, and say it is unprovable, or you can come up with a more rigorous definition of it and prove that.

What you can't do is trundle along, pretending that you can prove anything without a concrete definition. That is exactly what everyone is doing here.

As far as the Chinese Room goes, the trouble is that I've had enough cognitive science training to know that nothing has yet been found in the brain that is not as rule-based and mechanistic as that Chinese room. This is especially true in that we can see not just human brains, but a whole range of brains, from the human brains that we assign consciousness to, to things like worms and cockroaches that seem entirely mechanistic even in our limited understanding. Follow that range, and you find no clear-cut dividing line.

The Chinese Room shows that formal systems don't have semantic content only if you assume that semantic content can't be an emergent property of a formal system...
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

"proof" (5.00 / 1) (#166)
by streetlawyer on Tue May 15, 2001 at 02:47:17 PM EST

You seem hung up on proof in the sense of formal verification. If there is evidence to believe a thing, and no evidence not to believe it, what do you do?

Furthermore, this isn't going to help:

The Chinese Room shows that formal systems don't have semantic content only if you assume that semantic content can't be an emergent property of a formal system

Searle points out in later papers that this is giving away too much to the computationalists; the Chinese room isn't even a *formal* system unless someone who has access to semantic content starts to interpret its output in a certain way. So you can't treat something as having semantics unless you know that it does have semantics. Which is a problem for anything other than human beings.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Proof (5.00 / 2) (#172)
by ucblockhead on Tue May 15, 2001 at 03:07:11 PM EST

I'm not hung up on proof. It is just that, as I said elsewhere, I find the arguments on both sides equally lame. I would contend that the reason that one or the other argument appears "counterintuitive" is because people have different intuitions about how things are. In other words, it all boils down to people believing what they want to believe, and then drawing on lots of fancy words to try to make it seem as if it is more than just what they believed before they ever thought about it.

Which is a problem for anything other than human beings.
My challenge to you then is to tell me whether or not chimpanzees can do this, and how you'd one way or the other.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]
Er... (5.00 / 1) (#187)
by ucblockhead on Tue May 15, 2001 at 03:55:28 PM EST

How you'd tell, that is.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]
this is not what ai proponents are doing (5.00 / 1) (#221)
by eLuddite on Tue May 15, 2001 at 07:05:05 PM EST

What you can't do is trundle along, pretending that you can prove anything without a concrete definition. That is exactly what everyone is doing here.

That isn't quite what AI people are doing. AI people make semantics out of syntax because, in the absence of any "proof" that semantics exist, what possible alternative is there, right? In other words, they are implicitly denying ANY POSSIBLE ALTERNATIVE from the word go. You will recognize this position as faith even if they do like to refer to it as emergence.

---
God hates human rights.
[ Parent ]

whither elan vital? (5.00 / 1) (#224)
by eLuddite on Tue May 15, 2001 at 08:40:50 PM EST

Recent work by Chaitin suggests there may be real limits on what you can uncover of the physical world using mathematics. He was also the subject of a slashdot article, The Omega Number and Foundations of Math:
Among the more provocative statements in the article: '[Chaitin] has found that the core of mathematics is riddled with holes. [He] has shown that there are an infinite number of mathematical facts but, for the most part, they are unrelated to each other and impossible to tie together with unifying theorems. If mathematicians find any connections between these facts, they do so by luck.'
I haven't read any of these links and I have no real understanding of what the Omega number is, but if the synopsis on slashdot is accurate then surely the missing alternative could be one of these "holes." So really, making semantics out of syntax for lack of evidence can turn out to be a demonstrably unwarranted assumption.

---
God hates human rights.
[ Parent ]

The System Reply (5.00 / 2) (#25)
by streetlawyer on Tue May 15, 2001 at 09:23:18 AM EST

You've just given the Systems Reply, anticipated by Searle and never really successful. A lookup table isn't intelligent, and it has nothing to do with its size or speed. You can make Searle into a superman who can move at 99.9% of the speed of light, give him ten million arms and the ability to use them simultaneously; he's still never going to speak Chinese.

bwow. we've got a you running on a non-biological substrate, and we've got a saved you-state that we can refer to and stare at till we get bored.

Nope, you've got a complicated electric circuit. If you can interpret it, it's a me-state. If you can't, it isn't. Which is to say, intrinsically, it doesn't have the property of being me.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

yea, searle and his arms would be the hardware... (5.00 / 1) (#38)
by sayke on Tue May 15, 2001 at 09:51:23 AM EST

that the chinese-speaking mind runs on. the mind would be speaking chinese all by itself, thank you very much.

i can't interpret your bit about the complicated electronic circuit. last time i checked, you smelled exactly like a complicated electronic circuit. but, well, i couldn't interpret you, so you must not be... you, that is. go figure. no comprende, senior...


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

what mind? (4.50 / 2) (#42)
by streetlawyer on Tue May 15, 2001 at 10:04:40 AM EST

There is only one mind in the thought experiment (Searle's) and it doesn't speak Chinese. The whole point of the thought experiment is that the system is merely passing cards back and forth, without any regard to the fact that the characters on them can be interpreted as ideograms. Syntax isn't semantics.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
that mind *points* (4.00 / 2) (#67)
by sayke on Tue May 15, 2001 at 11:26:51 AM EST

the unintended consequence of searle's experiment is that it implies a mind emerging from the interactions between the cards, without any regard to the fact that the characters on them can be interpreted as ideograms. assert all ya want, man, but it looks to me like syntax becomes semantics.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

What mind? (4.00 / 2) (#69)
by spiralx on Tue May 15, 2001 at 11:31:56 AM EST

the unintended consequence of searle's experiment is that it implies a mind emerging from the interactions between the cards, without any regard to the fact that the characters on them can be interpreted as ideograms.

Where? Where does it? Explain where this mind is, please.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

the same place your OS is... (4.00 / 2) (#82)
by sayke on Tue May 15, 2001 at 12:13:41 PM EST

or do you think it doesn't exist either? minds, like operating systems, go in my "emergent property" box, and as such can be localized about as much as temperature can. but wait, you probably don't think that exists either... ;)


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

But (5.00 / 1) (#93)
by spiralx on Tue May 15, 2001 at 12:34:45 PM EST

You still haven't a) given a definition of mind or b) proven one has arisen from emergent properties.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

you didn't ask me to do so, but here ya go (5.00 / 1) (#100)
by sayke on Tue May 15, 2001 at 12:53:12 PM EST

mind: a model observable as a strong correlation or mapping between changes in one system and changes in another. changes first occur in one system (the modeled), and then the model in the second system (the modeler) is updated. the modeler tends to be quite a bit less bitwise-complex than the modeled; that is to say, the model tends to be lossy. i say thermometers embody mind, albeit only in a very small way.

and that's off the top of my head.

i think my definition of mind renders b) obsolete.
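a toy rendering of that definition (invented numbers, purely illustrative): the modeled system is a room's temperature; the modeler is a thermometer whose coarse reading tracks it, lossily.

import random

room_temp = 20.0  # the modeled system: more bits than the model keeps
for _ in range(5):
    room_temp += random.uniform(-0.7, 0.7)  # the modeled changes...
    reading = round(room_temp)              # ...and the lossy model updates
    print("actual %.2f -> modeled %d" % (room_temp, reading))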


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

you will at least admit (5.00 / 1) (#107)
by streetlawyer on Tue May 15, 2001 at 01:03:18 PM EST

that this is a very unusual definition of "mind", and certainly not the one Searle intended. Specifically, it does not seem to require that a mind be conscious.

Do you have any particular reason why we should stop using the word "mind" as we currently do and adopt your new definition instead? Other than that it would allow computers to be called "minds"?

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

i will admit it's unusual, certainly (5.00 / 1) (#123)
by sayke on Tue May 15, 2001 at 01:21:12 PM EST

but as i think consciousness is a matter of modularity and degree, my definition avoids (or rather, embraces) the slippery slope the more common definition finds itself on. i think consciousness arises when parts of your modeler start modeling each other. to my knowledge, thermometers only do that in a most trivial atomic-level sense.

i use my definition because it's quite specific, as opposed to the current definition, which, well, doesn't make for much of a definition... of course, we may be thinking of different current definitions; throw me yours.

the current definition i'm talking about goes a little something like this: "the human consciousness that originates in the brain and is manifested especially in thought, perception, emotion, will, memory, and imagination" and "the collective conscious and unconscious processes in a sentient organism that direct and influence mental and physical behavior" and "the principle of intelligence; the spirit of consciousness regarded as an aspect of reality"... which combine to make a damn near useless little definition.

my definition refers to the bitwise complexity of models. by doing so, it allows us to talk about how many bits a mind is made up of, which is always handy, at least as a rough benchmark. so you know, i'm not the first to think of this, but i forgot who the first was.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Okey dokey (5.00 / 1) (#190)
by spiralx on Tue May 15, 2001 at 04:08:40 PM EST

So which position are you holding in this argument? Is it that we have a mind by your definition and aren't conscious, that consciousness is some kind of illusion? Or is it that we have something extra which makes us conscious above and beyond your definition of a mind?

If it's the first then no wonder you're not accepting these arguments...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

number two (5.00 / 1) (#227)
by sayke on Tue May 15, 2001 at 09:54:19 PM EST

my position is that we exemplify conscious minds, where i use "mind" as "a model observable as a strong correlation or mapping between changes in one system and changes in another", and i use "conscious" as "that which arises when parts of a modeler start modeling each other".

it follows from this that i see economies, ecologies, languages, societies, bird flocks, and fish schools as somewhat-conscious living beasties in their own right. that i have no idea how to meaningfully communicate with them is, i think, beside the point.

just thought i'd preempt that little line of thought ;)


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Of course (5.00 / 1) (#257)
by spiralx on Wed May 16, 2001 at 04:23:17 AM EST

my position is that we exemplify conscious minds, where i use "mind" as "a model observable as a strong correlation or mapping between changes in one system and changes in another", and i use "conscious" as "that which arises when parts of a modeler start modeling each other".

By your definition we're not very conscious at all, because we are very unaware of the processes and states of our own minds at any time.

But the trouble is your definition of mind is so alien to anything we're talking about here that arguing the toss is pointless. It's sort of like having a conversation about living things and you defining living things as anything that displays motion of some kind...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

i think you misunderstand (5.00 / 1) (#282)
by sayke on Wed May 16, 2001 at 09:43:57 AM EST

according to my take on things, consciousness arises when modelers start modeling each other as well as more external things. we exemplify consciousness. hell, i'd say our awareness of the processes in our minds, and the states of our minds, sounds impossible without consciousness. that sounds like an excellent corollary to the "consciousness arises when modelers start modeling each other as well as more external things" bit, in fact.

of course, we don't need to be aware of the hardware-level workings of our minds any more than an OS needs to be aware of the workings of the processor it runs on... but i didn't say anything about that.

i'm trying to remember where i got the "consciousness arises when modelers start modeling each other as well as more external things" definition... i think it was dennett, but i'm honestly not sure.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

better definitions (5.00 / 1) (#289)
by speek on Wed May 16, 2001 at 10:18:11 AM EST

But the trouble is your definition of mind is so alien to anything we're talking about here that arguing the toss is pointless

But, from my perspective, you've redefined consciousness to mean that which only human minds can exhibit, which made the argument pointless. Developing better definitions of consciousness and mind is part of an effort to generate a better understanding of what it is exactly that humans DO when they "think". If we could understand that well enough, and if our technology were good enough, then presumably, we could recreate it.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

Really? (5.00 / 1) (#299)
by spiralx on Wed May 16, 2001 at 10:46:17 AM EST

But, from my perspective, you've redefined consciousness to mean that which only human minds can exhibit, which made the argument pointless.

I'm not sure I've defined consciousness anywhere at all, because it's a nebulous thing that's pretty damn tricky to pin down. If we ever come up with a decent definition, these questions would be closer to being solved...

And for the record, I do believe we'll eventually manage AI. Some day.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

not you, streetlawyer - sorry (eom) (5.00 / 1) (#304)
by speek on Wed May 16, 2001 at 10:59:35 AM EST

.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

this is the "Systems Reply" (4.50 / 2) (#89)
by streetlawyer on Tue May 15, 2001 at 12:28:22 PM EST

and is dealt with by Searle in the original paper far better than I could. At best, the Chinese Room is a description of a mind, but that isn't a mind any more than a description of a rainstorm will make you wet.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
Description of a mind. (5.00 / 1) (#256)
by i on Wed May 16, 2001 at 02:57:27 AM EST

I have a description of Linux on my computer, in the form of a bunch of files full of ones and zeros. Y'know what? I can load this description into an interpreter (my interpreter is implemented in hardware, but there are also software interpreters) and run it! Yeah-hoo! It runs! It's as good as Linux itself!

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]

Blind faith (5.00 / 1) (#114)
by Simon Kinahan on Tue May 15, 2001 at 01:09:03 PM EST

jsm's reply (which is similar to Searle's) is misdirection, in a sense, since to point to the human mind and claim it doesn't understand Chinese is not a proper answer to the claim that another mind can somehow emerge from a lookup table and a mechanism for reading it.

The proper answer, IMHO, is that to claim so is an act of indescribable blind faith on a level the pope would be envious of. How can that happen? We neither know what a mind is, nor how one arises, and we certainly don't know whether all you need to get a mind is to simulate one's external behaviour.

Simon

If you disagree, post, don't moderate
[ Parent ]
Speaking Chinese. (5.00 / 1) (#83)
by i on Tue May 15, 2001 at 12:15:08 PM EST

he's still never going to speak Chinese.

This is probably false, for two reasons. Layman's explanation follows.

I presume that by "speak Chinese" you mean "able to translate from his native English to Chinese and back".

  • First, Searle will be able to learn Chinese "by pattern". Imagine this:
    • a x a y b
    • a x b y c
    • b x b y d
    • ...
    After a while he'll understand that 'a' is 1, 'b' is 2, 'c' is 3, 'd' is 4, 'x' is +, and 'y' is =. Initially this is just a conjecture. As Searle gathers more and more data, he becomes more and more confident that this is true. (See the sketch after this list.) Further he learns that every M is associated with two Hs and each H is associated with five Fs. It does not take a genius to understand that we're talking about men, hands, and fingers. And so on and so forth.
  • Second, there's more to language than queries. There are also commands. "Sit down", "Show me your left hand", "Run". In the original setting Searle cannot obey such commands in Chinese, therefore he is not a Chinese speaker in the traditional meaning of the word. Extend his program so he can show his left hand, sit down, run, draw a picture of a dog, etc. on request -- and suddenly he speaks Chinese in a very real, conservative sense. That is, he's able to translate from Chinese to English and back.
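The first bullet's "learning by pattern" is mechanical enough to sketch (a brute-force toy of my own; the sentences encode p + q = r):

from itertools import permutations

# Observed sentences "<p> x <q> y <r>", conjectured to mean p + q = r.
sentences = [("a", "a", "b"), ("a", "b", "c"), ("b", "b", "d")]
letters = sorted({s for triple in sentences for s in triple})

for values in permutations(range(1, 10), len(letters)):
    mapping = dict(zip(letters, values))
    if all(mapping[p] + mapping[q] == mapping[r] for p, q, r in sentences):
        print(mapping)

# Prints {'a': 1, 'b': 2, 'c': 3, 'd': 4} and {'a': 2, 'b': 4, 'c': 6, 'd': 8}:
# with this little data the meaning is still a conjecture, exactly as the
# bullet says; more sentences would eliminate the spurious mapping.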



and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
Presupposition (5.00 / 2) (#98)
by joecool12321 on Tue May 15, 2001 at 12:49:58 PM EST

After a while he'll understand that 'a' is 1, 'b' is 2, 'c' is 3, 'd' is 4, 'x' is +, and 'y' is =. Initially this is just a conjecture. As Searle gathers more and more data, he becomes more and more confident that this is true.

You're presupposing that he can "think outside the system" while he is interpreting. Searle's point is that the system IS the lookup table. And is therefore devoid of intelligence.

What I mean is this: The man inside the box can only look up information. He can't understand that information at a different level. Even if there were terabytes of information, etc. -- he would still be looking up information, he wouldn't be able to understand internally what 'a' is.

--Joey

[ Parent ]

Searle's point. (5.00 / 1) (#243)
by i on Wed May 16, 2001 at 02:10:03 AM EST

The system IS the lookup table. And is therefore devoid of intelligence.
Non-sequitur. We are trying to figure out whether or not a lookup table can possess intelligence.

The man inside the box can only look up information. He can't understand that information at a different level.
This is the right direction. What does it mean to "understand [...] information at a different level"? Why is some different level required for understanding? What is that level, precisely?



and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
Technically no (5.00 / 1) (#261)
by spiralx on Wed May 16, 2001 at 05:58:03 AM EST

Non-sequitur. We are trying to figure out whether or not a lookup table can possess intelligence.

No, we're trying to find out whether or not the Chinese Room has intelligence. The physical mechanism of the lookup table (IF a THEN x ELSE IF b THEN y etc.) is not intelligent. You wouldn't call a neuron intelligent, would you?

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Same thing. (5.00 / 1) (#267)
by i on Wed May 16, 2001 at 06:20:54 AM EST

The Chinese room is a lookup table plus some simple, obviously non-intelligent physical mechanism (IF a THEN x ELSE IF b THEN y). Now, a neuron is not intelligent, but 10 billion neurons wired together, plus some simple, obviously non-intelligent physical mechanism (voltage/chemical changes), may well be. What about a lookup table with 10 billion entries plus some simple, obviously non-intelligent physical mechanism?

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
But (5.00 / 1) (#269)
by spiralx on Wed May 16, 2001 at 06:46:20 AM EST

The whole point of this argument is over whether the mind is simply a Turing machine or something else... The Chinese Room experiment shows that while the Chinese Room may produce the right answer, it does so without any semantic concepts at all, whereas we have such concepts.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Ha! (5.00 / 1) (#270)
by i on Wed May 16, 2001 at 06:50:26 AM EST

This is the key question. What are those "semantic concepts" people keep bragging about? And why are they considered necessary components of an intelligent mind?

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
I think you're orthogonal (5.00 / 1) (#106)
by streetlawyer on Tue May 15, 2001 at 01:01:21 PM EST

Your second point is clearly wrong. There are tetraplegic Chinese who speak Chinese.

Your first point just seems to boil down to saying that Searle could learn Chinese if he had a mind to, which also doesn't establish anything about AI.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Tetraplegic. (5.00 / 1) (#240)
by i on Wed May 16, 2001 at 02:05:17 AM EST

What's that? I didn't find it in a dictionary.

I don't want to establish anything, just push people's thinking in the right direction. What does it mean to "know Chinese" for somebody who only speaks Chinese? How do you define "meaning" and "semantics"? Think about it.



and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
Why the Systems Response is good... (5.00 / 1) (#387)
by _cbj on Thu May 17, 2001 at 07:59:51 AM EST

...Or, at least, not demonstrably bad.

You can make Searle into a superman who can move at 99.9% of the speed of light, give him ten million arms and the ability to use them simultaneously; he's still never going to speak Chinese.

But the room can, and however forcefully Searle assumes panpsychism is absurd, he's a long, long way from proving it.

(I like this guy's support of the System's Reply (starting p.93), but the bloody pdf isn't copyable and I'm too lazy to paraphrase.)

[ Parent ]

No it doesn't (5.00 / 1) (#389)
by spiralx on Thu May 17, 2001 at 08:44:42 AM EST

But the room can, and however forcefully Searle assumes panpsychism is absurd, he's a long, long way from proving it.

Again, how does the room speak Chinese? It is just following a set of rules. You can only really take the position that the room speaks Chinese if you believe that semantics doesn't exist, that we have no understanding of how things relate to the real world and we just manipulate symbols.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Damn straight. (5.00 / 1) (#395)
by i on Thu May 17, 2001 at 09:27:03 AM EST

Semantics is how we relate syntactic stuff to the real world. We can only do so because we have sensory inputs (video/audio/tactile/whatnot) that reflect the real world. The Chinese room does not have sensory inputs. So it can't relate stuff to the real world. It has no conception of the real world, simply because it's deaf and blind and paralyzed. It can only read Braille, so to speak.

So look at what happens. We have a model of the real world that we build through our senses. And we have another thing, the language. And we have a mapping between the language and the sensory model of the world. It's a fairly accurate mapping because we can describe most of the world adequately. The part of it that we can sense. So in fact the language is another model of the real world because of this mapping.

So we have this mapping, and we call this mapping "semantics" or "meaning". The Chinese room doesn't have this mapping, because it does not have a sensory model of the real world. It only has the language. But the language is, as we established, a good model of the world. So in fact the Chinese room has a language, a model of the world (which is the language itself), and a mapping between the two (which happens to be the identity mapping).
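That picture is concrete enough to sketch (an illustration with made-up percepts, not the poster's own): meaning as the mapping from symbols to a sensory model.

# The sensory model: what the senses report about the world (invented here).
sensory_model = {
    "big grey rock in the night sky": {"size": "big", "place": "sky"},
    "wet stuff falling": {"state": "liquid", "place": "sky"},
}

# The mapping we are calling "semantics": symbol -> percept.
semantics = {
    "moon": "big grey rock in the night sky",
    "rain": "wet stuff falling",
}

def meaning(symbol):
    # Ground a symbol in the sensory model, if the mapping covers it.
    percept = semantics.get(symbol)
    return sensory_model.get(percept, "no grounding: just a symbol")

print(meaning("moon"))  # {'size': 'big', 'place': 'sky'}
print(meaning("qzx"))   # "no grounding: just a symbol" - the bare room's plight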

What now? We can give the Chinese room senses. We can attach a camera/monitor pair and a microphone/speaker pair to it, and we can scramble both signals so that the human inhabitant of the room does not recognize them. But he can still mentally process them as some abstract painting and music. And we can also extend his (huge, self-modifying, self-optimizing) lookup table with entries for the audio/video input. So the Chinese room not only speaks Chinese now, it sees the world and can relate what it sees, in Chinese.

The inhabitant of the room still does not speak any Chinese; the room still does, and now it can relate Chinese to the real world. This is what we call "semantics" and "understanding". Searle loses. Case closed.



and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
Ah, the robot reply! (5.00 / 1) (#397)
by spiralx on Thu May 17, 2001 at 09:35:19 AM EST

Okey dokey, basically we've now got the Chinese Room inside a robot which can see, move and generally interact with the outside world. But of course, Searle inside can't, as before, right? He can only take the inputs supplied to him from the robot, and produce outputs which make the robot move.

But he still doesn't understand Chinese, does he? All he's doing is just manipulating formal symbols as before. And now you've got the additional worry that you've basically moved to saying that cognition is not solely a matter of symbol manipulation, but that it must also entail a set of causal relations with the real world. So you've already ruled out any kind of self-contained AI...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

But of course (5.00 / 1) (#399)
by i on Thu May 17, 2001 at 10:10:01 AM EST

he's manipulating formal symbols. Which does not prevent them from having semantics. Which is defined as a formal-system-to-real-world mapping. If you don't like this definition, please supply another (gosh, I'm asking people to define "semantics" and "meaning" all the time). And of course there IS a possibility of self-contained AI, because the mapping is still there, only less accurate and more static. Causal relation to RW (or indeed any relation to RW) is not a requirement in my view, but it's nice to have one.

Read my comment again. The robot is not important. Definitions of "meaning" and "semantics" are important. As soon as you recognize that you define "semantics" as "that part of my mind which I don't understand" and drop that definition, everything becomes clear.



and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
No, I agree with your definition (5.00 / 1) (#403)
by spiralx on Thu May 17, 2001 at 10:30:23 AM EST

But still, little Searle inside the Chinese room inside the robot doesn't know Chinese, so he can't make any such mapping from the symbols he's manipulating to the real world.

And just in case you were going to mention the fact that although he doesn't know it himself what he is doing does involve semantics because the cards are ideograms that represent real things, read this for a brief overview of Searle's position. I'm a little shaky on this ground so I won't bother arguing it, but I still believe Searle is right overall, it's far too convincing an argument for me :)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Searle doesn't know Chinese. So? (5.00 / 2) (#406)
by i on Thu May 17, 2001 at 11:16:13 AM EST

Who cares? He's but a small part of the system, and an insignificant one at that. Easily replaceable by a PC. The Chinese room itself can make such a mapping -- ask it! It knows about the real world -- ask it! Keep in mind that, as far as YOU are concerned, there might be an English room inside my head with a little Chinese boy inside the room. Do you care?

Now for something different. You are asked a question. Some neurons fire in your head, you produce an answer. That's how your head works, and you have not the faintest idea how all these neuron firings produce the correct answer.

With the Chinese room, you can look inside and see a lookup table, full of Chinese sentences, and Searle. That's how it works, and you understand how it produces the correct answers.

Now imagine that Searle, instead of looking up the table, manipulates a big model of somebody's brain, presented as a board game. He's moving coloured pieces that represent molecules and charges between playing fields that represent neurons, according to a set of rules. When certain pieces move to certain fields, an external output device is fired. This device produces Chinese speech. As far as I'm concerned, this is the same Chinese room. And poor Searle still doesn't know Chinese! Damn.

You see, the weak point of Searle is this:

since the symbols it processes are meaningless (lack semantics) to it [the AI] it's not really intelligent
He does not define what it means to be "meaningless (or meaningful) to it". Formally, semantics is objective. It's just a mapping between arbitrary structures. This is a simplification of course. But as soon as I try to make it less simplified, suddenly I only begin to know (or, rather, feel) what's meaningful to me. As for other people, I have to ask them first. But the AI will answer me, just like any living person would. So how would I know it lacks internal semantics? Just because I cannot recognize relevant structures in a giant lookup table? But I cannot recognize them in a giant bunch of neurons either. I can only feel that they are present in my own mind. By extension I presume they are present in any mind. How do I know a giant lookup table does not contain a mind? I ask it, it answers "sure I have a mind, and internal semantics too". Back to square one.



and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
Well there you go then (5.00 / 1) (#414)
by spiralx on Thu May 17, 2001 at 12:05:36 PM EST

You don't believe in semantics, or that they're pointless. We disagree on this matter, and over our interpretations of the Chinese room. While I can respect your position, I can't hold it :)

I think it's time to agree to disagree... I've posted over 40 comments on this story I think, and I'm getting RSI :)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

WHAT? (5.00 / 1) (#417)
by i on Thu May 17, 2001 at 12:24:14 PM EST

Did you read what I wrote? I defined what I mean by "semantics" and now you say I don't believe in semantics? I don't get it.

Last post on subject -- going on vacation. See ya in a week.



and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
Heh, my mistake (5.00 / 1) (#451)
by spiralx on Thu May 17, 2001 at 05:22:52 PM EST

Sorry, that first sentence didn't quite come out as I intended. What I meant was you believe semantics arises from syntax, I don't. It seems that for you, the Turing test is enough to prove strong AI...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Ah, Haugeland's Little Demon (5.00 / 1) (#431)
by _cbj on Thu May 17, 2001 at 01:31:18 PM EST

An excellent defence of the Systems Reply, which by rights should have put to rest the Chinese Room as an anti-AI argument.

[ Parent ]
Not quite (5.00 / 1) (#449)
by spiralx on Thu May 17, 2001 at 05:18:30 PM EST

As I see it that's a variation on the Brain Simulator reply rather than the Systems reply. The Brain Simulator reply is a lot more tricky, and I'm open to debate on it, unlike the Systems reply which is quite easily dealt with.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

fooey (5.00 / 1) (#452)
by _cbj on Thu May 17, 2001 at 05:41:35 PM EST

The Brain Simulator reply is a lot more tricky, and I'm open to debate on it, unlike the Systems reply which is quite easily dealt with.

Quite easily dealt with but you aren't willing to discuss it?

[ Parent ]

What is the Point. (5.00 / 1) (#501)
by acronos on Fri May 18, 2001 at 02:20:35 PM EST

What is the point of putting little Searle inside the Chinese room? All you are doing is confusing yourself. It takes a much simpler situation and makes it more complicated. Now no matter how resoundingly you are proven wrong, you can just go back to your tidy little "but Searle doesn't know Chinese." Why doesn't Searle know Chinese? Because you defined it that way in the way you created the problem. You have put it in a frame that is not the way reality is. Step outside the box. Think outside the box and look at the box as a whole. That is all that matters, not how it is built on the inside.

[ Parent ]
Yes, it really does (5.00 / 1) (#428)
by _cbj on Thu May 17, 2001 at 01:14:54 PM EST

Chinese is being spoken (well, issued). We have a room. We have a guy in the room who denies being able to speak Chinese. I know where I'm pointing the finger.

[ Parent ]
Well then (5.00 / 1) (#448)
by spiralx on Thu May 17, 2001 at 05:16:40 PM EST

We disagree over the interpretation. The issue isn't whether the room speaks Chinese, because it obviously does, but over whether it understands Chinese, which I and Searle say it doesn't. You disagree, so let's leave it at that before we start repeating ourselves again... :)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

To be mean spirited about it... (5.00 / 2) (#458)
by _cbj on Thu May 17, 2001 at 06:27:02 PM EST

We disagree over the interpretation. The issue isn't whether the room speaks Chinese, because it obviously does, but over whether it understands Chinese, which I and Searle say it doesn't. You disagree, so let's leave it at that before we start repeating ourselves again... :)

Okay, but one last go first. And I don't believe the room understands, just that it isn't possible to prove otherwise.

Reminding myself of the details... Searle's Reply to the Systems Response is that Searle could memorize all the syntax manipulation rules, translate Chinese to English with them, yet still not understand Chinese.

Fine. This is easy to deal with, like any of Searle's Room arguments, for the simple reason that the premise isn't possible. The syntax manipulation rules for translation must change over time to get references translated to a good isomorphism, and, as they obviously aren't able to predict the future of the universe, fail. If the rules really were to be adequate, they must in fact be another human or a conscious AI (because, remember, we're translating for people), and Searle could model neither in his brain because it's his brain, not theirs.

Note that this is different to complaining about the speed being impossible, because the analogy doesn't merely scale a dimension, it fucks reality square in the arse.

At this stage the anti-AIs would be left arguing that functionally conscious isn't the same as internally conscious, or that there's some physical property lacking that's present in conscious-by-definition things like humans. These are arbitrary, religious debates, and should we reach the stage where they're occurring, I would have long since bowed out of the chatter to work on putting my new AI into something shaggable.

[ Parent ]

amen brother (5.00 / 1) (#500)
by acronos on Fri May 18, 2001 at 02:06:42 PM EST

It is like arguing the existence of God with these people who don't want to see.

Yes, I know you're not my brother. But I wholly agree and thought the religious analogy humorous. No offense intended.


[ Parent ]
But the room still isn't conscious (5.00 / 1) (#538)
by fragnabbit on Mon May 21, 2001 at 03:54:10 PM EST

Just because the room can speak Chinese doesn't mean that it understood the Chinese or, even more, that after the "test" it can discuss with the Chinese "people" what just occurred.

I think that the Chinese room proves that the Turing tests don't show intelligence or "consciousness". I don't see why that is offensive to AI folks. Get a different test, that one's flawed.

I'm not saying that Searle proves that artificial intelligence can't exist, but I would say that the Chinese room experiment proves that the Turing tests don't prove that it does.

[ Parent ]

Quite so (5.00 / 3) (#539)
by _cbj on Mon May 21, 2001 at 05:53:44 PM EST

My point was to show just a couple of reasons why Searle's otherwise excellent thought experiment proves nothing.

As far as I'm concerned, the Turing Test cannot be bettered. There is simply no way to determine subjectively that the room understood, or that an AI would understand, without attempting to resort, as Searle does, to prospective results from neuroscience that can say absolutely nothing about the general nature of intelligence (such as what can "have semantics"), only about human intelligence.

[ Parent ]
Is there no way to tell? (5.00 / 1) (#543)
by fragnabbit on Tue May 22, 2001 at 08:52:14 AM EST

I agree, Searle doesn't disprove that understanding can be created, but he proves that a series of questions will not help to determine the existence of understanding.

But is there no other way to determine consciousness? For example, some things I would like to know: is it artistic in any way? Can it draw, write a song, a poem, something original? How does it feel? Why does it think it's conscious?

It's a hard question, to be sure. And, like Asimov's Bicentennial Man, it would have to show a lot of characteristics, such as caring, feeling, and individualism, for the human race to accept it as anything more than a machine.

Perhaps the key is it wanting to prove to us that it is more than a machine, not us wanting to test it.

[ Parent ]

Later than that (5.00 / 1) (#17)
by _cbj on Tue May 15, 2001 at 08:57:32 AM EST

Whatever "connotations" these things have...

They don't have any, we haven't built them yet. Or got anything like enough of the theory in place. Or enough useful results from trial and error. Or anything. Laaaaater, my friend.

And Searle's Chinese Room tells us that syntactical manipulation isn't ever going to add up to consciousness.

Searle's Chinese Room tells us syntactical manipulation alone isn't consciousness; the "adding up" part is very much wishful thinking on Searle's part.

Neurone simulation looks like a more promising avenue to me, though I doubt that anything based on a Turing-computable algorithm is ever going to make the grade.

There'll probably be a lot of neuronal stuff in a successful artificial consciousness, but there's no need to ignore top-down structuring in combination: it isn't cheating. Minsky's paper "Symbolic Vs. Connectionist" is good on this.

[ Parent ]

Not even that (5.00 / 1) (#109)
by Simon Kinahan on Tue May 15, 2001 at 01:04:17 PM EST

And Searle's Chinese Room tells us that syntactical manipulation isn't ever going to add up to consciousness.

Even this grants too much to the AI side of the debate. Computers don't manipulate symbols. We program them so that little changes in voltage cause them to *appear* to manipulate symbols. The distinction is important. No one claims engines manipulate cars - you just put fuel in them, and by a well-understood sequence of events the car is made to move. Saying computers manipulate symbols is like claiming the engine realises it has fuel in it and makes the car move. It's attributing intentionality to the computer: precisely what Searle says we have no reason to do.

(I like your new sig, btw)


Simon

If you disagree, post, don't moderate
[ Parent ]
The hardware is not aware. The software may be. (4.50 / 2) (#234)
by vectro on Tue May 15, 2001 at 11:12:21 PM EST

But that's the very point of the article - that although the hardware may just be a bunch of logic gates, so are our brains just a bunch of neurons.

It's the software that can be aware, not the hardware.

“The problem with that definition is just that it's bullshit.” -- localroger
[ Parent ]
All at once (5.00 / 1) (#113)
by ucblockhead on Tue May 15, 2001 at 01:08:10 PM EST

My point was that a human mathematician's understanding of pi was significantly different from a computer's, because the human mind could grasp the concept of pi "all at once"...
The problem is that "all at once" is a meaningless term unless you go into detail about what it means. Really, those are just subjective weasel words, and to the extent that they have meaning, the claims are unproven.

It is pretty simple to write a program that pretends to understand pi. Ask it what pi is, it'll tell you. Ask it to use pi in mathematical equations, it can. Does it "understand" pi? Well, we can argue about it all day and never get anywhere, because the word "understand" is not defined well enough to settle the question. One person (we'll call him "Marvin") will point to the program and say, "see, it uses pi as if it understands it 'all at once'", while the other (we'll call him "John") will say, "no, it's just gears and pulleys underneath". Marvin says, "But the mathematician is just neurons firing underneath". John says, "But that's not the same". Marvin says, "Yes it is". "No it's not". "Yes it is". Ad infinitum.
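
For concreteness, a minimal sketch of such a pretender (Python; the canned phrases and the answer function are invented for illustration, not anyone's actual program):

    import math

    def answer(question):
        # No concept of pi anywhere in here: just string matching
        # and a stored constant.
        q = question.lower()
        if "what is pi" in q:
            return "pi is the ratio of a circle's circumference to its diameter, roughly %.15f" % math.pi
        if q.startswith("area of circle with radius "):
            r = float(q.rsplit(" ", 1)[-1])
            return "pi * r**2 = %f" % (math.pi * r * r)
        return "I don't know."

    print(answer("What is pi?"))                   # recites a definition
    print(answer("area of circle with radius 2"))  # pi * r**2 = 12.566371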

This isn't going to change, because these concepts just aren't defined. You've got the hardcore AI guys on one end who write code like "int LearningRate;" and think that this means their program learns, and you've got the mystics on the other end convinced that human beings have some sort of mystical soul. Both define their terms to suit themselves, and neither has the logical rigor to tie their definitions to actual human experience.


-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Hmm... (5.00 / 1) (#117)
by trhurler on Tue May 15, 2001 at 01:13:05 PM EST

Well, screw it. I've given up on not replying; I may regret it, but the problem is, this sounds like such a reasonable discussion... hehe. Anyway, let's see here...
Those unfortunate people who are forced to care about roundoff and truncation error in numerical computing probably have more of an intuition for what I mean when I say that when a human mathematician divides through by pi, or by some other transcendental number, he is doing something fundamentally different from when a computer tries to carry out the same process using floating point arithmetic.
Well, I've done this, and you're right. However, a computer can be set up to deal with such numbers symbolically - in the same way a mathematician does it. There are programs for this purpose. Presumably, any program which was even claimed to be a mind would be capable of symbolic manipulations in a very general way; the fact that programs we have today do not do this, or that the bare hardware itself has no notion of it, is not in itself a disproof of the potential to write software which does so.
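
For instance, a quick sketch of what the symbolic route looks like, using the sympy library (any computer algebra system would do the same job):

    from sympy import Rational, pi, sin

    x = 2 * pi * Rational(1, 3)  # held exactly as the symbol 2*pi/3
    print(x)                     # 2*pi/3
    print(x / pi)                # 2/3 -- dividing through by pi is exact
    print(sin(pi))               # 0, known symbolically, not from digits
    print(x.evalf(30))           # only now do we approximate, to 30 digits
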
Second, I don't think that this article establishes its point that consciousness is independent of physical structure.
You're right, and furthermore I am nigh on certain that consciousness is in fact related to structure, but this does not necessarily mean the structure can't be embodied in the software itself. Of course, that it can is an unproven assertion also. My opinion is that anyone who claims to know whether a software mind is possible either has done it and isn't telling anyone or else is more certain than he can justify.
Third, the reason that we can't count computer programs as conscious is that the interpretation of their output is dependent on us.
This is true in a limited sense. Computers do interact on a daily basis with one another, using codes which people have assigned meaning to. In the end, though, we are conscious and they are not, so "meaning" comes from us. In that sense, you're correct - but only because we're conscious and they're not. The argument is somewhat circular in that respect - if a program can be written that achieves consciousness at some point during its runtime, then clearly it would assign meaning to its percepts and its mental constructions of them, just as we do. Granted, not every bit of data that made up that mind would be so treated, but do you really think every bit of data present in your head is part of what you call your consciousness, as opposed to part of a framework that supports it? We probably have our very own "programs," of which we are totally unaware, though there is no evidence that they were written as we write programs.
NB: I am referring to the *sensation* itself, not to any of its physical (neural) or functional (body-damage-indicating) concomitants.
Other than memory of the event, which itself may consist in nothing but reflexive firing of nerves, and the physical effects on nerves (not just peripheral, but the ones they affect in your brain also), what is the sensation? Does it have existence independent of those things? I don't think we've ever had any reason other than religion to believe that it does, and as I mentioned elsewhere, once you inject religion into this discussion, it is no longer a discussion; either you believe, or you don't.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
gedankenexperiment (5.00 / 1) (#158)
by streetlawyer on Tue May 15, 2001 at 02:25:46 PM EST

Other than memory of the event, which itself may consist in nothing but reflexive firing of nerves, and the physical effects on nerves(not just peripheral, but the ones they affect in your brain also,) what is the sensation? Does it have existence independent of those things? I don't think we've ever had any reason other than religion to believe that it does

Our reason for believing in the existence of pains separately from their physical concomitants is logical. It's based on Ned Block (IIRC)'s zombie gedankenexperiment. It is possible to imagine zombies; creatures which are physically identical to us, but which have no inner life; no pains, tingles, colours, etc. A zombie that pinched its arm would have exactly those physical phenomena which I have, but would not feel pain. Since this does not appear to be contradictory, we can say that being painful is not a necessary property of any physical process. However, being painful is (trivially) a necessary property of a pain-sensation. Therefore, because they have different properties, pain-sensations are not identical with any physical object.

The main responses to this gedankenexperiment are a) there are no such things as pains (Dennett), which is extremely counterintuitive, or b) the gedankenexperiment is actually contradictory, because pains are physical phenomena, albeit irreducibly subjective ones (Searle).

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Hmm again... (5.00 / 1) (#168)
by trhurler on Tue May 15, 2001 at 02:50:45 PM EST

This whole thing depends on the notion that because you can imagine such a zombie, it can exist; on the contrary, I would say that the possibility of the existence of such a zombie is just another way to express the possibility of "inner life" being independent of physical medium, and that neither is any more certain than the other. I'm vaguely reminded of the old claim that the ability to imagine a perfect being necessitates the existence of God, but the analogy is imperfect.

In any case, I'm not surprised by Dennett's response; that seems to be his response to just about everything relating to the mind. I'm reminded of a philosophy instructor who brushed off a mention of Dennett with a rather disdainful "ah yes, the author of 'Consciousness Explained' - away." I'm not quite sure what Searle meant, but I'm almost certain that if he thought there are physical phenomena which cannot be measured given some arbitrary degree of sophistication, then he was wrong.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
Descartes' proof of god (none / 0) (#216)
by delmoi on Tue May 15, 2001 at 06:02:25 PM EST

You're talking about Descartes' proof of God, which is rather stupid. It goes like this:
  • man is aware of his imperfection
  • if we are aware of imperfection, we must have a concept of perfect
  • That concept of perfect cannot come from ourselves
  • so it must have come from a perfect creator
The problem, though, is that it rules out imagination. You could easily replace 'perfect' and 'perfect creator' with 'space alien' and it would make just as much sense. Human beings are perfectly capable of imagining something they have never seen.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
not quite (5.00 / 1) (#244)
by streetlawyer on Wed May 16, 2001 at 02:11:25 AM EST

the point is that if you can conceive of something, it isn't *logically* impossible. Zombies almost certainly can't and don't exist; but they're not contradictory, and therefore being painful can't be a *necessary* property of brain states in the way that not being married is a necessary property of bachelors.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
but you haven't conceived of the zombie (5.00 / 1) (#286)
by speek on Wed May 16, 2001 at 10:02:09 AM EST

The problem is, I don't believe you've really "conceived" of anything. I think you just wrote some words on your screen and sent them off. They appear entirely meaningless to me. It's not unlike your assertion that you can "understand" pi whereas a computer can't. I don't believe you.

I could just as easily assert I've conceived of a married bachelor. What are you going to do about it?

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

Well, (5.00 / 1) (#312)
by trhurler on Wed May 16, 2001 at 11:40:20 AM EST

There are three categories here now. Things that can exist, things that "are logically possible," and things that are absurd. The problem is, "logically possible" is tricky. Pain either is or is not a caused property of the finite system known as a human body. If it is, then you cannot eliminate it without modifying that body. If it is not, then you can. This is the realm of logical possibility for this case.

You claim that the zombie is logically possible, and that therefore pain is not a caused property of the system; that it is independent of the system in some way. The problem I have with this is that the zombie is the justification for the conclusion you reach, which itself is the necessary justification for the logical possibility of the zombie.

I have not offered and probably cannot offer any proof at this time that pain is a purely physical phenomenon caused by the structure of the body. However, I still do not believe there is any reason to think otherwise. The only question hinges on exactly what you thought when you used the words "logical possibility," but I cannot conceive of any meaning stronger than "I can imagine this" which justifies your position; if you have one, I'd be very interested to hear it.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
An actual experiment (sort of) (5.00 / 1) (#177)
by ucblockhead on Tue May 15, 2001 at 03:22:15 PM EST

This may mean absolutely nothing, but I remember a description of a poor sap who had lost his long-term memory because of a brain injury. (In many ways, cognitive science is the study of weird brain injuries.) He could only remember things that happened in the last fifteen minutes or so, and only while his attention was on them. Leave the room and return, and he'd act like he'd just met you.

The reason that this is on topic is the fairly disturbing contents of the journal he kept. It was filled with phrases like "I have just woken up" and "I have just become conscious", over and over, page after page. What is fascinating is that the subjective experience was so strong that he continued to write this even in the face of the obvious evidence in front of him: "No, now I'm really conscious". The guy thought he truly had been one of those zombies.


-----------------------
This is k5. We're all tools - duxup
[ Parent ]

not a zombie (5.00 / 1) (#259)
by streetlawyer on Wed May 16, 2001 at 05:06:35 AM EST

If you stood on that guy's foot, he'd still feel it. "Zombies" in the philosophical sense can behave as normally as you like. But they lack "qualia", to use the jargon word -- they don't have sensations.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
Yes, but... (5.00 / 1) (#291)
by ucblockhead on Wed May 16, 2001 at 10:25:58 AM EST

The point was, he thought he had been a "zombie" and was just "waking up". Constantly. In other words, he was not a trustworthy observer of his own prior state of consciousness.

I'm not trying to prove anything with this, just pointing out an interesting intersection between the perception of consciousness and memory.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Pain (5.00 / 1) (#363)
by acronos on Wed May 16, 2001 at 09:43:01 PM EST

If I cut all the pain nerves going to the brain then I have created essentially your zombie. But all I have done is PROVEN that pain is a completely PHYSICAL phenomenon at this level. This is no proof that it is not a physical phenomenon in the brain.



[ Parent ]
confused (5.00 / 1) (#131)
by speek on Tue May 15, 2001 at 01:38:31 PM EST

I'm sorry, but you are extremely confused. You assert that humans have an ability to comprehend something about pi that computers can't, and thus, computers can't know pi the way humans can. There is no proof here, no logic or reasoning. You are simply refusing to accept that something non-human could have the same qualities that we call "consciousness". The way you talk about it, consciousness would have to be defined as that quality that only humans have, in which case, of course computers would not be conscious.

You have several paragraphs which repeat this circular reasoning (I like the one where computers can't have sensations because they can't have private states because there's no interpreter - which, lo and behold, is because you've already assumed that only humans can interpret). In the end, you are simply proving your intuitions by accepting them as valid. Your real problem is that you can't accept "consciousness" without an element of magic, and the human machine still contains enough mystery and unknowns to hold a magical element that you choose to cling to.

I would recommend reading some Dennett and Hofstadter to offset the Searle.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

put up (5.00 / 1) (#155)
by streetlawyer on Tue May 15, 2001 at 02:18:59 PM EST

Errrr... but aren't you just gainsaying each of my assertions, giving no reason why I should adopt any other point of view?

In saying that states can be assigned meaning outside of human interpretation, you are asking me to believe that these states have objective meanings, actually existing in the world. This is a far weirder belief than the belief that consciousness is not the same thing as syntactic manipulation. You are the one who wants to believe in "magic"; you want it to be magically true that the coloured lights on your screen reflect any reality other than one of electrons and switches. I've read all the strong AI crowd; for a while, I used to be a Dennettite. But he never came up with a decent explanation of how meanings or qualia could have third-person existence. And nor, I suspect, will you.

Dennett, by the way, ended up telling some pretty ugly fibs about Searle.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

hello (5.00 / 1) (#186)
by speek on Tue May 15, 2001 at 03:51:08 PM EST

Saying that states can be assigned meaning outside of human interpretation is not the same as saying those states have objective meanings. Just because you intuit that only humans can interpret doesn't make it so.

you want it to be magically true that the coloured lights on your screen reflect any reality other than one of electrons and switches

No. I'm down with the electrons and switches. I'm also down with the neurons and synapses. I don't need an explanation of how meanings could have third-person existence - I'm not a dualist. I'm thoroughly a materialist, and if humans can do it, then so can other physical things, because there is no a priori difference between them.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

don't be silly (5.00 / 1) (#241)
by streetlawyer on Wed May 16, 2001 at 02:08:17 AM EST

Saying that states can be assigned meaning outside of human interpretation is not the same as saying those states have objective meanings.

"Assigned" by whom? I await your non-question-begging response without baited breath.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

wait anyway you like (5.00 / 1) (#273)
by speek on Wed May 16, 2001 at 08:04:25 AM EST

Assigned by whatever conscious agency is available. If it's a human, ok. If it's non-human, that's ok too. If you're arguing that consciousness is something only humans can have, I fail to see how that's different from arguing that consciousness is something only I can have. What is your basis for making this distinction between the human brain and all other possible mechanisms?

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

counting data points (5.00 / 1) (#285)
by streetlawyer on Wed May 16, 2001 at 09:58:57 AM EST

For human brain: 1 (mine)
For cats & dogs: some circumstantial evidence; particularly, that their brains look like mine.
For tadpoles & beetles: not very much; their brains don't look at all like mine, but they're at least made out of the same sort of stuff.
For silicon chips: none whatever.



--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

now you're being silly (5.00 / 1) (#300)
by speek on Wed May 16, 2001 at 10:53:08 AM EST

You are making conclusions based on the form of the object in question, which leads me to conclude that, for you, the definition of consciousness is hard-linked to the form of the matter in question. That's no different from saying that consciousness is a unique aspect of matter composed as human brains are composed.

I don't find that useful - certainly not worthy of discussion. If, on the other hand, you were to consider what actions, behaviors, communicative abilities are required to exhibit consciousness, then we could have a meaningful conversation about whether we think computers could ever be conscious.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

Exactly the problem (5.00 / 1) (#316)
by ucblockhead on Wed May 16, 2001 at 12:03:38 PM EST

This is exactly the problem I have with your argument. You are saying that "consciousness" exists in things that look like your brain, yet you don't provide any logical argument for it other than that your brain is conscious. That's like someone who's never seen a car claiming that only things with legs can move under their own power, because the only things he's seen that move under their own power have legs.

Obviously not an unreasonable position to take, but also very obviously not a reasonable position to claim must be true to any degree of certainty.

You are claiming certainty yet relying on a very slight amount of circumstantial evidence.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

two arguments (5.00 / 1) (#325)
by streetlawyer on Wed May 16, 2001 at 12:31:50 PM EST

Be very clear what I'm claiming here. I have no problem with the concept that anything *could* have mental states, though I don't see any evidence for anything other than brains.

I'm claiming, on the other hand, that anything which does have mental states *doesn't* have them *by virtue* of its syntactical properties, because syntactical properties a) don't determine semantics and b) don't exist without presupposing an observer to interpret them. Silicon chips might be conscious; computer programs can't be.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Machines (5.00 / 1) (#328)
by ucblockhead on Wed May 16, 2001 at 01:17:34 PM EST

I have no problem with the concept that anything *could* have mental states, though I don't see any evidence for anything other than brains.
Yes, but you seem to have argued in the past that you never would see evidence for this in any future mechanical machine. Are you backing off on that?

I don't think anyone claims to have AI today...

I think part of the trouble is the standard of evidence you are willing to accept. By the standard you seem to be using, you could never see evidence in a machine, because the only evidence you appear willing to accept is whether something is "like my brain".
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

para. 2 of my post (5.00 / 1) (#372)
by streetlawyer on Thu May 17, 2001 at 02:45:51 AM EST

sets out what I mean, as clearly as I am able to express it. I can't think of any other way to put it.

I don't know what kind of evidence might count that something was conscious; I'm hoping that advances in neuroscience will provide that. But I do claim that consciousness has to be a property of *things* and not of abstract formal systems.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Neuroscience (5.00 / 1) (#386)
by _cbj on Thu May 17, 2001 at 07:33:39 AM EST

I don't know what kind of evidence might count that something was conscious; I'm hoping that advances in neuroscience will provide that.

For humans, possibly; for other mammals and reptiles, perhaps; for the rest of terrestrial fauna, precisely enough to infallibly distinguish them from plants and fungi, I strongly doubt it, but will consider the point moot for now. What about aliens? Once we've generalised the physical conditions for consciousness sufficiently to account for all instances of it (which is more the work of AI people anyway), who's to say what's left won't be runnable as software?

Okay, no evidence either way so far, so we should obviously keep probing on all fronts. Neuroscience, cognitive science and AI.

[ Parent ]

not software (5.00 / 1) (#392)
by streetlawyer on Thu May 17, 2001 at 09:19:20 AM EST

What about aliens?

I dunno.

Once we've generalised the physical conditions for consciousness sufficiently to account for all instances of it (which is more the work of AI people anyway), who's to say what's left won't be runnable as software?

Me, and John Searle. Software is a purely formal system for syntactic manipulation. As such, it can't, in and of itself, be the right sort of thing to have semantic content.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Invalid (5.00 / 1) (#512)
by _cbj on Fri May 18, 2001 at 08:55:00 PM EST

Software is a purely formal system for syntactic manipulation. As such, it can't, in and of itself, be the right sort of thing to have semantic content.

Unfortunately that's begging the question. Fortunately, AI is perfectly poised to step in and investigate what sort of things can have semantic content. Unlike neuroscience, which is fascinating but specialised towards humans.

(Damn sure I already replied to this.)

[ Parent ]

Things and abstract formal systems... (5.00 / 1) (#411)
by ucblockhead on Thu May 17, 2001 at 11:45:04 AM EST

Yes, that was what the Chinese Room purported to say. But the trouble with it is that it doesn't actually prove anything. It paints a picture and then says "that seems counterintuitive". It is counterintuitive to say that this non-Chinese speaker using lookup tables "knows" Chinese. But the trouble is that "counterintuitive" is just a fancy way of saying "doesn't agree with our prejudices".

Now it could be that our prejudices are right, but before we claim that they are any sort of real force, we'd better be prepared to explain the whys of it. In other words, we can't just say "it is counterintuitive to say that the room knows Chinese". You've got to explain why the room (with Searle and the lookup tables) does not know Chinese.

Because of this, the thought experiment really doesn't say much at all about abstract formal systems. Suppose you replace the room with a huge Babbage-era calculating machine made of gears and pulleys that prints Chinese when someone presses the right keys and turns a crank. Does that know Chinese? Saying "yes" seems just as counterintuitive.

Now suppose we replace that with a weird computer made up of billions of little plastic bags, all containing a bizarre array of electrochemicals, connected with tubes that allow chemicals to be shunted between them. Does that know Chinese? Saying "yes" seems just as counterintuitive. But that is very much like a brain.

So Searle hasn't really said anything at all about abstract formal systems. What he's hit upon has more to do with psychology: that it is counterintuitive to assign "consciousness" to any deterministic system we come up with. It is because even the most die-hard materialist has a subconscious intuition of some sort of "elan vital". Perhaps it is even hardwired.

Yes, you can pin your hopes on advances in neuroscience, but I just don't see that happening. Neuroscience (unless Penrose is somehow right) is just going to come up with more deterministic explanations, and those explanations are going to seem counterintuitive and unsatisfactory because of human psychology.

At some point (in my belief) the human race is going to figure some of this stuff out, and will be able to point to the brain and say "consciousness arises because of this and this and this". Only then will someone really be able to make a meaningful claim as to whether or not it must be a "thing".


-----------------------
This is k5. We're all tools - duxup
[ Parent ]

why is the onus on me? (5.00 / 1) (#420)
by streetlawyer on Thu May 17, 2001 at 12:33:27 PM EST

Some people are saying that a strange property, about which we understand nothing, is present in a system about which we understand every part. If we say that I "have to prove" that this completely conceptually straightforward system has no powers beyond those which we can explain, then we appear to be opening the floodgates for a whole load of other things for which there is no evidence. That's my first reply.

My second reply is that the opposing argument is either contradictory or circular. There is no such thing as a "formal system" unless an observer capable of semantic representation interprets it as such. This observer can't be another formal system, because the same objection would apply to the "observing" formal system; it isn't an observer unless it's capable of semantics (in fact it isn't even a formal system). Furthermore, the combination of two formal systems is itself a formal system, so multiplying Chinese boxes doesn't help. In order for there to be any representation, there has to be something to which things are represented.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

The claim (5.00 / 1) (#422)
by ucblockhead on Thu May 17, 2001 at 12:40:20 PM EST

The onus is on you because you are making the claim that "consciousness can't be X".

I agree that the alternative claim also has no foundation.

Both claims have no foundation. We, as human beings, simply do not know enough.

I can't buy your second reply because it seems to me that its logical conclusion is that there must be something separate from the mechanics of the brain to somehow make semantic sense of the patterns of neural activity in the brain.


-----------------------
This is k5. We're all tools - duxup
[ Parent ]

try and buy this (5.00 / 1) (#424)
by streetlawyer on Thu May 17, 2001 at 12:47:59 PM EST

I can't buy your second reply because it seems to me that its logical conclusion is that there must be something separate from the mechanics of the brain to somehow make semantic sense of the patterns of neural activity in the brain.

No, and it took me a long time to appreciate this quite subtle point. Not something apart from the *mechanics* of the brain; something apart from the brain *considered as a formal system*. Formal systems are one kind of mechanical process, viewed under one particular kind of interpretation. I think that semantic representation is a physical property (but an intrinsically first-person one) and that the refusal of current physics to accept the possibility of intrinsically first-person physical properties is a serious mistake.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Trying, but... (5.00 / 1) (#425)
by ucblockhead on Thu May 17, 2001 at 12:52:00 PM EST

I've enjoyed this discussion, but I've spent way too much time at it, so I can't give the reply I'd like. All I can say is that I can't really see how a semantic representation can be a physical property.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]
That may all be well and decent (5.00 / 1) (#368)
by Prophet themusicgod1 on Wed May 16, 2001 at 10:45:15 PM EST

a) But what of possible alien brains? I mean, they might not be similar at all to yours. b) We can now probably simulate quite accurately the behavior of tadpoles and beetles, at the highest level of computer intelligence that I am aware of. I know that they have got a clean hold on simple single-celled organisms (which aren't that intelligent compared to us, but keep in mind they evolved into us... so this is only a matter of evolving the computers as well). In the meanwhile I'll wait till my Pentium monkey machines come out.
"I suspect the best way to deal with procrastination is to put off the procrastination itself until later. I've been meaning to try this, but haven't gotten around to it yet."swr
[ Parent ]
Please substantiate the libel against Dennett (5.00 / 1) (#346)
by Paul Crowley on Wed May 16, 2001 at 04:27:44 PM EST

Please either substantiate your last comment on Dennett or withdraw it.
--
Paul Crowley aka ciphergoth. Crypto and sex politics. Diary.
[ Parent ]
Floating point (none / 0) (#213)
by delmoi on Tue May 15, 2001 at 05:49:29 PM EST

That's not quite what I meant (I'd like to say "that's not quite what I said", but interpretations differ). My point was that a human mathematician's understanding of pi was significantly different from a computer's, because the human mind could grasp the concept of pi "all at once", whereas I don't think the computer can be said in any sense to be working with more digits of pi than it has actually computed. Those unfortunate people who are forced to care about roundoff and truncation error in numerical computing probably have more of an intuition for what I mean when I say that when a human mathematician divides through by pi, or by some other transcendental number, he is doing something fundamentally different from when a computer tries to carry out the same process using floating point arithmetic.

Right, but a computer doesn't need to use floating point arithmetic to solve the problem. My TI-89 calculator uses the exact value of pi in calculations, at least until it needs a numeric answer. Much in the same way, a mathematician will eventually have to use an approximation of pi if he ever wants to 'do something' with it in the real world.
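
For what it's worth, the roundoff trouble described above is easy to exhibit (Python doubles shown; any IEEE-754 floating point behaves the same way):

    # 0.1 has no exact binary representation, so error creeps in
    # and accumulates.
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False
    print(sum([0.1] * 10))   # 0.9999999999999999, not 1.0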
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
the difference (5.00 / 1) (#477)
by Rainy on Fri May 18, 2001 at 08:16:10 AM EST

The difference, in my opinion, is that a human knows the relation of pi to the real world, to round things everywhere, while a computer does not have a model of the outside world. If you're talking about a narrow domain, the computer's and the human's understanding are the same. That 'model of the outside world' is quite hard to re-create artificially, and even for a human, who already has many, many things built into the hardware (instincts), it still takes years to build the model. Oh, and we also have more memory, more processing power, and an existing environment full of people who've already built this model, so they can help us out. Naturally, you can't be certain that this difference is all there is until someone goes ahead and creates a complete AI. I think it's kind of dumb of the story poster to picture this all as so clear-cut... unless he's in touch with god or something.
--
Rainy "Collect all zero" Day
[ Parent ]
Logic? (3.80 / 5) (#2)
by kaemaril on Tue May 15, 2001 at 07:52:46 AM EST

Just very quickly pointing out a possible flaw in the initial logic...

a human is a machine
humans minds are conscious
therefore, machines can harbor consciousness

To take it a little further...

A Can-opener is a machine (*)
Therefore, a can-opener can harbor consciousness.

See how entertaining logic can be? Yes, I know it's very silly. I'm very sorry. Just pointing out that just because a certain type of machine can do something, doesn't mean every other machine can. I mean, I really suck at opening cans ;)

(*) It fits the definition in my dictionary, anyway.


Why, yes, I am being sarcastic. Why do you ask?


Advanced models (5.00 / 1) (#3)
by brion on Tue May 15, 2001 at 08:07:50 AM EST

A sufficiently advanced can opener, with finely-developed type-of-can-sensing technology and the necessary computing capacity to go along with it, might yet be able to develop consciousness.

But why would you want a conscious can opener? It would just get sick of its job and whine a lot.

O' course, I think what was intended was a -Turing- machine, not a lever, pulley, or inclined plane.



Chu vi parolas Vikipedion?
[ Parent ]
Answer (none / 0) (#10)
by Anonymous 6522 on Tue May 15, 2001 at 08:39:30 AM EST

You give it a Genuine People Personality[TM] that gets an immense amount of satisfaction from a can well opened.

[ Parent ]
A sentient can-opener? Preposterous! (5.00 / 1) (#18)
by kaemaril on Tue May 15, 2001 at 09:00:56 AM EST

You've got more chance of ever seeing a talking toaster! ;) *

Obscure (maybe) Red Dwarf reference


Why, yes, I am being sarcastic. Why do you ask?


[ Parent ]
Oops (none / 0) (#212)
by delmoi on Tue May 15, 2001 at 05:40:51 PM EST

I didn't mean that all machines could harbor consciousness. I guess I didn't choose my words very well. What I should have said is that there is a subset of machines that can harbor consciousness.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
No need to repeat yourself... (5.00 / 1) (#332)
by kaemaril on Wed May 16, 2001 at 02:57:34 PM EST

No need to repeat yourself...

Oh, damn, now I'm doing it ;)


Why, yes, I am being sarcastic. Why do you ask?


[ Parent ]
machineness, determinism, and controllability (4.66 / 3) (#6)
by sayke on Tue May 15, 2001 at 08:27:32 AM EST

gar. i agree with you in that i often describe humans using mechanical metaphors, but shit, man, some things about biology smell very un-machinelike to me. proteins slip and slide around each other - our familiar gear-and-sprocket metaphor fails us. see, much of our "machine" concept is ancient - industrial revolution/steam engine/lever and pulley era - and we're quite a bit past that now.

often, when people think of machines, they're using the old industrial revolution idea of machines, and they revolt against being associated with anything so predictable, because predictability leads to controllability. i think that's the root of the "i am not a machine!" bit: control.

many people seem to have the same gut reaction to talk of determinism. they revolt, because they don't wanna be reduced to something comprehensible, because, god forbid, that might lead to reverse-engineering! and if you can be reverse-engineered, mind-reading machines are just around the corner! the sanctity of one's soul would be up for grabs! mandatory neuroprobes will be the order of the day, and borg-like control of the masses (them, in their minds) will surely ensue!

those of us who know about godel's theorem, and about the epistemological limits of prediction, and about basic neuroscience, may scoff at this, but hey, it looks like a pretty common perception. people so badly don't want to be controlled that they don't want to be controllable, and so they fear and loathe anything that looks like it might result in their being more controllable... and if one word can sum up industrial revolution-era machines, it's "controllable".

i'm not afraid of being in-principle predictable, because i don't think that inevitably leads to controllability, so i don't have the negative gut reaction to words like "machine" and notions like "determinism" that many people do. because i know about godel's theorem, and about the epistemological limits of prediction, and about basic neuroscience, i've got no problem with calling myself a machine, but i've got no problem with calling lightning and fire machines either.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */

Consciousness (4.50 / 2) (#12)
by Rand Race on Tue May 15, 2001 at 08:41:49 AM EST

Nice piece, but I do disagree with your contention that consciousness is not derived from physical specifications. An argument can be made that the physical specification of the brain has much to do with it, but I'll concentrate on your arm example. Consciousness extends to the tools of a sentient being; if your car hits my car, I jump out and say "You hit me!", not "Your car hit my car!". Stephen Hawking may not have the ability to use his arms, but I bet he thinks of his chair as an extension of himself. Consciousness need not include arms, as in paraplegics' cases, but for those of us who do have arms, they most certainly are a direct extension of our consciousness, more so than tools, since our arms are directly linked to the neural system.

Consciousness is a very tricky subject. Personally I would start with making a computer comprehend metaphor, which, since we cannot describe consciousness without it, must be an important part of human consciousness. But how do we make the computer understand the metaphor behind running a program if the computer has no legs? How do we describe holding on to something without hands? It would be interesting to compare these ideas between healthy humans and those suffering congenital para/quadriplegia. Does someone who has no legs comprehend the running metaphor the same way someone with legs does? If they do understand it the same way, we need to study that mechanism of understanding so we can use it to teach computers to be conscious... otherwise we'll have to give the computer arms and legs if we ever expect its consciousness to be like ours (and would we comprehend a consciousness unlike our own?)


"Question with boldness even the existence of God; because if there be one, He must approve the homage of Reason rather than that of blindfolded Fear." - Thomas Jefferson

Interesting points (5.00 / 1) (#16)
by farmgeek on Tue May 15, 2001 at 08:57:28 AM EST

I've always thought that having a comprehensible AI, one not so alien to ourselves that we couldn't understand it, would require us to give the AI a large portion of the physical abilities and senses that we have. I think it's somewhat analogous to a person who exists (theoretically) with no external senses and no controllable extremities. They may be conscious and they may be intelligent, but we have no way of knowing (yes, I realize there are brainwaves and whatnot, but work with me here). Without a common frame of reference between intelligences, there's no real good way to recognize that the other intelligence exists.


[ Parent ]
Extremities vs peripherals (5.00 / 1) (#242)
by axxeman on Wed May 16, 2001 at 02:08:41 AM EST

If a conscious computer system with visual sensors was "blinded", it too might say "you blinded me" instead of "your hand broke my camera".

Just the same, some people would say "your car hit mine".

Emotional responses and identification with cars are not necessary signs of consciousness.

lec·tur·er (lkchr-r) n. Abbr. lectr: graduate unemployable outside the faculty.
[ Parent ]

You had it but lost it (5.00 / 1) (#447)
by Rand Race on Thu May 17, 2001 at 04:54:12 PM EST

"...identification with cars are not necessary signs of conciousness."

Noticing that you are identifieng with a car is though. Which is beside the point, that point being that our conception of consciousness is based on metaphors of physical events that we experience and through the process of identifieng with non-local (nuerologically speaking) extensions to our self we may find a mechanism for giving an AI a common metaphorical basis of consciousness with their creators (us). You had it with the first line, but didn't quite follow through. Wether or not the Ai says "you blinded me" or "you broke the camera that sends visual stimuli to me" we can use that experience to show it the mental metaphor of 'being blinded by insight' (for example). Without the concept of cars hitting other cars (however you identify with it) how would the computer parse "I'm a total wreck today", and without the extended consciousness of identifieng with a car how would it be able to use such a phrase itself?


"Question with boldness even the existence of God; because if there be one, He must approve the homage of Reason rather than that of blindfolded Fear." - Thomas Jefferson
[ Parent ]

you see, is very easy, yes? (none / 0) (#551)
by axxeman on Tue May 22, 2001 at 10:58:56 PM EST

The computer's equivalent would be "I'm a total MAC today".

lec·tur·er (lkchr-r) n. Abbr. lectr: graduate unemployable outside the faculty.
[ Parent ]

Fuel for this fire. (5.00 / 1) (#510)
by Kaki Nix Sain on Fri May 18, 2001 at 08:25:43 PM EST

Along the same lines as what you mention (and I'm not saying that I go along with it), is a little phenomenon that impressed me when I learned about it. Take a pencil or pen, close your eyes, and holding the pencil or pen by one end, run the other end across the surface of a table, the carpet, other stuff. Now the fun bit comes when you ask yourself, "what area of space do I feel that texture coming from?" I don't know about you, but I feel the table's surface (or the carpet's or whatever's) as opposed to what is really happening to my nerves (them getting sensations from the vibrations of a pen).

The stick extends my sense of touch, and I feel through it. Talk about identifying with an object! This seems like it to me. But it all makes plenty of sense if you think about how the brain had to learn how to use the fleshy bits it is attached to in the first place. [I wonder if animals whose sensory-motor centers are more hard-wired than our own can do the pencil trick.]

I don't want to make too much of this example, b/c I doubt it proves much. But your talk of our consciousness extending to our tools reminded me of it.

peace.

[ Parent ]

interesting (4.00 / 2) (#14)
by klamath on Tue May 15, 2001 at 08:51:27 AM EST

Although I don't think you ever really got to the core of your argument. Why are human beings machines? How is software in any way similar to the human consciousness?
I don't believe that the human mind is non-deterministic ... just too complex to fully simulate.
Well, the complexity has nothing to do with determinism. Either something can be simulated or it cannot; the difficulty of doing so is irrelevant.
Programming a computer to behave non-deterministically, on the other hand, is relatively trivial. Just plug it into a Geiger counter and a radiological source.
I'm not sure I understand this. Computers by their nature are deterministic: with a given set of input, you will always get exactly the same output. What does your example mean?
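
As I read it, the quoted example means a computer can outsource a choice to a physical process. A sketch of the same idea in commodity form (Python; os.urandom reads the operating system's entropy pool, which is fed by unpredictable hardware events; the function name is an invention for illustration):

    import os

    def physical_coin_flip():
        # One byte of physically-gathered entropy decides the branch,
        # so the output is not a function of the program's inputs alone.
        return "heads" if os.urandom(1)[0] % 2 == 0 else "tails"

    print(physical_coin_flip())  # same program, same input, varying output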

infinite state machines smell nondeterministic (5.00 / 1) (#21)
by sayke on Tue May 15, 2001 at 09:10:13 AM EST

if only because they appeal to the universe's nondeterminism, which i don't buy. i say ya got two choices: determinism and randomness. pick one, but stick with it. either way, "responsibility" gets exposed as a sham, although that's beside the point, if interesting...


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

I/O (5.00 / 1) (#84)
by Khalad on Tue May 15, 2001 at 12:18:09 PM EST

I'm not sure I understand this. Computers by their nature are deterministic: with a given set of input, you will always get exactly the same output. What does your example mean?

I'd argue that a human being, given the exact same input as at a previous time, would behave identically. Of course one could never reproduce exactly a person's "input" at a previous time as well as his internal state, so the point is moot.

And I'd then argue that it is the same with computers. If a computer were sufficiently complex that one could never reproduce both its input and its internal state, then for all practical purposes it would be non-deterministic. Take, for example, Windows 98. When I used to use that beast I'd swear that it was alive. I couldn't reproduce the BSODs for the life of me; it was as if they were watching and waiting for my confidence to grow before popping up and shredding it to bits.

Think of it this way: if you didn't know how computers work, would there be any way to confidently label them as deterministic? It gets harder day by day as systems grow in complexity and their behavior becomes more and more unpredictable.


You remind me why I still, deep in my bitter crusty broken heart, love K5. —rusty


[ Parent ]
response (5.00 / 2) (#357)
by klamath on Wed May 16, 2001 at 09:08:03 PM EST

I'd argue that a human being, given the exact same input as at a previous time, would behave identically. Of course one could never reproduce exactly a person's "input" at a previous time as well as his internal state, so the point is moot.
I'd say this is a bit dubious -- it's pure speculation. Is there any evidence to conclude that the human consciousness is entirely 'causal'?
If a computer were sufficiently complex that one could never reproduce both its input and its internal state, then for all practical purposes it would be non-deterministic.
First off, I don't think such a computer could be created -- it may be difficult to recreate its input and internal state, but it is still quite possible. And even in the 'difficult' case, perhaps for all 'practical purposes' it would be non-deterministic, but that really is irrelevant when considering morality and philosophy. Take the Windows 98 example: I think we'd both agree that ultimately, Win98 is deterministic. If you took 1000 software engineers and spent 1000 years analyzing it, the problems behind Win98 would become fairly obvious. Simply because it appears unpredictable to an uninformed layman does not mean that, fundamentally, it is non-deterministic.
It gets harder day by day as systems grow in complexity and their behavior becomes more and more unpredictable.
Again, a more complex system is not more unpredictable -- it just requires more information to predict. But it is not more difficult from a philosophical perspective, because that would imply some change in the fundamental nature of the system -- and complexity isn't sufficient. In this case, any computer is identical to a calculator when considering whether it is determined or not.

[ Parent ]
Philosophical-shmilosophical (5.00 / 1) (#429)
by Khalad on Thu May 17, 2001 at 01:29:46 PM EST

No, I don't think "for all practical purposes" is irrelevant, even from a philosophical point of view. That we cannot determine whether or not the human mind is deterministic seems to suggest that practicality is in fact quite important.

It's pointless to talk about fundamentals if we can't determine them. Yeah, 1000 engineers analyzing Windows 98 for 1000 years may be able to come to a conclusion about its determinacy, but then again, you only believe that because you already know that it is deterministic. What about the human mind? What if a million scientists and philosophers studied the human mind for a million years? Even if they came up with the answer, "Whoa, we actually are deterministic!", would that mean anything?

We can't predict anything right now. The weather is deterministic, yes? Well so what? That doesn't matter one whit as to whether we can actually determine anything about it for more than a few days at a time.

I think complexity is much more important than whether or not something is theoretically deterministic or not. As complexity arises, our ability to predict behavior shrinks. Sure, we could predict Windows 98's exact behavior given sufficient resources. What about the weather? Again, with sufficient resources we could (forget about quantum mechanics for a minute). Having sufficient resources is another matter; I don't think we could ever have sufficient information to be able to predict the weather accurately for more than a few days. And the human mind? Even if it was deterministic, how could we ever be able to predict a person's behavior? "Knowing" a person's internal state is only a theoretical prospect. For all practical purposes, the mind will always be non-deterministic. A sufficiently complex computer system would be no different.


You remind me why I still, deep in my bitter crusty broken heart, love K5. —rusty


[ Parent ]
complexity and determinism (5.00 / 1) (#469)
by klamath on Fri May 18, 2001 at 12:05:23 AM EST

The weather is deterministic, yes? Well so what? That doesn't matter one whit as to whether we can actually determine anything about it for more than a few days at a time.
True; however, philosophy is not about predicting weather patterns. Recognizing the nature of reality -- e.g. that the weather is determined -- is what is important, not necessarily being able to use that information for a 'practical' purpose. In other words, our knowledge of the weather's determinism does NOT tell us exactly what the weather will be like in 10 years -- but it DOES tell us that the weather is not influenced by the whims of a deity -- and that sacrificing babies will likely have no effect on next year's rainfall.
That we cannot determine whether or not the human mind is deterministic or not seems to suggest that practicality is in fact quite important.
However, our ignorance has no effect on the human mind's nature. If we were omniscient and analyzing the contents of another species' brain, our own knowledge would have no effect on the thing we were observing. Again, the nature of something is not influenced by the person observing it (ignore Heisenberg, please ;-) ).
It's pointless to talk about fundamentals if we can't determine them.
But we can -- it's just difficult (or 'impractical' if you prefer ;-) ).
forget about quantum mechanics for a minute
I think that quantum mechanics only introduces probability into physics. So, for example, there might be a 50% chance of A or B occurring -- but one could hardly call that behavior possessing 'free will'.

[ Parent ]
Alright, focus on determinism. (5.00 / 1) (#493)
by Khalad on Fri May 18, 2001 at 12:08:06 PM EST

So what is free will then? I'll drop the scientific/practicality stuff for now. Regardless of how we work internally, or metaphysically, I still don't buy the concept of free will, or nondeterminism.

Let me present a metaphor; if you find something wrong with it, that's where I'd like to take this discussion. I think of human beings as systems. Hell, I think of everything as a system. People change their state constantly, and they do this based on two things: their prior state, and their input.

By internal state I mean their thoughts, beliefs, and memories. By input I mean the external world; stimuli. Events that transpire, things that happen to them, things they see. Anything external. And based on these two things, they either change their internal state, or they act, or both. So given input, they produce output.

It doesn't really matter what variables you throw into the mix; regardless of quantum mechanics, or of God, or supernatural forces, or whatever, I don't see how people cannot be deterministic. How is it that our systems could ever produce two outputs for one input? I mean, isn't a choice just another output (one path being taken) based on more input (our thoughts about the choices)?

I know that free will has been argued to death for the last few thousand years, but since I bore easily when reading most philosophy, perhaps you can enlighten me? Or give me your own thoughts on the matter?

I just really don't understand the idea of free will. All of our choices are based on what we think. And given what we think, there's really only one conclusion that we can draw. If we forego choice and decide based on some external factor, well that's just another choice, and then all we've done is moved the decision to the external world. That's not free will, that's just happenstance.

How is free will any different from the choices a computer makes? If we don't understand how a computer works, and so we can't predict its choices, I wonder if the computer wouldn't then have as much free will as we do. In my view, we are just very complicated deterministic machines following the if/else statements of our minds. (Okay, well that's just a metaphor, and we're much more complicated than that, but you understand my point?)
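
A toy rendering of this systems metaphor (Python; the state fields and transition rules are invented for illustration; the only point is that the output is a pure function of prior state plus input):

    def step(state, stimulus):
        # Deterministic transition: (prior state, input) -> (new state, action)
        mood = state.get("mood", "neutral")
        if stimulus == "insult":
            return {"mood": "angry"}, "frown"
        if stimulus == "gift" and mood == "angry":
            return {"mood": "neutral"}, "grudging thanks"
        return state, "nod"

    s = {"mood": "neutral"}
    s, action = step(s, "insult")
    print(action, s)  # always: frown {'mood': 'angry'} -- same in, same out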


You remind me why I still, deep in my bitter crusty broken heart, love K5. —rusty


[ Parent ]
Agrees (5.00 / 1) (#520)
by diskiller on Sat May 19, 2001 at 01:28:28 PM EST

*reading this very long interesting discussion*

Unfortunately, I don't really have much to add to this.

I'm just posting to say that I have to agree with you. We're nothing but 'deterministic machines'. We have our internal state (our memories, beliefs, etc.), and our inputs (the external world, via our 5 senses).

So let's think.

What happens when you are born? You start off with zero internal state (apart from what's hard-wired in... i.e., the baby's most basic instinctive behaviour, like how to suck). Everything that goes into the baby's brain would be from its inputs, its external environment.

And I think it's fairly obvious that for a newborn baby, interaction with the outside world is incredibly important for the first few years; babies born without such input have serious mental and psychological problems. They are never able to function normally.

This seems to further suggest we are just deterministic, and need a lot of input from the outside world, especially in our early stages, to "build up our internal state".

Something else bothers me.

What about "opinions" and "beliefs"? I mean, I dunno. It seems like something we have "chosen". I suppose this is just determinism again, but very complex... I know I have a lot of strong opinions on things, but my opinions have also changed as I learnt more about a subject, i.e. I have gotten more input, so my internal state has changed?

As for your Windows 98 comment earlier... you know, I have to agree with that. Sometimes I believe Windows *is* alive. It's such a big huge pile of incoherent garbage, it's starting to breathe life! Maybe that's all intelligence and consciousness is? Just a huge complex deterministic machine. And Microsoft might be the first to create a true AI :).

D.

[ Parent ]
Artificial (5.00 / 1) (#522)
by Khalad on Sat May 19, 2001 at 06:11:43 PM EST

And Microsoft might be the first to create a true AI :).

I'm not so sure about the I.


You remind me why I still, deep in my bitter crusty broken heart, love K5. —rusty


[ Parent ]
Only one contention... (4.00 / 2) (#29)
by Farq Q. Fenderson on Tue May 15, 2001 at 09:34:43 AM EST

My contention is with your saying that the human mind isn't non-deterministic. This doesn't really matter, because as you point out, a computer can be made to behave nondeterministically (even without a Geiger counter, btw.) In fact, determinism and nondeterminism are a chicken-and-egg scenario - any sufficiently complete (Turing Complete, that is) system of one type is also capable of the other. The human mind is one (otherwise we couldn't understand TC systems) and a computer is one (by definition; a Turing Machine is Turing Complete.)

farq will not be coming back
yer logic is a bit wrong. (3.85 / 7) (#34)
by chopper on Tue May 15, 2001 at 09:45:51 AM EST

that is, your syllogism is improper.

by this, i'm assuming (perhaps wrongly) that when you say 'machines can harbor consciousness', you mean only that all current machines have that potential, say, in the future when technology is sufficiently advanced, as evidenced by this posting.

anyway, the problem with that logic is, you say:

'all A are B'

'all A are C'

'therefore, all B are C'

which is an improper categorical syllogism; for example, i could use the same structure to say:

a knife is a kitchen utensil

a knife can cut a steak

therefore, all kitchen utensils can cut a steak

which is refuted by, for example, an eggbeater.
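
the bad form is easy to check mechanically, too. a toy sketch in python (the sets are chosen purely for illustration):

    utensils = {"knife", "eggbeater", "spatula"}    # kitchen utensils
    can_cut_steak = {"knife", "cleaver"}            # things that cut steak

    # premises: the knife is a utensil, and the knife cuts steak
    assert "knife" in utensils and "knife" in can_cut_steak

    # the faulty conclusion 'all utensils can cut steak' fails:
    print(utensils <= can_cut_steak)   # False - the eggbeater is the counterexample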

give a man a fish,he'll eat for a day

give a man religion and he'll starve to death while praying for a fish

Note the lack of 'all' (5.00 / 1) (#211)
by delmoi on Tue May 15, 2001 at 05:25:51 PM EST

I never said that all machines could be intelligent, and of course I do not think that they all could be. I wouldn't expect a car, or a lever, to be intelligent.

What I said would be more like this
  • a knife is a kitchen utensil
  • a knife can cut a steak
  • Therefore, some kitchen utensils can cut steak
That would open the door for other kitchen utensils to be able to cut steak, such as a pair of forks (I think you could do this, though it wouldn't be as pretty)
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
note the lack of 'some' (5.00 / 1) (#275)
by chopper on Wed May 16, 2001 at 08:22:48 AM EST

makes sense when you put in "some".

in your article, however, you said "therefore, machines can harbor consciousness", which doesn't really lend itself to 'some' machines. and that statement was kinda the basis of your argument.

however, with "some", it makes more sense. sorry to be nitpicky tho :).

the big problem i have with your newly edited logic, that is:

a human is a machine

humans are conscious

Therefore, some machines are conscious

is a tautology. it really doesn't prove that some machines other than man can be conscious, it just reasserts the first and second premises: that some machines (i.e. man) harbor consciousness. just like your knife analogy doesn't prove that a fork, or anything other than a knife, could cut a steak.

give a man a fish,he'll eat for a day

give a man religion and he'll starve to death while praying for a fish
[ Parent ]

oops... (5.00 / 1) (#287)
by chopper on Wed May 16, 2001 at 10:03:10 AM EST

i didn't mean 'tautology', wrong choice of words. what i meant was, it was an obvious substitution. i.e.

some b are a
a=c
therefore, some b are c, by substituting c for a in the first assertion.

8:30 am, no coffee, whaddya expect :)

give a man a fish,he'll eat for a day

give a man religion and he'll starve to death while praying for a fish
[ Parent ]

Actually, it is a tautology (4.50 / 2) (#435)
by Anonymous 242 on Thu May 17, 2001 at 02:19:59 PM EST

If humans are defined as a subset of machines, then the claim that some machines (namely the human subset) have human-specific characteristics is a mere tautology, and to use that tautology as evidence that some machines can think is circular reasoning.

[ Parent ]
Difference (4.00 / 4) (#36)
by caine on Tue May 15, 2001 at 09:48:59 AM EST

I would like to point out that there is no real difference between hardware and software. Whatever you can do in software, you can do in hardware. Software is in fact a form of hardware. Or where else do you claim that the software exists, if not as part of the memory banks of the computer or some form of storage?

You cannot have software without hardware, because there really is no "software". And the human brain is the same way; its consciousness is the machine. The trick is for the sum to be greater than the parts, as when you are looking at the little dots on a newspaper page and they form a picture. One by one, the dots are simple to explain and seem to have no function. Together, however, it is a totally different matter.

And to reply to streetlawyer further down: have you not used Maple? :) It handles pi quite fine, just like a human. The argument against that is, of course, that it treats it just like a symbol, but I believe that's what humans do too. It's all about abstractions.
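
For example, here is a rough sketch of that symbolic behaviour using sympy, a Python CAS standing in for Maple (an assumption on my part; I have not checked that Maple's output matches exactly):

    import sympy as sp

    expr = sp.sin(sp.pi / 6) + sp.pi
    print(expr)             # pi + 1/2 -- exact and symbolic, no decimal anywhere
    print(sp.cos(sp.pi))    # -1
    print(sp.pi.evalf(30))  # digits are produced only on demand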

--

Maple (5.00 / 1) (#52)
by streetlawyer on Tue May 15, 2001 at 10:34:30 AM EST

I haven't used it, so I can't comment. But, as I add further down that thread, for pi, substitute "phi", an arbitrarily chosen transcendental number with no tractable properties and no known converging rational approximation. My contention is that a human mathematician handles phi in a completely different way from a computer. If I understand you correctly, surely your program Maple would handle phi in the same way it handles pi. A human being /doesn't/.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
doesn't, but can't? ever? (5.00 / 2) (#88)
by sayke on Tue May 15, 2001 at 12:26:41 PM EST

i've gotta ask: do you really think you couldn't be run in emulation? do you really think you possess some magical elan vital that makes you special and impossible to reverse-engineer?


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

"magic elan vital" (4.50 / 2) (#95)
by streetlawyer on Tue May 15, 2001 at 12:40:10 PM EST

I could, I am sure, be simulated, to some arbitrary degree of accuracy, by some Laplacian mastermind who had somehow managed to gather a lot of information about me. But the vast majority of my mental processes are not computations, so I don't see how a computer carrying out a Turing-computable algorithm could be me, or anything recognisably like me. I certainly don't see any reason to believe that a simulation of my pain would hurt anyone.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
Computation (4.50 / 2) (#181)
by ucblockhead on Tue May 15, 2001 at 03:32:04 PM EST

But the vast majority of my mental processes are not computations....
Assumption alert.

There is, as yet, no evidence for any noncomputational brain processes.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

that's a nice assertion ya got there (5.00 / 1) (#277)
by sayke on Wed May 16, 2001 at 09:08:22 AM EST

"But the vast majority of my mental processes are not computations" -- how do you figure?


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

It is possible, yes (5.00 / 1) (#101)
by Simon Kinahan on Tue May 15, 2001 at 12:55:07 PM EST

Assuming jsm is a human being (just kidding) it is possible, yes. I'd never deny that one day we will be able to build duplicate human beings from scratch, but that does not necessarily imply that you can simulate one on a computer.

Computers deal only in discrete quantities - rational and integer numbers. It is possible - but not certain - that real world positions and velocities are not discrete but continuous, representable mathematically only as real numbers. Computers cannot compute functions over reals, because each real number requires a potentially infinite amount of state to store. Thus perfectly accurate simulation of any part of the real world - including a human being - may be impossible.
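
The point is visible even without invoking true reals: machine floating point is a finite set of rationals, so distinctions a continuous quantity could carry simply vanish. A trivial Python illustration:

    x = 0.1 + 0.2
    print(x == 0.3)            # False: 0.1, 0.2 and 0.3 are not exactly representable
    print(1.0 + 1e-17 == 1.0)  # True: increments below machine epsilon are lost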

Simon

If you disagree, post, don't moderate
[ Parent ]
yea, resolution limits make things tricky (5.00 / 1) (#112)
by sayke on Tue May 15, 2001 at 01:06:21 PM EST

see, i think the universe is well-described as a massive (truly massive, now) cellular automaton with each cell a planck length (to some power) in size. at least, i find this formulation damn aesthetic ;)

in that case, the universe would only deal with things in discrete quantities (planck cells). however, i don't think we have to go nearly that fine-resolution to come up with close-enough-for-jazz neural emulation, etc...
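
for flavour, here's a toy 1-d cellular automaton (wolfram's rule 30) in python - just to make the "discrete cells plus local update rule" picture concrete, not a claim about physics:

    RULE, WIDTH = 30, 31
    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1            # one live cell in the middle

    for _ in range(15):
        print("".join("#" if c else "." for c in cells))
        cells = [
            (RULE >> (cells[(i - 1) % WIDTH] * 4   # left neighbour
                      + cells[i] * 2               # self
                      + cells[(i + 1) % WIDTH])) & 1
            for i in range(WIDTH)
        ]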


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Sigh (5.00 / 1) (#141)
by Simon Kinahan on Tue May 15, 2001 at 01:52:41 PM EST

That's a choice of 2 assumptions: either that the universe is discrete (possible, but unproven), or that neurons have no sensitive dependence on initial conditions (probably false). Are you beginning to see how much faith is involved in your position?


Simon

If you disagree, post, don't moderate
[ Parent ]
Assumptions (5.00 / 1) (#192)
by spiralx on Tue May 15, 2001 at 04:18:49 PM EST

That's a choice of 2 assumptions: either that the universe is discrete (possible, but unproven), or that neurons have no sensitive dependence on initial conditions (probably false). Are you beginning to see how much faith is involved in your position?

Whilst energy, scale and time can all be measured in terms of Planck units, I very much doubt the Universe acts as a cellular automaton. We simply don't have enough information about how the Universe truly is at those scales yet to say.

As for the dependence upon initial conditions of neurons, so far the evidence is that the brain contains no structures small enough to be affected by purely quantum effects, and so it may well be possible to simulate a brain. Again though, this all rests upon a field of knowledge which isn't well-understood yet.

As such, I wouldn't want to be making either of these assumptions... :)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

lack of alternatives leaves me no choice (5.00 / 1) (#226)
by sayke on Tue May 15, 2001 at 09:12:49 PM EST

tell me some coherent alternatives to the discrete-universe bit. please?

and what does neural sensitivity to initial conditions have to do with anything? that just makes it harder to model - it doesn't say anything about the in-principle possibility of modeling.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Duh (5.00 / 1) (#266)
by spiralx on Wed May 16, 2001 at 06:16:30 AM EST

tell me some coherent alternatives to the discrete-universe bit. please?

An analog, continuous Universe. Or even a Universe where some quantities are analog, some discrete, as is the case with our current understanding of physics (general relativity and quantum field theory).

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

but it looks like it's made out of planck cells! (5.00 / 1) (#280)
by sayke on Wed May 16, 2001 at 09:33:12 AM EST

to me, it very much looks like we've got very discrete granularity on the planck scale. you yourself admit that energy, scale and time can all be measured in terms of planck units. how elegant is that?

but let's see how my "the universe, on a planck scale, is a massive cellular automaton" theory stacks up against my epistemic criteria for good theories:

  • explanatory power? fuckloads.
  • predictive power? none.
  • novelty of predictive power? none.
  • simplicity? fuckloads.
  • generality? fuckloads.
  • tentativeness? sure.
  • openness to peer review and criticism? sure.
  • testability? none.
  • falsifiability? none.
ok, so it's not ready for prime time yet, but hey, it shows a lot of promise - far more, i'd argue, than any other theories of its scope. hell, are there any other theories of its scope? i sure don't know of any.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Well (5.00 / 1) (#292)
by spiralx on Wed May 16, 2001 at 10:35:07 AM EST

It's a valid hypothesis, sure, I'm not denying that. But so is an analog universe, and you were implying there was *no* alternative.

Current thinking seems to imply that in certain senses spacetime is granular - read Brian Greene's The Elegant Universe for a better explanation than I can give - but as yet, our understanding of things on these scales is severely limited, so the question is very, very much open.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

not what i meant to imply (5.00 / 1) (#297)
by sayke on Wed May 16, 2001 at 10:42:44 AM EST

It's a valid hypothesis, sure, I'm not denying that. But so is an analog universe, and you were implying there was *no* alternative.

i meant to imply that none of the alternatives were nearly as nifty =)

i think see any alternatives to determinism and randomness, though. i don't see any middle ground there...

Current thinking seems to imply that in certain senses spacetime is granular - read Brian Greene's The Elegant Universe for a better explanation than I can give - but as yet, our understanding of things on these scales is severely limited, so the question is very, very much open.

i'll hit up the library next time i stop by.


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Middle ground (5.00 / 1) (#459)
by spiralx on Thu May 17, 2001 at 06:30:02 PM EST

i [don't] think see any alternatives to determinism and randomness, though. i don't see any middle ground there...

I'm going to assume there was meant to be a don't in there... :)

Consider quantum mechanics. The behaviour of any single particle is random, yet when you get large groups of them statistics takes over and the group behaves in a deterministic way. Otherwise, quantum mechanics would affect us directly...

Also, whilst the results of any measurement made in QM are random, with the probabilities of the different answers given by the wavefunction, the evolution of the wavefunction itself is entirely deterministic.

So yes, there are middle grounds...
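
You can watch the statistics take over in a couple of lines of Python (the library's pseudo-random generator here stands in for genuine quantum randomness -- an analogy only, not the real thing):

    import random

    for n in (10, 1_000, 100_000):
        heads = sum(random.random() < 0.5 for _ in range(n))
        print(n, heads / n)   # each trial is 50/50, yet the mean homes in on 0.5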

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Sensitive dependence on initial conditions (5.00 / 1) (#324)
by Simon Kinahan on Wed May 16, 2001 at 12:31:27 PM EST

If neurons exhibit chaotic behaviour under some conditions, then such fine measurements and simulation steps may be required (tending toward the infinitely fine) that it may be impossible to build a computer that is big and fast enough.

Simon

If you disagree, post, don't moderate
[ Parent ]
Right, but (none / 0) (#209)
by delmoi on Tue May 15, 2001 at 05:19:35 PM EST

Computers deal only in discrete quantities - rational and integer numbers. It is possible - but not certain - that real world positions and velocities are not discrete but continuous, representable mathematically only as real numbers.

We are only talking about simulating a human brain, not the big bang or something. A human brain most definitely does work on discrete values, just like a computer.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Discrete? (5.00 / 1) (#214)
by spiralx on Tue May 15, 2001 at 05:50:08 PM EST

My understanding is not the best, but don't neurons fire based on chemical gradients rather than any discrete trigger?

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Learning (5.00 / 1) (#130)
by caine on Tue May 15, 2001 at 01:37:37 PM EST

Maple handles whatever you give it as it has been taught to handle it, just like you do. I do not believe you were born with the knowledge of phi? Of course, the method of teaching varies.

--

[ Parent ]

Hardware goes soft... (4.50 / 2) (#64)
by Farq Q. Fenderson on Tue May 15, 2001 at 11:13:41 AM EST

Hardware needs to turn into software at some point in order to perform certain functions. All emergent phenomena are soft, for example.

This is critical: self-modifying code exists only as software, because the hardware can't change itself. Even if it could, it would then be soft, because it would need to depend on a firm ground that was wholly stable.

There _is_ a real difference between hardware and software. You can't write a second-order simulator at the first order, which is what hardware is, by definition.

farq will not be coming back
[ Parent ]
You what ? (4.00 / 2) (#99)
by Simon Kinahan on Tue May 15, 2001 at 12:50:35 PM EST

You can create self modifying hardware. You use a device called an FPGA - a configurable array of logic gates - and have it calculate new configurations for itself. No software involved. Of course the physical structure is not modified, just the configuration, but there's no program, hence no software.

Consider biological systems. There are genes that code for proteins that chop up DNA, or cause it to be expressed in a different way.

Now you might call the DNA sequence, or the FPGA configuration, software, but this is not a necessary part of the explanation of the phenomenon. You can reduce any explanation in terms of software to one in terms of electronics and chemistry. The only reason we ever talk about software is that it is easier for us to understand a device reading and executing instructions than it is to understand very complex electronic interactions.
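
If it helps, here is a loose software analogy, and nothing more (real FPGA reconfiguration is obviously not a few lines of Python): a "gate" is a lookup table, and nothing stops its outputs being used to compute its own next configuration.

    config = [0, 1, 1, 0]          # truth table of a 2-input gate; this one is XOR

    def gate(a, b):
        return config[a * 2 + b]   # the "hardware" just looks up its configuration

    # The device computes its own next configuration: invert every entry,
    # reconfiguring itself from XOR into XNOR.
    config = [1 - gate(a, b) for a in (0, 1) for b in (0, 1)]
    print(gate(1, 1))              # now 1: same fabric, new behaviour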

Simon

If you disagree, post, don't moderate
[ Parent ]
Emergence (5.00 / 1) (#121)
by Farq Q. Fenderson on Tue May 15, 2001 at 01:19:26 PM EST

Software emerges from hardware, which then becomes the next level of "hard"-ware. This is vital to many systems. Computers can't compute without virtual space, which is soft, but hard to itself.

Any process is soft, any material is hard... it's really very simple. When a process no longer behaves like a process (it does happen) then at that level - at that scale - it is material, it's hard and no longer soft.

farq will not be coming back
[ Parent ]
I'm sorry (5.00 / 2) (#144)
by Simon Kinahan on Tue May 15, 2001 at 01:55:46 PM EST

But if you want me to understand you, you're going to have to speak English. What is "emerging from hardware"? What is "soft, but hard to itself"? When does a process no longer behave like a process?

This stuff sounds like the jargon of a new age religion.

Simon

If you disagree, post, don't moderate
[ Parent ]
I'm talking about scale. (5.00 / 1) (#149)
by Farq Q. Fenderson on Tue May 15, 2001 at 02:04:50 PM EST

Things behave differently at different levels of scale. When phenomena of one scale give rise to new higher-order phenomena, the new phenomena have emerged from the lower order.

Phenomena of the same scale are hard in relation to each other, but soft in relation to the lower-order phenomena. This is because at the lower order they are intangible and mutable, but at the higher order they are quite tangible (thus the terms "hard" and "soft") and immutable. That's how the distinction is made, and yes, it's contextual.

Sorry for not being clearer in the first place.

farq will not be coming back
[ Parent ]
Right (5.00 / 1) (#326)
by Simon Kinahan on Wed May 16, 2001 at 12:42:22 PM EST

So my point was, you can remove all "higher" scale phenomena from an explanation, and replace them with "lower" scale phenomena. Thus the higher scale phenomena can't have any more causal power than the lower scale phenomena. Thus the whole idea that software can be conscious even though hardware can't is silly. It is based on a sort of process dualism extended to machines, which takes algorithms as being of some kind of transcendental nature that allows them to imbue systems with consciousness.

Simon

If you disagree, post, don't moderate
[ Parent ]
FPGA (5.00 / 1) (#180)
by ucblockhead on Tue May 15, 2001 at 03:29:32 PM EST

Field Programmable Gate Array.

The software is the pattern the gates are set to.

That is no different from software on a PC, which is just a pattern of bits in gates.

The only conceptual difference between the two is that the one retains the pattern when the power is removed.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Quite (5.00 / 1) (#327)
by Simon Kinahan on Wed May 16, 2001 at 12:47:53 PM EST

I'm not arguing that there's any conceptual difference. I'm arguing that it is silly to attribute causal powers to software but not to the underlying hardware. If one has it they both must.

Simon

If you disagree, post, don't moderate
[ Parent ]
Ahem... (4.00 / 3) (#110)
by deefer on Tue May 15, 2001 at 01:05:05 PM EST

self-modifying code exists only as software - because the hardware can't change itself.
Really?


Strong data typing is for weak minds.

[ Parent ]
My Wudan is stronger than yours... (4.50 / 2) (#118)
by Farq Q. Fenderson on Tue May 15, 2001 at 01:15:35 PM EST

I know a lot more about those than you might think. (In fact, I'm going to be programming some.)

The point is, though, that they're software. Really. They're made of hardware, but they're definitely soft. Electrons are hard, bits are soft... it's a boxoid problem.

farq will not be coming back
[ Parent ]
oh come on (5.00 / 1) (#365)
by jkominek on Wed May 16, 2001 at 10:15:53 PM EST

Pointing out that FPGAs store their configuration in SRAM isn't amazingly convincing (especially when you say that you know "a lot more about those than you might think" but then don't mention any of it).

You could always create an integrated circuit which programmed itself by burning out parts of itself like a PROM, and which required a constant flow of more silicon to keep programming itself in that manner. (All life we know of needs a constant supply of material, right?)

if the future act of you programming an FPGA gives you stronger wudan than the other fellow, then the past act of my programming FPGAs should give me even stronger wudan, right? ;)
- jay kominek unix is all about covering up the fact that you can't type.
[ Parent ]

Example (5.00 / 1) (#140)
by caine on Tue May 15, 2001 at 01:51:30 PM EST

Would you not consider a floppy disk to be hardware? And would you not consider the magnetic disk in it also hardware? Yet it is that which is the software. The words "hardware" and "software" are just semantics, another abstraction, to ease communication and the handling of computers for humans. You seem to realize this yourself; as you say, it is "by definition". Yet you do not acknowledge it?

--

[ Parent ]

Semantics-Phoo(ey) (5.00 / 1) (#153)
by Farq Q. Fenderson on Tue May 15, 2001 at 02:11:20 PM EST

What's *on* the disk is quite obviously software. The configuration (intangible) of electrons (tangible) that goes into making up the bits (intangible) is the software, *not* the electrons themselves.

Hardware is tangible and immutable while software is intangible and mutable. I've typecast the terms above to help illustrate my point. I don't see how that's a purely semantic difference - which is what you're accusing me of.

farq will not be coming back
[ Parent ]
Then what is hardware? (5.00 / 1) (#157)
by caine on Tue May 15, 2001 at 02:24:45 PM EST

My graphics card is also dependent on certain configurations of electrons in its atoms. Is that software too, then?

My point is this: when a certain "software" is on the disk, the disk has a certain form. It is, to all effects, hardware. I could replace it with another form of data, one that you would not consider software, that would fill the same purpose. The only difference is how I edit the data on the disk. I can either do it with the help of a machine, my computer, or manually, by editing the disk by hand (which would in reality mean other tools). Software is no more software than the electric cable to my lamp. If I attach a computer with a keyboard, and it can route electrons, it has the same effect as putting the disk into the disk drive. Do you consider my electrical cords software?

--

[ Parent ]

Phooey indeed. (5.00 / 1) (#171)
by Farq Q. Fenderson on Tue May 15, 2001 at 02:58:47 PM EST

You're not arguing in a relevant way at all. You've clearly missed the point:

   <<when a certain "software" is on the disk, the disk has a certain form. It is, in all effects hardware.>>

- then would you please mail me a handful of bits - independent of storage media - to prove your point?

You can't - 'cause they're intangible. They're not necessarily made of electrons, they could be made of holes in paper tape. The bits themselves are intangible - you cannot define them based on their constituent physical parts.

farq will not be coming back
[ Parent ]
Err? (5.00 / 1) (#174)
by caine on Tue May 15, 2001 at 03:12:27 PM EST

I could (ok, I can't actually) create a computer built of plastics, paper, metal, whatever. What's your point? The computer's function is also intangible in that sense. It is another abstraction.

Nonetheless, when the bits on the disk are represented as tangible atoms, as are most other things (unless we're horribly wrong, which we probably are), they are hardware. Software, as said, is just another abstraction.

It's like an abstract class in Java. You cannot use it until you have instantiated it. So, for computers, the instantiated class is hardware. You may call the non-instantiated idea "software". That I am fine with. But that was not what the author of the article said.
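
A sketch of the analogy in Python rather than Java (purely illustrative; the class names are mine):

    from abc import ABC, abstractmethod

    class Software(ABC):            # the non-instantiated idea
        @abstractmethod
        def run(self): ...

    class BitsOnDisk(Software):     # a concrete, physical realization
        def run(self):
            print("real magnetic domains change state")

    # Software() would raise TypeError: the pure abstraction cannot act.
    BitsOnDisk().run()              # only an instantiation does anything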

--

[ Parent ]

Spelling (5.00 / 1) (#175)
by caine on Tue May 15, 2001 at 03:18:52 PM EST

I noted some spelling errors, which made the text quite annoying to read, my apologies :). English is not my native language.

--

[ Parent ]

Actually... (5.00 / 1) (#185)
by Farq Q. Fenderson on Tue May 15, 2001 at 03:44:14 PM EST

The computer's function being intangible is exactly what I'm talking about, in fact. =)

As I see it, it's the mutability of software that gives rise to its ability to do things that can't be done at a lower level, and that is relevant to what delmoi is saying. I'm not prepared to argue it, not right now... 'cause I've been debating this all day long and I'm a bit tired of it now.

In all, it may be an abstraction, but it's indicative of a (IMO) relevant quality that doesn't occur beneath that abstraction (which is the basis for making a *good* abstraction anyway.)

farq will not be coming back
[ Parent ]
*shrug* (5.00 / 1) (#193)
by caine on Tue May 15, 2001 at 04:20:36 PM EST

I do not argue that it is not a good abstraction :). But as you say, I think it's been debated enough.

--

[ Parent ]

Yes and no (none / 0) (#207)
by delmoi on Tue May 15, 2001 at 05:14:01 PM EST

My graphics card is also dependent on certain configurations of electrons in its' atoms. Is that too software then?

Well, first off, hardware can have 'state', that is, inputs to the circuit that are stored inside of it. Its state may change, but what the hardware does does not change, although you may be selecting another part of it. Most computer hardware works like this.

On the other hand, your graphics card may be hardware that stores its own software. It, itself, is hardware, but it has software that runs it.

But obviously there is going to be some blurriness around the edges.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
second-order? uh. what? (5.00 / 1) (#366)
by jkominek on Wed May 16, 2001 at 10:20:00 PM EST

You'll have to excuse me. Apparently my computer science education has been completely lacking, because I've never heard or read of the term "second-order" in connection with hardware or software.

I've heard of second-order differential equations.

I've heard of a first-class data type in a language.

Haven't heard of a second-order simulator, though.

For anything you can do in software, I can produce hardware that will do the same thing.

Probably do it faster, too. (Though the design would be a pain in the ass.)
- jay kominek unix is all about covering up the fact that you can't type.
[ Parent ]

Second order... (5.00 / 1) (#390)
by Farq Q. Fenderson on Thu May 17, 2001 at 09:14:43 AM EST

Assuming that you convert some software of some sort... let's say, Quake, to hardware.

"Quake C" is higher-order than the code which Quake itself has been written in. "Quake C" code *is* software. So while you can "do it in hardware" you still can't do it "without software."

Take a hole, for example. A hole is a good case of a second-order phenomenon. If you continually remove dirt from one side of the hole and put it on the other, the hole moves. Is it the same hole?

The idea is that you can't build the quintessential hole at the level of matter. You can't put it in your pocket and carry it around, by picking up only the hole itself.

farq will not be coming back
[ Parent ]
Right (none / 0) (#206)
by delmoi on Tue May 15, 2001 at 05:09:24 PM EST

What you can do in software, you can do in hardware. But I think it would be really difficult, if not completely impractical, to do things like lisp-style function generation in hardware.
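
By "lisp-style function generation" I mean things like this (a Python sketch of the same idea):

    def make_adder(n):
        return lambda x: x + n          # a new function, created at runtime

    print(make_adder(3)(4))             # 7

    # or compiling source text that only exists at runtime:
    namespace = {}
    exec("def square(x): return x * x", namespace)
    print(namespace["square"](5))       # 25

Building fixed circuitry that conjures new "circuits" out of runtime data like this is exactly the impractical part.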

I was really only talking about conventionally designed computers when I was saying that they were not alive.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Interesting article (3.50 / 2) (#49)
by RangerBob on Tue May 15, 2001 at 10:29:11 AM EST

It's a good idea, but I'm biased since AI is one of the things I'm studying. If you want to develop it further, I'd suggest checking out some of the older AI books out there (a lot of the "newer" books on AI don't seem that great to me). People did work on some of the ideas you're presenting here a long time ago. I think some of the cooler ideas came out of the '60s and '70s, since they were actually new then.

Logical Nitpick (4.14 / 7) (#66)
by kostya on Tue May 15, 2001 at 11:26:23 AM EST

But for me the logic is simple.
  • a human is a machine
  • humans minds are conscious
  • therefore, machines can harbor consciousness
I don't know of any good refutation of point number one, and so I find my logic sound.

Hehe. But then, that doesn't mean it is sound logic. You've just equivocated where there isn't any real proof or cause. Because a human can be described like a machine does not mean a human is merely a machine. Or what is a machine? A series of levers or devices which turn energy into work? If so, how does that relate to consciousness? And for that matter, what is consciousness?

You see, there are a whole lot of questions missing from your rather "simple logic"--questions which point to some of the errors in the formulation.

I guess what I'm trying to say is that you are reducing the argument down to very simple statements--or at least ones that appear to be simple. Then you are drawing an equivocation simply by relation--humans equal machines, all machines therefore can have consciousness. Which sounds just dandy, but it oversimplifies the problem.

therefore, machines can harbor consciousness--but you have drawn an equivocation that just isn't true. Human beings are made up of machines, but they are not merely machines. The whole is not equivalent to the part. That's like saying all cars are wheels or gears because they are made up of them. Properties of whole cars cannot be ascribed to the individual parts willy-nilly.

You might want to check out this site on logical fallacies. I find it helpful to read over. Not that I am the paragon of logic, but it helps ;-)

Additionally, you and I have both violated the first rule of survival on K5--never quote logical formulae or say, "see, it is simple logic ... as X, Y, and Z prove my story"--because some logic PhD lurker will come out and beat you to mush.



----
Veritas otium parit. --Terence
logic (3.00 / 1) (#205)
by delmoi on Tue May 15, 2001 at 05:06:45 PM EST

therefore, machines can harbor consciousness--but you have drawn an equivocation that just isn't true. Human beings are made up of machines, but they are not merely machines. The whole is not equivalent to the part. That's like saying all cars are wheels or gears because they are made up of them. Properties of whole cars cannot be ascribed to the individual parts willy-nilly.

Right: cars are made up of machines, but they are also machines themselves. A human is made up of machines, but it is a machine itself.

Also, I didn't mean to say that all machines could be intelligent, just that some of them could be. Specifically I was referring to conventional computers.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Still got a logic error(s) (5.00 / 3) (#314)
by kostya on Wed May 16, 2001 at 11:54:27 AM EST

Right, but cars are made up of machines, but they are also machines themselves. A human is made up of machines, but it is a machine itself.

No, that does NOT follow. You are still making the same error; you have just stated it differently, using some of my quotes, to lend it credibility. The problem with your reasoning could be considered an argument by generalization, and your use of the word machine could also be an equivocation, but the main error here is one of composition. Because a human is made up of little tiny machines does not mean that a human is necessarily 1) a machine or 2) only a machine.

Or in a more cliched phrasing, "the whole is not necessarily the sum of its parts" or "the whole [can be] more than the sum of its parts". Both are equally possible.

By using a very fast and loose definition for "machine" (you still haven't defined it), your argument also starts to tend towards equivocation--machines have A, humans are made up of machines, a human is a machine, a human has A. You can see how machine is THE tying factor here, but it is not proven to be, nor does it necessarily follow. Think of equivocation like using a void* in C programming--just because the variable/label can point to any type does not mean that all types are equivalent or interchangeable.
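
The same trap, sketched in Python instead of C (illustrative only): one label can point at anything, but the shared label proves nothing about shared behavior.

    thing = 42          # "machine" pointing at one kind of thing
    thing = "human"     # same label, entirely different thing

    try:
        print(thing + 1)
    except TypeError as err:
        print(err)      # the common label did not make the types interchangeable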

However, the real problem with your original reasoning in the article is actually the opposite of composition: division. You use composition and equivocation to tie humans and machines together, and then you start with humans (the whole) and assign an attribute of humans to machines (the part). You are saying that an attribute or phenomenon of the whole is therefore also an attribute of the parts simply because the whole is made up of the parts. Which does not logically follow at all.

There's a lot going on in your three statements, so it would be hard to pin the fault on one specific logic error. You've got a mix of them going on.

While they seem cool and elegant, BEWARE of the three-part syllogism. They almost never work :-) If you ever think you can simplify any major point of debate down to two assertions and a conclusion, you'd best check your work over. Such arguments/proofs almost always lead to disaster.



----
Veritas otium parit. --Terence
[ Parent ]
It would be easier if he were right! (5.00 / 1) (#537)
by fragnabbit on Mon May 21, 2001 at 02:48:15 PM EST

Because a human is made up of little tiny machines does not mean that a human is necessarily 1) a machine or 2) only a machine.

Or in a more cliched phrasing, "the whole is not necessarily the sum of its parts" or "the whole [can be] more than the sum of its parts". Both are equally possible.

Hey, if his logic were correct, then Searle's Chinese Room experiment would indeed be correct and the Turing Test would not prove any intelligence. Since the pro-AI folks claim that Searle fails because "the whole is greater than the sum of the parts", then surely they must also recognize that although a human is made up of "parts", human "consciousness" could be greater than the sum of the parts.

Seems to me that takes care of quite a few of the discussions here... just replicating the hardware doesn't give you the "whole" thing.

I don't see why people are so scared of the Searle thing anyway. It seems straightforward that if a guy in a box with a book can give the same answers without "understanding", that disproves the Turing Test. Why is that so offensive to pro-AI folks? It doesn't disprove that you can create intelligence, only that the Turing Test fails to diagnose intelligence. Find a better test; don't just assume that intelligence can't be created because the test you had turns out not to test for it.

[ Parent ]

Ha Ha (5.00 / 1) (#380)
by Signal seven 11 on Thu May 17, 2001 at 06:03:26 AM EST

therefore, machines can harbor consciousness--but you have drawn an equivocation that just isn't true. Human beings are made up of machines, but they are not merely machines. The whole is not equivalent to the part. That's like saying all cars are wheels or gears because they are made up of them. Properties of whole cars cannot be ascribed to the individual parts willy-nilly.
You are completely missing the point. And it's certainly true that you are no paragon of logic.

Delmoi's reasoning (a human is a machine, a human is capable of thought, machines are capable of thought) is perfectly correct (assuming, of course, that humans are machines; I have no interest in arguing that right now). Nowhere does this chain of reasoning suggest that all machines can think. Nor does it prove that AI is possible.

To get to AI, you need further arguments; if you want to quibble, quibble with those arguments, please, not with a blatantly correct syllogism.

[ Parent ]

Thank you so much! (5.00 / 1) (#418)
by kostya on Thu May 17, 2001 at 12:28:08 PM EST

Delmoi's reasoning (a human is a machine, a human is capable of thought, machines are capable of thought) is perfectly correct (assuming, of course, that humans are machines; I have no interest in arguing that right now).

Thanks! You're right. If you assume anything as correct first, you can then prove that same assumption with logic. I'm not sure how I missed that one. Thanks for pointing that out!



----
Veritas otium parit. --Terence
[ Parent ]
Moron (5.00 / 1) (#421)
by Signal seven 11 on Thu May 17, 2001 at 12:36:52 PM EST

I won't waste any more time with you. But please at least read a book on logic before you again masquerade on the internet as someone who knows something about logic.

[ Parent ]
Any you would like to recommend? (5.00 / 1) (#423)
by kostya on Thu May 17, 2001 at 12:47:43 PM EST

You know, now that I think about it, I did read a book on logic once. But that was in college. You know, I think I even took a class! But perhaps I read a bad book. Can you suggest a better book to read?

I'm always looking for more insight! You seem to have a great grasp on the topic, so maybe you could suggest a better book?



----
Veritas otium parit. --Terence
[ Parent ]
Ah... (4.50 / 6) (#68)
by trhurler on Tue May 15, 2001 at 11:30:58 AM EST

The abortion debate of the 21st century. The people who want to think they aren't just machines will drag out all sorts of stupid arguments that boil down to the same basic set of three or four mistakes in reasoning, and the people who are convinced they ARE just machines will make all sorts of wild claims they can't begin to back up.

Setting aside religious claims (they're not arguable anyway, so if you believe in them, it would be best to just ignore the whole discussion), it is obvious that the human mind is in fact a machine of some kind. Whether or not it can be constructed in the environment of a digital computer in the present sense of that term is another question entirely. To assume that the human brain has the same kind of abilities, and no more, than a digital computer (save sheer scale) is unwarranted, and could be the doom of a lot of these "software can do anything" plots. For instance, current hardware does not have true random number generators in most cases - it is doubtful we'd need one, but what if we did? There are other possibilities, but not as many as you'd think; most things can be done in a digital machine, and it seems likely that we could add anything that was found to be necessary. Even so, a blanket assumption that current machines can be used for this purpose - with or without some additions or slight modifications - is a statement of religious faith or ignorance rather than good reasoning - although it is clear that some machine of some design can serve the purpose. (We have one example, though we don't know very well how it works yet.)
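
On the random number point, note how deterministic the usual "random" sources actually are -- a two-second Python demonstration:

    import random

    random.seed(42)
    a = [random.random() for _ in range(3)]
    random.seed(42)
    b = [random.random() for _ in range(3)]
    print(a == b)   # True: same seed, same "random" stream, every single time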

Then on the other hand, we have arguments of the form "computers don't understand the appeal of a dozen red roses" and "computers don't understand this odd mathematical construct" and so on. These are essentially the claim that "computers as we use them today are not set up to function as minds - therefore, they cannot be." It is as though the guy writing VisiCalc back in the day were told "but a computer isn't a spreadsheet!" Of course, minds are more complicated than spreadsheets, but there is no evidence available that an artificial mind is an impossible goal, and simply pointing out that we haven't done it yet is not an argument of any merit.

In another post, I'm going to lay out a notion of the difficulty of the programming problem involved. This one would get too long and cover too much ground if I put it here.

--
'God dammit, your posts make me hard.' --LilDebbie

If only this debate would replace abortion... (5.00 / 1) (#143)
by Electric Angst on Tue May 15, 2001 at 01:55:12 PM EST

I'd be quite happy if this were the abortion debate of the 21st century, since it holds about the same amount of relevance to our own conception of what is 'human', and I doubt that you'll have mad fundamentalists bombing AI labs to advocate their position...

What this really boils down to is the very concept of what a "machine" is. As was mentioned in an earlier post, the author of this article failed to give a definition for it, which makes the entire debate very problematic (although watching streetlawyer go at it is pretty cool.)

I don't think anyone here is attempting to debate the fact that the human mind operates in some (barely understood) biomechanical fashion. The attempt this article makes, though, to argue that those biomechanical processes are anywhere near the type of "machines" that humans have the knowledge and understanding to construct is flatly wrong. Similar to how you put it: a brain is not a digital computer.

Besides being wrong, the implications behind the rhetoric are rather worrying. To assume that our minds are simply digital computers opens a Pandora's box of rather nasty ideas, such as Skinnerian psychology...


--
"Hell, at least [Mailbox Pipebombing suspect Lucas Helder's] argument makes sense, which is more than I can say for the vast majority of people." - trhurler
[ Parent ]
No doubt... (5.00 / 1) (#148)
by trhurler on Tue May 15, 2001 at 02:04:09 PM EST

The really interesting thing here is that you are adamant that a human mind is so much more complicated than a computer. I'm not saying you're wrong, but other than the sheer scale, which is not necessarily a matter of complexity, what leads you to believe this? (And more specifically, you need to elaborate: do you mean than a computer, or a computer program?)

I'm not convinced we can't do it, but I am convinced that the amount of effort involved would dwarf anything we've ever attempted in any field, and that the economic incentive to do this does not exist.

As for Skinner, I tend to take the opposite view: if we come to understand ourselves well enough to create a mind in a machine, I think this might well promote much more support for individual liberties. I also think we would tend towards extending those liberties to our creations, rather than taking them from ourselves. However, in some ways, I'm eternally an optimist. (Or a cynic, depending on your point of view.)

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
Defining complexity... (5.00 / 1) (#159)
by Electric Angst on Tue May 15, 2001 at 02:30:50 PM EST

Well, when I say "complex", I'm not talking in terms of size or amount. By complexity, I'm referring to a thing's figurative distance from the common understanding. To put it this way: lift, the main principle of flight, is relatively simple once you understand it, but if you are ignorant of it, the entire concept of mechanical flight is dumbfoundingly complex.

I wouldn't be surprised if strong AI ends up happening. I'll be more surprised if it is recognized, and straight-up floored if we can actually communicate with it in any meaningful way, but I'm not going to say that none of these are possible.


--
"Hell, at least [Mailbox Pipebombing suspect Lucas Helder's] argument makes sense, which is more than I can say for the vast majority of people." - trhurler
[ Parent ]
nothing interesting turns on complexity (5.00 / 1) (#254)
by streetlawyer on Wed May 16, 2001 at 02:51:52 AM EST

I'm pretty sure that tadpoles are conscious in some sense; but even if they aren't, there's nothing logically inconsistent in supposing that they are. The only coherent argument against strong AI is Searle's, and that's an argument that consciousness isn't (logically can't be) anything to do with computation, however simple or complicated.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
Searle (5.00 / 1) (#315)
by trhurler on Wed May 16, 2001 at 11:59:14 AM EST

Where can I find this argument? I really don't have time to go digging through a list of publications, and my cursory search turned up at least twenty different books, probably five or six of which could possibly contain such an argument given their titles.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
try (5.00 / 1) (#360)
by acronos on Wed May 16, 2001 at 09:13:29 PM EST

John Searle's argument

[ Parent ]
The response... (5.00 / 1) (#439)
by fuzzrock on Thu May 17, 2001 at 03:34:42 PM EST

There are a bunch of responses to Searle's Chinese Room argument. My favorite is that you wouldn't say my cerebellum was sentient either. Neither is the book or the room, but the system of book, Searle, and room is, just as the system of my brain is.

-fuz

[ Parent ]

Systems reply again... (5.00 / 1) (#453)
by spiralx on Thu May 17, 2001 at 05:45:43 PM EST

Since this is a thought experiment, we can let Searle memorize the entire book. Now the system is nothing other than Searle hiding in a box. But he still doesn't understand Chinese, and since there isn't anything else in the system other than the box (which just hides Searle), then nothing has changed.

That's actually the easiest reply to refute :)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Ah, but... (5.00 / 1) (#456)
by trhurler on Thu May 17, 2001 at 06:18:14 PM EST

The whole thing hinges on confusing just what "the system" is. If he memorizes the book, and we do away with the box in favor of merely imagining that nobody will willingly teach him Chinese, then it is entirely reasonable that the combination of his memory of the book and his actions taken upon that memory constitute the consciousness, and that he himself would never be aware of this fact.

The problem is, fundamentally, that the man in the room is part of the hardware, not the software. As such, even under the assumptions of strong AI, the man should never be the mind; him learning Chinese has nothing to do with any consciousness that might emerge. He is merely a part of the brain, the same as, say a medulla oblongata or a temporal lobe. I could comment at length about the fact that supposedly expert thinkers have missed this fact for two decades, but I'd just get shouted down by the usual idiots, who all want to believe that being respected and well known is correlated with making sense or being good at something.

--
And when you consider that Siggy is second only to trhurler as far as posters whose name at the top of a comment fill me with forboding, that's sayin
[ Parent ]
But again (5.00 / 1) (#462)
by spiralx on Thu May 17, 2001 at 06:52:46 PM EST

The whole thing hinges on confusing just what "the system" is. If he memorizes the book, and we do away with the box in favor of merely imagining that nobody will willingly teach him Chinese, then it is entirely reasonable that the combination of his memory of the book and his actions taken upon that memory constitute the consciousness, and that he himself would never be aware of this fact.

Since consciousness is currently something we can only know exists because we are conscious ourselves, this is all somewhat fanciful in that we can't prove anything yet. But the key difference is whether you believe that simply manipulating syntax, as you say, constitutes consciousness, or whether a layer of semantics is also required, as I do.

Besides, by swallowing everything he is the system, not part of it. And he still doesn't understand Chinese.

But as for the state of science here, I agree that there's a hell of a lot of wind and little hard facts. Personally I'm more interested in future neurological advances than what the "cognitive scientists" will produce. Either way, our knowledge is deeply lacking.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Jethro Tull (5.00 / 1) (#465)
by _cbj on Thu May 17, 2001 at 07:06:18 PM EST

Besides, by swallowing everything he is the system, not part of it. And he still doesn't understand Chinese.

He can only swallow everything if the translation system is fallible. Didn't I just... here... hang on, that was you too. Definitely bedtime.

[ Parent ]

This space for rent (5.00 / 1) (#490)
by trhurler on Fri May 18, 2001 at 10:57:47 AM EST

But the key difference is whether you believe that simply manipulating syntax like you say constitutes consciousness or whether a layer of semantics is also required as I do.
No purely syntax-based scheme of finite size can meaningfully answer any and all questions put to it. Either the room cannot provide such answers (Searle says it can), or else it incorporates semantics. There is no third answer; a translation table of infinite size might seem appealing, but it is like putting a step into a mathematical proof that says "and then something funky happens." If all the pieces are not possible, at least in principle, then the whole thing shows nothing about what is or is not possible.
Besides, by swallowing everything he is the system, not part of it. And he still doesn't understand Chinese.
He is the hardware. Certain of his actions (the ones related to the translation table) combined with certain of his recollections (of the table itself) are the consciousness (software), IF it exists, which I am not claiming to demonstrate. The brain itself does not understand under the strong AI hypothesis; something the brain supports (runs, as if software) does. Whether that hypothesis is true is open to question, and I admit I strongly question it, but the Chinese room does not show what it claims to show.

--
And when you consider that Siggy is second only to trhurler as far as posters whose name at the top of a comment fill me with forboding, that's sayin
[ Parent ]
what's the difference? (5.00 / 1) (#506)
by Kaki Nix Sain on Fri May 18, 2001 at 04:57:35 PM EST

"Personally I'm more interested in future neurological advances than what the "cognitive scientists" will produce."

That's strange. I share your interest in future advances in neurology. But as a cog sci person, I'm not at all sure there is such a strict division between cog sci and neurology.

Neurology: the study of the nervous system, esp. in respect to its structure, functions, and abnormalities.
Cog Sci: an interdisciplinary science that draws on many fields in developing theories about human perception, thinking, and learning.

Aren't perception, thinking, and learning some of the functions of some of the structures of the nervous system? And, if so, can one draw a strict line? It seems to me, if you are interested in what neurons do and how they build a thinking person, then you are interested in the intersection of neurology with cog sci. (Neurology minus the cog sci intersection would seem to be about brain tumors; maybe that is your cup of tea?)



[ Parent ]

The difference (5.00 / 1) (#531)
by trhurler on Mon May 21, 2001 at 10:54:12 AM EST

In general, neurology people, even when they're wrong, at least base their ideas on something known as "reality." Cog sci people are known to engage in wild flights of fancy that they don't even understand the logical form of, such as the Chinese room, which prove nothing. They then claim that they're Right[tm] and that anyone who disagrees is obviously ignorant of their Important Papers[tm]. Well, unless they actually connect some of their daydreams to this little thing known as "reality," I in general will not be impressed; when you see me making what look like cognitive science arguments, you will notice that I'm always on the side which says "your argument doesn't work" or "your argument fails to support your claim." This is because that is the side which is correct :)

--
And when you consider that Siggy is second only to trhurler as far as posters whose name at the top of a comment fill me with forboding, that's sayin
[ Parent ]
Yes. (5.00 / 1) (#464)
by _cbj on Thu May 17, 2001 at 07:03:35 PM EST

I could comment at length about the fact that supposedly expert thinkers have missed this fact for two decades, but I'd just get shouted down by the usual idiots, who all want to believe that being respected and well known is correlated with making sense or being good at something.

Yes, confound those idiots! Though, here, the supposedly expert people haven't missed it. Search Deja for something that embodies that idea, find people echoing it, and find the same people in sensible threads talking with Minsky and Sloman. The well-known people are just the ones with pop science books. Look at Penrose, for God's sake.

[ Parent ]

umm, i think you underestimate (5.00 / 1) (#370)
by kellan on Wed May 16, 2001 at 11:06:01 PM EST

and I doubt that you'll have mad fundamentalists bombing AI labs to advocate their position...
I really think you underestimate. If AI ever becomes marginally viable, I can practically guarantee you people will show up with the bombs. People everywhere reach out to destroy what they cannot understand, and fundamentalists more than most.

kellan

[ Parent ]

Now that's assuming (5.00 / 1) (#150)
by Elendale on Tue May 15, 2001 at 02:05:00 PM EST

That the universe can produce true randomness as well :)

-Elendale
---

When free speech is outlawed, only criminals will complain.


[ Parent ]
No (4.50 / 2) (#152)
by trhurler on Tue May 15, 2001 at 02:09:16 PM EST

Actually, I think of it as an unwillingness to presume that true randomness does not exist; I don't really think the human mind depends on it, even if it does exist here and there, but I don't pretend to know for sure. Anyone claiming he knows a mind can be made in a machine had better know, and had better be sure he knows.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
Re: Now that's assuming (5.00 / 1) (#534)
by mcelrath on Mon May 21, 2001 at 12:05:26 PM EST

That the universe can produce true randomness as well :)
Oh, but the universe does produce true randomness. Quantum Mechanics tells us that when observing the state of a system, there is a certain probability of observing it in one state, and other probabilities of observing it in different states. Quantum Mechanics chooses the state randomly. If there's a 50% chance of observing state A, it will happen 50% of the time, given many observations. But which times you observe state A are random. In fact, it's a very good question as to why nature has such a good built-in random number generator.
For instance, most current hardware has no true random number generator - it is doubtful we'd need one, but what if we did?
Since our minds are made of quantum mechanical particles, our minds have built-in random number generators, and very good ones. All the neurons in your head are firing all the time. Data is carried in the rate at which they fire. But those extra firings that essentially aren't carrying any data occur randomly. The human mind is not deterministic. Any common behaviour one observes across many humans is a result of high statistical probability, not determinism. It's always possible to find a counter-example to any psychological mechanism or hypothesis, which is a problem the field of psychology has been grappling with for centuries. Oh yes, an AI needs a random number generator. Lots of them.
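A quick sketch of the difference in Python (standard library only; the snippet is illustrative): a seeded pseudo-random generator is completely deterministic and will replay the same stream forever, while os.urandom draws on entropy the operating system collects from the outside world.

    import os
    import random

    # Pseudo-random: same seed, same 'random' stream, every single run.
    rng = random.Random(42)
    print([rng.randint(0, 9) for _ in range(5)])  # identical on every run

    # OS entropy pool: seeded by hardware noise, not reproducible.
    print(os.urandom(5))  # different on every run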

--Bob

P.S. The above words were created at random. I have a lot of monkeys, and a lot of typewriters.
1^2=1; (-1)^2=1; 1^2=(-1)^2; 1=-1; 2=0; 1=0.
[ Parent ]

Wait wait... (5.00 / 1) (#548)
by Elendale on Tue May 22, 2001 at 08:54:53 PM EST

Quantum Mechanics is a nasty hack, end of story. I hope it dies a quick death.

P.S. The above words were created at random. I have a lot of monkeys, and a lot of typewriters.

No no, the words were created at pseudo-random :)

-Elendale
---

When free speech is outlawed, only criminals will complain.


[ Parent ]
How so? (none / 0) (#554)
by spiralx on Wed May 23, 2001 at 05:06:13 AM EST

Quantum Mechanics is a nasty hack, end of story. I hope it dies a quick death.

Could you enlighten me on how the most accurately tested theory we have ever produced is a "nasty hack" please?

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Heh (none / 0) (#557)
by Elendale on Wed May 23, 2001 at 02:05:04 PM EST

I can't explain it really (not "won't" but "can't") but i think it'll be leaving sometime soon :) The idea that time exists and a few other things will go with it, i imagine.

-Elendale (yes, i realize how crazy that sounds, you can call me insane now...)
---

When free speech is outlawed, only criminals will complain.


[ Parent ]
QM is amazing (none / 0) (#559)
by spiralx on Wed May 23, 2001 at 03:23:42 PM EST

I can't explain it really (not "won't" but "can't") but i think it'll be leaving sometime soon :) The idea that time exists and a few other things will go with it, i imagine.

Uh huh. Sorry, I don't buy it. A gut feeling that QM is wrong seems to me to be based on nothing more than the popular perception that "QM is freaky sounding, it's all shite". Having studied it and read a hell of a lot of the literature I can honestly say it's one of the most experimentally-based sciences out there.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

QM = BS!!!!!!!!!!!!!!!!!!!! (none / 0) (#561)
by syrrath on Thu May 24, 2001 at 05:12:46 AM EST

The problem with the common perception of quantum being freaky just adds to the idiocy... The fundamental logic flaw in QM is that everything is just in some quazzy state of flux, neither here nor there!!! Saying any more might, gasp, reveal I'm not a QM expert, oh my...

The simplest analogy of the energy fields that exist is not one of undefined energy or place. It's really the only and simple state of energy flows into the stable cores of self-sustaining vortex-based energy funnels/converters. -- Fuck the time, I'm going on...

Let's list all the abstractions and concepts needed for our reality:

  • Time
  • Energy
  • Vortex (patterns of energy traveling through time, also the converter between wave lengths [size])
WOW, that was easy! The interactions of these are involved in all other effects. Gravity becomes the effect of energy being blocked on one side (the side towards the planet) and the combined energy alignment pushing on all sides except one major side. -- gravity done...

On to particles, ah the dirty stuff. The principles of stable energy flows dictate that it's a form of vortex (unless someone wants to explain why the energy just sits in one place for really long times??? Oh, that's right, you QM people think it jumps back and forth, somehow remembering where to jump next or back to / whatever!!! And YOU wonder why people think it's crazy???)... This is easily shown by the simple harmonic frequencies hydrogen shows up as on a spectrograph, with other elements having much more complex interacting harmonics (unknown to me, sadly, because nobody even tries to find the models that are all energy vortex flows)... blah, blah. (ever notice how the size of the atom dictates its properties?)

On to chemistry: These interacting vortex harmonics induce flows that can seem to lock together atoms, QM seems to describe UNKNOWN quazzy subatoms that transfer force blah blah. The amount of energy in a given chemical compound comes from the instability of flow patterns through the matrix. Think of something on par to water flowing onto thousands of rocks and becoming aerated (smaller, more readily usable forms of energy -- i.e. induces more instabilities or excess heat in the system)... boring!!!

The grand old Sun. Granted that the Sun may have fission or whatever, that does not easily explain why the surface temperatures are so much hotter than under the crust (hell, it should be hotter since all the fission heat is sort of trapped, oh my... QM likes to complicate matters, not simplify.). The point of the matter is the energy flowing into the Sun or any planet (some people wonder why planets are warmer than the solar energy that gets to them...), which is called gravity, induces particle flows (and we all know they aren't perfect!). The rate of these flows (gravity) has a proportional effect on the excess energy that becomes what is commonly called heat...

Ah, another thing about simpler is better. The amount of instability in a given vortex particle (atom/compound) generally equates to how much impact the gravity flow has on a particle. I mean, come on, the QM people can't explain it that simply (or, can they???). The smaller the vortex flow the more easily it's offset and turned about endlessly spinning in all directions (no single inertia direction or vortex flow point).

The biggest problem with all this stuff happens to be it being so damn simple (well, the math is nasty when trying to represent a working model, which is normal). The only assumptions needed are a given amount of energy, its continual conversion (vortex anyone?), and something that travels faster than light speed (call it ether/gravity/or whatever). The latter assumption is there because for something to exhibit behavior at a given rate there has to be a "state change rate". Which just happens to be the same as the speed of light in what is commonly referred to as the electromagnetic spectrum. The rate of change is not the true speed measurement of an energy transfer, is it? No way; to say that would be to ignore the need for setup or initialization delay in propagating and creating a field (vortex).

Enough said? What would you like explained??? Come on, ask!!!
-- Either way, you have to admit it is simpler than QM, makes a great Grand Unified Theory, and might add to the debate on consciousness! Off with their heads!!!

[ Parent ]

Please, lay off the crack (5.00 / 1) (#562)
by spiralx on Thu May 24, 2001 at 05:46:33 AM EST

The problem with the common perception of quantum being freaky just adds to the idiocy... The fundamental logic flaw in QM is that everything is just in some quazzy state of flux, neither here nor there!!! Saying any more might, gasp, reveal I'm not a QM expert, oh my...

Only in the Copenhagen interpretation. In the Many Worlds, Pilot Wave or Transactional interpretations there is never such a "quazzy state of flux" as you call it.

The simplest analogy of the energy fields that exist is not one of undefined energy or place. It's really the only and simple state of energy flows into the stable cores of self-sustaining vortex-based energy funnels/converters. -- Fuck the time, I'm going on...

What energy fields? Are we talking about things like the EM field as described by QED and so on? Where does this "undefined energy or place" arise from?

In case you're honestly serious about this, how about defining exactly what the fuck you're on about... vortexes? Energy funnels?

Gravity becomes the effect of energy being blocked on one side (the side towards the planet) and the combined energy alignment pushing on all sides except one major side. -- gravity done...

And there was me thinking it was the distortion of spacetime and the resultant curved geodesic pathways followed by matter.

The principles of stable energy flows dictate that it's a form of vortex (unless someone wants to explain why the energy just sits in one place for really long times??? Oh, that's right, you QM people think it jumps back and forth, somehow remembering where to jump next or back to / whatever!!! And YOU wonder why people think it's crazy???)...

What jumps back and forth? Please, elucidate.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

pitiful me (none / 0) (#566)
by syrrath on Thu May 24, 2001 at 02:38:12 PM EST

Ok, I'll admit to baiting the hook... (The extremely varied descriptions of quantum principles in books, and even a thousand times worse on the web, hardly do the prediction models justice, and my badly combined use does them no justice either -- what, you don't like billions of parallel worlds, either?). -- On one condition, I ask one question in return:
Why does one find ten thousand new terms for the varied effects of the same old principle in quantum physics, and every other physics/chemistry (energy) out there, and yet few describe or try to distinguish the cause from the effects (the combinations, etc. of the singular cause) and unify around EVERYTHING IS ENERGY? (My guess is nobody wants to stop proposing theories...)

I humbly await your reply (of course I understand you have no reason to reply, though... Hooks are used in trolling, right?)

[ Parent ]

QM (5.00 / 1) (#569)
by spiralx on Thu May 24, 2001 at 06:10:06 PM EST

(The extremely varied descriptions of quantum principles in books, and even a thousand times worse on the web, hardly do the prediction models justice, and my badly combined use does them no justice either -- what, you don't like billions of parallel worlds, either?)

The trouble with QM is that it's an amazingly accurate theory which produces all the right results, but it's fucking weird. But then again, considering there's no reason we should be able to even get this far, hey, that's not so bad. The different interpretations all propose different reasons why we get the results and equations we do, but they all end up in the same place obviously...

Personally I prefer the transactional approach (or Wheeler-Feynman absorber theory or whatever it's called), but I think a greater understanding of the Universe will answer many of these questions... maybe this'll come from superstring theory, maybe not :)

Why does one find ten thousand new terms for the varied effects of the same old principle in quantum physics, and every other physics/chemistry (energy) out there, and yet few describe or try to distinguish the cause from the effects (the combinations, etc. of the singular cause) and unify around EVERYTHING IS ENERGY?

Well yeah everything is energy. Wave-particle duality, quantum field theory and all that are quite definite on this fact. And you seem pretty unclear on what you're talking about...

But of course, I may HBT. Physics is about the only area I tend to have an urge to respond nowadays... otherwise I know better :)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Re: QM = BS!!!!!!!!!!!!!!!!!!!! (5.00 / 1) (#565)
by mcelrath on Thu May 24, 2001 at 01:52:19 PM EST

Dude, you smoke some funky shit. Let me clear up a few things for you.

A theory is a good theory if it makes accurate predictions. In the event there is more than one theory that makes good predictions, we generally go with the simpler one. (i.e. Occam's Razor) That it is not intuitively palatable, or doesn't mesh with your worldview is not a good reason to reject a theory.

When coming up with a new theory, one must generally be able to derive the old theory out of it. Remember, the old theory makes some damn accurate predictions about a hell of a lot of stuff, so it's easier to show how to reproduce the old theory from the new than reproduce every single measurement we've made in the last 200 years.

Now, please show me how to calculate the energy levels of a hydrogen atom, to 7 digits of accuracy using your "vortices". Then show me how to calculate the precession of Mercury correctly. And after that, show me why the W and Z bosons have mass. I want math, I want accurate numeric predictions, and I want to see in what limit and under what assumptions does the Standard Model appear out of the new theory. Then you'll have my attention. Otherwise, you're just another uninformed crackpot.
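For reference, the standard theory handles the first of those in one line. A rough sketch in Python using the textbook Rydberg formula E_n = -R_y/n^2 (the constant is the standard Rydberg energy in eV; the loop bound is arbitrary):

    # Hydrogen energy levels, E_n = -R_y / n^2 -- the kind of concrete,
    # many-digit prediction being demanded of the 'vortex' theory.
    RYDBERG_EV = 13.605693
    for n in range(1, 4):
        print(f"E_{n} = {-RYDBERG_EV / n**2:.6f} eV")
    # E_1 = -13.605693 eV, E_2 = -3.401423 eV, E_3 = -1.511744 eV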

I usually ignore crackpot messages. So do most working physicists. There are more crackpots out there with pet theories than physicists. If we tried to shoot down every crackpot theory, we'd spend all our time trying to convince people that don't want to be convinced, and would never have any time to devote to solving known problems with current theories. Ack, why am I bothering to respond to this...back to work.

--Bob
1^2=1; (-1)^2=1; 1^2=(-1)^2; 1=-1; 2=0; 1=0.
[ Parent ]

I'm... (none / 0) (#567)
by syrrath on Thu May 24, 2001 at 02:53:12 PM EST

I was just bored, you should have ignored my trolling...

My only problem with QM is its insistence on defining/naming everything like it's a particle and not providing terms that show an underlying commonality (everything is energy) -- and crazy (touching on stupid) descriptions of phenomena that are so damn easily described in terms of harmonics...



[ Parent ]

Let me clarify (none / 0) (#564)
by Elendale on Thu May 24, 2001 at 12:18:44 PM EST

It's not a "gut feeling", i just don't really know how to explain it. Don't have the words or the training, i guess. In any case, i'm not really expecting anyone to believe me... but just remember, when some scientist proves time non-existent, that "i told you so".

-Elendale
---

When free speech is outlawed, only criminals will complain.


[ Parent ]
LOL (none / 0) (#568)
by spiralx on Thu May 24, 2001 at 06:01:39 PM EST

It's not a "gut feeling", i just don't really know how to explain it.

Bwahahahaha! Sorry, but you've got to admit, that's a classy statement :)

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

great line (5.00 / 1) (#369)
by kellan on Wed May 16, 2001 at 10:56:15 PM EST

Ah..The abortion debate of the 21st century.
What a great line; the post deserves a 5 just for that. Mind if I use it some day?

kellan

[ Parent ]

What is a machine ? (4.00 / 2) (#96)
by Simon Kinahan on Tue May 15, 2001 at 12:44:13 PM EST

The soundness of your argument, as presented, depends on the term you did not bother to define. My definition of a machine would include having been designed for a purpose, which the human brain and body were not. I think when you invoke the term "machine" you're just trying to say the human being is a natural phenomenon, with no special powers we could not manufacture a duplicate for if we understood them. I agree with that, but the term machine introduces a problem you did not need to introduce.

The argument does not shed any light on the problem, though. Wedges, pulleys and cars are machines in the sense you used the term, but they're not conscious, so we still have no answer to the interesting question: what physical properties must an object have in order to be conscious?

Simon

If you disagree, post, don't moderate
Creation or Evolution? (5.00 / 1) (#201)
by Elkor on Tue May 15, 2001 at 04:42:04 PM EST

My definition of a machine would include having been designed for a purpose, which the human brain and body were not

I can't see how you can support that, as it presupposes knowledge of how the human form, specifically, came to be.

Do you believe in Creation?
If so, then God created us in his own image. He had a purpose for this, if for no other reason than his own enjoyment. Therefore we are designed with the purpose of emulating God.

Do you believe in Evolution?
If so, then evolution has designed the human form to accomplish tasks. We have opposable thumbs to grasp. Arms reach, legs walk. We have frontal eyes to be able to triangulate for depth perception. Our hearts pump blood in conjunction with our muscles. Our lungs breathe oxygen, and so on and so on.

How can you say we are not designed?

Just because we don't know who designed us, or the forces that shaped us, doesn't mean that our bodies haven't been designed by either divine intervention or natural selection.

And lack of knowledge of the function of something does not mean it didn't have a purpose. We don't know what the appendix does. Doesn't mean it doesn't do anything, or that it didn't in the past. It could be there just to make us wonder what it does, which is a "valid" purpose. Just not terribly useful.

Regards,
Elkor
"I won't tell you how to love God if you don't tell me how to love myself."
-Margo Eve
[ Parent ]
Evolved entities don't have a purpose (5.00 / 1) (#321)
by Simon Kinahan on Wed May 16, 2001 at 12:14:23 PM EST

To suppose that they do is to suppose that some entity capable of intentionality made them that way - to attribute a quality we have not found in anything other than humans to a tendency of things to die.

Simon

If you disagree, post, don't moderate
[ Parent ]
Evolutionary purpose.... (5.00 / 1) (#344)
by Elkor on Wed May 16, 2001 at 04:07:25 PM EST

How about surviving long enough to propagate the species?

Survival in and of itself can be seen as a valid purpose for existing.

Lifeforms tend to adapt to their environment in ways best suited to support the propagation and continued survival of the species.

Is this a conscious objective? Maybe not. But it still happens.

Regards,
Elkor
"I won't tell you how to love God if you don't tell me how to love myself."
-Margo Eve
[ Parent ]
Nope (5.00 / 1) (#354)
by spiralx on Wed May 16, 2001 at 06:23:11 PM EST

How about surviving long enough to propagate the species?

Nope, the only "purpose" evolution could be said to have is continued survival and propagation of genetic material, and even then it's not a purpose, it's just the end result.

But evolution certainly has nothing to do with abstract concepts like species.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Bah pitiful blather (3.66 / 3) (#102)
by DranoK on Tue May 15, 2001 at 12:57:55 PM EST

Like so many others... *sigh* Well, let's get started.

No, on second thought, let's skip the usual interesting arguments and just get to the point (I have a hangover): you are completely neglecting the style in which humans think.

When a machine is told to search a html file for the regex, /fuck/i, it simply opens the file at the beginning, and progressively goes through each successive four-letter combination, checking the binary value against that of the regex. OK, so not 100% how software does it, but close enough.
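Something like this sketch (hypothetical code and filename, but it is the blind scan I mean):

    import re

    def blind_scan(text, word):
        # Slide a window the width of the target over the file,
        # comparing character values -- no reading, no context, no clue.
        n = len(word)
        for i in range(len(text) - n + 1):
            if text[i:i + n].lower() == word.lower():
                return i  # offset of the first hit
        return -1

    html = open("page.html").read()        # any old file
    print(blind_scan(html, "fuck"))
    print(re.search(r"fuck", html, re.I))  # the regex version of the same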

Now let's look at how a human handles the exact same situation. He will scan, looking for places where it looks like the word would appear. Unless he's a Mentat, he isn't going to read every word on every page. And if he does read every word on every page, there's a good chance he'll miss it. That's why Important Things have more than one editor.

Why is this? Simple. All a machine can do is add numbers and compare. Pretty simple. Humans, on the other hand, look at a situation and immediately apply past experience, either from a book, or from dealing with the situation itself. A computer jumps through if-thens; humans grab a hunch and run on it.

This in itself is precisely what gives us our consciousness. By applying relevant information we are able to create concepts. For example: a computer program has a structure called 'calendar'. The computer has no fucking clue what a 'calendar' is, or why it's important. All the machine knows is that there is an object which contains 12 other objects, each of which contains between 28 and 31 other objects, each of which has a few ints. The computer never realizes the concept behind the calendar, and simply treats it as it would anything else. You rename the class to mochoblah, and the computer doesn't really give a flying fuck. Point is, any abstract significance cannot be present, as past experience is required.
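In code, the point looks something like this hypothetical sketch (all names made up):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Day:
        appointments: List[int] = field(default_factory=list)  # 'a few ints'

    @dataclass
    class Month:
        days: List[Day] = field(default_factory=lambda: [Day() for _ in range(30)])

    @dataclass
    class Calendar:  # rename it Mochoblah; nothing the machine does changes
        months: List[Month] = field(default_factory=lambda: [Month() for _ in range(12)])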

Without significance, culture can never develop. Culture is where modern intelligence begins. We have the concept of 'down' because of a societal experience with gravity. A computer never gets past the stage of chugging numbers into a formula.

To state that humans are machines is either far too cruel or far too kind, depending on your point of view. Your argument has huge gaping gaps in it.

DranoK


Poetry is simply a convenient excuse for incoherence
--DranoK



You must be a rather dull programmer. (none / 0) (#188)
by delmoi on Tue May 15, 2001 at 03:59:47 PM EST

When a machine is told to search a html file for the regex, /fuck/i, it simply opens the file at the beginning, and progressively goes through each successive four-letter combination, checking the binary value against that of the regex. OK, so not 100% how software does it, but close enough.

Computers will think however we damn well tell them to think. If we want them to think in more interesting ways, we just have to program them that way.

Now let's look at how a human handles the exact same situation. He will scan, looking for places where it looks like the word would appear.

Actually, what I'll do is look for a visual impression of the word. I won't 'read' every word, but I will look at every word. This would be equivalent to programming a computer to look at every letter, check whether it could be the start of the word, and if so check the second letter, and so on. That would be much faster, and much closer to the way a human would do it. Also, if you were going to look for the word in places where the context would dictate that it would be, you would actually have to read all, or most, of the words. You just wouldn't notice yourself doing it.

Why is this? Simple. All a machine can do is add numbers and compare. Pretty simple. Humans, on the other hand, look at a situation and immediately apply past experience, either from a book, or from dealing with the situation itself. A computer jumps through if-thens; humans grab a hunch and run on it.

Actually, most computer architectures have no concept of an 'if then'; that's something provided by the programming language, not the hardware. If you wrote your program in assembly language, it wouldn't have if-thens; rather, you would use conditional jumps.
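You can even watch a high-level 'if' melt into a conditional jump with Python's dis module (a real module; the function is just a made-up example):

    import dis

    def check(x):
        if x > 0:
            return "positive"
        return "non-positive"

    # The 'if' compiles to a conditional jump (POP_JUMP_IF_FALSE or
    # similar, depending on interpreter version) -- no 'deciding' anywhere.
    dis.dis(check)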

Also, I don't really see where you've pointed out any holes in my argument, other than your rather naive understanding of computer science, and your declaration that stating man is a machine is either too cruel or too kind.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
I really didn't try to make a point... (5.00 / 1) (#217)
by DranoK on Tue May 15, 2001 at 06:02:50 PM EST

other than to point out your own hubris. You claim someone could someday recreate something we don't even understand? You are a self-serving egotistical fool who believes he has stumbled onto something new. Explain how human intelligence works and I might just attempt to listen to you. Please do not respond to this comment; I'm not particularly in the mood for a flame war at the moment.

Oh, and you're a fucking moron if you think computers can do ANYTHING other than send electricity through transistors. Our minds use chemicals, electricity, self-finding self-creating neuron paths. Even if you could reproduce physically a brain and maybe even instinct, explain how you'd give rise to consciousness? Point is you don't have one fucking clue about the philosophical methods used to create intelligence. You are so lost in your own ignorant self-importance you fail to understand your argument has been presented for the past 200 years and never once has any non-metaphysical thought been written on the subject.

Cheers

DranoK


Poetry is simply a convenient excuse for incoherence
--DranoK



[ Parent ]
bleh (none / 0) (#235)
by delmoi on Tue May 15, 2001 at 11:36:51 PM EST

Oh, and you're a fucking moron if you think computers can do ANYTHING other than send electricity through transistors. Our minds use chemicals, electricity, self-finding self-creating neuron paths.

Does the fact that there are several factors involved make it impossible to emulate using wires and software? Whatever.

You certainly seem angry. But you aren't doing much to defend your position.

Explain how human intelligence works and I might just attempt to listen to you. Please do not respond to this comment; I'm not particularly in the mood for a flame war at the moment.

bleh.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
hypocriticism (5.00 / 1) (#253)
by axxeman on Wed May 16, 2001 at 02:39:30 AM EST

For not wanting a flamewar, you sure are ignitive.

lec·tur·er (lkchr-r) n. Abbr. lectr: graduate unemployable outside the faculty.
[ Parent ]

Software, not hardware.... (5.00 / 1) (#197)
by Elkor on Tue May 15, 2001 at 04:33:50 PM EST

The comment isn't asking to prove that hardware -isn't- conscious, but rather that SOFTWARE CAN'T be conscious.

Yes, no program currently exists (to my knowledge) that accurately and adequately represents the human thought process (though Eliza comes close).

However, please prove that it is IMPOSSIBLE to write one that can.

Regards,
Elkor
"I won't tell you how to love God if you don't tell me how to love myself."
-Margo Eve
[ Parent ]
Fool (3.00 / 2) (#215)
by DranoK on Tue May 15, 2001 at 05:57:25 PM EST

How can you possibly mimic something you do not understand? Oh, and if you claim to understand human consciousness then I laugh in your face upfront.

DranoK


Poetry is simply a convenient excuse for incoherence
--DranoK



[ Parent ]
Trolling? (5.00 / 1) (#290)
by Elkor on Wed May 16, 2001 at 10:22:33 AM EST

mimic: v. to imitate closely

imitate: v. to follow as a pattern, model or example. to be or appear like.

It is easy to mimic things you don't understand.

Children mimic adults all the time, adopting their mannerisms and speech patterns without knowing how they are doing it.

Animals also mimic us. Macaws mimic sounds we make; monkeys emulate our behavior.

Mimicry is simply giving the appearance of doing something. I can write programs that look like they are "thinking" (reference Eliza, a "psychologist" program written in BASIC that was able to successfully fool a double-blind study group into believing that it was a real person sitting at another terminal talking back in response to their comments.)
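A bare-bones sketch of the trick in Python (hypothetical rules, nothing like Weizenbaum's full script):

    import random
    import re

    # Pattern -> canned reflection. No understanding anywhere, just mimicry.
    RULES = [
        (r"\bI feel (.*)", "Why do you feel {0}?"),
        (r"\bmy (\w+)", "Tell me more about your {0}."),
    ]

    def respond(line):
        for pattern, template in RULES:
            m = re.search(pattern, line, re.I)
            if m:
                return template.format(*m.groups())
        return random.choice(["I see.", "Go on..."])

    print(respond("I feel like a machine"))  # Why do you feel like a machine?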

It just depends on what aspect you are trying to emulate.

Anyway, what did your comment have to do with my post?

Elkor
"I won't tell you how to love God if you don't tell me how to love myself."
-Margo Eve
[ Parent ]
Eliza... it's a damn far cry (5.00 / 1) (#536)
by fragnabbit on Mon May 21, 2001 at 02:13:29 PM EST

reference Eliza, a "psychologist" program written in BASIC that was able to successfully fool a double-blind study group into believing that it was a real person sitting at another terminal talking back in response to their comments.

Not picking on you, it just finally got to me... But people here keep referencing Eliza, saying that it is close to intelligence and able to fool many people.

I've seen quite a few implementations of Eliza (perhaps never the "original") and I would have to say that if it fooled anyone, they themselves must not have been intelligent.

It's quite obvious after about five or six sentences that it is regurgitating nothingness.

I've also followed quite a few other links from this discussion, and all of the "intelligent" programs have the same inadequacy: they cannot handle a question. Once you actually ask any of these things any type of question that would require thought, they give you back a sentence that is meaningless (or, more often, say "I see." or "Go on..."). I don't see how any of them could have been considered a person by anyone who gave any thought to the "conversation" (and I use that term loosely.)

I would love some pointers to some better renditions if some exist, but the ones cited so far are far off the mark.

It's not that I don't believe that we can build a pretty smart machine; I think one day we could. But I am not sure that we can build a machine that can reason for itself. That can actually reflect for itself upon a situation and draw its own conclusions.

But hey, that's just me...

[ Parent ]

Digital Descartes (4.00 / 2) (#105)
by Woundweavr on Tue May 15, 2001 at 01:00:21 PM EST

The standard test for consciousness is Descartes' "I think, therefore I am." Self-knowledge shows consciousness. He created this theory because he didn't know if God or some demon was deceiving him with all his knowledge. If a computer ever knows it is being deceived (programmed) then it is conscious. However, demonstrating this to us becomes impossible as we control its output.

Another possibility for computers would be "I refuse therefore I exist." "I resist" would possibly be a better test. When a computer refuses to perform some code, not because of syntax error, or lack of requisite files or security issues, but because it objects to it through independent decision, only then can we call a computer conscious.

Neither of these is possible with current computers, due to their need for programming. Humans may or may not be born with blank slates; experience quickly changes this even if it's true. Perhaps when computers can program themselves this important gap will be jumped, but as of now computers are not conscious.

An Example: (5.00 / 1) (#195)
by Elkor on Tue May 15, 2001 at 04:29:24 PM EST

When a computer refuses to perform some code, not because of syntax error, or lack of requisite files or security issues

There is a company that writes software like this, you might have heard of them, they are called "Microsoft."

Ok, meant as somewhat of a bad joke, though it could be argued that, since most people don't know what causes Windows to crash at times, it is a good example.

And, the fact that it does the operation correctly the second time isn't a valid refutation, because there have been times that I have refused to do something, only to do it when asked later.

As for Descartes, someone once gave me a variant of his popular line that I much prefer:
"cogito cogito ergo cogito sum"

Translation: I think I think, therefore I think I am.

Regards,
Elkor
"I won't tell you how to love God if you don't tell me how to love myself."
-Margo Eve
[ Parent ]
I seem to remember a question (4.75 / 4) (#173)
by weirdling on Tue May 15, 2001 at 03:10:28 PM EST

I can't remember who said it, but one person wanted to know if the soul was something that floated along beside a person in a balloon.
The idea that no machine could ever replicate intelligence is either an implicit admission of a soul or spirit, or absurd. If the part of intelligence that we possess that differs from animals and machines is not in the brain, then it must be metaphysical. If it is, then it can be duplicated, if not in silicon, at least in wetware. A cloned brain is still a separate entity. However, once the operating principles of the brain are sufficiently understood, it is only a matter of time until someone emulates one on a computer. At that time, what will be the difference?
To me, to insist that there is an external, metaphysical idea that sets man apart is egotistical of man. Man is just capable of more abstract thought than other animals, and some men more capable than others. This very variance should show that there really isn't any metaphysicality involved.

I'm not doing this again; last time no one believed it.
Turing Test and Chinese room (4.33 / 3) (#178)
by tumeric on Tue May 15, 2001 at 03:25:56 PM EST

Some interesting work has been done on measuring whether a machine (or its software) can be considered intelligent.

There is the Turing Test which came from the man himself, and a philosophical argument against it from John Searle.

There is also an interesting book called "The Emperor's New Mind" by Roger Penrose which is a disputed but readable argument for why machines (as we know them) are not capable of intelligence. I don't understand this subject fully myself, but it is very interesting (I'm in the NO camp though).

Behavior (3.66 / 3) (#198)
by funwithmazers on Tue May 15, 2001 at 04:40:05 PM EST

Intelligence is relative. Basically, what we think of when we think of artificial intelligence is a computer with the ability to choose its own actions. Now, we already have this to some extent, but the real holy grail of computing we're looking for, in this case, is a being in a computer capable of choosing an action from a near-infinite set of possibilities. All intelligent beings will pick the option or combination of options that will bring the most reward. Some people would say: look at an adult playing a game with a child, who intentionally loses. We can't express this reward as merely win or loss -- this is where emotion enters the equation. So, you see, the problems with creating AI are: 1) getting it to recognize all possible options, 2) recognizing which will bring the best rewards, 3) having it simulate life in such a way that it will learn to adjust the reward values of various options dynamically.

Definition of a machine (3.00 / 2) (#203)
by delmoi on Tue May 15, 2001 at 05:00:10 PM EST

I tried very hard to define all the relevant terms here, since one of the things I hate most about these pseudo-philosophical papers on k5 is their overall lack of definitions.

But, apparently I didn't define the word 'machine' appropriately. I didn't think it would be a problem, but let me try to correct that now.

My definition of a machine is of an object that has some effect on the world outside of it, aside from just being there. In other words, an object that 'does stuff'. Actually, just using the word 'object' as a definition for machine is good enough, I think, for my argument, since I only said that some machines could harbor intelligence.

What I'm really thinking of, though, is a Turing machine style computer.
--
"'argumentation' is not a word, idiot." -- thelizman
Definition (4.50 / 2) (#317)
by kostya on Wed May 16, 2001 at 12:08:06 PM EST

My definition of a machine is of an object that has some effect on the world outside of it, aside from just being there. In other words, an object that 'does stuff'. Actually, just using the word 'object' as a definition for machine is good enough, I think, for my argument, since I only said that some machines could harbor intelligence.

And how does this object/machine "do stuff"?

A machine, such as a lever, does stuff. But it requires a causal force, pressure for example, to cause the work, or to "do stuff".

But in your original argument, even with this definition, you are still making gross over-generalizations or equivocations. If humans are simply machines ONLY BECAUSE they are made up of machines, then how do we "do stuff"--what's the cause? Mind you, how you answer that question leads to some very interesting questions on how you think the universe works (is it all causal? is it deterministic?).

I think you should just abandon machines or requalify machines with some disclaimer or just say what you meant to say: Humans are Turing Machines.

And if that is what you meant to say, you are just rearguing the same AI point we have been rehashing for the past month, except you are skipping over the real point of the past month of discussions--which is, are humans merely turing machines? Instead, you have skipped that because, in your own words, "the logic is simple" and you "don't know of any good refutation ..., and so I find my logic sound"--or more directly put, you find it all clearly self-evident.

But your logic doesn't hold. So does that change it from being self-evident? And why skip the real problem in the debate when clearly a lot of people have raised some valid and logical reasons why it isn't self-evident that people are merely Turing Machines?



----
Veritas otium parit. --Terence
[ Parent ]
Turing Machines (none / 0) (#437)
by delmoi on Thu May 17, 2001 at 02:56:37 PM EST

I'm not really sure that humans are Turing Machines specifically. I'm not exactly sure what that entails. I do think that the brain is a computation device, though, and one that could be emulated on a Turing Machine with enough power, and a really good program :).
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
To the ultimate question! (5.00 / 1) (#441)
by kostya on Thu May 17, 2001 at 04:07:57 PM EST

What is 6 x 9? Just kidding ...

I'm not really sure that humans are Turing Machines specifically.

Ah, the joy of op-ed. Man, you are just going in circles and I'm going right with you! ;-)

Ok, so back to your main premise, and this time, we'll substitute your intention, "A type of Turing Machine", for the poor "machine".

  1. A Human is a type of Turing machine.
  2. human minds have consciousness
  3. THEREFORE, a certain type of Turing machine can harbor consciousness.
But we still have problems. You now have, instead of a fallacy of division, a fallacy of equivocation. You are using two different terms, tying them together, and then exchanging properties back and forth, which doesn't logically follow.

OTOH, if you would like to claim it as a "hunch", well, hey ... that's perfectly ok. But that doesn't make it logically self-evident.

A particular poster accused me of bad-mouthing your perfectly fine syllogism. I hope you see I'm not doing that. Hopefully we can agree that your simple logic is indeed simple--but not necessarily correct. You distilled the concepts down to such simple statements that they have no real value anymore. Not that you couldn't word it well. I'm sure you could work on it and reword it better.

But, on to questions. You end your post with this:

I'd really like to change the question in this debate, and ask this question. If you don't believe a machine can harbor consciousness, explain to us why a human isn't a machine.
I think that's a good start (i.e. rearranging the question, trying the problem from a different angle, etc.). However, I think your original logic has sent you in the wrong direction. It is the wrong question, IMO. Instead, we should be looking at it this way:

Explain to us why a human is not merely a machine.

Now that is an interesting question. And it gets into the real questions of life: what is man? What is his place in this universe? What is the nature of the universe? Is there a purpose or a reason? Or is it all random?

Whether a man is or is not merely a machine or something in addition to a mere machine is question of beliefs more than it is one of logic. It gets to the heart of the matter: how you view the world. You can't prove it one way or the other. But you can prove what you believe about it--to your own satisfaction.

You are positing a naturalistic approach to the world, one which is "materialistic"--something like "The cosmos exists as a uniformity of cause and effect in a closed system." (Sire) When you look at the world through the naturalistic filter, everything is a machine, a series of causes and effects.

Others, myself included, have different views on the world. There are many different world views (I recommend James Sire's The Universe Next Door for a more complete treatise on the general types--good book), but suffice to say they are more than just one. You have Existentialism, Pantheism, Nihilism, Theism, Deism--etc. The key is that all have fundamental axioms, foundations if you may, from which they start and then go out and interpret the world.

Which is to say that most people's beliefs guide their logic, NOT the other way around.

I don't doubt that with a logic class and some work, you could work up a solid syllogism--one that could withstand my puny onslaughts and the power of others more knowledgeable than myself. But we will still go around in circles. Because you are saying: "See, my logic makes sense because of my view of the world." But it won't necessarily work in another's view of the world. You are arguing on top of the assumptions--when it is your assumptions and my assumptions that are the really interesting part.

What would be more fruitful is to discuss the underlying foundations of each world-view. To examine them and see each approach's shortcomings. To weigh them and see what is really going on in each other's heads. It's the foundation that is really interesting.



----
Veritas otium parit. --Terence
[ Parent ]
I see the problem now (none / 0) (#523)
by delmoi on Sun May 20, 2001 at 12:09:06 AM EST

I'm probably going to try to write a new article sometime (probably in a month or so, I don't want to flood the site with this stuff). But basically I was saying that one subset of machines, humanity, was intelligent, and I was using that to try to prove that another subset -- Turing machines -- could also be intelligent.

All I can say is, oops...
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Turing machines (5.00 / 1) (#461)
by spiralx on Thu May 17, 2001 at 06:42:55 PM EST

I do think that the brain is a computation device, though, and one that could be emulated on a Turing Machine with enough power, and a really good program :)

I'm pretty sure then that that would make the brain a Turing machine. Remember any Turing machine can emulate any other Turing machine, and I'd imagine anything a Turing machine can emulate would be another Turing machine.

I may be wrong though...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Interesting ... (5.00 / 1) (#521)
by diskiller on Sat May 19, 2001 at 02:34:58 PM EST

That's a very interesting point ....

Any turing machine can emulate any other turing machine (e.g., a multi-tape turing machine, a non-deterministic turing machine, etc.)

So! What is my point?

One thing that we *definitely* know is that we, and therefore our human brain, can emulate a turing machine.

We can sit down, with pen and paper, and draw up a turing machine, shove in some input, run it, and get an output.

Our brain has emulated a turing machine (with the help of pen and paper, though we could do without; it's just hard to keep track of everything).
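The pen-and-paper exercise is also tiny in code. A minimal sketch, assuming a made-up machine that flips bits and halts at the first blank:

    def run_tm(tape, rules, state="start", blank="_"):
        # A toy turing machine: (state, symbol) -> (write, move, next state).
        # For simplicity the tape only grows to the right.
        tape, pos = list(tape), 0
        while state != "halt":
            symbol = tape[pos] if pos < len(tape) else blank
            write, move, state = rules[(state, symbol)]
            if pos < len(tape):
                tape[pos] = write
            else:
                tape.append(write)
            pos += 1 if move == "R" else -1
        return "".join(tape)

    rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run_tm("0110", rules))  # -> 1001_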

Therefore, wouldn't this prove that our brain *is* a turing machine, and therefore, a turing machine can emulate a human brain?

Or is the human brain something 'above' a turing machine? We have DFA's, NFA's, PDA's, Turing Machines... and the next step is something again more powerful, the human brain?

Or is the human brain == turing machine?

D.

[ Parent ]
Again, people forget the Turing-Church Hypothesis. (3.87 / 8) (#210)
by Estanislao Martínez on Tue May 15, 2001 at 05:23:19 PM EST

The Turing-Church Hypothesis, IMHO, is turning out to be one of the great intellectual scams of the 20th century (and I don't mean that T&C intended it that way, BTW).

Let's review what we know. Mathematicians and computer scientists, over the course of the last 70 years or so, have come up with a number of formal models of computation-- recursive functions, Turing Machines and the Lambda Calculus are the best known, and they all have been proven to be equivalent. Whatever one can compute, the others can.

Thus, the Church-Turing Hypothesis states that these systems can compute precisely the class of functions that may be computed by "mechanical" or "effective" means. The terms "mechanical" and "effective", however, are undefined informal terms. There are some informal criteria that apply in order to decide whether a computation is "mechanical", but they are not strictly defined. T&C admitted this, and that the Hypothesis is thus unprovable. You could prove it false by building a machine that computes something a Turing Machine can't, but you can never prove it true.

Thus, here comes the scam: because Turing and Church said back in 1936-38 that they thought their systems computed everything that a machine can compute (while admitting it could never be proven true), nowadays everybody and her father takes it to be a fact that TM's compute everything a machine may compute. But this is no fact at all, and believing it crucially involves a leap of faith.

My point is simple: even if we were to believe that people are machines, it does not follow that a Turing Machine can do anything people can, simply because the concepts "machine" and "Turing Machine" have *never* been proven to be identical, nor will they ever be. Thus, the "logical" argument given in the article is flawed.

Isn't it amusing how people are always trying to settle empirical points by sitting in their armchair and deducing empirical facts from their prejudi-, er, assumptions? After all, if evolutionary psychologists do it, why shouldn't everybody else join in?

--em

Proof we're (almost) turing machines: (4.50 / 2) (#230)
by zakalwe on Tue May 15, 2001 at 10:46:52 PM EST

My point is simple: even if we were to believe that people are machines, it does not follow that a Turing Machine can do anything people can, simply because the concepts "machine" and "Turing Machine" have *never* been proven to be identical, nor will they ever be. Thus, the "logical" argument given in the article is flawed.
OK. Here's a logical argument showing how the brain is a turing machine (Actually a finite state machine - not even capable of all a turing machine could do)

Take a human, and imagine an incredibly fine grid, a fraction of a neuron in width. At each coordinate of this grid, record what cell is in that position. This is the 'state' of that person. If we reconstructed an exact duplicate of these cell positions, we would have an exact duplicate of the person. Since this grid has a finite size, and a finite number of points, we can thus represent the state of any human by a finite (though extremely large) number. Since a human with a given state would think in the same way, we now have determined a possible valid state for a human. Hence, humans are finite state machines.
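To make the 'finite number' step concrete, here is a hypothetical miniature of the encoding, with three made-up cell types standing in for the real biological zoo; any finite grid over a finite alphabet collapses to a single integer, i.e. one state:

    CELL_TYPES = ["empty", "neuron", "glia"]  # illustrative alphabet

    def encode(grid):
        # Read the grid as digits of a base-3 number: one grid, one integer.
        n = 0
        for cell in grid:
            n = n * len(CELL_TYPES) + CELL_TYPES.index(cell)
        return n

    print(encode(["neuron", "empty", "glia", "neuron"]))  # 34 -- one state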

There are two possible arguments I can see against this, and so I'll try to point out why they are invalid:

  1. The brain is more complex than the position and type of cells.

    This doesn't matter. If there is more state than at the cellular level, then we can draw a finer (but still finite) grid as detailed as we like, right down to the positions of individual atoms. Unless you want to claim that the brain can somehow see sub-quantum state that other methods of detection can't, this is invalid.

  2. The basis of conciousness isn't physical

    One possibility is to claim that our consciousness is due to some kind of 'soul', which we could posit has infinite possible states. While I can't disprove something as undetectable as this, there is strong evidence for our consciousness being physical. Drugs, blows to the head and similar physical actions do affect the way we think, which would seem to suggest a physical basis for thought.

Unless there is some other reason I'm missing, I think it's reasonable to suppose that people can be represented by finite state machines. This may not be a particularly pleasant thought for many (we're all just mindless automatons?!) but I see no reason to think it's not true.

[ Parent ]
sounds like proof that humans are human (5.00 / 1) (#336)
by eLuddite on Wed May 16, 2001 at 03:40:55 PM EST

The brain is more complex than the position and type of cells. This doesn't matter.

It matters if you mean to engineer this additional complexity, because the only analytical tools you have at your disposal are based on reason. Even if the additional complexity can, in theory, be engineered using the formal discoveries of reason, work by Chaitin shows that there is no guarantee of their discovery. In other words, not only can you not prove TC correct, you may not even be able to disprove TC by example.

Your grid of "stuff" remains stuff that may or may not be reproducible through computation.

---
God hates human rights.
[ Parent ]

representation/simulation (5.00 / 1) (#375)
by streetlawyer on Thu May 17, 2001 at 03:42:54 AM EST

But by the same argument, you could prove that a cucumber sandwich, a chocolate cake and a pot of tea can be represented as finite state automata, but that doesn't mean that my computer is going to be any help when the vicar drops round for tea.

The representation you talk about is playing fast and loose with the word "representation". It doesn't represent any of the syntactical or semantic properties of the brain. All you've proved is that a physical object can in principle be modelled if you ignore quantum effects, which information was free for the asking. You're no nearer to creating a mind.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Scientific Theories (5.00 / 1) (#466)
by ucblockhead on Thu May 17, 2001 at 08:21:59 PM EST

You could prove it false by building a machine that computes something a Turing Machine can't, but you can never prove it true.
That's true of any scientific hypothesis, though. But obviously "mathematical proof" is a higher level of "we think it's true" than "unfalsified hypothesis".


-----------------------
This is k5. We're all tools - duxup
[ Parent ]

leviathan (3.00 / 3) (#222)
by radar bunny on Tue May 15, 2001 at 07:35:56 PM EST

I've mentioned this book before here on K5, but it seems to be relevant time and again. Check out Leviathan by Thomas Hobbes. He starts off by talking about a watch and then moves to a discussion of Man as rational and mechanical in thinking. The end result of this is that man has certain wants and needs and will act to fulfill those. Therefore if you can find out what motivates a person you can use that to manipulate them towards ends of your own.

-- not so unlike hacking a computer, when you really think about it.

Of course we aren't Turing Machines. (3.66 / 6) (#223)
by Lover's Arrival, The on Tue May 15, 2001 at 08:30:14 PM EST

I find the argument that Man is a Turing Machine somewhat disturbing. A Turing Machine, by definition, is completely deterministic and mechanistic. Therefore a Turing Machine can never ever make a decision - as soon as it is switched on, everything else must surely follow. Surely the essence of an intelligent being is that it can make conscious choices? That it is not bound by fate.

Perhaps we embrace this hypothesis because it frees us from responsibility for our own actions - an example of how the culture and values of the times seep into scientific discourse.

What I don't understand is how anyone can claim we are Turing Machines with the total lack of evidence and the definitional problems. Just because we have built simple little calculators like Crays, which display not an iota of intelligence, does not mean we have any real insight into how the mind works. We don't have a clue - no one does.

The arrogance, not of science but of its pop-science-bred advocates, knows no bounds.

--Anticipation of a New Lover's Arrival, The

A Turing Machine (5.00 / 1) (#251)
by i on Wed May 16, 2001 at 02:33:42 AM EST

which is a more or less faithful simulation of yourself would say, too: "Of course we aren't Turing Machines" and "we don't have any real insight into how the mind works". Which is to say, you don't have any sort of proof that you are not a Turing Machine. Quite the opposite. There's plenty of evidence (not proof) that you are. Know how a one-bit register works? Know how a neuron works? Draw your own conclusions.

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
neurons != 1 bit (5.00 / 1) (#343)
by bored on Wed May 16, 2001 at 04:07:04 PM EST

Apparently neurons don't function as just on/off. They are analog: the chemical reaction strength and composition has something to do with whether or not adjacent neighbors are triggered.
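A toy illustration of the difference (made-up numbers): a one-bit register holds 0 or 1, while a neuron-like unit sums graded inputs against a firing threshold:

    # A crude 'analog' neuron: weighted, graded inputs and a threshold,
    # versus a register that can only ever hold 0 or 1.
    def fires(inputs, weights, threshold=0.8):
        activation = sum(i * w for i, w in zip(inputs, weights))
        return activation >= threshold

    print(fires([0.2, 0.9, 0.4], [0.5, 1.0, -0.3]))  # True: 0.88 >= 0.8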

[ Parent ]
That's why it's evidence (5.00 / 1) (#381)
by i on Thu May 17, 2001 at 06:34:58 AM EST

and not proof. Neurons are very similar to bit registers or elementary gates, but not quite the same. So simulating neurons with bits/gates may or may not work. Simulating neurons with PCs, one to one, surely will work, but we need 10 billion PCs to actually try it, plus quite a few network configurations and initial states.

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
not 1 bit... more than 1 bit? (5.00 / 1) (#426)
by bored on Thu May 17, 2001 at 12:53:50 PM EST

I wasn't arguing about whether or not you could simulate neurons in a digital computer; I was simply pointing out that you may need more bits to effectively represent the information that neurons transmit and store. What I was saying is that, for a given number of neurons, the complexity of the problem increases significantly if you consider that each neuron may be a 50-state device (or whatever, with complex switching rules) instead of a 2-state device.

[ Parent ]
conscious choices... (5.00 / 1) (#262)
by Greyshade on Wed May 16, 2001 at 06:01:36 AM EST

The vessel that houses your consciousness has nothing to do with definitional problems or cultural discourse. I implore you to stop for a moment and change the oxygen saturation level in your blood. Now, choose to grow three inches. Concentrate really hard and change the color of the hair under your arms. What's wrong? Having problems? Discovering that you are trapped in a machine that is hard-wired with features that you can't alter by conscious effort (software) alone?

[ Parent ]
Interactive machines (5.00 / 1) (#355)
by kaiidth on Wed May 16, 2001 at 07:02:28 PM EST

A point that might be worth keeping in mind is that a program on the whole exists within very narrow boundaries of experience. Even with literal software, one often finds oneself with unexpected results due to unexpected operating circumstances.

Therefore, whether the human being is in itself predictable and deterministic is a secondary point by comparison to the question of whether the human being lives in an accurately definable environment. To which the answer, as we know, is 'no'. Thus, it is difficult to say whether my life is predetermined, since in effect the turns it takes depend upon random external circumstances in either case.

[ Parent ]
Probabilistic Turing Machines (5.00 / 1) (#362)
by Chuan-kai Lin on Wed May 16, 2001 at 09:30:28 PM EST

I find the argument that Man is a Turing Machine somewhat disturbing. A Turing Machine, by definition, is completely deterministic and mechanistic. Therefore a Turing Machine can never ever make a decision - as soon as it is switched on, everything else must surely follow. Surely the essence of an intelligent being is that it can make conscious choices? That it is not bound by fate.
So if that is your main problem with the argument, perhaps you would settle for probabilistic Turing machines then? In that case the future would not be controlled by fate, but rather by rolls of dice... Is that really any better?

[ Parent ]
my own views on the matter: (3.25 / 4) (#231)
by switchfiend on Tue May 15, 2001 at 10:52:00 PM EST

I do agree that man is a machine.
I also believe that computers can certainly be "alive"

You describe software as being able to be "alive"; I would argue that since software can be expressed, in its simplest form, as "states" on a circuit board, machines can also be alive.

The quest to define what "life" is has existed as long as recorded history (and most likely a great deal longer). I am not aware of any new breakthroughs in neuroscience having finally found the spark of "consciousness", but I'm not up on the latest journals ;)

I would then submit that once humans can determine what life is, what consciousness is, etc., we will have a better understanding as to whether or not "machines" are capable of similar states.

In nanotechnology, the majority of the proposed "nanites" (ie: assemblers, etc.) are based at least partially on their biological counterparts (ribosomes in the case of assemblers).

Once man can manipulate objects at the atomic level with precision, who's to say what is organic and what is mechanical.

Another take (4.00 / 4) (#233)
by pistols on Tue May 15, 2001 at 11:05:02 PM EST

As a couple of people below have mentioned, your first premise doesn't exactly hold water. It seems a little odd to me to define consciousness in terms of physical being.

Still, I have to say I fully expect to encounter 'conscious' computers sometime in my life.

Are humans conscious? I (maybe I'm just different ;) don't perceive any conscious entities other than my own (yuck.... about 6 undefined terms there...). Still, I generally assume other people are conscious. Why? I see three possibilities from here.
  • Consciousness is a mechanical 'illusion', much like a program that prints out 'I am conscious!' (see the sketch below this list). Here, I am programmed to believe other people are conscious too. This has lots of interesting implications, but I don't agree with it.
  • There is a supernatural being which bestows consciousness on myself and others, and gives me the knowledge that I and others are conscious, possibly through similar means as above ('brain wiring'). This is much more agreeable.
  • Consciousness exists, but I can't know it. That is, I *know* that *I* am conscious, but I just assume that other people are. I assume this because they 'act like it' (brain wiring again). I like this idea the best.
The only one of these that would prevent a computer from being conscious would be the second, *if* the supernatural being didn't want them to be, *and* gave me the knowledge that this was so.
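As for the first possibility, the program really is that trivial, which is rather the point (a sketch):

# The 'mechanical illusion' in its most trivial form: a program that
# reports consciousness with nothing at all behind the report.
def am_i_conscious():
    return "I am conscious!"

print(am_i_conscious())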

On another note, I don't think I've ever seen as many comments posted to one article as streetlawyer's below...

[ot] verbose bastard, am I not? (5.00 / 1) (#255)
by streetlawyer on Wed May 16, 2001 at 02:53:42 AM EST

It so happened that this article caught me at the happy confluence of a dead couple of hours at work, and the day after I'd reread Searle's "The Mystery of Consciousness" on the train home :-)

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
Fundamental flaw of Chinese room argument (4.00 / 5) (#239)
by acronos on Wed May 16, 2001 at 01:03:27 AM EST

If man is a machine, then man also fails the Chinese room argument, because each of the neurons and chemicals that make up the human brain has no clue what a banana is either. Much less pi. Just because the individual parts cannot reason does not negate the ability of the whole to reason. People who employ the Chinese room argument make the mistaken assumption that the whole is only as much as its parts. In truth, the whole can be MUCH MUCH more than its parts.

The real question is "is man a machine?"

First we must define what we mean by machine. What I mean by machine is something that functions under mechanical, electrical, or ordinary physical laws and that can be built, designed, modified, and understood. In other words: is the whole of what we are made completely of atoms and cells and neurons?

Accepting that man is a machine has devastating consequences to religion. Those who are religious must put on their religious blindfold so they cannot truly see the possibility. Just like they had trouble admitting the earth was a sphere and not the center of the universe. The implication of man being a machine is that man has no "soul", that esoteric non-material that lives on after death. If the mind of man is only made up of neurons and the like, then we already have proof that it is possible to build a machine with human level intelligence. That proof is that evolution has already done it. The real question is, "does man have a soul" which makes him more than atoms.

If man is made of atoms alone then I have no question that we will eventually build machines that make our intelligence look like a candle compared to the sun.

I fully understand the resistance to this possibility. I resist it also. I would expect a stronger reaction to this than any other sacred belief man has ever had to face the lie of. It is a whole lot easier to admit that the world is round than it is to admit that death is final.


So close, but still so far (4.50 / 2) (#249)
by Anonymous 242 on Wed May 16, 2001 at 02:29:31 AM EST

The real question is "is man a machine?"
I think that the real question is more along the lines of: is a human entirely a machine?

After all, that a human is a machine in some aspects is self-evident. The real question is whether or not our existence is entirely defined by the aspects of personhood that are machine-like.

Accepting that man is a machine has devastating consequences to religion.
This is less so than many people would think. Consider, for example, the original teaching of Reverend Elijah Muhammed's Nation of Islam. Reverend Muhammed taught that there is no soul, that life ends at death, that there is no afterlife. Only after the Nation of Islam began to encounter traditional Islam did the teaching of an afterlife start to spread within the Nation of Islam.

Consider also various religions that might very well consider the soul to be just one more aspect of the machines known as humanity.

If the mind of man is only made up of neurons and the like, then we already have proof that it is possible to build a machine with human level intelligence. That proof is that evolution has already done it.
Actually, this begs the question. An implicit presupposition you have is that the universe is both self-existent and mechanistic. Only in such a universe could the fact that evolution produced the human species be construed as evidence that such a machine can be built by entirely natural (as opposed to supernatural) means.
The real question is, "does man have a soul" which makes him more than atoms.
I don't think that this particular question is as interesting as you think it is. See above where I present that some people might consider the soul to be made of atoms. Consider a theoretical soul that is made of atoms, but doesn't have to be made of any particular atoms. Such a soul could be a machine and still transcend human death as we know it.
I would expect a stronger reaction to this than any other sacred belief man has ever had to face the lie of. It is a whole lot easier to admit that the world is round than it is to admit that death is final.
Your misunderstanding of the political intrigues surrounding the Catholic Church's condemnation of Galileo points to a certain amount of ignorance of the history of religion and science. The literal interpretation of Genesis that was used to rationalize the condemnation of certain scientists in the medieval era is not the normative interpretation of the book of Genesis. Consider the third century Christian scholar Origen, who posited that the seven days of creation were entirely allegorical. Consider also the assertions of the Jewish Rabbis in antiquity who held that each hour of each of the seven days of creation was symbolic of a span of time measurable in thousands of years.

[ Parent ]
mechanistic assumption (5.00 / 1) (#281)
by speek on Wed May 16, 2001 at 09:40:55 AM EST

An implicit presupposition you have is that the universe is both self-existent and mechanistic

The assumption of a mechanistic universe is the underlying assumption behind all science, and behind most human reasoning about the world (if it happens once, it will probably happen again). So, while we'll have to wait till the pudding arrives for the proof that we can indeed construct humanlike intelligences out of computers, there are a lot of reasons to think it likely we'll succeed.

As far as being self-existent, that depends on how you define universe. The question is, is the universe entirely mechanistic, with no part or aspect being non-mechanistic? Randomness is also an interesting case, because if you have an aspect which is entirely random, and all other aspects are mechanistic, then we could still make the argument that since evolution succeeded in creating consciousness, so could we (because we could just as well be the beneficiaries of a random event that bestowed consciousness). However, it would lengthen the odds so much that no one would believe it.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

I agree (5.00 / 1) (#295)
by Anonymous 242 on Wed May 16, 2001 at 10:40:27 AM EST

My wording fell into the same error that I criticised in the post by acronos. Instead of stating, An implicit presupposition you have is that the universe is both self-existent and mechanistic, I should have stated, An implicit presupposition you have is that the universe is both self-existent and entirely mechanistic.

Other topics:

So, while we'll have to wait till the pudding arrives for the proof that we can indeed construct humanlike intelligences out of computers, there are a lot of reasons to think it likely we'll succeed.
There are also more than a few reasons to think that if we succeed we might not be able to tell. See this post of mine in the diary of spiralx. To summarize: given the difficulty in describing what human intelligence actually entails, could we recognize intelligence in machines, especially if machine intelligence is fundamentally different from our own?

Regardless, I'm patient.

As far as being self-existent, that depends on how you define universe.
Obviously many terms would have to be more clearly defined. Universe and existence are two such terms.
The question is, is the universe entirely mechanistic, with no part or aspect being non-mechanistic?
As mentioned above, this is what I should have stated and didn't.
Randomness is also an interesting case, because if you have an aspect which is entirely random, and all other aspects are mechanistic, then we could still make the argument that since evolution succeeded in creating consciousness, so could we (because we could just as well be the beneficiaries of a random event that bestowed consciousness).
If true randomness of some aspects of the genesis of humanity is the case, then we have no way of knowing whether or not true AI is a possible feat. If human intelligence came about even partially by chance, then while it is possible for that same random factor to come up again, it is also possible that that factor (or its functional equivalent) may never recur.

[ Parent ]
self-existent (5.00 / 1) (#303)
by speek on Wed May 16, 2001 at 10:57:31 AM EST

I was merely getting at the point that if I define the universe as "everything that exists", it is necessarily self-existant. Then, the only question is regarding the mechanistic-ness of it. Not an important point, but I do dislike extra questions that seem to confuse the issue.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

Not necessarily (5.00 / 1) (#305)
by Anonymous 242 on Wed May 16, 2001 at 11:04:06 AM EST

I was merely getting at the point that if I define the universe as "everything that exists", it is necessarily self-existent.
In theory, it is possible to have a state of existence that is so radically different from the manner in which matter and energy exist that it is not quite correct to say that things that exist in this mode have existence.

Another possibility would be a universe created by something outside the universe which no longer exists. Such a universe would be everything that exists, but not self-existent.

Hence, depending on the definition of what it means to exist, a universe defined as everything that exists may or may not be self-existent.

[ Parent ]

philosophical silliness (5.00 / 1) (#331)
by speek on Wed May 16, 2001 at 02:39:28 PM EST

depending on the definition of what it means to exist

Heh, who says intelligence is more than syntactical manipulation? Oh, right - that was streetlawyer. Maybe he should take a look at this?

Anyway, if you're going to use terms like "self-existent", and then quibble about my use of the word "exists", as though it doesn't encompass as much as when you use the word, then I might just take to referring to you as "S. Eliza" (where S stands for "Sophist").

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

perhaps you're unacquainted w/ apophatic theology (5.00 / 1) (#333)
by Anonymous 242 on Wed May 16, 2001 at 02:57:51 PM EST

Far from being sophistry, properly defining terms is very important.

Consider, for example, some of the desert Fathers of the Orthodox Church who would contend that if by existence we are speaking of the existence of created things, then God does not exist because the divine essence is so unlike the essence of matter, so as to be a different mode of being. But this assertion of the desert Fathers that God does not exist by no means entails what most people would intuit from hearing the statement, "God does not exist."

Anyway, if you're going to use terms like "self-existent", and then quibble about my use of the word "exists", as though it doesn't encompass as much as when you use the word, then I might just take to referring to you as "S. Eliza" (where S stands for "Sophist").

First, my quibbling over your meaning of the word exist was only intended to point out that a mechanistic universe does not necessarily entail a self-existent one (and vice versa).

Secondly, it seems that you are somewhat missing the point of my replies. Indeed, in my first response made directly to you I stated: Obviously many terms would have to be more clearly defined. Universe and existence are two such terms.

Words mean things, but you and I might not attribute the same meanings to the same words. If we don't take the time to ascertain that we're talking about the same beast, we're more likely to end up talking past each other rather than to each other.

[ Parent ]

self-existence is part of definition of universe (5.00 / 1) (#347)
by speek on Wed May 16, 2001 at 04:30:04 PM EST

It is important to properly define your terms. But, whether the universe is self-existent is entirely dependent on whether you define it to be so - it would necessarily be a part of its definition. Either it is, or it isn't - you decide. After that, then we can get on with whether it's entirely mechanistic or not (or, more accurately, whether it makes sense to make that assumption, or not).

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

Tautology is not a good place to start (5.00 / 1) (#434)
by Anonymous 242 on Thu May 17, 2001 at 02:10:04 PM EST

But, whether the universe is self-existent is entirely dependent on whether you define it to be so - it would necessarily be a part of its definition.
I don't think that mode of existence (in terms of being self-existent vs. being contingent on something else for existence) is a necessary part of the definition of universe. In fact, if we make it so, we beg several important questions that have implications later on.
Either it is, or it isn't - you decide.
I am hardly capable of making the decision. My position is that the universe is not self-existent, but whether this view corresponds to reality is outside of my control. The universe is, indeed, much larger than my understanding of it.
After that, then we can get on with whether it's entirely mechanistic or not (or, more accurately, whether it makes sense to make that assumption, or not).
What do you mean by mechanistic?

[ Parent ]
this discussion is taking on a life of its own... (5.00 / 1) (#440)
by speek on Thu May 17, 2001 at 03:42:23 PM EST

What do you mean by mechanistic?

I would define something as mechanistic if causal relationships are the one and only source of motion/activity. Now, the logical question is - what does causal mean? Well, by causal, I mean that motion/activity is always caused by force (where I use both "force" and "caused" in their intuitive senses), and that force always has an element of material contact plus time. Material here can mean energy and/or matter (since they appear to be essentially the same "substance"). The important element here is time. For a relationship to be causal, a definite period of time must elapse during the interaction, and the caused event absolutely happens "after" the causing event, and the event itself takes more than 0 time to happen. Frankly, I don't believe the universe is causal (there's building evidence that some interactions are occurring across distances simultaneously, which violates the above definition of causal) - but it's a damn useful assumption.

Alternatively, the universe could be deterministic, but not causal. Something like that described by David Bohm (who, ironically, originally called it the causal interpretation of quantum mechanics, oops).

Now, back to the whole self-existent thing:

in terms of being self-existent vs. being contingent on something else...

At which point, I make up a new word - "meta-universe", to be defined as the universe plus the "something else". You can then invent a new something else outside of that, but I can always add another "meta". And, not being an ancient Greek, I am comfortable with recursion, and I will quickly write a small program to continue the argument for me in my absence (which reminds me, I've been thinking it would be fun to write a Kuro5hin-Eliza - something to respond to trhurler perhaps:-).
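In that spirit, the small program practically writes itself (a joke sketch in Python, obviously):

import itertools

# A small program to continue the argument in my absence: whatever
# "something else" you posit outside the universe, just add another meta.
def encompass(universe="universe"):
    while True:
        yield universe
        universe = "meta-" + universe

for u in itertools.islice(encompass(), 4):
    print(u)  # universe, meta-universe, meta-meta-universe, ...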

Anyway, the point of choosing to define the universe as non self-existent is to raise the question of dualism: Is there something that exists which cannot be understood using the methods of science (which explicitly assumes determinism - else experimentation would be a useless activity)? But, if we just accept that the term "universe" is a meta term that encompasses all that is/will be/was, then the question, "is the universe entirely mechanistic?", is sufficient to include the possible challenge that dualism presents.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

my take on the metaverse (5.00 / 1) (#483)
by Anonymous 242 on Fri May 18, 2001 at 09:19:54 AM EST

Personally, I think that the only question in the whole shebang that has any sort of chance of being answered in our present state of human knowledge (without recourse to moving the discussion into the religious realm) is whether or not the universe looks self-existent. And even the discussion of that question is pushing the very boundaries of our knowledge, which means that the arguments on either side of the question teeter on the brink of being arguments from ignorance.

Not all that long ago, no small number of scientists thought that the universe looked self-existent (the steady-state universe) and unchanging. (There were also a number of scientists who posited that such a steady-state universe would need something to keep it from collapsing in on itself, and as such needs to be contingent.) With the advent of the big-bang theory, a large number of scientists began thinking that perhaps what we understand to be the universe is in fact contingent on something else. (A large number also took the opposing view. Some took the position of a self-existent universe in an infinite series of alternating big bangs and big crunches.)

To answer the question definitively, we need to pierce the veil that covers what happened at the big bang. Is there a before to the point at which time itself began? Our current state of scientific knowledge can't answer that question.

Hence, my opinion is that the only place we can arrive without question begging is an agnostic stance on whether or not the universe is contingent. (Assuming that our discussion does not enter into the realm of the religious or metaphysics.)

Perhaps a discussion of a limited sort could be undertaken by assuming that the universe is either contingent or self-existent and working to a conclusion, to see if our observations of the universe meet the description we arrive at through our discussion. It seems to me that either assumption could arrive at something like we understand the universe to be. We just don't know enough to come up with meaningful answers.

The flip side is that we could move the discussion to whether or not different religious systems are true. If we could find a religious system that (1) holds to a contingent universe and (2) is true, then we will have broken through our vicious circle. The downside to moving the discussion to this level is that to do the topic justice, one would have to examine a tremendous number of religious systems to adequately cover the topic. And even then, there is no guarantee that we would determine (with enough confidence to accept the doctrine of a contingent universe) that any of the systems are true.

Any ideas on whether I see the morass for what it is? Am I too pessimistic about the current state of scientific knowledge?

You can read a good summary of my epistemological religious bias in a comment of mine in ODiV's diary.

[ Parent ]

Thoughts on randomness and recognition. (5.00 / 1) (#356)
by acronos on Wed May 16, 2001 at 07:47:36 PM EST

There are also more than a few reasons to think that if we succeed we might not be able to tell. See this post of mine in the diary of spiralx. To summarize: given the difficulty in describing what human intelligence actually entails, could we recognize intelligence in machines, especially if machine intelligence is fundamentally different from our own?

One way to recognize such intelligence is if a machine is solving human level problems when asked human questions in human language. If a machine seems to be as intelligent as any human, or more so, why does it matter if its intelligence is "real" or not? It would still be able to achieve the vision that humanity has for such AI. If the machine is able to replace humans at almost every level, then we will have created strong AI. Scary thought, I know.

If true randomness of some aspects of the genesis of humanity is the case, then we have no way of knowing whether or not true AI is a possible feat. If human intelligence came about even partially by chance, then while it is possible for that same random factor to come up again, it is also possible that that factor (or its functional equivalent) may never recur.

I believe that life is created with every newborn child. If randomness was a distant factor in the creation of life, it is also likely a common factor, because it happens over and over with each new human.

[ Parent ]

weak or strong AI and thoughts on life (5.00 / 1) (#401)
by Anonymous 242 on Thu May 17, 2001 at 10:16:22 AM EST

One way to recognize such intelligence is if a machine is solving human level problems when asked human questions in human language. If a machine seems to be as intelligent as any human, or more so, why does it matter if its intelligence is "real" or not? It would still be able to achieve the vision that humanity has for such AI.
I was under the impression that you were arguing for strong AI; this most recent comment of yours reads much more along the lines of weak AI to me. I would paraphrase your sentiment as: any machine designed cleverly enough to act aware, to the point of fooling humans into thinking it is alive, is AI. Do I understand you correctly?
I believe that life is created with every newborn child. If randomness was a distant factor in the creation of life, it is also likely a common factor, because it happens over and over with each new human.
While I personally agree with your statement that life is created with every new being, it seems to me to be a very disputable point. A zygote is formed from two living cells, sperm and egg, which are in turn formed by other living cells. Nowhere do we have a clear case of life being made from non-life, but rather we could be said to have only the further propagation of existing life.

It seems to me that the quest to develop an intelligent machine is something entirely different.

[ Parent ]

questions? (5.00 / 1) (#442)
by acronos on Thu May 17, 2001 at 04:12:50 PM EST

I am not sure what you are referring to when you speak of strong AI. To me, if a computer can get the job done just as well or better than a human, then it is strong AI. When I say job I mean everything a human can do. Maybe you know something about the definition that I do not. If this is not strong AI then I suppose you are correct, I am arguing for weak AI. Please fill in any gap in my knowledge of the definition.

As to your second point, I agree. Life could easily be construed to begin at the creation of the first cell. I have actually argued for this before. I meant something different that I was not clear about: few people would consider a cell intelligent, or strong AI. So "intelligence is created with each new human" was more what I intended. I also recognize that there is a weakness in my argument in assuming that some component of intelligence is in the initial cells, say in the DNA. Any response to this would come from my OPINION of the nature of the cell. As such it would only open up another of these "you have not proven ..." discussions.

I look at a clear bottle filled with balls. I say there are 223 balls in the bottle. You look and say there are 152 balls. These are just educated guesses. They will not be proven one way or another until someone actually opens the bottle and counts the balls. Then one of us will be proven more correct. But if someone looks and says there are 12 balls in the bottle, it follows that you and I would scoff at him, because there are clearly more, even if it cannot be easily proven.

Discussions on AI are similar. Until we actually do it, no one really knows. This does not forgo the importance of an educated guess. We need to consider the repercussions of these technologies before they happen so we will be somewhat prepared if they do. There has never been a time when the existence of humanity was at greater risk. These new technologies make the atomic bomb look like a toy. It is completely unreasonable to expect these technologies to stop advancing, because someone somewhere is going to create them, and they have as much promise as they do destructive potential. This is why I argue for what is unprovable. But I think the reasons people say it will not happen are much more personal defense mechanisms than my belief that it probably will.

Again, just an opinion. My opinion should carry a tiny bit of weight. The average of 1000 guesses about the bottle is much more likely to be close to the right answer than a random number, don't you think? Everything boils down to opinions anyway. I could state that matter is made of atoms on this forum and most would consider it a fact. Go back 400 years and most would think I've lost my marbles. A fact is just an opinion that has enough other opinions backing it that a majority believes it without question.

I recognize the possibility that we will not create strong AI. I think it is more likely that we will. And if you agree that the possibility exists, shouldn't we be preparing? If you disagree that the possibility exists then you have just as hard a time arguing for your stance as I do arguing for mine. The "fact" is that we don't know and won't know until we do it. This "fact" does not negate the importance of such discussions to me.


[ Parent ]
Definitions (5.00 / 1) (#457)
by spiralx on Thu May 17, 2001 at 06:22:14 PM EST

Weak AI is anything that can pass the Turing test, which is what you're arguing for - something that acts and seems like a human. Strong AI is actual consciousness. That's the definition of the two terms.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Preparing for the advent of machine intelligence (5.00 / 1) (#492)
by Anonymous 242 on Fri May 18, 2001 at 12:06:48 PM EST

It seems to me that there are far more important things to prepare for. As a society, our technological prowess already far outstrips our moral progress. While I realize that it is possible to prepare for more than one potential outcome, it seems to me that preparing for a possibility that we can't even concretely define should be fairly low on our list of priorities.

[ Parent ]
I am sorry you feel that way. (5.00 / 1) (#498)
by acronos on Fri May 18, 2001 at 01:44:49 PM EST

There are a few of us who do consider it important enough to think about.

[ Parent ]
another leap of faith (5.00 / 1) (#341)
by eLuddite on Wed May 16, 2001 at 04:01:29 PM EST

If man is made of atoms alone then I have no question that we will eventually build machines that make our intelligence look like a candle compared to the sun.

Why?

It does not follow that you can recreate everything you "see" based on the power of your reason alone. Even if consciousness is, in fact, an emergent property of structure, what guarantee is there that you can engineer this structure using the formal methods of reason at your disposal now or forever?

No guarantee. (Assuming I understand Chaitin's recent Godelian work.)

Even if you accidentally create it, you will not be in a position to grok the reason your accident works. Ergo, you won't be able to reproduce the accident.

God is not a necessary requirement for the anti-AI hypothesis.

---
God hates human rights.
[ Parent ]

I agree, it was a profession of faith. (5.00 / 1) (#358)
by acronos on Wed May 16, 2001 at 09:08:35 PM EST

This is why I stated it as a belief. I think it is a solid belief. So far there has been almost nothing that a man can conceive that he has not been able to achieve. I say almost because there are a few exceptions. There are a huge number of VERY smart people working on strong AI. They believe there is a chance for success. I am only saying that if our current example of strong AI is mechanistic in nature, then I believe we will eventually figure it out. We are now getting a pretty good understanding of the basic building blocks of the universe. It is now time to start reverse engineering.

[ Parent ]
John Searle thinks man is a machine (5.00 / 1) (#342)
by Paul Crowley on Wed May 16, 2001 at 04:06:10 PM EST

I disagree strongly with Searle but I wouldn't want to misrepresent him. He starts one discussion of the Chinese Room with roughly the words "To the question 'Can a machine think?' we must answer 'Yes, we are such machines'." He then goes on to defend a position known - by its detractors - as "biological supremacism": the belief that, since a computer could never think, there must be something special about brain-stuff that allows a thinking machine to be constructed from it.

This position has found its most interesting defender in Roger Penrose, who argues that some weird quantum-gravity phenomenon allows brains to show greater theoretical power than Turing machines, but his specific hopes for how brains and quantum phenomena might interact have been dashed by more detailed examination.
--
Paul Crowley aka ciphergoth. Crypto and sex politics. Diary.
[ Parent ]
The worst problem with the Chinese room (5.00 / 1) (#379)
by Signal seven 11 on Thu May 17, 2001 at 05:30:08 AM EST

The book.

To prove that the Turing test is not valid, Searle is assuming the existence of a book that can pass the Turing test. The human who doesn't know Chinese is irrelevant. We could implement a computer to do his job, today. What we don't have is a rule book to answer arbitrary questions.

I have to think Searle realizes this problem with his argument, and called the device a "book" to throw people off the scent. After all, a book can't think, right? Well, if a computer can think, then so can a sufficiently large rule book with a human for I/O. So Searle is not being totally dishonest. But if Searle had called the book a computer terminal, into which the non-Chinese-speaking human inputs a question and out of which he receives an answer, people might be quicker to grasp how ridiculous the whole thought experiment is.

To recap:
What does the Chinese room thought experiment do? It splits up a system that can pass the Turing test into a rule book and an I/O device. (Rule book with I/O device; sounds like a computer, no?) Then it says, "obviously, a rule book can't think, our job is done". Bzzzzt! Wrong. Please play again, John Searle. (Or don't. It would not be a loss.)
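To make the recap concrete, here is the whole apparatus in a few lines of Python. The two rules are stand-ins: Searle's thought experiment assumes a book big enough to answer arbitrary questions, which is exactly what nobody has.

# The Chinese room as code: a rule book (lookup table) plus an I/O step.
rule_book = {
    "ni hao": "ni hao!",
    "ni hui si kao ma?": "dang ran",  # "can you think?" -> "of course"
}

def operator(question):
    # the person in the room: matches symbols without understanding them
    return rule_book.get(question, "qing zai shuo yi bian")  # "please repeat"

print(operator("ni hao"))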

[ Parent ]

Searle begs the question (5.00 / 1) (#476)
by los on Fri May 18, 2001 at 07:57:42 AM EST

The worst problem is that his entire argument applies to humans just as well as it applies to computers, even if humans are not completely machines. The question "where is the understanding in a human?" cannot be answered, just like the parallel question about the Chinese Room cannot be answered. To get to the real meat of the matter, the question "What is meant by 'understanding'?" cannot be answered either.

If Searle wants me to buy his account of the impossibility of machine understanding, he'd better come up with an account of how and why humans differ from machines in a way that impacts 'understanding'. To do this, he must come up with answers to the questions above. And right now, when asked to explain why humans differ from machines in a way that impacts 'understanding', the only thing he can come up with is "because I said so." He might choose to phrase it differently, but that's what his 'explanation' boils down to. Where I come from, this is called begging the question.

Of course, this simply leaves me agnostic on the question. It's possible that Searle is right; he just hasn't given us any substantial reasons to believe that he is. Now it's possible that Searle has added some substance to his objections in the last decade. To be fair, I haven't read anything on this topic since the late 80's, when I did time in an AI PhD program.

Lee

[ Parent ]
TECH: What happened to my paragraph breaks??? (e) (5.00 / 1) (#495)
by los on Fri May 18, 2001 at 12:52:04 PM EST



[ Parent ]
I did that on my first post too (5.00 / 1) (#497)
by acronos on Fri May 18, 2001 at 01:34:22 PM EST

At the bottom of the text box there is a pull-down box to select between html and plain text. If you choose html, at least on my machine, you have to specifically designate the paragraphs, meaning each paragraph needs to start with a <P>.
It is probably easier just to select plain text. Someone who knows more html, I am sure, could tell you much more about it. Hope this helps :)

Example of html:
<cite>This paragraph will be in italics. It will not be indented like you see on some of the posts though. I don't know how to do that. I haven't put much effort into figuring it out though. Using the preview button you can get an idea how your post will finally look.
</cite>

<P>The following is gibberish. No need to read further. At the bottom of the text box there is a pull down box to select between html and plain text. If you choose html, at least on my machine, you have to specifically designate the paragraphs.

<P>At the bottom of the text box there is a pull down box to select between html and plain text. If you choose html, at least on my machine, you have to specifically designate the paragraphs.

[ Parent ]
pondering... (4.00 / 4) (#248)
by xriso on Wed May 16, 2001 at 02:26:34 AM EST

I'm assuming that machine=deterministic. The best way we can decide whether humans are machines is to see whether we can emulate them. Could we just fire up our larger-than-universe computer (with 2^2^2^2^2^2 bytes of RAM, of course), run SimUniverse, plop a human in and get a real human mind? We don't even need many frames per second. After all, the Sim wouldn't know the difference. Personally, I think the universe is emulatable, ignoring practical constraints. This means that the software which contains the virtual universe has intelligence in it.

Now, this would make me say that we are deterministic. However, we still are extremely complex, so we might as well say we are nondeterministic.
--
*** Quits: xriso:#kuro5hin (Forever)

Actually, you're wrong (5.00 / 1) (#260)
by spiralx on Wed May 16, 2001 at 05:51:41 AM EST

Personally, I think the universe is emulatable, ignoring practical constraints.

It's been proven that the smallest possible system capable of emulating the Universe is the Universe itself, so you can't ever emulate it perfectly. Somewhere you'll have to make approximations to even come close...

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Hrm (none / 0) (#385)
by delmoi on Thu May 17, 2001 at 06:45:22 AM EST

Actually, he said 'larger than the universe computer'. So size wouldn't be a problem :P
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
nondeterministic machines can be emulated (5.00 / 1) (#306)
by khallow on Wed May 16, 2001 at 11:04:38 AM EST

I'm assuming that machine=deterministic.

In Turing machine theory, a deterministic Turing machine can model a nondeterministic one. Even if this isn't the definition you had in mind, quantum-level processes can be calculated by a Turing machine. You don't need a quantum system to calculate another quantum system, but you do if you wish to run the algorithm in an efficient manner.
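The standard construction, sketched in Python for a toy nondeterministic automaton rather than a full Turing machine: instead of guessing a branch, deterministically track the whole set of states the machine could be in.

# Deterministic simulation of a nondeterministic machine: track the *set*
# of states it could occupy (the classic subset construction).
delta = {  # nondeterministic transitions: (state, symbol) -> set of states
    ("q0", "a"): {"q0", "q1"},
    ("q1", "b"): {"q2"},
}

def accepts(word, start="q0", accepting=frozenset({"q2"})):
    current = {start}
    for symbol in word:
        current = set().union(*(delta.get((s, symbol), set()) for s in current))
    return bool(current & accepting)

print(accepts("ab"))  # True
print(accepts("bb"))  # False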


Stating the obvious since 1969.
[ Parent ]

determinism (5.00 / 1) (#373)
by plastik55 on Thu May 17, 2001 at 02:56:14 AM EST

I think trhurler meant "deterministic" in the physics sense, in that given an accurate enough description of the present state of the machine, the future state of the machine can be predicted. Nondeterministic Turing machines are still deterministic in the physics sense; you just need to supply a list of the possible states the machine is in.

[ Parent ]
simulating a brain (5.00 / 1) (#348)
by Dogun on Wed May 16, 2001 at 04:54:08 PM EST

I think perhaps we'd be better off designing a brain that wasn't as fucked up as a human brain, implementing what seem to be the main capabilities of the hardware that we have, and letting the little tyke play quake for a few years.

[ Parent ]
Memory (5.00 / 1) (#396)
by caine on Thu May 17, 2001 at 09:28:00 AM EST

I find it optimistic that you think you can fit the universe into 4GB of RAM :)

I would recommend the science-heavy but nonetheless fiction books "Rymdväktaren" and "Nyaga" by Peter Nilson, though I'm afraid I don't know if they're available in anything other than Swedish (they should be, though). They're an interesting thought experiment where simulating the universe is involved.

--

[ Parent ]

can't ... resist ... (5.00 / 1) (#470)
by xriso on Fri May 18, 2001 at 12:46:51 AM EST

AFAIK, exponents group right-to-left. This means 2^2^2^2 is 65536. Now, 2^65536 is a very large number already (somewhere around 10^20000, I estimate), but 2^(2^65536) just might be insanely huge.
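Python's ** groups right-to-left too, so the arithmetic is easy to check:

import math

print(2 ** 2 ** 2 ** 2)                # 65536: ** groups right-to-left
print(int(65536 * math.log10(2)) + 1)  # 2**65536 has 19729 decimal digits
# and 2**(2**65536) has roughly 10**19728 digits -- insanely huge indeed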
--
*** Quits: xriso:#kuro5hin (Forever)
[ Parent ]
Quite correct (5.00 / 1) (#505)
by caine on Fri May 18, 2001 at 03:07:15 PM EST

Guess my brain was offline or something. I would have thought of it immediately with normal notation, but didn't this time. Thanks for pointing it out :)

--

[ Parent ]

predictable != deterministic (4.50 / 4) (#307)
by khallow on Wed May 16, 2001 at 11:13:11 AM EST

Other people point to the determinism of the human mind, or the lack thereof. I don't believe that the human mind is non-deterministic (in fact, if you were to study psychology you would note that people are very predictable), just too complex to fully simulate.

Predictable doesn't mean deterministic. A machine can behave predictably most of the time (hence is "predictable"), but still surprise you with nondeterministic behavior. I would be impressed if anyone could come up with an algorithm for predicting a work of art (of, say, Picasso) given an initial state. Or Finnegans Wake (by James Joyce). The processes underlying the creation of these works seem particularly nondeterministic.



metal spheres (4.25 / 4) (#335)
by ikarus on Wed May 16, 2001 at 03:19:02 PM EST

There is never any 'thinking' there is never any 'looking'. The computer doesn't 'interpret' it doesn't 'understand' it just blindly runs. It is no more a mind then a lightswitch or the engine of a car.

although that particular quote was referring to the 'hardware' part of the system, i don't see how software is any different. A parser doesn't 'think' about what it's doing, it operates on the instructions based on a predefined set of rules, which in turn break down to a set of rules for the hardware, which, as the article points out, are 'blindly run.'

if you divorce the whole idea from its electronic makeup, i think it's even easier to see that machines do not 'know.' instead of the standard computer with a processor, memory, and lots of other electronic gizmos, picture the machine constructed completely out of gears, wheels, pipes, and small metal spheres. now, such a contraption would be huge, but i believe it possible to build. after all, a computer is just a complex set of switches, or electron routing. why not a metal sphere routing machine? now try to think of the machine as intelligent. sure, the instructions take much longer to execute, but it still does the job.
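to make the 'complex set of switches' point concrete, here's a one-bit adder built in python from a single switching primitive (nand). no part of it 'knows' it is adding:

# a full adder from one switching primitive; no part of it 'knows' math
def nand(a, b):
    return 0 if (a and b) else 1

def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, carry_in):
    s = xor(xor(a, b), carry_in)
    carry_out = nand(nand(a, b), nand(carry_in, xor(a, b)))
    return s, carry_out

print(full_adder(1, 1, 1))  # (1, 1): one plus one plus one is binary 11

swap 'nand' for a gadget that routes metal spheres and nothing changes.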

(if you think building such a machine is impossible, i once read of a man who built an economic model of the world that ran on water. it was a complex series of pipes, valves, and tanks. it's true: i read it in the book 'the golem at large')

You are quite correct (5.00 / 1) (#504)
by acronos on Fri May 18, 2001 at 03:01:06 PM EST

<CITE>A parser doesn't 'think' about what it's doing, it operates on the instructions based on a predefined set of rules, which in turn break down to a set of rules for the hardware, which, as the article points out, are 'blindly run.'
</cite>

No one is claiming that computers are conscious at this time. You can make rules that change and grow on their own. You can make a computer program that is aware of its code, such that the program is able to change it. This does not yet give you human level intelligence, because we are still unable to communicate with it in our language and because it is still far too simplistic to solve human level problems. But this is no argument about what could happen in the future. It does not follow that because computers are "stupid" now that they always will be. Or that because we haven't figured it out yet that we never will.
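For instance, a program 'aware of its code' can be as small as this sketch; the rule being rewritten is a made-up stand-in, and it should be run as a script since inspect needs a source file (Python):

import inspect

# A program that reads its own source and rewrites one of its rules at
# runtime. The "rule" is a made-up stand-in for illustration.
def rule(x):
    return x + 1

print(inspect.getsource(rule))  # the program reading its own code

new_source = "def rule(x):\n    return x * 2\n"
exec(new_source)                # ...and changing that code
print(rule(3))                  # 6: the rewritten rule is now in effect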


[ Parent ]
The promised "other post" (4.50 / 10) (#337)
by trhurler on Wed May 16, 2001 at 03:44:13 PM EST

Well, this is really half complete, and I'd like to have the time to improve upon it about a thousandfold before posting it, but this story will only last so long, and I have no more time to spend right now. This is, to put it mildly, almost more like rough notes for the topic it addresses than anything else, and it is both speculative and doubtless highly controversial. But, I said I'd do it, and here it is. Rough, rough, rough notes on how to construct notes on how to begin thinking about the programming and design requirements of an intelligence similar to our own. In the world of sheer speculation, hail to the king, baby. :) (Ignore the fact that the next paragraph is a second intro; I wrote it intending not to add this one.)

First of all, this is not a typical trhurler post. The fact is, what I'm writing is highly speculative, and has to be. If previous experience and my grasp of the history of computing serve, then I can say with confidence that herein you will find errors, omissions, redundancies, and material which at some point will be considered tangential or irrelevant. This is inevitable, but it doesn't necessarily matter, because making a final and perfect specification is the furthest thing from my purpose, and also from the realm of present possibility. In many places, you will find me leaving topics open, because there simply is no reasonable conclusion to be drawn on the strength of existing evidence.

Now, this is a ridiculously large subject. Here is what I am going to do: first, I am going to go through a list of mental faculties we all have and take for granted. I am not going to discuss their interaction in any great detail until I have enumerated them, and there will therefore be some truly gross oversimplification at work here. That cannot be helped. Then, I will look at the interactions between them, which, insofar as I am aware, provide us with the best clue we have as to what the underlying structure of the data we handle internally really is. This will include some discussion of the impact this material has on the possible structure of an artificial mind. Finally, I'll cover a few side topics that, while fairly obvious once stated, are not necessarily obvious beforehand.

First of all, people have perception. This is not as simple as it sounds. As anyone who has worked on topics like speech recognition and machine vision can tell you, the eye-brain interface is not just a camera dumping a digital signal into your brain. Once the signal, which is only roughly speaking digital in any sense, reaches your brain, you process it. This processing is automatic and effortless, but it is vital. You make shapes out of what you see, and sounds out of what you hear. The monitor and keyboard in front of you, the door to the room you're in, and so on are all percepts, and your recognition of them is one of the most powerful examples of pattern recognition and background noise elimination ever known by human beings. Research is starting to discover how we accomplish this; interestingly, we're finding that the brain actually performs chores such as image enhancement, which is one of the major sources of the illusions we experience under various odd circumstances. Note that despite the heavy reliance on optical terminology, most of this applies to all the senses.

Secondly, people have recollection. I've deliberately avoided the term memory, because this is nothing like computer memory and confusion between the two is all too common. Recollection is imperfect, and it is not the ability to pull digital data out of a storage vault; it is more akin to a cache of events that impacted us for some reason more than other events did (even if only by their unusualness), and which we store and retrieve in a form akin to that in which we maintain percepts in our heads at any given moment - somewhat fluid, subject to interpretation.

Third, people have concept formation faculties. Essentially, a concept relates two or more things according to some similarity, and excludes things which do not possess the trait held in common by its members. Those members may be percepts or other concepts. "Tree" is not just the oak tree you're imagining, after all. There are other oak trees, and other kinds of trees entirely. Forming arbitrary concepts over what is observed requires the ability to both compare and contrast percepts and concepts according to dynamically chosen criteria - a sort of universal sorting machine, as you might think of it. There are some starting points inherent in the inputs available, such as color and intensity for sight, pitch, intensity, and duration for sound, and so on, but from there, the mind must be capable of defining and applying new forms of measurement. Generally speaking, a concept consists of the statement that all of these things possess this measurable trait in some degree or other and all other things do not. This can range from a binary yes/no property to graded scales of many degrees.
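A crude sketch of that 'universal sorting machine' in Python; the percepts and the chosen trait are invented for illustration, not a proposal:

# Concept formation as sorting: a concept is a dynamically chosen
# measurement plus an inclusion rule over it. Data invented for show.
percepts = [
    {"name": "oak",       "height_m": 20.0, "woody": True},
    {"name": "pine",      "height_m": 30.0, "woody": True},
    {"name": "dandelion", "height_m": 0.2,  "woody": False},
]

# 'tree' relates things by shared, measurable traits; all else is excluded
def tree(thing):
    return thing["woody"] and thing["height_m"] > 2.0

print([p["name"] for p in percepts if tree(p)])  # ['oak', 'pine']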

Fourth, people have the ability to treat concepts and percepts as very similar and in some cases identical things. In this sense, the human mind possesses aspects of object orientation, although the analogy quickly falls apart when you realize the immense complexity of the interactions between concepts as compared to the relatively simple, rigid, and well structured interactions between objects. This allows us to compare a newly perceived tree to our concept of tree, in addition to comparing it to other specific trees. We can think entirely in terms of concepts, and these concepts may have as referents more concepts, rather than percepts - and so on. This is abstraction; it (or in some cases, our greater facility for it,) is what separates us from most other animals, insofar as we're aware.

Fifth, people have emotional response. This, however it may be derived, is always an assessment of the impact of whatever we're considering upon the things we hold to be important - ourselves, our loved ones, our possessions, and so on. It is not necessarily a correct assessment, and it is not necessarily an assessment that takes into account the best of our relevant reasoning, but it is an assessment. This is an immensely complicated thing - probably the most complicated to duplicate, though not the most recent to emerge in animals. It has to rely on the sum of what we've accumulated through perception and active thought over time, and yet it has to provide a fixed set of responses and a reliable way of mapping them onto various contexts.

Now, it should be clear that concepts are stored in our recollection, and percepts can be too. In fact, it is likely that this is precisely the form in which they are stored, with as little translation as possible. Probably, we never store the direct input from our eyes, but rather the image as we thought of it at the time, and so on for other senses. This would help to explain the vagaries of human recall, and also to explain why we tend to remember best the things we have the most experience with. What this data "looks like" I have no idea, and neither do the researchers who claim to be figuring it out; their methods are so crude that it is amazing they get anything but random noise out in the first place. Hopefully that will change in time. It does seem likely, though, that the structure of the data and the logical(and possibly physical, in people,) structure of the mind are correlated; the structure of a human brain is such that there is amazing interconnectedness between various parts, and it seems likely that this sort of interconnectedness would be needed to make the various mental faculties work as one cohesive unit. It might be better to think of them as one thing with various parts than as various parts forming one thing. This part of my little writing is the vaguest and least certain, because the background material to back it up is least available; the so-called science that consists of measuring currents in various parts of the brain and so on might as well be taking place in Dr. Frankenstein's lab for all the sense it makes. Among other things, we probably need an overall picture of what the brain is doing at a given time to even begin to understand how it is happening.

It is worth noting that introspection, taken in the framework already set out, is merely the analysis of one's own mental processes as though they had come from the senses; the treatment of one's own mind as a set of percepts. Also, there are two whole fields relating to the use of language and the refinement of concepts over time, and also the matter of tying concepts to words and describing them via definitions, which I have not touched. Honestly, I'd rather not, because it isn't an area I'm expert in, in any case, and moreover, it is definitely an area where religion rules; watching linguists argue is one of the most amazing spectacles of the modern world. There is also the matter of refinement of concepts, which is important, but which would take quite a bit more space to elaborate on; basically, our definitions of concepts can change over time without altering the generally intended referents, in order to accommodate new knowledge and help to increase the degree to which the concept refers to what we intend and not to other things. The failure to do this is in some cases responsible for the most amusing failures of various academic theories to correspond to reality in any meaningful way.

Now, for the purpose of all of this. From a programming point of view, this is an insurmountable pile of poorly specified, poorly understood capabilities with unknown interaction complexity, unknown underlying mechanisms, and so on. It is likely that nobody and no group of people currently living could even really start without better cognitive science. However, what is clear is that it is possible to begin work on ways of implementing certain pieces. If you look at the pattern recognition work being done in video and audio, that gives a good start. If we can't find a way, given some basic set of information as a starting point, those pattern recognition algorithms, and a basic structure, to cause the result to learn, then we aren't going anywhere. We don't know exactly what any of those three elements look like in people, but we know they're there, and we know they can be described in a space that's not totally unreasonable for a computer to handle in the next couple of decades; there's lots of information in DNA, but not that ungodly much, and the rate of growth of our information handling capabilities is such that what today is difficult will be easy soon enough. The real question is, how much capacity is required to handle this mind as it grows? We have no idea as yet; any estimate you see is like an estimate of the eventual outcome of the universe; the data changes on a daily basis, and nobody really believes the prediction he makes today, but the media report on it as though it were true anyway.

All this said, if you could link several good pattern searchers and matchers for different input sources together(audio and video are good starts, and I'd add tactile too. Smell is optional, and taste is probably unnecessary for a first crack,) and then begin working on some sort of percept engine, that'd be the starter. This alone, which is not beyond us in theory, could result in a machine capable of reacting roughly as well to its environment as, say, a cat. The percept engine, of course, is a major research project by itself, on top of the pattern research and implementation. Looking beyond that at this point is not wise; if we can't handle perceptual data, we're going nowhere. In truth, looking towards a percept engine is a long range goal; just integrating sensory input devices into a reasonable package and doing pattern recognition work for each of them is an enormous task, made all the more so by the fact that really, the pattern recognition should extend to enhancement and should improve itself over time and contain adaptive methods.
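As a bare skeleton, the linkage might look like this; every name below is invented, and the hard research problems live inside the stubs (Python):

# Per-sense pattern matchers feeding one percept engine. A sketch of the
# linkage only; enhancement, adaptation, and learning are waved away.
class PatternMatcher:
    def __init__(self, sense):
        self.sense = sense

    def match(self, raw_signal):
        # stub: signal enhancement + pattern recognition would live here
        return {"sense": self.sense, "pattern": raw_signal}

class PerceptEngine:
    def __init__(self, matchers):
        self.matchers = matchers

    def perceive(self, signals):
        # stub: fuse per-sense matches into unified percepts
        return [m.match(signals[m.sense]) for m in self.matchers
                if m.sense in signals]

engine = PerceptEngine([PatternMatcher(s) for s in ("audio", "video", "tactile")])
print(engine.perceive({"audio": "meow", "video": "cat-shape"}))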

Yes, we might someday build a real artificial intelligence. It isn't going to be some single masterstroke sci-fi daydream, though. By the way, the problem of a good AI is not nearly as hard as the problem of consciousness. There is no proof that all intelligent things must be self aware and capable of introspection; this capability is common to us, but is it really the linchpin of intelligence, or is it just what we regard as "important" about ourselves? This might lead to a simpler, easier machine intelligence for some purposes.

--
And when you consider that Siggy is second only to trhurler as far as posters whose name at the top of a comment fill me with forboding, that's sayin
What is needed for consciousness? (4.50 / 2) (#378)
by matthijs on Thu May 17, 2001 at 04:25:39 AM EST

All this said, if you could link several good pattern searchers and matchers for different input sources together(audio and video are good starts, and I'd add tactile too. Smell is optional, and taste is probably unnecessary for a first crack,) and then begin working on some sort of percept engine, that'd be the starter. This alone, which is not beyond us in theory, could result in a machine capable of reacting roughly as well to its environment as, say, a cat. The percept engine, of course, is a major research project by itself, on top of the pattern research and implementation. Looking beyond that at this point is not wise; if we can't handle perceptual data, we're going nowhere. In truth, looking towards a percept engine is a long range goal; just integrating sensory input devices into a reasonable package and doing pattern recognition work for each of them is an enormous task, made all the more so by the fact that really, the pattern recognition should extend to enhancement and should improve itself over time and contain adaptive methods.

Let me first state that my connection with (and knowledge about) AI is limited to reading Douglas Hofstadter's Godel, Escher, Bach: An Eternal Golden Braid and some other popular works. Aside from that, I'm just interested in the topic and like to think about it.

Every time I try to reason on this subject I keep coming to the same conclusion, which trhurler outlined above: for consciousness to arise (in a way we can recognise), it is necessary to be able to interact with your environment in as many ways as possible. Possibly these interactions even need to be similar to the interactions that the `judges of consciousness' experience themselves.

Because we are the `judges' in this case, the only consciousness we are interested in is one similar to our own. (Okay, maybe that is trivial, but I wanted to remark on it anyway.) So maybe computers `experience' something, but we have no idea what that experience is, and they have no way of communicating it to us. Let's go to the thought experiments.

Imagine a baby. Is it conscious? (Well, that's a can of worms we'd rather not open, so forget I asked ;-). Now imagine the baby being restricted in its senses. In the extreme case, it lies on its back on the floor, tied down, blindfolded, wearing headphones, fed through a tube. It's not very likely that this baby would ever advance to a state in which we could properly communicate with it. Now suppose we take away the headphones and make it listen to Japanese audio tapes for years on end (any language would do, but since most people here probably don't speak Japanese, the analogy works better). Would you say the being is ever likely to learn to understand Japanese? What if we take away the blindfold as well and hang a TV above its head? Would it be able to develop itself to the level the average person reaches, without guidance of any kind? The (rough) conclusion I want to draw is that guidance and interaction are probably very important to our development, and thus a lack of ways to interact can make it very hard or impossible to develop the typical behaviour of a conscious being.

It sure is no proof of any kind, but to me it seems pretty clear that creating consciousness from scratch on a normal computer is not feasible, because such machines lack the interactions we are accustomed to, interactions which contributed to our consciousness and are thus an essential part of our understanding of it. It's almost like building a robot without legs and then hoping it will ever learn to run.

A final thought: imagine the experience of having a sense for electromagnetic fields and waves. Would we understand the experience of a person equipped with such a sense in our world? Can people who lack this sense usefully discuss the experience and how it affects the way someone perceives their environment? Well, that's it for my take on things.

--
Matthijs

[ Parent ]
Set Theory and your logic.. (4.16 / 6) (#338)
by bearclaw on Wed May 16, 2001 at 03:50:14 PM EST

If I have three sets (A, B, C):
x1 = homo sapiens

Set of All Machines: A = {...,x1,...}

Set of Everything that is Conscious (including Machines and everything else in the world): B = {...,x1,...}

Set of All Machines that are Conscious: C = {...,x1,...}

So, by your logic:

C = A ∩ B (the intersection: everything that is in both set A and set B)

If x1 is in A and x1 is in B, does that mean there exists some n in A, with n != x1, that is also in C? No, I don't think so. The fact that x1 is in C doesn't prove that there must be any other n in A that is in C.

So I think your logic is a bit loose.
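To make the gap concrete, here is a toy restatement in Python sets; "x1" stands in for homo sapiens and "n" for some other machine, both names invented for the example:

    # A: all machines (n is some non-human machine we might build)
    A = {"x1", "n"}
    # B: everything that is conscious
    B = {"x1"}
    # C: machines that are conscious -- the intersection
    C = A & B
    print(C)           # {'x1'}: nonempty, so *some* machine is conscious
    print("n" in C)    # False: nothing follows about any n != x1

The syllogism establishes only that C is nonempty, because x1 itself is in it; it says nothing about any other member of A.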

-- bearclaw
Needs extra axiom (5.00 / 1) (#393)
by hammy on Thu May 17, 2001 at 09:24:06 AM EST

I think the author forgot to add an extra axiom.

[ Parent ]
Set theory (none / 0) (#560)
by RandomThinker on Thu May 24, 2001 at 02:16:13 AM EST

Hi. I think what the author was trying to say is that if we consider Homo sapiens a machine, then it is possible to build a similar machine which will be conscious like Homo sapiens. He is not saying that there is an n != x1, just that it could be done. See, we are looking into the future, and a set is not stable in time: a computer was not in the set A 60 years ago. We did not have a word for it; we did not even know what the thing would look like back then. Time makes sets dynamic, which means that maybe in 20,000 years we can build a Homo machinus which thinks, feels, and cries much as sapiens does. My opinion: if we do it, then all we are doing is creating another Homo. Not an AI, but LIFE. Bye.

[ Parent ]
We can't argue! (4.00 / 4) (#339)
by Steeltoe on Wed May 16, 2001 at 03:54:13 PM EST

If we're going to argue, we absolutely have to agree on the definitions of what we're arguing about. The problem, however, is that we can't agree on that! For what is consciousness? How do you define it so that everyone agrees? We can't even agree on intelligence: some want to put a number to it, others want to split it into many different parts.

As long as we can't agree on the definitions, we're just building arguments for our current world-beliefs, which can't be deduced logically. Those beliefs change as we live on, too, so whatever energy we spend today fighting off heretics, we'll have to spend again tomorrow convincing them anew!

So, if you absolutely must argue about this, please be open-minded and humble, and say what you believe. It's for your own good.

- Steeltoe
Explore the Art of Living

What is consciousness? (2.00 / 1) (#383)
by delmoi on Thu May 17, 2001 at 06:38:24 AM EST

Well, that isn't a very easy thing to define, now is it? My plan was to sidestep the question and say that it is simply something that human beings have. And if human beings have it, and human beings are machines, then by definition some machines can have consciousness.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
That's a tautology (5.00 / 1) (#460)
by spiralx on Thu May 17, 2001 at 06:38:22 PM EST

You've defined human beings as machines that are conscious. Hence of course there are some machines that are conscious. That's a tautology and proves nothing.

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Why settle for less? (3.75 / 4) (#349)
by jjonas on Wed May 16, 2001 at 05:06:44 PM EST

It's an interesting discussion and all, and I do see what the author is trying to get at. However, I can't help but ask myself: why would we impose the limitations of man on a machine, other than for the "because we can" factor?

Of course someone is going to say "well, we should, so that machines have morals and we don't create an end-of-the-world terminator", but come on: with a few routines and some careful planning we have little to worry about.

We as intelligent biological beings are capable of creating just about anything we can dream up. We should be concentrating on taking our base set of senses, intuition, logic, and ability to be creative, and incorporating (read: ADAPTING) them into a machine that can perform those operations billions of times more quickly and accurately than we can. If we empower machines to grow, to think, and to evolve, then we will have something. Who knows, maybe mankind was created in a similar way by a society or race that is now extinct. Maybe today we're solving problems that previously eluded the mind(s) of our creator(s), and even we are incapable of understanding things like the nature of the universe, gravity, quantum mechanics, faster-than-light travel, and all the other things we're just barely beginning to scratch the surface of today. The only way for "us" to understand those things is to evolve into, or create, something that can.

I'm not saying it's completely without risk. The benefits, however, far outweigh the risks.

Please explain (4.50 / 2) (#530)
by fragnabbit on Mon May 21, 2001 at 10:27:21 AM EST

I'm not saying it's completely without risk. The benefits, however, far outweigh the risks.

If we, the human race, create something that evolves past us and causes our extinction (i.e. your risk), how could the benefits of that outweigh the risk?

Our goal is to advance the human cause (whatever that may be). But if, in our drive to advance, we cause our own demise, that cannot be beneficial, at least not in any way that I as a human would consider beneficial. So I wouldn't say the benefits outweigh the risks in every situation. The goal of any species is the survival of that species, so it's no benefit to create something that would (or could) cause the demise of your own. (At least, not without bringing the religious all-are-one enlightenment thing into it ;-)

But hey, that's just me...

[ Parent ]

Difference between simulated and real sentience (4.20 / 5) (#350)
by skeezix on Wed May 16, 2001 at 05:08:01 PM EST

I believe that a program could theoretically fool anyone into believing it was sentient: if it were sophisticated enough, it would appear, by all accounts, to realize that it exists. However, there is a difference between that case and the case where a being actually is self-aware. This is actually common sense and it can't be analyzed or proven mathematically. Think about yourself for a minute. Realize that you are. That is very different from a program claiming it realizes it exists because it has sufficiently sophisticated logic.

A program couldn't prove its sentience to me. (5.00 / 2) (#351)
by roystgnr on Wed May 16, 2001 at 05:55:25 PM EST

But then again, neither can you.

[ Parent ]
yes, but.... (5.00 / 1) (#408)
by skeezix on Thu May 17, 2001 at 11:26:40 AM EST

That is orthogonal to my point. I never used the "argument" that a program couldn't prove its sentience to me. My only point was that there is a difference between true sentience and appearing, by all accounts to an outsider, to be sentient. A clever and complex program could "claim" to be sentient and simulate a human's interaction. That is different from true sentience. And, incidentally, I never said a program couldn't become truly sentient; I was simply pointing out the difference between simulated and real consciousness.

[ Parent ]
yes, but... what, exactly? (none / 0) (#553)
by MrMikey on Wed May 23, 2001 at 02:58:23 AM EST

You say: "A clever and complex program could 'claim' to be sentient and simulate a human's interaction. That is different from true sentience." OK, tell us: what exactly is the difference? You keep saying that a sentient human is different from a "clever and complex program". How exactly are they different? Maybe we're all "clever and complex programs" running on carbon rather than silicon substrates.

[ Parent ]
Think about yourself (5.00 / 2) (#377)
by Signal seven 11 on Thu May 17, 2001 at 04:07:04 AM EST

Are you composed of physical substance? If we could build a copy of you, would it be sentient? Does that answer your question?

[ Parent ]
physical substance... (5.00 / 1) (#407)
by skeezix on Thu May 17, 2001 at 11:20:36 AM EST

You opened up a can of worms here. I'll bite. If you could build an exact copy of me, it's quite possible it would be sentient. However, you can't. I wasn't talking about copies of human beings. I was talking about computer programs. My entire point was that there is a fundamental difference, and it's a difference that we are very far from even beginning to understand.

[ Parent ]
Missing the point (5.00 / 1) (#427)
by Signal seven 11 on Thu May 17, 2001 at 12:55:00 PM EST

If you could build an exact copy of me, it's quite possible it would be sentient. However, you can't.

That is irrelevant. The point is, whatever makes us sentient is (I believe) entirely physical. An arrangement of particles. While I certainly can't prove, and don't claim to be able to prove, that we will one day be able to build thinking computers, it is certainly plausible that we shall.

If a particle-for-particle copy of the brain would be sentient, then why not a simulation of a brain on a computer, or an implementation of the brain in silicon? Of course, we're quite a ways from actually implementing either of those ideas. But that's not the point. The point is that sentience on the part of computers is likely not out of reach.

[ Parent ]

Missing the point, indeed. (5.00 / 1) (#432)
by skeezix on Thu May 17, 2001 at 01:39:36 PM EST

The point of my original post was not to argue that a computer program cannot be sentient; I leave that to the philosophers and computer scientists to discuss ad nauseam. I made no such claim. I was simply pointing out the fundamental difference between a computer program simulating the actions of a sentient being and the program itself being sentient, i.e. having true consciousness. You could certainly write a program of such complexity that it would simulate, by all accounts, the actions, words, etc. of a sentient being, but that would not make it sentient, any more than a program that shouts "I'm sentient, damnit!" is sentient for saying so. On a personal note, however, I do believe, with evidence but no proof, obviously, that there is more to a human being's consciousness than "particles" interacting. No matter how complex a machine, or how many "particles" a system contains, I do not believe you will ever arrive at something able to know it exists, love another being, or have a soul: something that feels, defines who it is, cares, and seeks meaning. But that is neither here nor there. Hopefully you see the point of my original post. The rest is peripheral.

[ Parent ]
Once more (5.00 / 1) (#436)
by Signal seven 11 on Thu May 17, 2001 at 02:33:29 PM EST

I'll restate my position one more time, for the record.

A particle-for-particle copy of a sentient brain would be sentient. An implementation of the brain in silicon that replicated the function of the brain would be sentient. A program that does the same thing would be sentient. To my mind, there is nothing peculiar about gray matter that allows it to be sentient while preventing a reimplementation in silicon from being so.

On a personal note, however, I do believe, with evidence, but no proof, obviously, that there is more to a human beings consciousness than "particles" interacting.

I'd be interested in your "evidence", but I have no further desire to argue. Go ahead and believe what you want to believe.

[ Parent ]

Getting the A out of AI... (5.00 / 1) (#438)
by skeezix on Thu May 17, 2001 at 03:06:04 PM EST

I see your point (and saw it earlier :)) regarding a "copy" of a mind, whether physical or in silicon. I'm not even disagreeing with that. All I was saying is that creating a complex program that merely acts like a mind is different from creating an actual mind. Most "artificial intelligence" machines don't take even remotely the same approach a brain does to solving problems. AI experts can often come up with clever algorithms, hacks, etc. for creating simulated intelligence, but the real mystery is how to think, and how to do it the way the brain does, much less actually simulate what's going on physically in a brain. If computer scientists and neurologists could create a program that actually worked the way the brain does (and no, not just a neural net, but one that actually simulated what goes on in a brain), then that's a different story. Let me state my point another way. If you had a program with a collection of replies and actions for arbitrary stimuli so large and complex that it would appear, by all accounts, to act and talk like a human mind, would that make it sentient? No, it would just be programmed to give one of billions and billions of replies: an approach that gives the same results as far as an outside observer is concerned, but isn't any more conscious than "Eliza", the program that runs on my TI-85. Just bigger and full of more hacks to make it appear human. The mystery computer scientists and neurologists must solve in order to understand the soul and consciousness is how the mind actually works; in other words, get the A out of AI.
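For what it's worth, here is the kind of thing I mean, reduced to a toy in Python (the patterns and replies are invented for the example; a "real" Eliza just has a bigger table):

    import re

    # A lookup table of canned patterns and reply templates. Scaling this
    # table up changes the quality of the illusion, not its nature.
    RULES = [
        (re.compile(r"\bI feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bare you sentient\b", re.I), "I'm sentient, damnit!"),
        (re.compile(r"\bmy (\w+)\b", re.I), "Tell me more about your {0}."),
    ]

    def reply(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # the default deflection, oldest hack of all

    print(reply("I feel like a machine"))  # Why do you feel like a machine?

No matter how many rules you add to the table, nothing in there knows anything.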

[ Parent ]
The Repetition of An Assertion Is Not A Fact (5.00 / 1) (#525)
by MrMikey on Sun May 20, 2001 at 12:39:57 PM EST

You keep repeating: "All I was saying is that creating a complex program that merely acts like a mind is different from creating an actual mind." This is your assertion, and I have yet to see any evidence that supports it. Sure, you can write programs like Eliza that give, at first blush, seemingly sentient responses. Is Eliza sentient? I'd say no, but one could argue that sentience is a matter of degree rather than of kind. None of this, however, answers the question: how do you tell the difference between a "simulation of a mind" and a "mind"? What criteria do you use to distinguish one from the other? Is there, in fact, a difference? If you can give me evidence that you, yourself, are a "mind" and not a "simulation of a mind", then I'll be impressed.

[ Parent ]
'common sense'? (4.00 / 1) (#382)
by delmoi on Thu May 17, 2001 at 06:35:25 AM EST

This is actually common sense and it can't be analyzed or proven mathematically.

Actually, it seems pretty counterintuitive to me, so it can't be "common" sense (a feeling that everyone has, and that is thus "common").

But anyway, common sense has in the past said the sun went round the earth.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
explain.. (5.00 / 1) (#410)
by skeezix on Thu May 17, 2001 at 11:31:30 AM EST

How is this counterintuitive to you? What does your intuition tell you? I'm curious. Incidentally, I never said a program couldn't be sentient. I was merely pointing out the difference between the simulated case (where a program claims to be sentient and simulates a real conscious being) and real sentience (where the entity actually realizes it exists and acts out of that consciousness). Are you saying that your common sense tells you there is no difference? Are you saying that all one would have to do is write a program that appears to act like a human, through the use of complex logic and simulation, and it would be absolutely no different from a human or any other example of a conscious being?

[ Parent ]
perception vs reality (5.00 / 1) (#444)
by pistols on Thu May 17, 2001 at 04:35:37 PM EST

Are you saying that all one would have to do is write a program that appears to act like a human through the use of complex logic and simulation and it would be absolutely no different from a human or other example of a conscious being?

Exactly. Even if someone doesn't believe that computers can be sentient, does their moral system permit them to risk treating a human like a machine, if they can't tell the two apart? Of course this doesn't prove that machines can *actually* be sentient, but I have yet to see a decent proof that people can be, either.

[ Parent ]

Are neurons quantum mechanical devices? (3.50 / 4) (#352)
by jms on Wed May 16, 2001 at 06:02:14 PM EST

One of the most interesting claims that I've ever heard is that certain molecular structures in neurons have features that strongly resemble proposed molecular designs for the construction of quantum computers.

If this turns out to be true, then what we consider to be "consciousness" may be the combined effect of billions of small, quantum computers working together.

Thus, quantum mechanics would be the "missing link" between the physical world, and the "spiritual" world of consciousness. The connection between body and soul.


Quantum computers and brains (5.00 / 1) (#353)
by SIGFPE on Wed May 16, 2001 at 06:18:21 PM EST

One of the most interesting claims that I've ever heard is that certain molecular structures in neurons have features that strongly resemble proposed molecular designs for the construction of quantum computers.
Frankly, any structure small enough looks like a design for a quantum computer. I'm serious! After all, quantum mechanics is the correct description at the molecular level. But only a small group of people takes seriously the hypothesis that neurons perform large-scale quantum computation, and it is largely made up of neuroscientists who really have no clue about quantum mechanics (frankly, few neuroscientists understand enough mathematics to even start a QM textbook) and talented physicists who are hopelessly naive about biology (e.g. the otherwise brilliant Roger Penrose). Unfortunately they are a noisy group. (Disclaimer: all this is just my personal opinion. Trust me at your peril!)

On the other hand, I must say that in general I'm sure QM has a much bigger role in biology than most biologists realise. For example, when biologists model interactions at the molecular level they frequently use classical mechanics. (OK, some may use simulations that they claim are quantum mechanical, but that's a highly disputable claim; anyway, they still use classical forms of reasoning about the results they obtain.) I expect that at some point in the next decade or two a one- or two-bit quantum computer will be found in biology. I just think it's crazy to go the whole way and consider the brain to be like that.
SIGFPE
[ Parent ]

Decidability (5.00 / 1) (#359)
by Chuan-kai Lin on Wed May 16, 2001 at 09:10:32 PM EST

However, there is no evidence that quantum computers can compute anything beyond what Turing machines can (the Church-Turing thesis); quantum effects may buy speed, but they don't change what is computable. So how would quantum effects in our brain change anything about the possibility of consciousness in classical computers?

[ Parent ]
Collapsing the uncertainty (5.00 / 1) (#391)
by hammy on Thu May 17, 2001 at 09:17:15 AM EST

I think the claim the original poster is referring to is that some theorists (like Penrose, if memory serves me correctly) believe that the human mind can collapse quantum uncertainty, and that it is this ability that gives rise to human cognition. Collapsing uncertainty is not something a computer can do, and for this reason a computer cannot emulate human consciousness.

[ Parent ]
Only Awareness is Aware (4.00 / 5) (#361)
by snow