Minsky's "Programs, Emotions and Common Sense"

By exa in Media
Thu May 31, 2001 at 09:24:20 AM EST
Tags: Science

On technetcast, there is an intriguing talk given by Marvin Minsky as a preview of his forthcoming book. Minsky gives us a few reasons why it is the year 2001 and we still don't have HAL, along with hints about how we might achieve it.


Aside from his everlasting wit, Minsky delivers an applaudable presentation; I literally clapped for him in front of the screen. His criticism of the fallacies of popular AI is so reminiscent of our not-so-popular discussions at the Bilkent CS dept. that I'd like to comment on it. He states that there hasn't been much progress in common sense or general intelligence since the mid-1970s. (And yes, his criticisms start with a bashing of FOL, so don't miss it.) Neural networks and genetic algorithms have been so intensely advertised that most people nowadays assume they are the state-of-the-art in AI: if there were ever to be a HAL, it would surely be built with such methods. Personally, I think this stems from the desire for a magic solution to a very hard problem without having any understanding of it; in reality these methods do not solve the majority of hard problems. As Minsky suggests, there are kinds of problems that ANNs are good for, but he criticizes genetic algorithms harshly: how can we expect a superstitious worship of imitating organic evolution to work out for us, when that process has bugs and takes millions of years to run? Indeed, what genetic-algorithms people do is solve mediocre problems on today's powerful machines, in a worse way than traditional heuristic search.
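
To make that concrete, here is a toy sketch of my own (my illustration, not anything from the talk; the numbers are arbitrary): on a problem as easy as maximizing the number of 1 bits in a string, a dumb hill climber reaches the optimum in roughly as many fitness evaluations as there are bits, whereas a textbook GA would burn through whole populations of candidates to arrive at the same place.

#!/usr/bin/perl
# Toy "one-max" problem: maximize the number of 1 bits in a string.
use strict;
use warnings;

sub fitness {
    my ($bits) = @_;
    my $sum = 0;
    $sum += $_ for @$bits;    # count the 1 bits
    return $sum;
}

my $N = 32;
my @x = map { int rand 2 } 1 .. $N;    # random starting string

# Plain hill climbing: try flipping each bit once, keep improvements.
my $f     = fitness(\@x);
my $evals = 1;
for my $i (0 .. $N - 1) {
    $x[$i] = 1 - $x[$i];                      # tentative flip
    my $g = fitness(\@x);
    $evals++;
    if ($g > $f) { $f = $g }                  # keep the flip
    else         { $x[$i] = 1 - $x[$i] }      # undo it
}
printf "hill climber: %d/%d ones after %d evaluations\n", $f, $N, $evals;

# A GA on the same problem spends population-size times generations
# fitness evaluations -- typically thousands -- to rediscover the
# same trivial optimum.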

The part about statistical methods is a big hit. His opinions here are perhaps a bit too strong, but he claims that using vectors of numbers in learning is an intellectual dead end. However, the kinds of failures he cites are definitely realistic. You can have 90% accuracy in, say, machine translation, but the remaining 10% might be exactly the cases that require some deep thought. Take speech recognition as another instance: your language model might cover 95%, but the remaining 5% depends on interaction with semantics and pragmatics. A large percentage of accuracy does not necessarily mean that you have solved a large part of the problem.
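
The arithmetic behind that point is worth spelling out (toy numbers of my own, not Minsky's): per-word accuracy compounds mercilessly at the sentence level.

#!/usr/bin/perl
# If each word is recognized correctly with probability $p, an
# n-word sentence survives intact with probability $p ** $n.
use strict;
use warnings;

my $p = 0.95;
for my $n (5, 10, 20, 40) {
    printf "%2d words: %5.1f%% chance the whole sentence is right\n",
        $n, 100 * $p ** $n;
}

# At 20 words that is about 35.8%: the per-word figure looks
# impressive while most sentences still come out wrong somewhere.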

Had he not suggested improvements, these criticisms would have looked superfluous; thus Minsky gives us a glimpse of his next book, "The Emotion Machine".

He goes on to talk about consciousness, which has "baffled so many people, especially physicists", and expresses his belief that "there is no such thing". I agree with him. I don't think there is a thing such as 'subjective experience' or 'qualia'; that is just a romantic way of saying "I have no idea how any of these cognitive phenomena happen". Consciousness, Minsky says, is a suitcase word "we use as a name for a dozen very hard problems about how the brain or the mind works". So he does have a theory for solving "consciousness" in his new book, which seems to me to improve on the ideas of 'Society of Mind'. He refers to two of his theories in particular: an architecture of mind, and "multiple representation". He does not dwell much on the architecture, except to emphasize that the human mind is a highly evolved and complex distributed architecture, and that his architecture has five layers "including theaters which are places where you simulate in one way or another what you think might happen if you were to do something" with no central control. He seems to have been inspired mostly by the architecture of the human brain, and to be developing ideas from his old book.

Minsky touts multiple representation as a new way of looking at AI. He begins by indicating the philosophical problem with a single representation: if you understand something in only one way, you have hardly understood it at all. He indicates that we falsely assume there is one definition instead of a network of interrelated processes. For an open-ended representation we should be able to maintain all of these views and have them co-operate. Minsky presents a diagram which partitions problems according to the scale of their causes and effects, and suggests that different representations work best for different parts of the diagram. However, how we should go about making NNs, statistical methods, logic, classical AI, etc. work together is not very clear. He also indicates that he offers alternative representations (networks of k-lines from 'Society of Mind', among others) which he claims offer the richest possible representations.
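
Lacking details, I can only caricature what such cooperation might look like in code. The sketch below is entirely my own invention, not Minsky's design; it merely conveys the flavour of putting one question to several representations and taking whichever can answer.

#!/usr/bin/perl
# Crude caricature of "multiple representations": hold several
# representations of one concept and let a query fall through
# until some representation can answer it.
use strict;
use warnings;

my @representations = (
    { name => 'geometric',  knows => { 'is it round?'       => 'yes' } },
    { name => 'functional', knows => { 'can you sit on it?' => 'no'  } },
    { name => 'causal',     knows => { 'what if dropped?'   => 'it rolls' } },
);

sub ask {
    my ($question) = @_;
    for my $rep (@representations) {
        return "[$rep->{name}] $rep->{knows}{$question}"
            if exists $rep->{knows}{$question};
    }
    return '[none] no representation applies; reformulate the problem';
}

print ask('is it round?'), "\n";
print ask('what is it for?'), "\n";

A real architecture would presumably have the representations negotiate rather than fall through in a fixed order, but even this much shows why a single representation is a bottleneck.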

If you haven't read or watched this talk, I suggest you do so. It is a thought-provoking and beneficially controversial presentation. Looking forward to your comments.

Poll
When will we have HAL?
o 2010 8%
o 2050 20%
o 2200 22%
o 2300 10%
o We already have it! 4%
o Never! 34%

Votes: 49

Related Links
o technetcast
o talk
o Marvin Minsky


Minsky's "Programs, Emotions and Common Sense" | 56 comments (49 topical, 7 editorial, 0 hidden)
Puzzling assertion (4.00 / 3) (#2)
by Simon Kinahan on Tue May 29, 2001 at 07:36:01 AM EST

I haven't watched the talk yet, because I'm at work, but I'm always puzzled when people make assertions like this:

He goes on to talk about consciousness, which has "baffled so many people, especially physicists", and expresses his belief that "there is no such thing". I agree with him. I don't think there is a thing such as 'subjective experience' or 'qualia'; that is just a romantic way of saying "I have no idea how any of these cognitive phenomena happen".

Either you and Minsky (and Dennett, for that matter) mean something radically different from what I do when you talk about consciousness, my mind is actually radically different from yours and only our behaviour is similar, or this assertion is disingenuous. Since the third option is not very constructive, and the second still seems very unlikely, I'm going to assume the first.

When I say a person, or animal, is conscious, I mean that they are capable of first-person subjective experience. Obviously I only know that for certain for myself, but it seems reasonable to assume it for other people and some animals. What is it that you mean? I can't see it as possible that you believe, say, rocks are just as capable of first-person experience as you or I.

Simon

If you disagree, post, don't moderate

heh (4.00 / 2) (#3)
by lower triangular on Tue May 29, 2001 at 08:24:22 AM EST

Consciousness, Minsky says, is a suitcase word "we use as a name for a dozen very hard problems about how the brain or the mind works".

sure ... except it isn't ... you have to be very clever indeed to forget the meaning of a word to that extent ....

Linux - the ultimate Windows Service Pack!
Windows - the ultimate Linux Productivity Suite!

[ Parent ]

Analogy (3.00 / 1) (#6)
by fatphil on Tue May 29, 2001 at 08:38:19 AM EST

Who said a stone is self-aware? No one's saying that. You have constructed a straw man, and burnt it down. If you want to play the AI-argument game then at least start with something that people could argue has self-awareness.

I have worked on communication equipment which was able to detect internal problems and report them to a remote machine. (aside - one of the alarms it could raise was called "Dying Gasp" which always made me laugh). If I get pains in my chest I call the doctor. When the rig detects its feeding voltage is low, it calls an engineer. Does that not display self awareness in its simplest form? Or should I say in what way is _our_ behaviour any more sophisticated than the machine's?

FatPhil

[ Parent ]
non-analogy (none / 0) (#7)
by lower triangular on Tue May 29, 2001 at 08:52:57 AM EST

If you want to play the AI-argument game then at least start with something that people could argue has self-awareness.

no, you don't understand him ... Minsky is saying that there's no such thing as consciousness ... so either he doesn't mean the same as most normal users of the word (i.e. first-person experience), or he's saying that you and I have the same first-person experiences as a rock (i.e. none).

When the rig detects its feeding voltage is low, it calls an engineer. Does that not display self awareness in its simplest form?

well no ... not in any meaningful sense ... and Simon's post doesn't even mention the words "self-awareness" so maybe you're the one putting up strawmen.

Or should I say in what way is _our_ behaviour any more sophisticated than the machine's?

why do you think that being "sophisticated" has anything to do with being conscious?

Linux - the ultimate Windows Service Pack!
Windows - the ultimate Linux Productivity Suite!

[ Parent ]

Thank you for so completely missing my point (4.50 / 2) (#8)
by Simon Kinahan on Tue May 29, 2001 at 09:08:06 AM EST

Try reading posts before you reply to them in future. I quoted a bit of the article giving Minsky's opinion that there is no such thing as consciousness. I was attempting to discover what is meant by this, as it cannot possibly mean what it says to me, that there is no such thing as first-person experience, since that would mean either:

1. No object has first person experiences, so the phenomenon really does not exist.
2. All objects have first person experiences, so the word is meaningless.

I quite explicitly said that no one could possibly be arguing for this position. If you'd like to explain what Minsky actually meant, rather than picking on my reductio ad absurdum and claiming it is absurd (which it is, that being the *point*), I'd love to hear it.

I suspect what is *actually* meant is that since we have no way to detect whether objects have first-person experiences, we'd all prefer to just pretend the issue does not exist. Or to put it in politer language, it's unfalsifiable, unverifiable, subjective, or what have you. That's a fine methodological position, but epistemologically it's Really Dumb.

As to your comments on self-awareness, I'm not claiming our behaviour is in any way more sophisticated than a machine's. It's the same. The difference is that I'm aware of what I'm doing, and the machine almost certainly is not.

Simon

If you disagree, post, don't moderate
[ Parent ]
What he means. (4.50 / 2) (#11)
by priestess on Tue May 29, 2001 at 09:42:22 AM EST

Minsky has a draft of his book on the web; chapter four is about Consciousness. You might want to give it a read if you're really interested in what he means.

One phrase that sticks out in italics there which you may find helpful is "No single principle, power, or force could produce all those mental phenomena!"

So he's saying there is no 'consciousness'; there are many phenomena which we confuse together and label as 'consciousness', because nobody has invented a way to think clearly about these things yet.
Pre.........

----
My Mobile Phone Comic-books business
Robots!
[ Parent ]
Thanks for the link (4.50 / 2) (#13)
by Simon Kinahan on Tue May 29, 2001 at 10:31:26 AM EST

It pretty much confirmed what I suspected. Minsky - like Dennett - is trying to reduce the problem out of existence. The trouble is that the problem is irreducible.

Minsky's view appears to be that consciousness is really lots of different things, but the problem is that in making his list, he's listed lots of things you can be conscious *of*, but missed out consciousness itself. He has self-awareness, perception, body-monitoring, and so on in his list, but the trouble is that none of these things is a component of what we mean by consciousness; they're all things you can be conscious *of*, and there's a difference.

Now it may very well be that being conscious of the state of your body, and being aware of your own existence, come about by different processes, but something is happening in those processes that is common to them and not to, say, reflex action, that produces first-person awareness.

This is all very convenient if you happen to be an AI researcher. You can build little systems that feed back information from one part to another and claim to be making progress on the only remaining genuine mystery, when in fact you're just writing computer programs. It's not much use as philosophy, as it makes no impact on the question of how first-person experience arises. Minsky's comments on how we overrate consciousness are well taken, but they have no impact at all on the central mystery: how come we're able to have first-person experiences at all?

Something happens in human beings that makes first person experience possible, and we have absolutely no idea what that is, except that it is somehow tied up with brain processes. I see no shame in admitting to that, accepting that we can't solve the problem, and continuing with AI and neurology research in the hope that something we come across will shed some light on the matter.

Simon

If you disagree, post, don't moderate
[ Parent ]
undefining consciousness (5.00 / 5) (#12)
by iGrrrl on Tue May 29, 2001 at 09:45:06 AM EST

Either you and Minsky (and Dennett, for that matter) mean something radically different from what I do when you talk about consciousness, my mind is actually radically different from yours and only our behaviour is similar, or this assertion is disingenuous.

Reductionist neurobiologists tend to avoid conversations on consciousness, but for a while I subscribed to the PSYCHE-B and PSYCHE-D mailing lists. Although these are open lists, there are a number of working professionals both in cognitive studies and in philosophy who contribute (Minsky was among them, at least while I was on the list), and there is a very high tone of discussion. I unsubscribed about the third time they debated the definition of consciousness.

The professionals can't agree on a definition, and by the third iteration I'd had enough. This ground has been trodden by some extremely smart people who have fundamental disagreements. Sometimes it was the old "Well, I know I'm conscious, but I can't prove you are!" (stated in more academic terms, of course), and sometimes it was "Nothing is actually conscious; we're all biologically programmed zombies." All manner of variations of these extremes were placed in between.

One person who knows Minsky well, Jaron Lanier, disagrees with him and with Dennett about the non-existence of consciousness. I put a link to his essay You Can't Argue With a Zombie on my vanity page. It is well worth reading for anyone interested in the subject.

My favorite Minsky quote is the following:

"The old distinctions among emotion, reason, and aesthetics are like the earth, air, and fire of an ancient alchemy. We will need much better concepts than these for a working psychic chemistry."
I agree, but there's a bit more to it than that. Early physicians talked about the "humors" in the body, which we sort of snicker at today. The thing is, they weren't so much wrong as limited by their measuring tools. We refer today to circulating antibodies as humoral immunity and to blood-borne hormones as humoral factors because, in a sense, the guys from the late 1800s and early 1900s were onto something. We've been able to pin down the more molecular details of the observations they made.

Our ability to really measure what the brain does is in many ways quite crude. In a hundred years, I suspect future neuro- and cognitive scientists will look back on our work and find it as quaint as we do the "humors and vapours" of early modern medicine.

Maybe they'll have a definition of consciousness that means something concrete. In the meantime, we don't.

--
You cannot have a reasonable conversation with someone who regards other people as toys to be played with. localroger
remove apostrophe for email.
[ Parent ]

measurement??? (none / 0) (#14)
by lower triangular on Tue May 29, 2001 at 10:36:27 AM EST

The thing is, they weren't so much wrong as limited by their measuring tools.

on the other hand, in talking about our own conscious experiences, we're not limited by measuring tools, because consciousness neither can nor needs to be measured

Our ability to really measure what the brain does is in many ways quite crude.

on the other hand, our ability to measure what the mind does is capable of almost infinite variety, Horatio

Linux - the ultimate Windows Service Pack!
Windows - the ultimate Linux Productivity Suite!

[ Parent ]

That's a little mystical (4.00 / 1) (#15)
by DesiredUsername on Tue May 29, 2001 at 10:49:42 AM EST

"...consciousness neither can nor needs to be measured"

Without a definition of consciousness, a discussion of these two claims is impossible. However, for a "reasonable" definition I don't see how either one is true. In fact, we *already* measure consciousness to some degree--awake/sleep/unconscious/coma. That the measurement is crude and indirect only points out that we need better methods, not that measurement is impossible.

BTW, what's your new nick mean?

Play 囲碁
[ Parent ]
Measuring consciousness (none / 0) (#32)
by ucblockhead on Wed May 30, 2001 at 12:30:38 PM EST

One problem is that all measurements of consciousness are self-measurements that cannot be objectively checked.

Another problem is that conciousness is tied up with memory. For example, consider the dream state. It is well known that dreams fade from memory much faster than real world experiences. It is easy to prove. Just keep a dream diary for a month. It is a fun experiment. Open it up in the evening and you'll often discover that you have absolutely no memory of what you wrote five minutes after waking up.

Is a dream state "conscious"? It certainly has some aspects of it. And if so, given the way dreams work with memory, reporting "unconscious" periods becomes problematic, because it is then unclear whether you were not conscious, or merely have no memory of it.

So then, the only really trustworthy report is "I am conscious now". But then the question of whether or not a "Zombie" would make the same report becomes interesting...
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Dream log (none / 0) (#55)
by Steeltoe on Mon Jun 04, 2001 at 10:54:58 AM EST

It is easy to prove. Just keep a dream diary for a month. It is a fun experiment. Open it up in the evening and you'll often discover that you have absolutely no memory of what you wrote five minutes after waking up.


I wouldn't actually say that anything disappears from memory, only that we unconsciously choose not to let our dreamstate interfere too much with our awakened state (which can be a good thing if you like being a good zombie under your favorite corporation ;*). I keep a dream log myself, and I can remember a lot from dreams I've had, even a year ago. Especially the highlights, but more comes to me when I read the log. Whether I can picture the actual scene or it's a reconstruction is irrelevant, I believe. It'll always be a model in our minds anyway.

Generally, the more you work on your dreams, the more you'll remember of them. That you tend to forget them totally in the beginning is just temporary. Usually, the dream will stay at least a day if you start consciously thinking about it just after you wake up. This works even without a log, and most people do this quite often without placing too much importance on it.

If you are thinking about experimenting with your sleep, I recommend this dream FAQ, or even trying out this FAQ on lucid dreaming. Lucid dreaming is a dream state in which you are totally aware that you are experiencing a dream. Usually, people flee from this by dreaming that they are waking up. However, you can train yourself to actively seek out such dreams and gain total, or at least a lot of, control of your dreams. It's very rewarding when you get it right.

- Steeltoe
Explore the Art of Living

[ Parent ]
Measurement (none / 0) (#35)
by ucblockhead on Wed May 30, 2001 at 12:59:56 PM EST

If you can't measure it, it is just intellectual masturbation. It is like the Greeks arguing about the existence of atoms. One of them might accidentally hit upon the truth, but it is purely by coincidence, and there's no way of distinguishing between the guy who accidentally got it right and the guy who says he's full of shit.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]
Shit (none / 0) (#36)
by ucblockhead on Wed May 30, 2001 at 01:02:27 PM EST

This was meant to be a reply to this, the parent post, not the one it actually is attached to.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]
I am conscious of my own consciousness. (5.00 / 1) (#16)
by marlowe on Tue May 29, 2001 at 12:20:16 PM EST

I refute the positivists thus.

-- The Americans are the Jews of the 21st century. Only we won't go as quietly to the gas chambers. --
[ Parent ]
Cogito ergo sum (nt) (5.00 / 1) (#25)
by John Milton on Wed May 30, 2001 at 01:41:49 AM EST

nt

"When we consider that woman are treated as property, it is degrading to women that we should Treat our children as property to be disposed of as we see fit." -Elizabeth Cady Stanton


[ Parent ]
think about that before mindlessly aping Descartes (none / 0) (#49)
by speek on Fri Jun 01, 2001 at 01:46:24 PM EST

cogito, ergo sum.

I think, therefore I am.

As a premise, you have that X does something. Therefore, you conclude, X exists. Let me repeat - you posit that "I" think. Or, that this thing called "I" does the activity of "thinking". From that, you grandly conclude that the "I" exists. Huh? If you are going to assume that "I think", then I would expect a conclusion that went further, not backward. Descartes fucked up so royally that it's unbelievable, and we all fell for it heart and soul, because we wanted to. Cogito ergo sum is the biggest stumbling block in Western culture. It leads to confused science, confused ethics, economics, and politics, not to mention philosophy.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

Consciousness is self-evident (none / 0) (#50)
by rgrow on Sat Jun 02, 2001 at 12:15:09 AM EST

I agree that Descartes makes no sense in this regard.

First, I'll simply do what seems to be such a big problem for academics: define consciousness. Consciousness is awareness of reality.

Before worrying about border cases, etc., realize that you are aware of the words I have written that you're reading on this screen.

Consciousness is self-evident. It requires no proof. It is axiomatic.

Everything one utters relies implicitly on the existence of consciousness. If there were no consciousness, there would be no awareness and no utterance.

You can debate the origins of consciousness (mystical or emergent from the physical world), but you cannot debate the existence of it.


[ Parent ]
I like the way you put it (none / 0) (#51)
by speek on Sat Jun 02, 2001 at 11:30:23 AM EST

It's one thing to say "consciousness is". It's another to say "I am conscious". I can accept your definition of consciousness, though I think consciousness should also contain a self-referential element - consciousness is awareness of self. However, there's a problem when one tries to talk about what a "self" is. You can assume there is a being, separate from consciousness, that exhibits the quality of consciousness, but by doing so you would make an unwarranted assumption. Perhaps there is only consciousness - no "I" that is "doing" conscious things, but rather just bits of conscious activity that float around, temporarily becoming aware of themselves, calling themselves an "I", and then dissolving into nothing again shortly thereafter. Most people want to think there's a permanent thing that is conscious, but there is little evidence for that view.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

So you say... (5.00 / 1) (#33)
by ucblockhead on Wed May 30, 2001 at 12:35:25 PM EST

I just invented a self-aware program!!! Here it is:

# Waits on standard input and insists on its own consciousness on demand.
while (my $line = <STDIN>) {
    if ($line =~ /Are you conscious\?/i) {
        print "Yes, I am!\n";
    }
}

-----------------------
This is k5. We're all tools - duxup
[ Parent ]

What gets my goat with AI people (none / 0) (#17)
by leviathan on Tue May 29, 2001 at 02:48:57 PM EST

I think I'm with you on the subject of consciousness. It's the bit that experiences everything. It's the difference between a mechanistic clone of ourselves (a zombie) and ourselves. Consciousness has no effect on the observable actions of an entity. It only has an impact when it comes to ethics. It would matter to turn off a machine if it was conscious; it would experience it, and would know what is happening. Whereas the fact might register with a non-conscious machine, but it wouldn't feel hurt by it; it couldn't feel anything.

So when researchers in AI exclaim that consciousness doesn't exist, I just want to tell them that it doesn't matter. It has no influence on your work, provided you can make sure you don't introduce it into a machine. It's resolutely never something you want to try to simulate; there's no purpose in giving it to something else.

--
I wish everyone was peaceful. Then I could take over the planet with a butter knife.
- Dogbert
[ Parent ]

Interesting Question in Itself (none / 0) (#18)
by Simon Kinahan on Tue May 29, 2001 at 03:40:00 PM EST

I think we agree on the nature of the experience of consciousness - it's the thing that makes experience possible - but you actually raise another interesting question when you say:

Consciousness has no effect on the observable actions of an entity

The issue here is called epiphenomenalism: the idea that consciousness has no observable impact. That we can create a zombie, in other words. Jaron Lanier - whom I otherwise agree with, and whom iGrrrl cites - agrees with you about this. It's my personal opinion that we'll find it to be impossible to create an entity with human-like behaviour that is not conscious.

From the neurology we do understand, phenomena that do become conscious can often be pinned down to particular parts of the brain. There's no single centre at which all these things come together to become conscious. That makes me suspect that consciousness arises through some property either of the parts of the system, or of all its parts when assembled, where the system might be the brain, or brain-in-body, or body-in-world.

I suspect, therefore, that the very same systems that are conscious also actually drive the behaviour they're conscious of. Thus consciousness isn't epiphenomenal, but an integral part of whatever it is that makes humans human. This is admittedly a little more tenuous than our shared belief that there really is such a thing as consciousness that needs to be explained.

Simon

If you disagree, post, don't moderate
[ Parent ]

If consciousness is part of the mechanism (4.00 / 1) (#23)
by leviathan on Tue May 29, 2001 at 07:21:56 PM EST

I'm afraid my argument basically rests on the default. I've not seen humans do anything that couldn't conceivably be done by a sufficiently advanced machine; we seem to be moving (albeit with fits and starts) in the right direction. I don't see a quantum leap (sic) as a necessity, and I assume that current AI systems aren't conscious. By analogy, I therefore propose that a zombie is a possible (eventual) result.

I've heard theories about quantum physics and the uncertainty principle resulting in freedom of action, but I've never seen the link between freedom and consciousness. I suspect you have to have a radically different concept of consciousness for that; does yours encompass it?

Were researchers to discover a mechanism in the field that looked like it could spawn an independent observer at some point in the future, then I'd agree with your point about it being an inherent part of a sufficiently advanced system. Otherwise, I can't comprehend how a fresh consciousness could arise simply from increasing complexity.

As to your point about watching consciousness 'go off' in the brain, I don't see any evidence that that is actually consciousness at work; it may simply be a step much further down the chain from the interface between consciousness and brain. Consciousness could be the eternal soul for all I know ;)

--
I wish everyone was peaceful. Then I could take over the planet with a butter knife.
- Dogbert
[ Parent ]

if it has no effect on the observable actions... (none / 0) (#20)
by sayke on Tue May 29, 2001 at 05:07:05 PM EST

of an entity, why should it have an impact when it comes to ethics? i mean, shit, if we can't tell whether our actions have an effect on a consciousness, why should we care if they do or don't? we wouldn't be able to tell either way, so we would have no idea if our actions were helping or harming...

i don't see any things that look like zombies. i think consciousness inevitably arises whenever you make something that acts particularly human-like. i think that trying to find one will inevitably lead to the other, etc... but that's kinda beside the point with respect to my above question. oh well =)


sayke, v2.3.1 /* i am the middle finger of the invisible hand */
[ Parent ]

Good point...but... (none / 0) (#22)
by leviathan on Tue May 29, 2001 at 07:06:21 PM EST

Since we can't prove that any entity but ourselves truly experiences anything, we only assume that other people have consciousness because they're analogous to us. They might be zombies - they'd look just the same (to answer your second point), but I'm assuming they're not. It seems more likely.

Following on from that principle, I know that if I look happy and I am happy, then I feel happy... and likewise with other emotions. I have to assume that other people work on the same mechanism. If you've already accepted the above, it seems a reasonable conjecture.

This type of consciousness does raise a very difficult point about animal rights, though. Assuming that atoms and molecules aren't conscious, rocks probably aren't, and plants may well not be. Chimps probably are conscious, and other creatures with large brains, but somewhere between the extremes of us and rocks you have to say that consciousness suddenly arises. And if it's got no visible effect, you can't measure it. That, I suspect, is the biggest flaw in the theory.

--
I wish everyone was peaceful. Then I could take over the planet with a butter knife.
- Dogbert
[ Parent ]

No Effect (none / 0) (#34)
by ucblockhead on Wed May 30, 2001 at 12:50:25 PM EST

The idea that consciousness has no effect on actions is a huge claim that has no grounding, really, and huge implications.

Human beings are the product of evolution, so a claim that consciousness has no effect is to say that it is some sort of "emergent phenomenon" that has no cost to the organism, but rather just sort of "happens" when you throw enough brain cells together. Maybe so. But that's a very fifties notion of mental power, back when they thought that if you just built a big enough computer, it too would be "intelligent". I find that very hard to believe. Consciousness seems to me to be too complex a thing to just occur without cost. It seems much more likely that consciousness was a trait that was selected for, evolutionarily speaking, and that it has a purpose. If so, then obviously it must be something that has an effect on our actions.

Really, this is even implicit in your post in that if it has an effect on our ethics, it has an effect on our actions in that our ethics drive our actions.

Really, I think this is the only way the question is ever going to be cracked. The key to figuring out how consciousness works is figuring out what it's for.

That's the problem with most (perhaps all) AI work. They ignore consciousness itself and simply try to get machines that act "like" humans. And all too often, they end up with toy programs that seem promising, but don't scale. I think that this is because they are missing something fundamental.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

My grounding (none / 0) (#37)
by leviathan on Wed May 30, 2001 at 01:12:33 PM EST

My suggestion that consciousness has no effect comes from the fact that you can barely describe it, much less measure it. Granted, the mechanism or process that introduces consciousness might be important to some other cognitive process in the way it works, but it can't be intrinsic to its functionality if consciousness has no measurable effect on the function of the brain overall.

Yes, AI researchers are probably missing something big, and it's possible that something which generates a consciousness is it. Trouble is, if that consciousness is anything like ours then terminating it would be cruel. They need to find a replacement for the consciousness containing bit and make that instead, even if it is much harder.

By the way, when I said that it has an effect on ethics, it's not that having a consciousness makes me treat you ethically and not flame you for daring to disagree with me; it's my assumption that you have one too, which would be hurt, that stops me doing that. I don't see a problem with teaching robots to take care of people's emotions, even if they don't really empathise.

--
I wish everyone was peaceful. Then I could take over the planet with a butter knife.
- Dogbert
[ Parent ]

"Measurable effect" (none / 0) (#38)
by ucblockhead on Wed May 30, 2001 at 02:14:24 PM EST

That should really be "no currently measurable effect". Just because we have no idea how to measure it doesn't mean that it is not there.

It would be like a 16th century scientist saying that radiation had "no measurable effect" and therefore could be ignored.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

Oh, sure (none / 0) (#39)
by leviathan on Wed May 30, 2001 at 02:39:40 PM EST

It's not quite analogous, because in philosophy you always seem to be exactly as far away from answering the big questions, whereas science has a clearer progression to it. But as I noted in another reply, my position is the default one. Were researchers to discover some intrinsic part of making a human-like AI which spontaneously spawned an observer into the universe - or rather, were they suddenly able to measure it and discover its existence - then we'd know its effect. But until that point no one's been able to convince me how that could possibly happen, and hence I assume it doesn't intrinsically need to have an effect.

--
I wish everyone was peaceful. Then I could take over the planet with a butter knife.
- Dogbert
[ Parent ]
Purpose (4.00 / 1) (#40)
by ucblockhead on Wed May 30, 2001 at 04:00:57 PM EST

Hmm....perhaps it is my bio training. Usually you assume that if a trait exists in an organism, there's probably an evolutionary reason for it.

In fact, unless the trait of "consciousness" comes at no significant cost to the organism, there must be a reason for it!
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

I think you are on the right path (none / 0) (#54)
by acronos on Mon Jun 04, 2001 at 04:08:08 AM EST

Yes, AI researchers are probably missing something big, and it's possible that something which generates a consciousness is it. Trouble is, if that consciousness is anything like ours then terminating it would be cruel. They need to find a replacement for the consciousness containing bit and make that instead, even if it is much harder.

I think you are on the right path. If you accept evolution then you know that humans are selected for two primary functions. Our primary desires are survival and reproduction. I am speaking generally. The survival selection would make it very painful for a human to be turned off.

Designed machines would not have to have this predisposition. For instance, their primary desire could be first to learn and map the functioning of the universe and environment, and second, to use the knowledge from the first desire to provide good for humans. The subjective judgement of what is good could be learned, using the first rule, through observation of what humans consider good.

The above scheme would create an intelligence that could learn and grow and always have something to do as long as the human race survived.

Consciousness in my mind is simply being aware of myself and my environment. A simple camera could perform this function. I do not believe most AI researchers are denying consciousness in the above sense. They are denying it in the sense of some esoteric element that makes us what we are. We are simply the combination of cells that make up our being; anything else that is human is emergent behavior from those cells. The best AI researchers, including Minsky, are very much advocating making a machine self-aware - even going beyond what a human is capable of, by making a machine actually able to track its real thought processes, which a human is unable to do. When we see a computer screen we have no idea how our mind formed that image. It is just there. I think most AI researchers are just saying that people are making too big a deal of consciousness. At least that is what I am saying.

It is not very hard to make something self-aware. My stove has a temperature sensor that determines the temperature of the eye. It uses that data to regulate the temperature to the setting I have selected. My stove is self-aware in the sense I am speaking of. A human being is just much more complicated, with many more forms of feedback. The basic premise is the same.
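
In code, the stove's whole "self-awareness" is one feedback loop. Here is a loose sketch (my numbers are made up, and a real controller is analog, but the premise is the same):

#!/usr/bin/perl
# Bang-bang thermostat: sense your own state, compare it to the
# selected setting, act on yourself. (Made-up numbers.)
use strict;
use warnings;

my $setting = 200;    # what I selected
my $temp    = 20;     # what the sensor reads

for my $tick (1 .. 10) {
    my $element_on = $temp < $setting;     # self-measurement
    $temp += $element_on ? 25 : -5;        # crude heating/cooling
    printf "tick %2d: %3d degrees, element %s\n",
        $tick, $temp, $element_on ? 'on' : 'off';
}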

Most people are using big esoteric concepts like consciousness, emotions, and being self-aware to confuse themselves. Emotions are not hard to create either.

[ Parent ]

About Experience (3.50 / 2) (#30)
by exa on Wed May 30, 2001 at 11:50:29 AM EST

You know, I couldn't be more sympathetic to the wonderful feelings one experiences when drinking fine wine while hanging from the ceiling in some weird club.

Now, what I'm saying is that experience can be described by a rigorously mathematical model: that of [the progression of] your brain's biological state during the experience in question. More precisely, we can surely divide it into a "log" of sensory input and what biologically (thus physically) happened in your brain. Let me remind you that we are living in the 21st century, and thus *no* form of dualism is admissible in further discussions.

The log of sensory input is trivial to describe. You could record it in a digital medium with sufficient precision. The inside-the-skull part is more baffling, because there we have a complex machine whose design we do not really know. If we did, we would have replicated it. So we don't really know what to measure in there. It's a lot like looking at all the circuit boards in a computer and trying to understand how a compiler works. Here some people think that some very cool quantum effect in our brain turns us into glorious human beings in possession of great souls. Surely, that is a superstitious and anthropocentric belief. Comparison with the structure of the brains of other species suggests otherwise. We don't have any fundamental capability that other species don't have. Those other species are just so decent that they don't boast about their cognitive abilities like we do!

So, if there is any difference between a bug's cognitive capabilities and yours, it isn't some physical or low-level difference. Rather, it is a difference in the complexity of the nervous system. Now, my guess is that a chimp is at least as conscious as many people I've met, and a rat doesn't seem to be wholly unconcerned with itself, so I wouldn't really tag it "unconscious". You run into all sorts of philosophical problems when you try to draw a line between the unconscious and the conscious among living things.

Whenever you take consciousness to be the magical feeling of existence one senses in the bathroom, that is a definition that is too artistic and, I must emphasize, not at all scientific. Make a definition that has *content*.

The truth is that no one has said that a rock is capable of any sort of cognition, thus it would be incorrect to say that it can have consciousness, or can relate to its own existence. This can be easily inferred from the subtext of our discussion: AI. We are trying to build machines as intelligent as humans, not vainly expecting some rock to attain intelligence.

Now for something better. What would a complete definition of consciousness look like? What is its relation to, say, learning, reasoning, perception, etc.? Is it really an understanding of first-person subjective experience, which is to say that agent A believes that A exists and that A happens to go through situations S1, S2, S3,... in time? Hardly so. I hear all the Searle-like, love-novel-digging types crying out "but that formal definition does not capture what we mean by subjective experience", so I must digress. What is it that you mean by subjective experience? I think none of you who speak of the blessed "blah blah experience" phrases have any idea what you're speaking of, so I'll try to guess what it really is that you're referring to unconsciously. You're referring to the incredibly complex web of cognitive activity one has about oneself and one's deeds, including thoughts and feelings. So, in a sense, yes, it is *irreducible*. There is no way we can trivialize some poor soul's mourning of a lost lover, yet there must be a scheme to the creation of these activities.

However, in a mature mind it is very hard to separate conscious elements from the non-conscious. Surely, we can be quite certain that the initial perception of pain in our body due to pressure is not really conscious (since it involves only simple signal processing), but the whole perception is. Likewise, we could say that much of our behavior is automated through learning. One pilots a vehicle without knowing how one accomplishes that, yet one can reflect on the activity if required.

The meaning of self is also one of those misleading terms. We do not have a unified simplex of a self. We are composed of several autonomous selves in competition, and that is what makes us diverse and versatile. There is certainly no central point of control or magical essence to our feeling of "self". If anything, it is the kind of necessary feeling, like "pain", that keeps us focused. [For the record, that is what I mean when I say "There is no consciousness": that there is no such thing as consciousness as we know it. It is merely a useful delusion in our minds, to say the least.]

Therein lie even greater problems for those uninitiated participants in discussions of "consciousness". When we cannot even define what the self is, it is wholly impossible to define what self-experience is. That is, referring to "first-person experience" is void unless you come up with a good conceptualization of the self in one's mind, and of what experience is. Both of which, as far as I'm concerned, are non-existent.

In reply: yes, it must be different things that Minsky and Dennett refer to when they talk about consciousness and that you refer to, since:

  1. Consciousness is not a well-defined entity to refer to, the way one's middle finger is.
  2. The definitions the former party give are purposeful and try to give more scientific explanations, while the latter party's definitions are void.

In addition, philosophically it would be best if we could abstract away questions pertaining to consciousness and concentrate only on thinking, for the questions of "thinking" are discouragingly difficult by themselves.
__
exa a.k.a Eray Ozkural
There is no perfect circle.

[ Parent ]

Right ... (4.00 / 1) (#41)
by Simon Kinahan on Wed May 30, 2001 at 05:25:35 PM EST

That's an awful lot to deal with in one post. Fortunately I either agree with or don't care about a fair bit of it. I want to back up what I said about consciousness, though.

I can't give an operational definition of consciousness, and Minsky can't give one that actually corresponds to the usual use of the word either. What I want to do here is explain why it's impossible to define in any operational way in our current state of knowledge, and give some reasons why it's OK to talk about it anyway in this kind of discussion. In general it's quite OK to use terms for properties we can't define, as long as we can agree on some things those properties necessarily imply. I'll say for my purposes here that consciousness is a property things must have in order to actually experience the world, rather than simply be affected by it.

Let's forget the word consciousness and talk about experience for a minute. An experience necessarily includes two actors: the person doing the experiencing and the object being experienced. Between them, somehow, these give rise to the experience itself. When I (first actor) look at a leaf (second actor), I experience greenness (an experience). OK so far?

So, there are two interesting cases here. The first is fantasy, memory or introspection. When I imagine something, I'm experiencing something happening in my own brain, so my mind is both the subject and the object.

The second interesting case is when we look at someone else's brain, as we can, in a fairly primitive way, and might one day be able to do perfectly. The interesting thing here is that we can experience someone else experiencing something.

So, we're now in a position to tackle your bit about recording someone's experiences. I agree with you that we can, in principle at least, create a perfect numerical record of someone's experience. However, this doesn't demonstrate anything. We've got a record of what we experience, through measuring instruments, when someone else experiences something. What they experience, looking at a green leaf, for instance, is not the same as our experience of a collection of numbers, still less the collection of numbers in and of itself.

You don't need to be a dualist to see things this way. A dualist believes either that there are two separate kinds of stuff, mental and physical, or two entirely disjoint kinds of properties of stuff. It is quite possible to believe, as I do, that mental phenomena are entirely caused - by some unknown mechanism - by physical ones, and yet that actually experiencing something is not the same as a collection of measurements of what your brain is doing as it does that, because it's the actual doing of those things by your brain that creates the mental phenomena.

Now on to definitions. Experiences in general have no definitions. You can't define pain, love, colour, sweetness or anger, and yet I hope you would not deny that these exist. Let's take the colour green. There is no way to explain, except by artistic analogy, or by pointing, exactly what green looks like. We can do two things: we can measure the other properties an object must have to appear green (reflecting or emitting certain wavelengths of EM radiation), or what people's brains do when they see green, but neither of these is the same thing as green itself. Thus the oft-cited fact that you and I can't ever know that we experience green in the same way.

The same applies to consciousness as applies to other experiences. Indeed, I would say consciousness, being the property something must possess to have experiences at all, is actually the root of this strange property of experiences. I can tell you consciousness is what lets you experience things, and hope you get the idea, like an adult saying the names of colors to a child to encourage it to distinguish them, but I can't define it for you any more than I can define "green".

These things - green, and consciousness, and experiences in general - are irreducible. That's not because they're "blah blah incredibly complex web blah blah" (which is just as meaningless to me as your own experiences apparently are to you, incidentally), but because they simply don't admit of being broken into bits and explained in terms of those bits. In that manner they are different from, say, chairs.

So what effect does all this have on science? Well, I hope one day we'll be able to give an explanation of consciousness of the same kind we can give for green, by figuring out what properties a system must have in order to support a conscious mind. We won't be able to do that if it's been defined out of the discussions on the topic. There are problems with doing that, of course, because unlike green, we can't experience another entity's consciousness. Possibly some methodological innovation will arise to work around this difficulty, in a similar way to psychology, I suppose.

In these terms, I think it's nonsensical to say things like "consciousness is an illusion". In order to suffer an illusion, it is necessary to be able to experience, and to do that, one must be conscious. Minsky appears to make a similar mistake when he says consciousness is a bucket into which we put self-awareness, perception, etc. My problem with that is that all these things are experiences, and once again that begs the question of how one comes to be able to experience, rather than merely be affected, in the first place. I believe, bizarrely enough, that the conceptual error you're both making is due to an excessive reliance on introspection to identify what consciousness is. You then identify it with part of what you're aware of, rather than awareness as a whole, and believe you can reproduce this by writing a program that computes the thing you're aware of. The intentionality that properly belongs to you gets wrongly assigned to some symbols that represent your thoughts.

If by consciousness you and Minsky really do mean something like "a single point of control", rather than the ability to experience, then I ask for your views on how experience arises instead, as to me that is another name for the same thing.

I have no problem with - for methodological purposes - saying that consciousness is beyond the scope of scientific enquiry. Right now, I believe that is the case. But there's no good reason here to declare that the phenomenon does not exist. The centres of galaxies are also beyond the scope of enquiry, but that's no reason to suppose they do not exist.



Simon

If you disagree, post, don't moderate
[ Parent ]
Philosophy of Color (1.00 / 1) (#44)
by exa on Thu May 31, 2001 at 06:26:46 PM EST

Well, you're going onto some dangerous ground. Philosophy of Color can get one really confused. Now, the point here is that we are talking about a "mechanistic" theory of mind. Actually, I am not interested in any other kind of theory. So that's what I really mean by the living-in-the-21st-century, no-dualism caveat.

Green is the name we give to a color experience. That is true. Green is subjective in that the concept of green is specific to each individual. Though we will observe lots of common points, such as green evoking a feeling of comfort in many persons, there will always be subtle differences. The reason is primarily that each person has a unique mind: a point in a huge combinatorial space. Therefore, while we can define green in terms of the ranges and magnitudes of sensory input, we are unable to do so for the experience of green; that would be asking too much.

The experience of green then is the same thing as the individual's concept of green.

However, it would be very wrong to say that "the concept of green" does not admit division. For then you would have to define what kinds of concepts are atomic, and therein you will have a great many troubles, for something that cannot be decomposed is necessarily unknowable, or in our case a trivial fact. Which comes to mean that any mechanistic system is open to scientific enquiry, and if you are denying that, you are a voodoo priest; not entirely different from what Searle is doing.

In this particular case, we shall have to know what a concept is. Then, we shall have to know how it is represented. And if we are not enlightened enough to suggest a representation, we shall be quiet.

A no-theory holds no value for a philosopher or a scientist.
__
exa a.k.a Eray Ozkural
There is no perfect circle.

[ Parent ]
Concepts and Experience (5.00 / 1) (#47)
by Simon Kinahan on Fri Jun 01, 2001 at 08:45:47 AM EST

Now, the point here is that we are talking about a "mechanistic" theory of mind

Not clear what you mean by that. If you mean you'll only accept a theory that accepts that minds are a product of physical phenomena, that's fine, though we can legitimately still argue about whether they are identical with such phenomena. If you mean you'll only accept a theory that tries to ignore the first-person perspective, that's throwing the baby out with the bath water.

The experience of green then is the same thing as the individual's concept of green.

Not so. Since you seem to like reductive arguments, think about the neurology. I can think about the concept of green without visualising actual green - imagine a scientist working with wavelengths of light, for instance. What happens in my brain when I do that is different from when I actually see or visualise greenness. The same argument applies introspectively: my concept of green can be active without my experiencing the actual colour green. They're clearly not identical, though they are, of course, related.

However, it would be very wrong to say that "the concept of green" does not admit division. For then you shall have to define what kinds of concepts are atomic, and therein you will have great many troubles for something that cannot be decomposed is necessarily unknowable or a trivial fact in our case.

What's all this business with definitions? First of all, I'm not talking about concepts, I'm talking about experiences. All concepts are reducible, as far as I can tell, though I don't much care. See W. V. O. Quine, if you're really interested.

It is perfectly reasonable to make arguments about entities that don't have definitions, provided we can agree about some of their features, so for our purposes here we do not need to define which experiences are atomic as long as we can agree that some are. We seem to agree that the experience of green is irreducible, for instance - or I'm assuming we agree until you actually reduce it into something else. The fact that this gets your philosophy into trouble, because you feel compelled to define what's in front of your eyes rather than accepting it, is neither here nor there.

Which comes to mean that any mechanistic system is open to scientific enquiry, and if you are denying that you are a voodoo priest; not entirely different from what Searle is doing.

I don't see how this follows from your previous statement, but I'm not denying that the system is open to scientific enquiry, and nor does Searle, as far as I know. IMO, as I said, we don't have the tools yet to find the causes of conscious experience, but I believe we will one day. Just like green and the wavelengths of light, we can look for material facts which correlate with the existence of consciousness. IIRC, Searle's position on this matter is less optimistic than mine.

I don't know what your remark about "no-theories" is meant to mean either. Neither of us has presented a scientific theory. I've presented at least part of a philosophical theory. You've presented a few reductive arguments which seem to come from a vaguely materialist philosophical point of view, but no theory I can see. Still less do I understand what the representations of concepts have to do with this debate.

Simon

If you disagree, post, don't moderate
[ Parent ]

That's where it's troublesome (1.00 / 1) (#53)
by exa on Sun Jun 03, 2001 at 09:57:03 PM EST

I'd warned you about philosophy of color, which is full of traps for the untrained eye. You may find some decent books on the field if you're interested. (There is a recent book which I was going to investigate.) Yes, I like some of Quine's statements, and I do think that we can quine qualia like Dennett does.

However, now that you've tried to separate the experience of color from the concept of color, you are going into a great circular argument with a good diameter. "No-theory" refers to your theory that we can talk about "experience" without really defining it. I'd normally refuse to talk about such things unless they have a semi-formal definition. As I've implied before, let's stay in the analytic tradition so that logical argument remains possible. I don't have the time for purposeless rhetoric built on circular arguments.

Your intuition that "seeing a color" can happen without invoking the "concept of color" arises from some simplistic assumptions that you adhere to. First, by concept I don't at all mean a "linguistic" concept. The name is irrelevant; let poor linguists wrestle with the trivia of natural language. Nor do I assert that a single, uniform and static representation corresponds to a concept. In knowledge representation we usually do that, but only as a practical matter. By concept I mean "concept" in the wide sense: any mental representation that's related to the perception of color. Hence both the scientific concept of color and "experienced greenness" are admissible as concepts of color, though I think it is rather obvious that I refer especially to the latter incarnation.

Now, if you'll excuse me, I'm equating mental activity related to the representation of color with the experience of color, for that is the only philosophically and scientifically plausible description we can come up with. If you say that we don't need such a description, that's a no-theory, and I reckon you'd fail any serious philosophy class but pass in a literature class, if I can get the point across.

As I said before, those theories of "consciousness" that depend on a vacuous notion of "subjective experience" fall short on content. We should have a good definition of what experience is in order to see whether it satisfies "consciousness" (which I think it does not). From now on I refer no more to the general problem of "consciousness"; I'm concerned only with the intriguing "experience of color".

Allow me to reveal more of my pet theory, which I must tentatively unveil. Now that I have made the minimalist definition that the experience of color, of the sort certain people are extremely fond of, is "mental activity related to the representation of [the concept of] color", I can offer a somewhat better adjusted treatment. In the following text, "experience" refers to the experience of color.

The whole of experience, as well as the representation, is specific to each individual and thus itself has a "subjective" value. However, like any other entity which shares our physical reality, it can be observed, analyzed, measured, etc. by a third party with sufficiently advanced tools. Note that reasoning about the operation of a human brain is beyond our current abilities and understanding; that is why we usually regard it as impossible. For thought-experimental purposes, however, we can imagine a superior being, an advanced extra-terrestrial perhaps, which can understand a person the way we understand the operation of simpler machines. A complex machine does not defy all analysis. It is indeed very valuable that each of us can have varying mental activity and representations of color; however, that's all there is to it. We're fallible human beings, and the fact that our brains are not identical bears no theological significance. [And to correct your judgement: I do not hold a vague materialist philosophical point of view; I think that philosophy of mind ought to be scientifically plausible. That's why this post is as much about AI as it is about philosophy. Apart from that, I like discussing metaphysics as long as we remain in the analytic tradition.]

We can attack the definition by breaking it up. We can say that "mental activity related to ... color" is so wide-ranging that we cannot define where it ends. As a matter of fact, using a philosophical notion of "relevance" we might show that a large portion of the mind is related to color. And indeed it is, especially the parts involving higher functions; we cannot determine beforehand which functions experience will not affect. This, I think, is where most of the opposing camp decide that experience is irreducible, for they cannot explain any further. Nevertheless, that is also why we can dismiss for a moment all the auxiliary mental activity that experience may cause, and focus on the cause itself: the representation of the concept of color, for that is the necessary condition for any experience to occur. Now it is clear that if we cannot give a good explanation of the mental representation of color, it will be much harder to explain "wide" experience.

What kind of representation does color have? Is it second-order logic? Unlikely. We do know a few facts about this representation: it is intimately related to a particular modality, namely vision. We are not interested in the abstract kind of green-ness that cannot be seen.

Thus, we could safely say that a mind with no capacity for "seeing" will not experience green. There are many pitfalls in this argument, since the capacity of seeing involves so much in the way of sensors and perception mechanisms. For minimally agreeable ground, let us grant that the type and identity of the sensory apparatus is irrelevant; the capacity, however, will be realized in the mind of the agent. Seeing here is common-sense seeing.

Then comes an abundance of questions. Is this representation biologically determined, or is it learnt? Is it identical in different individuals? Can we compare the representations of color in two individuals? Is the representation dependent on the architecture of the faculty of vision? Does this representation have any semantics outside the vision centre, or are colors constrained to the modality in which they are represented? And so forth.

I suppose we could find answers to some of the possible questions about the nature of the representation of color, but we would be unable to answer some of the critical ones. Thus, extending this idea to "the sweet pain evoked by the memory of the color of the shirt the loved one used to wear on Sundays" is not interesting until we have a reasonable theory of the concept/representation of color.

I think my pet theory does a much better job of revealing the difficulty of the matter than yours does, so it surpasses yours as a philosophical theory.

Thanks,
__
exa a.k.a Eray Ozkural
There is no perfect circle.

[ Parent ]
rocks are conscious (none / 0) (#43)
by kubalaa on Thu May 31, 2001 at 02:08:30 PM EST

I can't see it as possible that you believe, say, rocks are just as capable of first person experience as you or I.

Actually, I do believe that. But since that won't get us anywhere, let's look for a more useful definition. Consciousness is the tendency of perception to create a barrier between self and other. It is what makes you "feel" things at the tip of your finger and not in the air around you. This is caused by the way you are physically constructed to channel information; information coming from the tip of your finger is amplified by your nerves and brain, whereas information in the air around you is not.

Compare your body and the air around it to a rock. Every molecule in the rock contributes about equally to the information content of the rock. I'm throwing the term "information" around loosely, which is the biggest problem with this theory, but basically I relate information to meaning and meaning to action, so I mean that each molecule in the rock contributes roughly equally to the future actions of the rock as a whole. If one molecule gets excited and decides to disobey gravity, that won't affect the whole rock much.

Your body in a rock-sized chunk of air is a much different matter. The air around your body contributes almost nothing to your future, whereas the molecules inside your body, particularly a large chunk in your head, have a grossly unfair contribution. A few random firings in your brain and your whole body moves. This is what I mean by amplification of information; your body is designed in such a way that a small input creates a large output.
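Here's a toy sketch of that amplification idea in Python (purely an illustration; the gain figure and unit counts are invented for the purpose). Read "contribution" as the sensitivity of an aggregate output to a poke at one unit: in the rock every unit is weighted equally, so a poke barely registers, while in the "body" one privileged channel carries a large gain.

    import random

    def rock_response(state, poke_index):
        """Every molecule counts equally: the output is the plain
        average, so one excited unit barely moves it."""
        poked = list(state)
        poked[poke_index] += 1.0  # one molecule "disobeys gravity"
        return sum(poked) / len(poked) - sum(state) / len(state)

    def body_response(state, poke_index, gain=1000.0):
        """A nervous system amplifies a privileged channel: unit 0
        stands in for a neuron whose firing carries a large gain."""
        weights = [gain] + [1.0] * (len(state) - 1)
        poked = list(state)
        poked[poke_index] += 1.0  # one random firing in the "brain"
        total = sum(weights)
        before = sum(w * s for w, s in zip(weights, state)) / total
        after = sum(w * s for w, s in zip(weights, poked)) / total
        return after - before

    random.seed(0)
    state = [random.random() for _ in range(10000)]
    print("rock:", rock_response(state, 0))  # ~0.0001: negligible
    print("body:", body_response(state, 0))  # ~0.09: roughly 900x larger

On this picture the self/other barrier is nothing but the weight vector: units inside the system get large weights, and the surrounding air gets none.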

This is related to the way your brain is an entropy engine: it distills information and spits out heat. I haven't figured out a good way to explain the connection, though, so it may all be hand-waving. But it is an interesting way of thinking about it. I think that all we have to do to make consciousness is design a system which amplifies information in this way, creating a definite barrier between itself and the outside world. How to create a consciousness which operates with intelligence close enough to our own to be interesting is quite a different matter.

[ Parent ]

I can buy that, actually (none / 0) (#48)
by Simon Kinahan on Fri Jun 01, 2001 at 08:52:19 AM EST

It is at least a consistent position. If we get to a "complete" understanding of neurology and still have no idea what causes consciousness, I'll probably start agreeing with you. Personally, I still think it more likely that there's some special feature of the brain that brings consciousness into existence.



Simon

If you disagree, post, don't moderate
[ Parent ]
I'm a little sick of Minsky (4.45 / 11) (#4)
by DesiredUsername on Tue May 29, 2001 at 08:25:25 AM EST

All I ever hear from him is how the current state of the art (whatever that is, from the 1950s to today) has "got it wrong" but he has a great new idea. Ten years later, everybody is doing what he said a decade ago while he's complaining that they are stuck in a rut and need to jump on his current bandwagon. Here's a clue, Minsky: until you actually make some progress yourself, shut up and let someone else talk for a while.

Personally, I can't believe that Hofstadter's "Fluid Analogies Research Group" (from the 80's) didn't cause more of a splash. If you've never read the book that came out of those projects, you are missing some very good stuff--CopyCat and TableTop are *real* AI that do actual creative analogy-making (not like Cyc, MindPixel and that stupid robot-with-the-eyebrows that TLC finds so fascinating). Furthermore, Hofstadter doesn't suffer from Minsky's "let's solve the whole problem with one project" mindset--he thinks about the issues, finds a deep one, programs it and then thinks about the results. Repeat.

Play 囲碁
Hofstadter vs Minsky (4.00 / 1) (#45)
by exa on Thu May 31, 2001 at 06:40:11 PM EST

I think both researchers are valuable, however I would find Minsky's work much more prestigious as he has made concrete mathematical contributions to computer science, while Hofstadter has not.

I think Hofstadter is a great storyteller and well versed in literature; however, he is too romantic. I read some of his work in Fluid Analogies [and of course GEB before that], and it was quite impressive. The way he described the running of the program... However, I guess a good enough writer could describe the running of some of the complex algorithms I've implemented in an even more impressive manner.

The trouble is that your statement about Minsky is quite wrong: nobody really went after implementing the ideas from SOM, and Minsky never told people to pursue more ANNs and GAs. So I think Minsky has been quite consistent in what he has said.

What Minsky gives us is some good ideas; he pioneers. And it is up to us to prove or disprove those ideas.
__
exa a.k.a Eray Ozkural
There is no perfect circle.

[ Parent ]
HAL (3.00 / 1) (#9)
by Signal 11 on Tue May 29, 2001 at 09:20:54 AM EST

Just some tangentially related information - Arthur C. Clarke, now in Sri Lanka, has been working on creating a real-world HAL for some time.


--
Society needs therapy. It's having
trouble accepting itself.
Why not neural nets? (2.57 / 7) (#21)
by John Milton on Tue May 29, 2001 at 05:31:32 PM EST

Do we know of any other structure in nature that produces consciousness? No. I think people like Minsky just get off on being the one with the uncommon idea. The adulation of being the savant is just too much to resist. Has Minsky ever actually done anything but talk?


"When we consider that woman are treated as property, it is degrading to women that we should Treat our children as property to be disposed of as we see fit." -Elizabeth Cady Stanton


Well.... (none / 0) (#24)
by DesiredUsername on Tue May 29, 2001 at 09:20:56 PM EST

As evidenced by my comment below, I think Minsky long ago lost whatever scientific credibility he had...but I also think he's right that people are focusing at too low a level. Yes, brains are composed of neural nets at a low level. But minds can run "programs" of unknown but very high complexity. Bare neural nets cannot run programs at all. Therefore there must be at least one "layer" between these two things. Identifying and modeling these layers should be a priority now that NNs are fairly well understood.
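As a minimal sketch of what such a layer might look like (a toy example in Python, not anything from Minsky or the NN literature): with fixed weights a single perceptron computes NAND, and a wiring layer composed on top of the units runs a small "program" (a half adder) that no individual unit computes by itself.

    def perceptron(inputs, weights, bias):
        """The "neural" layer: one threshold unit."""
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if total > 0 else 0

    def nand(a, b):
        """Fixed weights make a single unit compute NAND."""
        return perceptron([a, b], weights=[-2, -2], bias=3)

    def half_adder(a, b):
        """The "program" layer: a circuit wired out of NAND units.
        The wiring, not any one neuron, does the computing."""
        n1 = nand(a, b)
        s = nand(nand(a, n1), nand(b, n1))  # XOR, via four NANDs
        carry = nand(n1, n1)                # AND, by negating the shared NAND
        return s, carry

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", half_adder(a, b))

Whether real brains admit anything like this clean separation is exactly the open question, but it shows why "neurons at the bottom" and "programs at the top" are different levels of description.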

Play 囲碁
[ Parent ]
hardware and operating system (none / 0) (#26)
by John Milton on Wed May 30, 2001 at 01:52:07 AM EST

I think the difference between biological neural nets and simulated ones is the software. When we're born we have some programming built into us. I think of that as our operating system. We add other cool programs to that base.

Simulated neural nets are just a jumble of disjointed connections. They take time to develop order. I don't know how important the number of neurons is. Whales have larger brains than humans. Maybe they're just running Win95. I'm not a computer programmer, so my opinions may be way off base.

Which OS do you think Hitler was running? Mac or Win? :)


"When we consider that woman are treated as property, it is degrading to women that we should Treat our children as property to be disposed of as we see fit." -Elizabeth Cady Stanton


[ Parent ]
This was posted on comp.ai and comp.ai.philosophy (5.00 / 2) (#28)
by exa on Wed May 30, 2001 at 10:02:25 AM EST

By me (Eray Ozkural, a.k.a. exa), thanks to Google.

Have a look at the replies and discussions there.

Here are the links to the comp.ai post and the comp.ai.philosophy post. Minsky himself posted a reply, pointing to a half draft of the new book at his website.

Thanks,
__
exa a.k.a Eray Ozkural
There is no perfect circle.

I meant draft of the first half (none / 0) (#29)
by exa on Wed May 30, 2001 at 10:05:08 AM EST

But I think you can resolve such ambiguities. :)
__
exa a.k.a Eray Ozkural
There is no perfect circle.

[ Parent ]
I noticed a reference to Lenat's Cyc project (none / 0) (#46)
by SIGFPE on Thu May 31, 2001 at 08:39:59 PM EST

What is the status of that project? I know a couple of people who worked directly on it, and after a short period of time they both concluded that the whole thing was completely bogus and quit. Does anyone know anything else about it? I remember reading about Eurisko and AM as a kid and thought they were the most exciting things in the computing world. But now it looks like Lenat is nothing but a hypemonger with nothing real to his credit.

Anyone out there have any opinions?
SIGFPE

minsky is going to have his brain cryo-preserved (none / 0) (#52)
by cryon on Sat Jun 02, 2001 at 04:14:14 PM EST

And therefore he is a very smart man. I am serious.
HTGS75OBEY21IRTYG54564ACCEPT64AUTHORITY41V KKJWQKHD23CONSUME78GJHGYTMNQYRTY74SLEEP38H TYTR32CONFORM12GNIYIPWG64VOTER4APATHY42JLQ TYFGB64MONEY3IS4YOUR7GOD62MGTSB21CONFORM34 SDF53MARRY6AND2REPRODUCE534TYWHJZKJ34OBEY6

Simple: Stop using a loaded word! (none / 0) (#56)
by Steeltoe on Mon Jun 04, 2001 at 12:03:00 PM EST

I think it's important for people to realize the difference between thinking you are conscious and actually being conscious. The former may depend only on data present in memory, while the latter is a theological and philosophical question we might never resolve. While the former is somewhat useful in AI research, the latter is a totally useless blind alley for that science. I think it's rather sad that AI wastes so much effort and energy on consciousness when researchers should come up with their own words for what they actually mean to design. The word is just too loaded, and boasting of the latest version of "conscious" AI will bring ridicule on the entire profession once more. Instead of complaining about everybody having their own version of it, why can't they bloody define their own words?

If a robot told you it was conscious, would you believe it? Sure, there are certain feedback processes in the robot that make it "live" in the world we live in and simulate "thinking" in a limited fashion. You can make an AI brain process its sensory input, make plans, create "fantasies" and act out those inner processes on its output channels. But these are pure heuristics, and we have no proof that the robot itself is really experiencing anything. Comparing it with a human being becomes meaningless, because any property that makes it "human" is basically unwanted and will usually be programmed away anyway. We want robot servants, robot cooks, robot guards and robot think-tanks. NOT conscious robots that we'll mourn over when they melt down, robots that will demand three months of vacation-and-repairs, or robots that we'll have love affairs with! (Weeell, there are always exceptions, of course ;-)

Why am I so up in arms about this? Simply because it's important for people to believe they are conscious. If they believe they are just a product of genes and circumstances, they are stripped of power over their own lives. Which is only true if you believe it! What we believe makes all the difference in what we can do and how much power we wield. Or think we wield, anyway ;-) The other direction is apathy. Now, I'm not saying that we are apathetic, but that we can become less so in our lives if we let it happen.

- Steeltoe
Explore the Art of Living

Minsky's "Programs, Emotions and Common Sense" | 56 comments (49 topical, 7 editorial, 0 hidden)
Display: Sort:

kuro5hin.org

[XML]
All trademarks and copyrights on this page are owned by their respective companies. The Rest 2000 - Present Kuro5hin.org Inc.
See our legalese page for copyright policies. Please also read our Privacy Policy.
Kuro5hin.org is powered by Free Software, including Apache, Perl, and Linux, The Scoop Engine that runs this site is freely available, under the terms of the GPL.
Need some help? Email help@kuro5hin.org.
My heart's the long stairs.

Powered by Scoop create account | help/FAQ | mission | links | search | IRC | YOU choose the stories!