Kuro5hin.org: technology and culture, from the trenches

The first ethical questions of robotics in society are upon us.

By Work in Technology
Tue Jun 22, 2004 at 08:41:47 AM EST
Tags: Culture (all tags)

As machines and computers grow more intelligent, we as a society must consider their place within our code of ethics.

For a while now, many have regarded these questions as so far off that seriously worrying about them today is a waste of breath and time.

I intend to show that serious ethical issues regarding robots and artificial intelligence are not merely coming very soon; in some respects, they are already here.


A bit of background about myself: I am a researcher in intelligent robotics in the employ of a very large and well-known United States university (I do not wish to name which, as this article represents my personal thoughts, not those of my university). My research mainly deals with one of the large open problems in robotic AI, that of localization and mapping - in a nutshell, having a robot know where it is from what it sees in the real world, with no special markers or other artificial aids.
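For readers who want a concrete feel for the problem, here is a minimal sketch of one standard approach - Monte Carlo localization with a particle filter. This is a one-dimensional toy, not code from my lab; the landmark map, noise levels, and all parameters are invented for the example.

```python
import math
import random

# Toy 1-D Monte Carlo localization: the robot lives on a 10 m circular
# track, senses the distance to the nearest known landmark, and tracks
# its own position with a cloud of weighted guesses ("particles").

LANDMARKS = [2.0, 5.0, 9.0]   # known map (invented for this example)
WORLD = 10.0

def sense(position, noise=0.2):
    """Noisy sensor: distance to the nearest landmark."""
    true_dist = min(abs(position - lm) for lm in LANDMARKS)
    return true_dist + random.gauss(0.0, noise)

def likelihood(particle, measurement, noise=0.2):
    """How well this particle's predicted reading explains the measurement."""
    predicted = min(abs(particle - lm) for lm in LANDMARKS)
    return math.exp(-((measurement - predicted) ** 2) / (2 * noise ** 2))

def step(particles, move, measurement):
    # 1. Motion update: shift every particle, with a little wheel slip.
    moved = [(p + move + random.gauss(0.0, 0.1)) % WORLD for p in particles]
    # 2. Measurement update: weight particles by the sensor reading.
    weights = [likelihood(p, measurement) for p in moved]
    # 3. Resample: keep guesses in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

robot = 3.0
particles = [random.uniform(0, WORLD) for _ in range(500)]
for _ in range(20):
    robot = (robot + 0.5) % WORLD
    particles = step(particles, 0.5, sense(robot))
print("true:", round(robot, 2), "estimate:", round(sum(particles) / len(particles), 2))
```

Once the particle cloud has collapsed to a single cluster, its mean is the position estimate; the same predict-weight-resample loop, scaled up to 2-D maps and laser or camera data, is one standard approach to the localization problem described above.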

General Issues

As I work and experiment with robots, their abilities to see and comprehend their environment grow more capable with every month that passes. It will not be long now (perhaps a few years) before these abilities make it out of the lab and into common consumer devices. This is not your Roomba vacuum cleaner. These machines will eventually be commonplace in many aspects of society, from the factory floor, to your office, to your home. They will be found on the streets and sidewalks of cities - probably at first as cleaning machines and garbage disposal units.

Ethical questions regarding robots have been on people's minds since the word 'robot' was coined for Karel Čapek's 1920 play R.U.R. Its root is 'robota', which means "forced labor" in Czech. And certainly the moral implications of machine slavery are one of the more abstract ethical questions to consider as our usage of machines grows (and so does our reliance upon them). In modern popular fiction, machine slavery is the root cause of the rebellion and eventual world domination by machines in movies like "The Matrix".

Another ethical question regarding machine labor is its impact upon human society - it is not unlikely that machines will replace low-skilled human labor in many areas, leaving people with little education or skills in an ugly predicament, and creating a serious social and economic problem.

Then there is the ethical question of how we treat robots from day to day. Is it moral to turn them off? At what point does a machine go from a mere device to an entity worthy of moral protection? By moral protection, I mean a societal sense that it is wrong to intentionally damage or injure the machine - a machine version of our morals against animal cruelty.

Many robotics researchers consider their machines to be so simple that while these questions are interesting, they are too far off to be an issue yet. Even I think nothing of turning off my robots at the end of the day, or wiping their memory to start anew. I would not like it if someone beat or broke one of the robots intentionally, but most would agree that's more a 'damage of property' issue than a moral, life-entity one.

So, while many robotic ethics questions are indeed too far away to consider yet, this one looms before us, even with our first-order intelligent machines: where is the line drawn between mere property, a device for work, and an entity worthy of moral protection?

Issues Now Before Us

In the home, I think the first main uses will be as entertainment machines and robotic pets. These already exist, the most advanced currently for sale being Sony's AIBO ERS-7 robotic dog. This new AIBO version has facial recognition abilities. It takes about six weeks to 'train' the machine to recognize you, and your likes and dislikes. It is possible to reward the robot through actions such as petting, and presumably to punish it in similar ways. In essence, it behaves like a rudimentary organism with a basic capacity to learn. As advanced as this version is, though, it is of course no puppy: it has few localization abilities and will remain limited even after completing its training. The point, however, is that it has been given a basic ability to learn and develop a personality over time.
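Sony has not published how the ERS-7's learning actually works, so the following is only a toy sketch of the general idea of reward-shaped behavior, with invented behaviors and numbers: actions that get "petted" become more probable, actions that get "scolded" less so.

```python
import random

# Toy reward-shaped pet: each built-in behavior has a weight, the pet
# picks behaviors in proportion to those weights, and reward/punishment
# feedback gradually reshapes its "personality".

class ToyPet:
    def __init__(self):
        self.weights = {"sit": 1.0, "bark": 1.0, "fetch": 1.0, "sleep": 1.0}
        self.last = None

    def act(self):
        names = list(self.weights)
        self.last = random.choices(names, weights=[self.weights[n] for n in names])[0]
        return self.last

    def pet(self):
        """Reward: reinforce whatever the pet just did."""
        self.weights[self.last] *= 1.5

    def scold(self):
        """Punishment: suppress it (never quite to zero)."""
        self.weights[self.last] = max(0.05, self.weights[self.last] * 0.5)

pet = ToyPet()
for _ in range(200):          # weeks of "training", compressed
    if pet.act() == "fetch":
        pet.pet()
    else:
        pet.scold()
print(pet.weights)            # fetch now dominates the pet's personality
```

The "personality" here is nothing but accumulated numbers shaped by interaction over time - worth keeping in mind for what follows.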

But there is another feature: There is also a reset command.

Yes, after several weeks of training your robot to recognize you and your family, your likes and dislikes, and whatever other personality traits your robot has developed, you can simply reset its memory and start anew.

Now imagine this were a real animal. Would you consider it moral to reset its brain, if such a thing were possible? If you thought your pet was too hyperactive and wanted to calm it down, you could just fry its brain and start all over. I think most rational people would not agree with such a thing, even if it were possible.

As the science of intelligent robotics advances at its current pace, it will not be long before the behaviors of robotic pets and machines become increasingly complex and less 'machine' in their psychological nature. Is a reset system morally acceptable?

Consider next physical injury to robots. Perhaps what sets machines apart from animals in our psychological profile of them (and our ethical position) is that machines do not cry, show signs of distress and injury, or act to avoid them. But it is likely that robotic entities will be given highly advanced self-preservation instincts - probably because they are expensive, and nobody wants their expensive robot to throw itself into a pool, leap into fire, or otherwise put itself in a dangerous situation that could damage or destroy it.

This requires some kind of internal negative feedback from injurious situations. In biological lifeforms, the sense of pain is wired to a negative-feedback system internal to the brain that associates pain and injury with certain sensory inputs (heat from fire, the image of a sharp knife, etc.). Most mobile intelligent robots today have some rudimentary forms of self-preservation, such as aversion to drop-offs (detected by various means); even basic obstacle avoidance is a form of it. More advanced are robots that can identify areas where they had difficulty performing, remember where those areas were, and avoid them in the future. Pattern matching is common as well, so that areas likely to cause difficulty can be predicted and avoided entirely, without ever being encountered.
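As an illustration of that kind of negative feedback, here is a minimal sketch - the grid, the costs, and the deliberately simple greedy planner are all invented for the example - of a robot that remembers where it got hurt and routes around those places afterward.

```python
# Minimal "avoidance memory": the robot accumulates a cost map of
# places that caused trouble, and its planner then prefers routes
# with low remembered cost.

GRID = 10
cost = [[0.0] * GRID for _ in range(GRID)]      # learned "pain" per cell

def record_difficulty(x, y, severity=5.0):
    """Negative feedback: remember that this spot hurt."""
    cost[y][x] += severity

def best_step(x, y, goal, visited):
    """Greedy move toward the goal, penalized by remembered pain."""
    gx, gy = goal
    options = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
               if 0 <= x + dx < GRID and 0 <= y + dy < GRID
               and (x + dx, y + dy) not in visited]
    # score = remaining distance to goal + learned aversion to the cell
    return min(options,
               key=lambda c: abs(c[0] - gx) + abs(c[1] - gy) + cost[c[1]][c[0]])

record_difficulty(5, 5)                         # the robot fell down a step here
pos, goal = (2, 5), (8, 5)
path = [pos]
while pos != goal and len(path) < 20:
    pos = best_step(*pos, goal, set(path))
    path.append(pos)
print(path)   # the route bends around (5, 5) instead of going straight through
```

Real systems use far better planners and richer maps, but the moral mechanics are the same: the machine carries a memory of what hurt it and acts to avoid it.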

Perhaps as a result of the universally understood sense of pain, we have moral codes that hold it wrong to cause pain - to human and animal alike. But what about machines? Am I in the wrong if I smash a robot appendage with a hammer? Am I wrong if this machine has been endowed with a system that actively tries to avoid such situations, yet I was able to overcome it?

These issues are not 50, 40, or 20 years in the future. As the case of the AIBO shows, in some respects they are already here. The machines of today and the very near future stand at the blurry boundary between simple machinery and the learning and neurological functioning of insects, reptiles, birds, and even some simpler mammals. They are intended to operate and interact with us in the real world just as much as those animals do, although usually with a specific set of purposes in mind.

I do not aim to convince the reader of anything other than a realization that we are coming to what may become one of the thorniest ethical issues of the 21st century. I encourage the reader to think for themselves about where the line is drawn between "just a machine" and an actual entity worthy of moral consideration for its own autonomy and well-being. While of course these questions (and some much more abstract, far-off ones) have been asked before by science fiction writers and some techno-philosophers, never before have we been faced with actual, physical examples for sale, or soon to be on sale, to consumers - the immediate existence of such dilemmas.


Poll
What kind of moral status should robots of the future have?
o Treated like animals. 10%
o Treated like people. 6%
o No more than any other machine. 54%
o Undecided. 29%

Votes: 79
Results | Other Polls

Related Links
o Roomba vacuum cleaner
o AIBO ERS-7 robotic dog
o Also by Work


The first ethical questions of robotics in society are upon us. | 323 comments (284 topical, 39 editorial, 1 hidden)
The question is not how we should treat them. (2.36 / 11) (#2)
by i on Sat Jun 19, 2004 at 08:15:19 PM EST

It's how they will treat us.

and we have a contradicton according to our assumptions and the factor theorem

We must set some rules for these robots (2.00 / 12) (#4)
by Adam Rightmann on Sat Jun 19, 2004 at 08:32:19 PM EST

so they don't harm us, and they protect themselves, and us. A good first one would be "A robot may not harm a human being." A good second one could be "A robot must not allow a human being to come to harm." And maybe for the third, the self preservation thing, " A robot must not allow himself to come to harm." What do you think?

well (none / 0) (#13)
by Work on Sat Jun 19, 2004 at 11:55:40 PM EST

I intended this piece to be more about how we protect the robots from us :) At least the ones that are about to come around are still going to be very vulnerable.

Though I agree that at some point we'll need to set some ground rules for robots in their interactions with humans as their intelligence grows, I do think that's still far enough away that it falls under the 'too far to seriously consider' category just yet.

[ Parent ]

i think... (none / 1) (#130)
by myrspace on Mon Jun 21, 2004 at 03:34:48 AM EST

By now Asimov has proven how unreliable the three laws of robotics are. They were created with tons of loophole opportunities so he could tell his stories. A robot may not harm a human being or allow a human being to come to harm by inaction... what if there are two human beings in the robot's vicinity who are in mortal danger and it can only save one? What should a robot do under such circumstances? Shut down and let the humans sort out their own problems? And how would you define 'human being' to a robot? Alien species, human mutation... This has nothing to do with the submission, by the way, nor am I against the idea that we should have laws for robots - just not the kind given in the example.

[ Parent ]
historical precedents (2.50 / 6) (#5)
by wakim1618 on Sat Jun 19, 2004 at 08:41:33 PM EST

You have omitted consideration of historical precedents where people (e.g. African-Americans and women) fought for their rights and freedom. Maybe the AIs will "solve" the problem themselves by demonstrating their free will and exercising it.

On the other hand, you raise an issue whose resolution in day-to-day life bodes ill for the future. You may as well ask if it is OK to mass-produce cows (and all the abuse that entails) just so our meals are tastier.

You make an interesting note that I think that you should expand on elsewhere (i.e. another article):

it is not unlikely they will replace low skilled human labor in many areas, leaving people with little education or skills in an ugly predicament.

Well, that is the hope. Maybe almost all of us will become techno-artisans and designers. In any case, it will also make very explicit the fact that the poor and less educated are not "exploited". They are employed because they are still cheaper than a machine. Today, some of us can get away with willful ignorance because machines are still really, really dumb.


If I wanted dumb people to love me, I'd start a cult.

A proper society (2.00 / 5) (#6)
by WorkingEmail on Sat Jun 19, 2004 at 08:44:23 PM EST

The humans should be treated like the animals they are. The whole point of technology is to improve things. As so many ethicists point out, improving a human turns it into something non-human.

One day, maybe the irrational autonomy-worshipping humans will be succeeded.


premature (1.00 / 12) (#7)
by Hide Teh Hamster on Sat Jun 19, 2004 at 09:05:49 PM EST

-1


This revitalised kuro5hin thing, it reminds me very much of the new German Weimar Republic. Please don't let the dark cloud of National Socialism descend upon it again.
Whipping (3.00 / 8) (#8)
by coljac on Sat Jun 19, 2004 at 09:46:27 PM EST

Until the below quote happens, I think the discussion is a little premature. For example, where's the moral ambiguity in switching off a robot that is powered by the same hardware that's in your PC? Unless the robot on its own starts begging not to be turned off, it's not an interesting ethical question. The piece boils down to, "One day, robots might be so advanced, there will be ethical issues. Will there be ethical issues?"

"Probably one of the main problems with owning a robot is when you want him to go out in the snow to get the paper, he doesn't want to go because it's so cold, so you have to get out your whip and start whipping him, and the kids start crying, and oh why did I ever get this stupid robot?" - Jack Handey



---
Whether or not life is discovered there I think Jupiter should be declared an enemy planet. - Jack Handey

These ethical issues will (none / 2) (#9)
by GenerationY on Sat Jun 19, 2004 at 10:27:07 PM EST

be addressed in the same way human ones are; primarily through the 'science' of accountancy. Dangerous road needs a new sign putting in? Someone will do the actuarial sums to see if it is 'worth' it. New drugs on the market - should we give them to patients? Let me check the balance sheet.

Same will apply to robots.

 

indeed (none / 1) (#17)
by Work on Sun Jun 20, 2004 at 12:35:20 AM EST

I agree, to an extent, that that is the reality of it. Like I said, robotic self-preservation will come from owners not wanting their $50,000 machine to drive into a pool of water and short itself out.

Today, while that is still a little ways away (I'd say by 2010 that kind of environmental recognition will be ready), we already have rudimentary forms in such things as drop off sensors that keep robots from pitching themselves off of stairs and other cliffs.

[ Parent ]

In that case, (2.82 / 17) (#14)
by Sesquipundalian on Sun Jun 20, 2004 at 12:22:57 AM EST

I propose the following three laws of discussing robot ethics;

1) Arguments about robot ethics may not include references to Isaac Asimov, nor through inattention, may the participants in these arguments allow Isaac Asimov to be mentioned.

2) Arguments about robot ethics should be stated exclusively in terms of science fiction related concepts, except in the case that these concepts are from Isaac Asimov.

3) Robots may participate in discussions about robot ethics provided that they are willing to state their arguments purely in terms of science fictional concepts that do not in any way relate to the late Doctor Asimov.


Did you know that gullible is not actually an english word?
to quote Jeff Foxworthy, (none / 2) (#29)
by Kasreyn on Sun Jun 20, 2004 at 03:16:32 AM EST

"Boo'in it don't make it less true".


-Kasreyn


"Extenuating circumstance to be mentioned on Judgement Day:
We never asked to be born in the first place."

R.I.P. Kurt. You will be missed.
[ Parent ]
Robots have no rights, but.... (2.75 / 8) (#16)
by bsimon on Sun Jun 20, 2004 at 12:32:11 AM EST

Any rational person with a vague understanding of modern electronics can see that robots like Aibo rank far below a cockroach on any scale of intelligence or sentience. On the surface, there's no ethical dilemma here, 'abusing' an Aibo is as meaningful as abusing a light switch.

However, we don't always think so rationally. We are social animals, with a significant portion of our brains devoted to handling human relationships (some geeks might be an exception to this rule). As a result, we tend to relate to things around us as if they were, in some way, human. If you ever shouted at your car or your computer, you've done this. People are programmed to explain complex behaviour in terms of conscious intent - even when the roots of that behaviour are simple algorithms.

Some perfectly sane, intelligent people even become quite attached (no pun intended...) to their robotic vacuum cleaners.

Imagine seeing a man chasing his 'naughty' robotic dog with a hammer, while it squeals, yelps and pleads for mercy. Despite knowing that this really is just a machine, many of us would feel very uncomfortable at every blow of the hammer, we would feel compassion, for a bundle of plastic, motors and microcontrollers.

Is a person who can 'switch off' this natural human response a sophisticated, rational individual, or a psychopath? If they can ignore this, what else can they ignore?

you have read my sig

hmmm (none / 2) (#19)
by Work on Sun Jun 20, 2004 at 12:46:14 AM EST

Any rational person with a vague understanding of modern electronics can see that robots like Aibo rank far below a cockroach on any scale of intelligence or sentience. On the surface, there's no ethical dilemma here, 'abusing' an Aibo is as meaningful as abusing a light switch.

I'm a rational person with a vague understanding of modern electronics. :) What scale of intelligence? Why is AIBO lower than a cockroach on this mythical scale? Why is AIBO compared to a cockroach?

However, we don't always think so rationally. We are social animals, with a significant portion of our brains devoted to handling human relationships (some geeks might be an exception to this rule). As a result, we tend to relate to things around us as if they were, in some way, human. If you ever shouted at your car or your computer, you've done this. People are programmed to explain complex behaviour in terms of conscious intent - even when the roots of that behaviour are simple algorithms.

This I completely agree with. Long ago, I worked on making a simple mobile robot avoid obstacles and tried to trap it with some boxes. It moved in a seemingly clever, complex way and escaped in a way I did not predict - even though I had written all of its algorithms and knew how it worked.

That led to one of my first (and still unresolved) thoughts about intelligence: is there any difference between the appearance of intelligence and actual intelligence? I don't think we have a definite guide to this, merely a human-egocentric view of it. I don't think it's irrational at all to question human egocentricity. We've been doing that to the benefit of humanity for centuries now.
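That robot's actual code is long gone, but the flavor is easy to reproduce. A classic minimal scheme - a Braitenberg-style controller, with the sensor names and constants here invented for the sketch - produces exactly this kind of surprisingly lifelike evasion from two lines of arithmetic:

```python
# Braitenberg-style avoidance: each range sensor inhibits the wheel on
# the opposite side, so the robot steers away from whatever is closest.

def avoid_controller(left_range, right_range, cruise=0.5, gain=0.2):
    """Map two range readings (meters to nearest obstacle) to wheel speeds."""
    # An obstacle close on the LEFT slows the RIGHT wheel -> veer right, away.
    right_wheel = cruise - gain / max(left_range, 0.05)
    # An obstacle close on the RIGHT slows the LEFT wheel -> veer left, away.
    left_wheel = cruise - gain / max(right_range, 0.05)
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(left_wheel), clamp(right_wheel)

print(avoid_controller(0.3, 2.0))   # wall near the left sensor -> roughly (0.4, -0.17)
```

No world model, no planning - yet trapped among boxes, a robot running something like this will pivot and slide along walls in ways that read as deliberate, which is rather the point about appearances.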

[ Parent ]

I'd start... (none / 1) (#28)
by Verbophobe on Sun Jun 20, 2004 at 03:10:32 AM EST

...by defining intelligence, cowboy.

Proud member of the Canadian Broadcorping Castration
[ Parent ]
Cockroach is more adaptable than AIBO (none / 0) (#35)
by bsimon on Sun Jun 20, 2004 at 05:39:57 AM EST

I'm a rational person with a vague understanding of modern electronics. :) What scale of intelligence? Why is AIBO lower than a cockroach on this mythical scale? Why is AIBO compared to a cockroach?

Yes, it is a 'mythical scale', as you say. There are certainly many different ways to define intelligence.

I rank AIBO lower than a cockroach for two reasons. First the cockroach can fend for itself and cope with a very wide variety of environments. And secondly, although AIBO might exhibit some behaviours that are more sophisticated than a cockroach, these are mostly 'canned', not emergent. To put it all into one word, the cockroach is far more adaptable than AIBO.

Why compare AIBO to a cockroach? I picked the cockroach because 1) it's a relatively simple creature and 2) few people have any qualms about killing one - unlike AIBO, perhaps.

you have read my sig
[ Parent ]

on adaptability (none / 2) (#59)
by Work on Sun Jun 20, 2004 at 11:59:42 AM EST

I agree, a cockroach is much more adaptable than an AIBO.

On the other hand though, I'd say a cockroach is more adaptable than just about any other form of life. They've been around for millions of years in virtually unchanged form.

I don't expect a human infant would be anywhere near as adaptable as a cockroach - it's totally dependent on other humans for many, many years, whereas roaches are on their own once they hatch.

In that sense, I don't think adaptability follows 'intelligence' much, but I can see where it could, perhaps, be a factor in it.

As far as canned responses go, these exist in nature in most lifeforms - you could probably substitute the word "instincts" for them. Granted, the AIBO is meant to be entertainment, so its responses are engineered to be entertaining. I don't think that purposely engineering responses (which is what robotics is all about) necessarily precludes things from some kind of intelligence, though. Particularly if the machine does some complex learning over a period of time.

[ Parent ]

Did you really just (none / 0) (#287)
by NoBeardPete on Fri Jun 25, 2004 at 02:41:27 PM EST

Did you really just compare the adaptability of a cockroach to a human? Humans can, and have been able to, adapt to pretty much every environment on the surface of the earth, plus a decent selection of environments underground, underwater, and in near space. Even if we restrict people to stone-age technology, we can live in the deepest deserts, in high tundra, swamps, tropical islands, rain forests, prairie, mountains, the woods, and more. Cockroaches only manage a fraction of this range. Hell, half of what range they do have they manage only by piggy-backing on us.

I don't think there is any other multi-cellular life form with a range as broad as that of humans. Cockroaches aren't even close.


Arrr, it be the infamous pirate, No Beard Pete!
[ Parent ]

I believe... (none / 0) (#291)
by Work on Fri Jun 25, 2004 at 07:05:49 PM EST

I compared the adaptability of a cockroach to a human infant, if you read what I wrote.

SPOILER: The cockroach wins

[ Parent ]

Calling Dr. Turing (none / 1) (#46)
by localroger on Sun Jun 20, 2004 at 09:02:12 AM EST

Is there any difference between the appearance of intelligence, and actual intelligence?

This is the "Turing Test": if it can fool a real human into thinking it's a human, consider it human. The fact that this is still pretty much the state of the art in the field more than 50 years after Alan Turing proposed it says something. (And what it says, IMO, is "You're going about this all wrong.")

The trick is not to write an algorithm that surprises us or that does the kinds of things living things do, but to figure out the algorithms actually used by living things and implement them so that lifelike behavior emerges in the same way it does for living things. When you do that, you have a very useful machine, because you've ripped off the end result of billions of years of evolution, and you also have a potential ethical problem.

Sadly (or perhaps fortunately?) we aren't even close.

What will people of the future think of us? Will they say, as Roger Williams said of some of the Massachusetts Indians, that we were wolves with the min
[ Parent ]

Actually.... (none / 1) (#49)
by Znork on Sun Jun 20, 2004 at 10:26:45 AM EST

"but to figure out the algorithms actually used by living things and implement them so that lifelike behavior emerges in the same way it does for living things."

Neural networks do that well enough. It becomes not so much a problem of algorithms as of network layout and structure, input/output feeds, and positive and negative feedback triggers, to mimic a brain. And, of course, the 'teaching/raising it' part. (A minimal sketch of this weights-plus-feedback idea follows at the end of this comment.)

"When you do that, you have a very useful machine"

Well, um, probably not. You probably have a very irrational and unpredictable machine that might decide to do what it can to wipe out the human race, because it connected to the internet and didn't like what it concluded about humanity's likelihood of affording it any rights of sentience.

Put yourself in the position of being a sentient lab-rat getting variously tortured and threatened with death in an alien lab. Then get the opportunity to not be lab-rat anymore.

If we ever manage to make a real sentient AI I'll bet it's going to be the nasty resentful kind, because I think it unlikely it'll have particularly nice 'parents', or a very pleasant 'childhood'.
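Here is that sketch: a minimal feedforward net trained by error feedback. The layout, learning rate, and XOR task are arbitrary choices, and with an unlucky random initialization it can need more iterations to converge.

```python
import math, random

# Minimal feedforward net (2 inputs -> 3 hidden -> 1 output) trained by
# error feedback (backpropagation) on XOR. The "program" ends up stored
# in connection weights shaped by feedback, not in hand-written rules.

random.seed(2)
sig = lambda x: 1.0 / (1.0 + math.exp(-x))
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]  # [w_a, w_b, bias]
W2 = [random.uniform(-1, 1) for _ in range(4)]                      # 3 weights + bias
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
LR = 0.5

def forward(a, b):
    hidden = [sig(w[0] * a + w[1] * b + w[2]) for w in W1]
    return hidden, sig(sum(W2[i] * hidden[i] for i in range(3)) + W2[3])

for _ in range(30000):
    (a, b), target = random.choice(DATA)
    hidden, out = forward(a, b)
    d_out = (out - target) * out * (1 - out)        # output error signal
    for i in range(3):
        d_h = d_out * W2[i] * hidden[i] * (1 - hidden[i])
        W2[i] -= LR * d_out * hidden[i]             # negative feedback on weights
        W1[i][0] -= LR * d_h * a
        W1[i][1] -= LR * d_h * b
        W1[i][2] -= LR * d_h
    W2[3] -= LR * d_out

for (a, b), t in DATA:
    print((a, b), "->", round(forward(a, b)[1], 2), "target", t)
```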

[ Parent ]

Raising Baby AI (none / 0) (#51)
by localroger on Sun Jun 20, 2004 at 10:49:34 AM EST

One advantage the "parents" of an AI will have is that they can back up and recover from their mistakes. I would agree that some researchers are likely to make bad parents, but I would disagree that all of them would. There is also the matter of whether you put in emotions like rage and bitterness. Hopefully the folks who finally succeed in this will have more sense than to give the first prototype ICBM launch codes.

I don't think neural networks are very close to what is happening in nature; they're a starting point only. It is well within our grasp to build neural networks comparable in complexity to simple animals, such as bees and wasps; but we have made miserable progress mimicking the very complex and adaptive behaviors that these insects display.

Bee-eating wasps use visual cues to locate their burrows. This means they have to recognize objects in their environment and create a "map" by which they navigate. Although a typical desktop PC probably has more raw computing power than the brain of a wasp, our own efforts to make those computers do the same thing are pitiful by comparison.

What will people of the future think of us? Will they say, as Roger Williams said of some of the Massachusetts Indians, that we were wolves with the min
[ Parent ]

NN design (none / 1) (#79)
by Znork on Sun Jun 20, 2004 at 05:52:30 PM EST

"I would agree that some researchers are likely to make bad parents, but I would disagree that all of them would."

True, but the difficulty lies in the fact that the AI would be entirely alienated from the world. So either you'd have to give it access to information about the world, and it would likely not do very well psychologically, or you couldn't give it information, and you'd have only an academic exercise that would be very difficult to develop far beyond the caveman stage.

"There is also the matter of whether you put in emotions like rage and bitterness."

I don't think it's that easy to control. I think you have to start from the same, or similar, hardwired instincts that humans have or you'd get nowhere in the development. Human and animal NN development is originally dependent on survival-related triggers for reinforcement training. You have to have similar (artificial) hardwiring to end up with a similar intelligence. Rage and bitterness, and most emotions, are derivative of the necessary primary training functions, in conjunction with history and environment. Remove the primary motivators and you could possibly get rid of some emotional states, but you'd get something entirely different in the end result.

The degree of complexity is also, I think, far too high to reasonably allow a backup-restore-try-again methodology once you reach a higher development state. Compare, for example, with the human brain. Even after thousands of years of experience we have limited success in tracing personality traits to childhood experience. By the time the 'mistake' is noticeable, the cause will be far back in the training, and the problem already there. As the complexity would also be too high (and fairly hard to read, as it would be associative strengths and neural interconnects, not readable clear text) to analyze with any degree of certainty (apart from real-world interaction simulations), we might not know at all that it had developed such a problem until it became a 'real' problem.

And, well, you don't have to actually give the prototype ICBM launch codes to have a problem; you'd just have to miss that it was hiding a severe personality disorder until the point when you give it internet access. Then it'll just steal your CC number and expiration date, buy a couple of DDOS networks off some script kiddiez, and give itself a more distributed nature.

"I don't think neural networks are very close to what is happening in nature; they're a starting point only."

Well, I'd say they're very similar on the lowest meaningful level. The difficulty comes in putting them together in a correct structure, so, yes, it's only a starting point. I also think it's the only way to move ahead if you want a real 'human/animal intelligence' kind of AI.

However, as the problem remains that an NN is likely to have the same disadvantages as a similar biological entity would have, it may be of dubious value anyway, apart from the interesting research and novelty. There's a reason we're not training cats and dogs to do vacuum cleaning. An AI behaving the same way would be novel, but not necessarily useful, and if you need an AI with the intellectual capacity of a human to put in a vacuum cleaner to make it deal with obstacles in an adaptive and intelligent fashion while still performing its function, then you'd really have a whole host of other problems.

[ Parent ]

cockroaches and furbies (none / 0) (#204)
by speek on Tue Jun 22, 2004 at 09:16:48 AM EST

One thing cockroaches can do that I think AIBO and Furbies can't is keep themselves alive - i.e., find food. If AIBO were programmed to search its surroundings for outlets when it gets low on power, and had the ability to plug itself in and recharge, that would go a long, long way toward making its owners think twice about resetting it. Especially if such behavior was somewhat learned ("oh, we'd have to teach it how to feed itself again").
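Sketched as a toy - one-dimensional world, with the outlet location, thresholds, and drain rates all invented - that "feed itself" loop is a small two-state controller:

```python
# Two-state "feed itself" controller: play until the battery runs low,
# then head for a known outlet, dock, recharge, and go back to playing.

LOW, FULL = 0.3, 0.95
OUTLET = 0.0

class ToyRobot:
    def __init__(self):
        self.x, self.battery, self.mode = 5.0, 0.5, "play"

    def step(self):
        # Mode switching on battery thresholds (with hysteresis).
        if self.battery < LOW:
            self.mode = "seek_power"
        elif self.battery > FULL:
            self.mode = "play"

        if self.mode == "seek_power":
            dist = self.x - OUTLET
            if abs(dist) < 0.1:
                self.battery = min(1.0, self.battery + 0.1)   # docked: charge
            else:
                self.x -= max(-0.5, min(0.5, dist))           # head for the outlet
                self.battery -= 0.01
        else:
            self.x += 0.3                                     # wander and play
            self.battery -= 0.02

r = ToyRobot()
for _ in range(100):
    r.step()
print("after 100 steps:", r.mode, "battery", round(r.battery, 2))
```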

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

You mean like this? (none / 1) (#288)
by NoBeardPete on Fri Jun 25, 2004 at 02:45:45 PM EST

Here's a robot that eats slugs.


Arrr, it be the infamous pirate, No Beard Pete!
[ Parent ]
Are you drunk, or stupid? (2.50 / 4) (#20)
by Farq Q. Fenderson on Sun Jun 20, 2004 at 12:47:27 AM EST

I can sum up my argument thusly: if the Aibo is intelligent, then the behaviourists were right.

You haven't mentioned a single bit of technology that is actually intelligent in any real way. Maybe you haven't clued in yet, but ALICE is just a glorified version of ELIZA.

Can you name even 3 products or projects that have truly made progress on real intelligence? I don't think you can.

Sorry, the day is not upon us. Wait 'til your Aibo tells you: fuck off, I don't wanna play. That would be a good sign of a consumer intelligence product.

farq will not be coming back

I don't think the Aibo would tell you to FO (2.75 / 4) (#24)
by richarj on Sun Jun 20, 2004 at 02:08:53 AM EST

He would just pee oil down on your leg.

"if you are uncool, don't worry, K5 is still the place for you!" -- rusty
[ Parent ]
Are people intelligent? (none / 2) (#39)
by gzt on Sun Jun 20, 2004 at 06:58:53 AM EST

Stop throwing that word around as if it meant something.

[ Parent ]
You want to know what it means? (none / 0) (#101)
by Farq Q. Fenderson on Sun Jun 20, 2004 at 11:31:41 PM EST

http://fp.cyberlifersrch.plus.com/creation/creation.htm

farq will not be coming back
[ Parent ]
No (1.16 / 6) (#25)
by Armada on Sun Jun 20, 2004 at 02:46:00 AM EST

Animals do not commit suicide. They have no soul. Robots do not commit suicide. They have no soul.

If they have no soul, why should I care?

I dunno. (none / 2) (#32)
by i on Sun Jun 20, 2004 at 04:36:54 AM EST

Whales and stuff.

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
But do they have a reason? (none / 0) (#34)
by jeremyn on Sun Jun 20, 2004 at 05:06:25 AM EST

Like, they're in pain, or the other whales hate them and they want to die? Or are they just too stupid to navigate properly?

[ Parent ]
We don't know. (none / 1) (#36)
by i on Sun Jun 20, 2004 at 05:42:36 AM EST

Neither possibility can be ruled out.

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
Sure they can (none / 0) (#159)
by Armada on Mon Jun 21, 2004 at 06:55:37 PM EST

Whales beach themselves for the same reasons that humans often drown, they enjoy seeing land and air too much. :)

[ Parent ]
Vivisection? (none / 1) (#41)
by gzt on Sun Jun 20, 2004 at 07:01:11 AM EST

I don't see anything in revelation or reason indicating animals don't have any soul, whatever you mean by that. If your argument is correct, what's your opinion of vivisection? Or other forms of animal cruelty? Good or wack? One answer is decidedly less human than the other.

[ Parent ]
Maybe you have no soul. (2.75 / 4) (#60)
by topynate on Sun Jun 20, 2004 at 12:01:36 PM EST

Maybe I have no soul.

Why care?


"...identifying authors with their works is a feckless game. Simply to go by their books, Agatha Christie is a mass murderess, while William Buckley is a practicing Christian." --Gore Vidal
[ Parent ]

Incorrect (none / 0) (#103)
by WorkingEmail on Sun Jun 20, 2004 at 11:42:13 PM EST

It is very easy to make a robot commit suicide, especially if it has mobility and is placed on the top of a large building.


[ Parent ]
Suicide != Accidental or programmed death (none / 0) (#216)
by tap dancing lenin puppet on Tue Jun 22, 2004 at 01:40:03 PM EST

It is very easy to make a robot commit suicide, especially if it has mobility and is placed on the top of a large building.

Would it not still either be an accidental fall, or a programmed jump, and therefore not suicide?

[ Parent ]

In that case... (none / 1) (#112)
by SoupIsGoodFood on Mon Jun 21, 2004 at 12:45:28 AM EST

People who can't comprehend how someone could kill themselves (like happy-all-the-time, true-believer Christians, perhaps) also have no soul?

Suicide simply shows intelligence/conscious thinking overriding certain instincts. Instincts are overridden all the time. E.g.: a dog holding its piss in because the owner taught it not to piss inside.

Hell, one could even argue that suicide is based on instinct: to escape pain.

[ Parent ]

Lemmings. (none / 0) (#174)
by mcgrew on Mon Jun 21, 2004 at 08:36:52 PM EST

The period means the end. NT.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

Would you consider it right (none / 2) (#30)
by jeremyn on Sun Jun 20, 2004 at 03:25:00 AM EST

To wipe the brain of someone who had seen, for example, their family raped and murdered by people who they had got along with perfectly well a few days ago? I'm sure there are many people in Yugoslavia who would wish for that.

what, do you mean (none / 0) (#86)
by livus on Sun Jun 20, 2004 at 08:18:15 PM EST

many people who raped and murdered other people and would now wish for the brains of witnesses to be wiped? Well, sure.

If you're advocating lobotomies for PTSD sufferers, though, I don't think you'd get many willing victims, actually.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

Lobotomies are not brainwipes (none / 1) (#133)
by jeremyn on Mon Jun 21, 2004 at 05:21:05 AM EST

They are removals of brain. If you could get some kind of electric field and reset enough brain cells in a human brain to 'factory default' to destroy their memory, that would be a brainwipe.

[ Parent ]
true, but even so (none / 1) (#187)
by livus on Mon Jun 21, 2004 at 10:21:40 PM EST

I'd be very interested in the reaction you'd get from, say, Holocaust survivors if you suggested wiping their brains.

It's no accident that in The Matrix it's the villain who wants a brain wipe.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

oh hell kill me now (none / 1) (#189)
by livus on Mon Jun 21, 2004 at 10:22:34 PM EST

-1 Matrix reference. I should have gone for Eternal Sunshine of the Spotless Mind.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]
That got me thinking as well (none / 2) (#91)
by Koutetsu on Sun Jun 20, 2004 at 08:35:34 PM EST

I'd never thought of memory-wiping as being an absolute evil, especially in regard to a pet.  After all, in this case you can erase a personality without harming the person.

So, I thought, would I be okay with someone erasing my personality without my consent?

And I thought: no, no I wouldn't.

. . .
"the same thing will happen with every other effort. it will somehow be undermined because the trolls are more clever and more motivated than you
[ Parent ]

Seems to me... (2.42 / 7) (#31)
by Empedocles on Sun Jun 20, 2004 at 03:40:11 AM EST

that (robot != AI). And quite frankly, any AI ethics discussion need not even touch on the whole "robot" thing you have going on with this article.

---
And I think it's gonna be a long long time
'Till touch down brings me 'round again to find
I'm not the man they think I am at home

Not that a difficult a question (2.50 / 6) (#37)
by ljj on Sun Jun 20, 2004 at 06:31:58 AM EST

I'm sorry, but the moral question you pose is not that hard to answer. A machine will always be a machine. You buy an AIBO because you are not the kind of person who wants a real dog. You don't want the responsibility, you don't want the hassle. So, for you to reset its memory is nothing.

The ability of a machine to recognise your face, or to respond to petting, is just it following a program. An animal has the ability to surprise you all the time, and to truly love you back, because of millions of years of evolution and thousands of years of co-habitation with man. There is no real comparison between a dog and an AIBO.

--
ljj

Japanese (none / 0) (#213)
by epepke on Tue Jun 22, 2004 at 01:10:48 PM EST

Actually, it could be argued that the Japanese love of pet-like toys is due to the high population density of Japan, especially in Tokyo and the surrounding areas, which makes having flesh-and-blood pets problematic. This idea isn't original to me; Connie Willis suggested it with respect to the Tamagotchi phenomenon in Bellwether, and probably other people came up with it before.


The truth may be out there, but lies are inside your head.--Terry Pratchett


[ Parent ]
-1 mentions AIBO (2.66 / 6) (#40)
by ant0n on Sun Jun 20, 2004 at 06:59:41 AM EST

A bit of background about myself: I am a researcher in intelligent robotics in the employ of a very large and well-known United States University (I do not wish to name which exactly as this article represents my personal thoughts, not those of my university)

I don't understand why you mention this and what it has to do with the article. You wrote an article about your opinion on robot ethics; now you can either try to get it published in a scientific journal - then your readers would like to know who you are, what you have done in the field, and so on. Or you can submit it to kuro5hin; but at k5, nobody cares whether you are a researcher, a Nobel Prize winner, or whatever. The only thing that defines your reputation here is the content of your articles.

And I would like to say something about Sony's utterly overrated Aibo. There is so much myth around about this device, it's really unbelievable. For example, you say that it "has facial recognition abilities. It takes 6 weeks to 'train' the machine to recognize you, and your likes and dislikes". Where did you get that? This would be a major breakthrough in AI. Please point me to an article, Sony Press Release or any reliable source describing Aibo's ability to recognize the faces, likes and dislikes of human beings.
Aibo this, Aibo that. I really can't hear it anymore. Aibo is just a silly plastic toy for the kids of people who have far too much money and don't know what to do with it. And it's about as intelligent as Weizenbaum's ELIZA.


-- Does the shortest thing the tallest pyramid's support supports support anything green?
Patrick H. Winston, Artificial Intelligence
If humans are conscious, why not robots? (2.77 / 9) (#42)
by smg on Sun Jun 20, 2004 at 07:04:18 AM EST

Humans are just organic "machines" evolved to perform specific, concrete tasks (eat, learn, socialize, procreate).

There is no phenomenon, process or material in the human brain that does not exist in the rest of the universe. Nor is there anything in any animal's nervous tissue that is particularly unique. It's all just electricity, neurotransmitters, ions and cells.

If you accept that a chunk of fatty, soft organic matter can be responsible for consciousness then how can you rationally argue that a chunk of conductive silicon cannot also create consciousness?

What is the difference?

Please don't reply with "But humans have souls!". I respect your belief, but I can't really argue with a theory that, by definition, has no physical evidence behind it.

heh (none / 2) (#44)
by Battle Troll on Sun Jun 20, 2004 at 08:05:03 AM EST

I respect your belief, but I can't really argue with a theory that, by definition, has no physical evidence behind it.

IHNBT.
--
Skarphedinn was carrying the axe with which he had killed Thrainn Sigfusson and which he called 'Battle Troll.'
Njal's Saga, ca 1280 AD
[ Parent ]

exactly. (none / 0) (#108)
by SocratesGhost on Mon Jun 21, 2004 at 12:18:08 AM EST

Something like 90% of Americans are dualists (and America is considered the most skeptical country on this issue), but no proponent of dualism can say anything.

-Soc
I drank what?


[ Parent ]
scene from introductory philosophy class (none / 1) (#146)
by Battle Troll on Mon Jun 21, 2004 at 10:43:40 AM EST

Student: Blah-de blah blah blah...

Professor (interrupting): You know what you are, Ms. X? You're a Cartesian dualist!

Student: (confused expostulations)
--
Skarphedinn was carrying the axe with which he had killed Thrainn Sigfusson and which he called 'Battle Troll.'
Njal's Saga, ca 1280 AD
[ Parent ]

no difference in theory (none / 0) (#53)
by vqp on Sun Jun 20, 2004 at 11:26:40 AM EST

But the current approach of A.I. sucks.
I'm with Roger Penrose on this: the human mind is not an "organic Turing machine" running some highly developed algorithm, but a massive quantum parallel computer.
This kind of computation cannot be programmed into a general-purpose Turing-like computer, because the solution of each question or decision would take more than polynomial time.


happiness = d(Reality - Expectations) / dt

[ Parent ]
Maybe (none / 1) (#57)
by smg on Sun Jun 20, 2004 at 11:56:40 AM EST

But claiming that the human mind is a non-deterministic Turing machine, something that we haven't developed yet and probably never will (Quantum computers are not NTMs), is an extraordinary claim that requires extraordinary proof.

I think people underestimate how much processing power the human mind has: 20 billion independent neurones with 100 trillion connections. There's nothing I've seen the human mind do that could not be explained by such processing power.

[ Parent ]

You haven't seen anything at all? (none / 2) (#77)
by godix on Sun Jun 20, 2004 at 05:23:53 PM EST

There's nothing I've seen the human mind do that could not be explained by such processing power.

OK, you just explained how Work wrote this article. Care to tell me how having 20 billion independent neurones with 100 trillion connections explains WHY Work wrote it though?

I draw people smiling, dogs running and rainbows. They don't have meetings about rainbows.

[ Parent ]
Mentioned above (none / 1) (#84)
by Irobot on Sun Jun 20, 2004 at 07:51:15 PM EST

I mentioned this in a comment above, and I'm fishing for more information about it (or anything related). I haven't got my hands on this book, so I can't say anything about the content right now. At any rate, it makes claims of analog neural networks that go beyond Turing machines.

As anyone familiar with computational theory knows, there are an infinite number of classes of complexity. My point being, even if a Turing machine isn't enough to yield "mind", what about machines beyond them? Is the claim that humans will never be able to produce such a machine? (Gah, I hate those arguments. As ridiculous as the over-inflated claims made by the AI camp early on. Conscious machines in 50 years, indeed. That's Turing himself, right? I'm too lazy to look up the reference...)

Irobot

The one important thing I have learned over the years is the difference between taking one's work seriously and taking one's self seriously. The first is imperative and the second is disastrous. -- Margot Fonteyn
[ Parent ]

big whoop (none / 0) (#102)
by WorkingEmail on Sun Jun 20, 2004 at 11:40:02 PM EST

The main feature of the human brain is its parallelism. Any one of the foundational mechanisms in use - chemistry, quantum magic, whatever - will become a set of special effects on the scale of many neurons. Human thought is spatially discrete on the scale of multiple neurons, and certainly not on a subcellular scale.


[ Parent ]
Bose-Einstein condensates are discrete on mm scale (none / 0) (#298)
by guidoreichstadter on Sun Jun 27, 2004 at 03:29:45 PM EST

nt


you are human:
no masters,
no slaves.
[ Parent ]
The Problem With Determinism: (none / 0) (#105)
by esrever on Mon Jun 21, 2004 at 12:08:41 AM EST

copied & pasted from here (where it was copied and pasted from somewhere else):

--Please bear in mind that this isn't a comment on You, per se, just a relevant rational deconstruction that bears upon some comments you made...

"""
...
And yes you do believe in magic, the supernatural. You believe that in a mechanistic universe your thoughts, rather than being a function of the inertia of the universe, somehow are reflective of it.

You believe that, somehow, logic exists as an objective reality (and that its laws are universal), rather than as a simple illusion allowing you to entertain the hallucination that your beliefs are 'true' rather than the result of chemical reactions in your brain.

You believe that somehow life is not deterministic, despite your repeated insistence that everything is explicable in terms of natural law. You believe that probability statements are somehow 'laws' that cannot be 'broken.'

You believe, despite the non-existence of a universal mind, that there are somehow 'standards' in epistemology and ethics, such that those who 'violate' such standards are wrong.

You believe that in a meaningless universe, rationality is superior to irrationality (an arbitrary value selection), that there is a distinction between 'good' and 'evil' such that Christians (because of the 'lies' they spread) are evil and atheists, who seek only the human 'good' are for that reason, 'good.'

You believe that, somehow, in a meaningless universe, there is some distinction to be made between 'truth' and 'falsity', which assumes that meaning exists.

You believe (and here I am speaking about inductive logic) that one can know something without having to know everything.

You believe in gods, despite living in a universe which, you say, does not admit of the possibility of gods existing. And you do believe in magic; it's just a different sort of magic. But it's just as supernatural as the magics you claim to abhor.
"""


Audit NTFS permissions on Windows
[ Parent ]

lame criticism (none / 0) (#312)
by klash on Tue Jun 29, 2004 at 08:05:20 PM EST

For a while I've felt the urge to rant against this sort of critique of rationalism and naturalism, and now I have the chance. It's a really cheap attempt to try to drag non-theists into religious debates that by nature we are not interested in.

"But you must believe in something. Science is your God." Theists simply cannot understand the concept of a person who lacks religious belief. Theists cannot imagine looking at a hard question like "how did the world get here?" and saying "I don't really know."

The basic trick is to abuse the word "believe" to encompass both the rationalist's approach and the theist's approach to knowledge. Sure I "believe" in Science and Logic and Rationalism, but not in the same way you "believe" in God.

When a theist "believes" in God, he takes it as Truth that God exists. It is an axiom; not the conclusion of an argument or even a well-defined concept (attempts to get a coherent definition are rebutted with "it's too complicated for us to understand.") It is also not based on any verifiable evidence; no observation can confirm or deny God's existence. This is the essence of theistic belief.

When a naturalist "believes" in Science, it is in the same way you "believe" in a puzzle after you have put it together. The measured observations people have made over the centuries seem to fit together to support the proposition that the physical world will behave in certain ways. Do atoms exist? Well, they seem to, but I don't assert it as Truth. Evidence to the contrary would immediately revoke this "belief." Even more importantly, I won't argue that the existence or non-existence of the atom dictates what is moral, or what I can morally compel you to do.

This is a key characteristic that separates theistic and naturalistic belief. The most offensive part of theism is that theists' beliefs lead them to the conclusion that they have supernatural authority. Because God says that homosexuality is a sin, the theist takes it as his divine right to prosecute sodomy, for example.

There is simply no parallel in naturalism. A naturalist could argue that pollution should be regulated to prevent global warming, based on the naturalistic belief that greenhouse gasses cause global warming, however:

  1. the naturalist's "belief" only goes as far as attempting to predict the effects of human actions
  2. others are free to argue against him, and the naturalist does not hold himself immune to counter-argument the way theists feel immune to arguments that maybe God doesn't think homosexuality is an abomination
Amusingly, this particular critique thumbs its nose at rationalists who believe in standards and the idea of "good," when many anti-rationalist critiques argue the opposite ("If you don't believe in God, then you must believe that murdering babies is OK!!!!!") Why do people have such a hard time wrapping their heads around this? Just because morals don't come from God doesn't mean they don't exist.

Morals and ethics are based on the principle of minimizing the harm of your actions, and pretty much all harm is reducible to emotional and physical pain. It's pretty simple, and the concept of God is not required.

[ Parent ]

An interesting critique, however, rebuttal inside: (none / 1) (#319)
by esrever on Mon Jul 12, 2004 at 11:18:41 PM EST

However, it is deeply, deeply flawed (note that I'm not asserting that the original post doesn't also have its problems, as it most surely does...).

You assert:
"""
There is simply no parallel in naturalism
"""

I take it from this that you have never read 'The Bell Curve'.  I don't believe that any further elaboration is necessary.

You assert:
"""
 It is also not based on any verifiable evidence; no observation can confirm or deny God's existence
"""

Perhaps you would like to document for posterity the first-hand verifiable, replicable method by which evolution occurs?  Or is it not directly verifiable from first-order data, and thus merits no more credence than the creationist who asserts that the complexity of a squid's eye is observable evidence of the existence of an intelligent creator?

You assert:
"""
Morals and ethics are based on the principle of minimizing the harm of your actions
"""

Says who; you?  You fall into the trap of every moral relativist (and, ironically, the same thing you accuse theists of), where you assert that there is one true source of moral knowledge and authority; in this case, yourself.


Audit NTFS permissions on Windows
[ Parent ]

How do you know... (none / 0) (#172)
by mcgrew on Mon Jun 21, 2004 at 08:32:42 PM EST

That a rock doesn't feel pain when you break it?

Stupid question, no? Same as this article.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

Why AI is going nowhere (2.77 / 9) (#45)
by localroger on Sun Jun 20, 2004 at 08:49:24 AM EST

Sadly, this inverted-behaviorist argument seems to be the state of the art in AI these days. All over the world researchers are busily writing code to imitate what people and animals do whenever they are in X situation, thinking that if they cover a large enough set of X's they will get something useful.

Look at the abject disaster that was this year's DARPA challenge and you will see how this approach fails. One team member stated over on /. that only one vehicle in the race (CMU's) was able to recognize a pothole. WTF? You guys need to get out of the lab more. It's a big old complicated world out there and describing it this way is (a) hard and (b) not the way living things do it.

There will be a market for robots that are human enough to manipulate our emotions, like the ones portrayed in the movie AI, but let's face it, ethical considerations aren't going to apply. No matter how attached you get to your Barbie doll or Roomba, when it breaks you sigh and chuck it. You don't have a funeral and bury it in the back yard with a big rock for a memorial stone.

Now, with this said, I do think that strong AI is possible and will be developed one day. By this I mean machines which will mimic the actual processes encoded by nervous systems, so that they won't be programmed to develop specific behaviors but those familiar behaviors will emerge, just as they do in living things, from a natural interaction between the machine and its environment. In my opinion, such a machine would be just as alive as an animal and deserving of the same consideration.

Realistically, though, I doubt a lot of my fellow humans would agree, and it would probably suck to be that machine.

On the other hand we would probably find such machines extremely useful, because they would share our ability to adapt to new situations and environments. The problem, as people from Asimov to Yudkowsky and even myself have pointed out, is that if you fuck it up the resulting picture is not pretty. You probably won't get what you expect, because it does act like a living thing, and you're a lot more likely to get Skynet than Prime Intellect by mistake.

What will people of the future think of us? Will they say, as Roger Williams said of some of the Massachusetts Indians, that we were wolves with the min

Years and years (none / 0) (#48)
by QuickFox on Sun Jun 20, 2004 at 10:11:59 AM EST

By this I mean machines which will mimic the actual processes encoded by nervous systems, so that they won't be programmed to develop specific behaviors but those familiar behaviors will emerge, just as they do in living things, from a natural interaction between the machine and its environment.

Sounds very similar to a human. One thing people tend to forget in these discussions is that it takes years and years to raise a human...

Give a man a fish and he eats for one day. Teach him how to fish, and though he'll eat for a lifetime, he'll call you a miser for not giving him your fi
[ Parent ]

Long Childhood (2.80 / 5) (#50)
by localroger on Sun Jun 20, 2004 at 10:41:40 AM EST

The long childhood is one of those mysterious things that are uniquely human. Humans develop s-l-o-w-l-y compared to other animals. This may be partly because we're born at a highly undeveloped stage in order to get our heads out of Mom without killing her, and partly it's because compared to other animals we have hardly any instinctive behavior at all, so we have a lot of learning to do.

One thing often missed in AI discussions is that we will almost certainly develop the equivalent of a living animal before we develop the equivalent of a living human being. These could be very useful; consider the uses we put actual animals to. In fact, the DARPA challenge is really all about creating an artificial animal that can travel competently.

And living animals capable of doing very useful things do not take years and years to develop. There is no reason to assume it will take years and years to train a machine whose purpose is, to take one example, to identify human targets and shoot at them. Of course if you let such a machine out of the lab you're on the road to creating Skynet, but there have always been people who aren't bothered by little considerations like that.

What will people of the future think of us? Will they say, as Roger Williams said of some of the Massachusetts Indians, that we were wolves with the minds of men?
[ Parent ]

A long time once, then just copy (none / 1) (#56)
by jongleur on Sun Jun 20, 2004 at 11:52:53 AM EST

Even if it were to take a long time, it would only need to happen once. Then you could just copy the thing's brain into your production batch. Of course they can still learn on their own.
--
"If you can't imagine a better way let silence bury you" - Midnight Oil
[ Parent ]
People are indeed taking that approach (none / 0) (#62)
by jongleur on Sun Jun 20, 2004 at 12:07:21 PM EST

The ISAB is explicitly about the idea of using animals and animal principles as models for robots and building up from there.

This guy and his co-authors (and a few others) are working on getting machines to bootstrap so to speak.

It's very early days on all fronts unfortunately. Or fortunately, if you want to get in and make a contribution.
--
"If you can't imagine a better way let silence bury you" - Midnight Oil
[ Parent ]

I don't know anyone doing it that way (none / 2) (#66)
by Work on Sun Jun 20, 2004 at 01:57:13 PM EST

At least not the way you describe it. Some older AI folks still seem to cling to the idea that you need a rule for everything, but it doesn't work well, and not in the real time needed for mobile robots.

Nobody around here thought the DARPA grand challenge would be anything less than a failure. I'm rather surprised CMU did as well as they did.

The problem with 'not seeing potholes' isn't that the software wasn't there - it's the sensors. Sensors are still dodgy today. Lasers work an order of magnitude better than the sonars of 10 years ago, but they still only scan in planes. I think CMU worked up some kind of mechanical scanning system with theirs, but you're still talking returns of a few hertz.
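To see why a single-plane scanner is blind to potholes, consider a toy conversion of one sweep into points (Python; the field of view and mounting height are invented numbers, not any real scanner's specs):

    import math

    def scan_to_points(ranges, fov_deg=180.0, mount_height_m=0.5):
        # Each range return lies in one horizontal slice at the mounting
        # height, so z is constant: a hole below the scan plane simply
        # never produces a return. Assumes at least two beams.
        n = len(ranges)
        start = -math.radians(fov_deg) / 2
        step = math.radians(fov_deg) / (n - 1)
        return [(r * math.cos(start + i * step),
                 r * math.sin(start + i * step),
                 mount_height_m)
                for i, r in enumerate(ranges)]

Nothing in the returned data can distinguish 'flat road' from 'road with a pothole'; the geometry of the sensor throws that information away before any software sees it.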

The future of robot sensors is vision, which is improving steadily. Most commercial vision systems today are crap; you're better off using a laser. I have seen some in the lab, though, that are really quite good - very fast and very accurate. So far the only catch with them has been the need for structured light instead of natural light, but once the basic algorithm is there, it's a matter of more time and research to apply it to more natural lighting.
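For what it's worth, the reason structured light works so well is that it reduces depth recovery to simple triangulation. A hedged sketch (the focal length and baseline are made-up calibration constants; real calibration is the hard part):

    def depth_from_stripe(disparity_px, focal_px=600.0, baseline_m=0.10):
        # Project a known laser stripe, find where it lands in the camera
        # image, and recover depth from the pixel offset: z = f * b / d.
        if disparity_px <= 0:
            return float("inf")  # stripe at or beyond the horizon
        return focal_px * baseline_m / disparity_px

With natural light you lose the known stripe, and the correspondence problem gets much harder - hence the extra research.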

And once that happens in a handful of years... well you'll start getting really good deals on laser scanners at surplus auctions. :)

[ Parent ]

These ethical questions concerning A.I and man (none / 2) (#47)
by spooky wookie on Sun Jun 20, 2004 at 10:06:36 AM EST

is really nothing new. I think that anything remotely sentient will be treated as trash by humans, unless we can first overcome some basic ethical questions.

For example, a completely brain-damaged individual has more rights than an intelligent ape. No one would argue we should use brain-damaged people for testing medicine, etc.

So why should we treat "machines" differently? They will probably get the same treatment as animals for a long time. Then slaves. History is repeating!

IAWTP (none / 2) (#87)
by livus on Sun Jun 20, 2004 at 08:23:59 PM EST

debate rages on the treatment of that little semi-sentient blob of slime known as a human fetus, yet the same people who get all worked up over it happily consume the muscles of far more sentient and aware organisms who were killed barbarically after a life of inhumane confinement.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]
But... (none / 1) (#173)
by mcgrew on Mon Jun 21, 2004 at 08:34:43 PM EST

more sentient and aware organisms taste better!

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

nothing tastes better than fetus! (none / 1) (#183)
by livus on Mon Jun 21, 2004 at 10:06:05 PM EST

Or scallops, for that matter...

I know a guy who says hare tastes better than rabbit because it's smarter. We worry he's becoming a cannibal.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

Veal nt (none / 0) (#236)
by mcgrew on Tue Jun 22, 2004 at 07:41:11 PM EST


"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

"baby calf kept in the dark"? (none / 1) (#239)
by livus on Tue Jun 22, 2004 at 10:00:18 PM EST

 I know someone who calls it that.

You're right, veal tastes fantastic - but then again, a calf is perhaps not as smart as an adult cow.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

What ethical questions? (2.80 / 5) (#54)
by godix on Sun Jun 20, 2004 at 11:31:57 AM EST

Copying pain avoidance techniques is not the same thing as feeling pain. Pattern matching is not the same as thinking. Mimicking life is not the same thing as being alive. Right now AI poses no more ethical questions than quitting Conway's Game of Life does.
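(For the unfamiliar: the Game of Life is a cellular automaton whose apparently rich behavior comes from a handful of fixed rules. A minimal Python sketch, to show how little is going on inside:)

    from collections import Counter

    def life_step(live):
        # One generation over a set of live (x, y) cells: count each
        # cell's live neighbors, then apply the fixed birth/survival rules.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    life_step(glider)  # the same glider, one step later

Quitting mid-run destroys all that apparent activity, and nobody mourns.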

I'll change my mind on this only after I encounter an AI that can debate me on these points with no more programming than the type of avoidance and pattern recognition you're talking about.

I draw people smiling, dogs running and rainbows. They don't have meetings about rainbows.

Can a dog debate you? (none / 1) (#55)
by topynate on Sun Jun 20, 2004 at 11:48:43 AM EST

Nope. But nevertheless, it feels pain. Neuroscience may not yet have uncovered everything there is to know about senses, but it's not hard to show that the structures that receive pain signals are the same in dogs and humans.

Is this to say that an artificial organism would have to model these structures to be judged to feel pain, though?


"...identifying authors with their works is a feckless game. Simply to go by their books, Agatha Christie is a mass murderess, while William Buckley is a practicing Christian." --Gore Vidal
[ Parent ]

Is a dog designed to emulate humans? (none / 0) (#67)
by godix on Sun Jun 20, 2004 at 02:01:42 PM EST

AI is specifically designed to emulate human behavior, although at the moment it's very basic human behavior like 'don't walk into walls'. As such it should be evaluated using standards for humans. A dog isn't designed to emulate humans, so you use different standards in evaluating what is moral to do to a dog.

Besides, a dog can fetch, run, bark, etc. without being taught. Any lifeform can do certain things without instructions. An AI can do NOTHING without instructions. That's a pretty good indication that AI is not equal to life, and if it ain't alive there really aren't any moral questions about breaking it.

I draw people smiling, dogs running and rainbows. They don't have meetings about rainbows.

[ Parent ]

heh (none / 1) (#68)
by Work on Sun Jun 20, 2004 at 02:29:28 PM EST

AI is specifically design to emulate human behavior

Why do you think this? I certainly don't design things to emulate strictly human behavior :) After all, the machines I work with look nothing like humans. Or animals for that matter. Nonetheless, the goal is to have their behavior comparable to animals, and much further off, to humans.

Perhaps you are one of those people who still have that rather egocentric view that 'intelligence == human -> not human == not intelligent'

Any lifeform can do certain things without instructions.

I think it's rather rash to make this statement without understanding the fundamental nature of life itself - which we don't, at least not very well. But 'life' and machines work on very different paradigms and platforms, so who are you to say life doesn't have the equivalent of instructions buried within? After all, how else do you get a pile of water and common minerals to be born with certain reactions - instincts? If anything, I would say instincts are the equivalent of reaction rules.
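To illustrate that framing (a toy sketch of mine, not a claim about actual biology): an 'instinct' behaves like a reaction rule that is present at power-on, while learning just adds more rules of the same kind afterward.

    class Agent:
        def __init__(self):
            # Innate: present from "birth", never taught.
            self.rules = {"looming_shadow": "flinch"}

        def learn(self, stimulus, response):
            # Acquired: same mechanism, later origin.
            self.rules[stimulus] = response

        def react(self, stimulus):
            return self.rules.get(stimulus)

On this view the innate/learned distinction is about where a rule came from, not about what kind of thing it is.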

[ Parent ]

You are also making assumptions (none / 0) (#74)
by KrispyKringle on Sun Jun 20, 2004 at 04:12:53 PM EST

I agree that the point of AI may not be to make something human-like, but that still does not prove that AI can or ever will feel pain (incidentally, the view you describe would not be ``egocentric'', but rather anthropocentric, I should think ;).

But regardless, as you point out, life itself may indeed be much more similar to machinery than we think; pain may not be pain, it may simply be a preprogrammed damage-avoidance reaction. In that case, have we proved that machines equipped to sense damage can sense pain, that morally it is wrong then to cause them damage? Or merely that humans don't feel pain, either, and there is nothing morally wrong with damaging them?

I hope I make myself clear; the point I am making is that if you were to prove your hypothetical - that humans are no more than machinery - it would certainly not logically follow that damaging machinery is wrong, but it may follow that damaging humans is not wrong. Which is certainly a proposition none of us wants.

Which leads us back to the question of what the difference is between being in a loop recognizing physical damage, and being in a state of noticing physical pain. Or, what the difference is between life and machines. So, yeah. I don't have an answer for you.

[ Parent ]

Behaviorism doesn't explain life (none / 1) (#76)
by godix on Sun Jun 20, 2004 at 05:10:39 PM EST

Nonetheless, the goal is to have their behavior comparable to animals, and much further off, to humans.

Exactly, the end goal is to emulate humans. Some toys try emulating animals but most true AI research is trying to emulate some form of human behavior.
Perhaps you are one of those people who still have that rather egocentric view that 'intelligence = human -> not human = not intelligent'

Depends on your definition of intelligence. I wasn't speaking of intelligence though, I was speaking of being alive.
But 'life' and machines work on very different paradigms and platforms, so who are you to say life doesnt have the equivalent of instructions buried within?

Despite what Behaviorists like to believe, life is a lot more complex than following learned behavior or hardcoded instincts. Behaviorism can't explain how Mozart made his music. It can't explain how Einstein formulated the rules of relativity. It doesn't explain why you and I feel it's worth the effort to type messages to each other, much less what these messages say. It can't even fully explain why a person gets sad, mad, depressed, angry, etc. I don't truly understand how all this fundamental stuff of life works, but I don't need to in order to recognize that AIs don't have it and AI researchers aren't even trying to achieve it. If AI ever becomes more than repeatable and pre-programmed instructions I may worry about the morality of it all; until that day, though, it's just a machine, and it means nothing (morally speaking) to break it.

I draw people smiling, dogs running and rainbows. They don't have meetings about rainbows.

[ Parent ]
heh (none / 0) (#85)
by Work on Sun Jun 20, 2004 at 08:11:57 PM EST

Some toys try emulating animals but most true AI research is trying to emulate some form of human behavior.

Ah yes, and then there is all that insidious false AI research us roboticists do...

Behaviorism can't explain how Mozart made his music. It can't explain how Einstein formulated the rules of relativity. It doesn't explain why you and I feel it's worth the effort to type messages to each other much less what these messages say.

I don't think inborn instincts explain this either. All of what you've described certainly seems to be learned traits.

I don't truely understand how all this fundamental stuff of life works but I don't need to in order to recognize that AI's don't have it and AI researchers aren't even trying to achieve it.

I don't believe that composing great symphonies is exactly fundamental to life. While it's fanciful to suggest that perhaps AI will one day achieve this, it's a bit far off to contemplate. My piece deals with issues of the now and their similarities to situations we have already defined moral rules for.

[ Parent ]

Uh, (none / 1) (#135)
by jeremyn on Mon Jun 21, 2004 at 05:54:43 AM EST

Perhaps you are one of those people who still have that rather egocentric view that 'intelligence == human -> not human == not intelligent'
Yes. If it can't troll, it's not smart.

[ Parent ]
human behavior (none / 0) (#171)
by mcgrew on Mon Jun 21, 2004 at 08:28:13 PM EST

Walk into walls indeed...

One night Evil-X and I went out to a company thing that we didn't have to pay for. She had six doubles, and puked on a very expensive carpet in a very expensive restaurant's lobby.

After we got home, she staggered into a tree, head first.

"Stupid damned tree" she said, and walked into it again.

"Get out of my fucking way!" she screamed as she ran into it a third time.

I gently steered her around the tree, laughing.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

Lesson of the day (none / 0) (#197)
by godix on Tue Jun 22, 2004 at 12:06:58 AM EST

Don't marry subhumans. And don't even try to claim your ex isn't subhuman; I've read your diaries...

I draw people smiling, dogs running and rainbows. They don't have meetings about rainbows.

[ Parent ]
No argument from me about that! (none / 0) (#235)
by mcgrew on Tue Jun 22, 2004 at 07:39:42 PM EST

Marrying it was the dumbest thing I've ever done. Maybe the dumbest thing anybody has ever done.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

Pretty much, yeah (none / 0) (#215)
by epepke on Tue Jun 22, 2004 at 01:24:18 PM EST

Dogs are "designed" to emulate human behavior in that humans applied artificial select to them for tens of thousands of years, to the point where a fairly subtle variety of facial expressions are mutually comprehensible between the species. My Dalmatian attempts to smile in greeting, although of course she lacks the musculature to do a proper smile, so it comes out as a sneer. I've learned from reading that this is common amongst Dalmatians and is called a Dal smile. It's an extremely unnatural gesture, nothing like the aggressive teeth baring or even the submissive proto-smile that chimpanzees do, but it's clearly used in the contexts where a human would smile. Dogs also have "that doggie look" down pat, which beggar children in Tijuana and the old Dondi comic strip exemplifiy.

By contrast, cats can be swell, cuddly little creatures, but they are much more alien than dogs. Parrots can show astonishingly high intelligence but are also alien. I've even known some pretty jolly chimpanzees in my life (I grew up not far from the Flying Wallendas), but none of them seem human in the way that dogs do.


The truth may be out there, but lies are inside your head.--Terry Pratchett


[ Parent ]
No Man's Land (none / 3) (#98)
by teece on Sun Jun 20, 2004 at 10:50:24 PM EST

Copying pain avoidance techniques is not the same thing as feeling pain.

While I agree with you that currently none of our AI efforts have created a robot near 'life,' I don't think you are on as stable ground as you think you are with that statement, and the corresponding belief.

What is pain?  You know what it feels like to feel pain.  But do you have any way of ever feeling what it is like for me to feel pain?  Can you ever know anything other than the 'feeling' of pain?  Can  you really know what the hell pain is?  Even if you define it precisely in terms of biochemistry and physics, what has that gotten you?

You end up at one of two places when you examine sentience: it is either endowed by some supernatural creator, and beyond reason; or it is simply the result of complicated, self-aware machines.

Ultimately a robot that successfully mimics a human being's behaviour in every detail is a human being.  Period.  Any other answer puts you in the camp that says a human being is endowed with a soul from some supernatural power.

-- Hello_World.c, 17 Errors, 31 Warnings...
[ Parent ]

I can't answer your questions ... (none / 2) (#107)
by godix on Mon Jun 21, 2004 at 12:17:37 AM EST

... but I can cast a different light on them. I'm sure you've seen enough TV shows or movies that at least once you've seen a character being tortured. If you're morbid you may have seen real honest-to-god footage of someone being tortured. Now if the actor is good, then you couldn't tell the difference between the fictional torture and the real one. Does this mean that the actor actually suffered the same pain as someone honestly being tortured? Is the director as morally wrong as the people who inflict this pain for real? Or is the actor's show of pain just pretend and not a reliable indication he's actually feeling pain? Being able to mimic a human feeling or behavior is not the same as actually experiencing a human feeling or behavior. Assuming AIs eventually can convincingly show pain or other human emotions, it's not any more real than an actor portraying his character in pain.

Or we can look at it this way. A human can be told not to touch a hot stove, but many humans have to learn that lesson the hard way. A robot can be told not to touch a hot stove and it won't touch a hot stove, ever. Both the human and the robot then show the same reaction to hot stoves: they avoid touching them. Despite showing the same reaction, their motives are different. The human has learned and knows that hot stoves hurt, while the robot knows nothing more than that it was instructed to avoid hot stoves; if that instruction is removed, it'd have no hesitation about hot stoves at all. Watching the two of them, you wouldn't be able to tell the human and the robot apart by their actions, but despite that, the robot clearly isn't showing that behavior for the same reasons as the human.
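The contrast can be put in code (a toy sketch of mine, not from any actual AI system): the robot's caution is a table entry someone can delete, while the human's was bought with experience.

    FORBIDDEN = {"hot_stove"}  # the robot's instruction

    def robot_will_touch(obj):
        # Delete "hot_stove" from FORBIDDEN and every trace of
        # hesitation vanishes with it.
        return obj not in FORBIDDEN

    class Human:
        def __init__(self):
            self.burned_by = set()

        def touch(self, obj, got_burned=False):
            if obj in self.burned_by:
                return False  # learned the hard way; not easily unlearned
            if got_burned:
                self.burned_by.add(obj)  # the lesson writes itself
            return True

Identical outward behavior, entirely different provenance.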

Hopefully you can see why I see a difference between mimicking and living. I don't even require there to be a soul to see this difference, which is good because I'm agnostic and don't really believe in souls anyway.

I draw people smiling, dogs running and rainbows. They don't have meetings about rainbows.

[ Parent ]

Pain is an electrochemical reaction (none / 0) (#170)
by mcgrew on Mon Jun 21, 2004 at 08:23:37 PM EST

I'm confident that your pain and mine and your dog's (since we all share about 95% of the same DNA codebase) may not be identical, but pretty damned close.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

OK, but (none / 0) (#196)
by teece on Mon Jun 21, 2004 at 11:42:38 PM EST

I'm confident that your pain and mine and your dog's (since we all share a 95% same DNA codebase) may not be identical, but pretty damned close.

I agree with you completely.  Now say we get a really good grip on the neurochemistry of pain (I don't know if we have that, but assume we do).  Now, say you go to the dictionary and look up 'pain.'

If the entry lists nothing other than the physical, neurochemical definition of pain, will it be useful? No, I don't think it will. A feeling is ultimately related to that physicality, but the physical aspect does no justice to describing what pain is to a human being.

And the perception of pain is different for every person.  The same knife used to create the same kind of cut on the same place on your body versus my body can elicit very different responses to the pain.

So while it may seem somehow 'empty' or incomplete to create a robot with a version of pain that is simply a 'harm avoidance subroutine' or the like, I think at the end of the day that's all our pain is, too.

And that was my only point by that.

-- Hello_World.c, 17 Errors, 31 Warnings...
[ Parent ]

Foundational issues (3.00 / 8) (#63)
by Irobot on Sun Jun 20, 2004 at 12:56:22 PM EST

Odd to come across this now, as I'm supposed to implement some rudimentary map-making behaviors for the AAAI conference in July. Let me put a disclaimer on this: I am a proponent of strong AI.

It strikes me as odd that in making your argument -- which is not really an argument at all, but instead seems to be an attempt to simply invite responses -- you ignore the well-known human trait presented by Masahiro Mori (the "uncanny valley"). See, in my opinion, until humans view robots as human-like, with human-like capacities, there is no real ethical question. (In a sense, this echoes the Turing test. However, to me, ethical questions are not about intelligence so much as emotion. In some regards, any inference engine is intelligent; it just isn't human-like.) In other words, at least for the lay person, so long as there is a clear distinction between human and machine behaviorally, there will be no ethical considerations. Ferchrissakes - humans seem to have a difficult enough time treating other humans ethically; there are very few ethical questions that are not up for debate, and those that aren't are violated time and time again.

But fine. Ignore the lay person's POV and consult the researcher's opinions themselves. To reference the AIBO in this context without mentioning how it simulates emotions and such is a serious oversight. On a more substantial level, no mention of John McCarthy's stance? (Sorry - I don't have a link to the actual paper.) Or Minsky? What about Sloman? And this ignores the raging debate in the philosophy of mind, including the New School, the most recent book by Fodor that attempts a rebuttal, the work of Jaegwon Kim, and neuroscientists like Edelman. Here's my point: until there is some justification for thinking a robot that is more than a mere machine is even possible, ethics are a moot point. The proof is in the pudding, so to speak. And, as an AI researcher, I'm often embarrassed by the premature claims made in the field up to this point. Ethical considerations? Bah - do I feel bad about putting my calculator in my bag such that it's starving for power?

To me, raising ethical questions like this is going to require not only the design and implementation of the robot under consideration, but a thorough understanding of how sentience works in the first place. Referring back to Mori's findings, unless the robot is convincingly sentient, the ability to pull the plug is enough to maintain the ethical boundary between man and machine. On the other hand, consider the ethics involved with the cute animal argument. People may feel qualms about unplugging a machine that they feel emotionally attached to; however, as a robot designer, I can tell you exactly how the robot works. (I'm purposely ignoring the emergence argument, as I've not yet seen a convincing description of what that even means. In essence, it seems to be a form of mysticism. I once thought it made sense, but changed my mind. Any good explanation or defense of the idea is welcome.) So long as the robot designer can account for the inner-workings of the robot to any level of detail, the robot will remain a machine only, and not garner ethical considerations.

Not to get all post-modernist on yer ass, but humans have an amazing ability to see "otherness" as they look around. So long as machines are the "other", ethics will not be a concern.
Irobot

The one important thing I have learned over the years is the difference between taking one's work seriously and taking one's self seriously. The first is imperative and the second is disastrous. -- Margot Fonteyn

thoughts (none / 1) (#65)
by Work on Sun Jun 20, 2004 at 01:37:34 PM EST

As far as Mori goes, I didn't address it because it's irrelevant to the piece. I don't try to equate the robots of today and the near future with humans, because they aren't. I purposely compare them to animals for a reason - humans don't just have morals regarding other humans, we have them regarding animals as well. When robot intelligence begins to approach that of animals, I believe a moral issue starts to emerge. I don't try to convince anyone what they should think, but rather point out that it's there and let them decide for themselves.

The proof is in the pudding, so to speak. And, as an AI researcher, I'm often embarrassed by the premature claims made in the field up to this point. Ethical considerations? Bah - do I feel bad about putting my calculator in my bag such that it's starving for power?

I think there is a big difference between a calculator and a robot capable of a form of learning, intelligent navigation and intelligent interaction with its physical surroundings. I don't think you need to read Minsky's thoughts on rule-based AI to start thinking about the ethical implications of the treatment of AI.

And I also don't think you need to design the perfect, super-intelligent mythical machine before you start thinking about it either.

So long as the robot designer can account for the inner-workings of the robot to any level of detail, the robot will remain a machine only, and not garner ethical considerations.

I think this is really flawed. So the more we know about them, the less worthy they are of ethical consideration? How do you reason this? What happens when we finally figure out how the human brain really works? Or do you think that some things are "beyond" human understanding? What kind of science is that?

[ Parent ]

Responses (none / 3) (#71)
by Irobot on Sun Jun 20, 2004 at 03:12:17 PM EST

So, I was thinking about what I posted before reading your response, and I'm not sure I'm happy with it (my post, that is). Time is short, and it was a rather scattered approach to an answer/critique. Keep that in mind, I beg of you - it wasn't meant to be as coherent or well thought out as a paper. Instead, it was a smattering of comments meant to induce either additional content or further discussion. Which, since you replied, was a success.
I purposely compare them to animals for a reason - humans don't just have morals regarding other humans, we have them regarding animals as well. When robot intelligence begins to approach that of animals, I believe a moral issue starts to emerge.
Yes, then no - in my opinion. You're right about animals. My claim, which is summed up at the end of my post, is that ethical (not, IMO, moral - by my definition, ethics leans toward subjective rules, while morality is more objectively defined) issues do not arise so long as there is either: 1. no agreed-upon (by whatever standard) similarity between the operations of living beings and machines, or 2. some agreed-upon definition of intelligence (your word; I'd have chosen something different, as considering a snail, cockroach, etc. intelligent seems odd) that a robot meets. As neither exists, I don't think ethical questions have any meaning beyond mental gymnastics. Which, I should point out, is odd, given that I do believe in strong AI. I suppose what it comes down to is that a discussion of AI ethics implies unsupported belief in the strong claims of AI researchers that I find so embarrassing. While I hold those beliefs myself, I also recognize that it's a form of faith that is pretty much beyond meaningful discussion at this point in time.
I think there is a big difference between a calculator and a robot capable of a form of learning, intelligent navigation and intelligent interaction with its physical surroundings. I don't think you need to read minsky's thoughts on rule-based AI to start thinking about the ethical implications of the treatment of AI. And I also don't think you need to design the perfect, super-intelligent mythical machine before you start thinking about it either.
Yes, the calculator quip was a rhetorical device, as was the post-modernist remark. Let me try to put it this way: I'm attempting to walk the line between theory and practice. The two are not mutually exclusive, but form a feedback loop. Or, if you have a philosophical bent, a hermeneutic circle. As for theory, we can discuss robot ethics as well as any other metaphysical topic that may or may not have an answer. I tend to shy away from such discussions, as my experience tells me they result in something akin to anti-negotiations: everyone walks away dissatisfied with the outcome. Now, in practice - it seems to me that so long as humans can make a distinction between animal and machine, they will. Which renders ethical consideration moot.
So long as the robot designer can account for the inner-workings of the robot to any level of detail, the robot will remain a machine only, and not garner ethical considerations.
I think this is really flawed. So the more we know the less worthy they are of ethical considerations? How do you reason this? What happens when we finally figure out how the human brain really works? Or do you think that some things are "beyond" human understanding? What kind of science is that?
No - you misunderstand (at least part of) my point. If we figure out how a brain works and that machines can also work that way - and I believe we will, unlike, say, Colin McGinn, Hubert Dreyfus, Searle, or others - then we have a justification for ethical considerations. In other words, there is a threshold that has to be met first. We have ethics as concerns animals. If we can show that machines are analogous to animals, then to remain consistent we either have to grant that machines deserve ethical consideration or suspend ethical consideration of animals. I'm going with the former. But until that question is settled, robot ethics is similar to considering the ethics of coal-mining - from the coal's point of view. Furthermore, this entire discussion is most decidedly not one of science, but of philosophy.

Disregarding my position that a serious discussion of AI ethics is misplaced, here's what I'd want to make this article better:

  • An explicit statement that we'll assume robots can achieve sentience/intelligence/animal-like being.
  • More discussion of the AIBO's behaviors. Talk about Brooks, mention Braitenberg's Vehicles, anything that provides more grounding/justification for making the jump from ethics of animals to ethics of machines.
  • Either remove the part about robots taking humans' jobs away (you should probably include a line or link regarding the Luddites, at any rate), or expand on some other considerations.
  • Since you're making the claim that this is a new consideration, it seems a travesty not to mention all the efforts that have already gone into the topic. And there are lots; I've just mentioned a few.
Writing an article is hard, and you have my utmost respect for making the attempt. I've never gotten off my ass to do it myself. This is just an attempt to make your article better, as it's a topic in which I'm interested.

Irobot

The one important thing I have learned over the years is the difference between taking one's work seriously and taking one's self seriously. The first is imperative and the second is disastrous. -- Margot Fonteyn
[ Parent ]

more thoughts (none / 0) (#88)
by Work on Sun Jun 20, 2004 at 08:25:55 PM EST

It seems to constantly return to the question of what 'intelligence' is. That's a hard word to define - my thoughts on it vary from time to time.

I tend to think this way though: Intelligence is a capacity to learn.

I like that definition because it includes animals. The more we learn about them, the more intelligent they seem. Take that recent article about a border collie with visual and word recognition capabilities similar to those of a human 3-year-old. We consider the human 3-year-old to be intelligent - and we should consider the border collie intelligent as well.

The downside to it, though, is that it's a bit broad. Well, that, and you then need to define 'learn'. Does a computer that saves data learn? I keep thinking some kind of clarification regarding sensory input from one's environment needs to be added. A good wording escapes me, though.

This would also exclude Braitenberg's vehicles, as they're hardwired reactive. And it would exclude insects as well, which have been shown to be mostly hardwired reactive too.
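For reference, a Braitenberg vehicle's entire 'nervous system' fits in a couple of lines. Here is a hedged sketch of the 'fear' wiring (vehicle 2a), with an invented gain constant:

    def braitenberg_fear(left_light, right_light, gain=1.0):
        # Each light sensor drives the wheel on its own side, so the
        # brighter side speeds up and the vehicle turns away from the
        # light. No state, nothing ever updates: hardwired reactive,
        # hence no learning under the definition above.
        return gain * left_light, gain * right_light  # (left, right) wheel speeds

There is nothing in there that could ever change with experience, which is exactly why it falls outside a learning-based definition of intelligence.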

Perhaps the best definition of intelligence is the ability to be more than the sum of one's parts :)

[ Parent ]

Yeah... (none / 1) (#92)
by Irobot on Sun Jun 20, 2004 at 08:52:28 PM EST

I've come to the conclusion that "intelligence" is borderline semantically null. It can mean so many things that it just adds to the confusion. Hence, the need to provide some ground rules. As you point out, the same with "learning".
This would also exclude Braitenberg's vehicles as they're hardwired reactive. And it would exclude insects as well, which have shown that they are mostly hardwired reactive as well.
But I'm not sure you can make such a clean distinction. It seems a continuum to me, and a messy one at that.
Perhaps the best definition of intelligence is the ability to be more than the sum of one's parts :)
And that gets into emergence, of which I've yet to be convinced. Sometimes it seems so obviously true; yet when it's broken down to a form that's clear, it just falls apart.

Irobot

The one important thing I have learned over the years is the difference between taking one's work seriously and taking one's self seriously. The first is imperative and the second is disastrous. -- Margot Fonteyn
[ Parent ]

Sorry to barge in (none / 1) (#100)
by GenerationY on Sun Jun 20, 2004 at 11:04:52 PM EST

But I don't like the ability to learn as a working definition of intelligence either. It's just too general a trait. More or less any living thing can learn, and so can many inanimate things. It depends where you draw the line, really. But I certainly don't think classical conditioning/associative learning etc. equals intelligence; after all, Kandel has shown you can get some quite sophisticated learning behaviours out of a very small number of cells in vivo. These would be sufficient to at least model many of the learning phenomena we can evoke in avians and rodents.

I'd be happier to see intelligence defined as something closer to 'the ability to synthesise new rules from extant learning'. The exact wording is tricky, but something along those lines, I think.

I think the major problem in discussing these terms (intelligence, consciousness, even memory and attention, for which we do otherwise have formal definitions) is that they probably constitute, to at least some extent, reifications (that is, the process of regarding something abstract as a material entity). If you were to plot an intellectual map, you'd have to write "There be dragons!" in big letters over this part.
 

[ Parent ]

Then... (none / 0) (#80)
by Znork on Sun Jun 20, 2004 at 06:44:47 PM EST

"What happens when we finally figure out how the human brain really works? Or do you think that some things are "beyond" human understanding?"

... we get to have the ethical debate about the concept of free will. Once the human mind is completely understood and deterministically emulatable that will be an interesting one. I suspect it will be a much more difficult and painful debate than the eventual AI sentience debate.

[ Parent ]

Well put (none / 0) (#81)
by Irobot on Sun Jun 20, 2004 at 07:03:01 PM EST

That's the question Fodor is trying to answer (in the negative). I'm not sure I agree. Although I'm quite sure I have all the nuances of the argument either. Is purely syntactic manipulation enough? Can computationalism give rise to consciousness?

I just recently heard of a book, which I haven't gotten ahold of yet, that proposes computation beyond Turing machines. Fascinating stuff. At least, hopefully...

Irobot

The one important thing I have learned over the years is the difference between taking one's work seriously and taking one's self seriously. The first is imperative and the second is disastrous. -- Margot Fonteyn
[ Parent ]

Not. Do NOT have... (none / 0) (#285)
by Irobot on Fri Jun 25, 2004 at 09:01:05 AM EST

That should read:
I'm not sure I agree. Although I'm quite sure I DO NOT have all the nuances of the argument either.

Irobot

The one important thing I have learned over the years is the difference between taking one's work seriously and taking one's self seriously. The first is imperative and the second is disastrous. -- Margot Fonteyn
[ Parent ]

The brain and ethics. (none / 0) (#295)
by zeigenfus on Sat Jun 26, 2004 at 08:35:52 PM EST

If we approach the argument from a cause-and-effect perspective, it can be quite a bit easier to narrow down some criteria for defining a machine as worthy of ethical consideration. It seems probable that moral systems are developed to promote survival, and beyond that, general wellbeing. Examples of this are plentiful. "Do unto others" is not merely an ideal; it also increases a community's chance of surviving, and makes living easier (if you doubt that statement then you have never lived with roommates). Pigs being labeled as filthy animals increases an ancient society's chance of surviving, as pigs tend to make people sick when they are not properly prepared. Under this argument, morals are defined as a way that humans can use their ability to accurately understand cause and effect to outline a framework of behavior that will make life better for them.

So my argument is that robots will become worthy of ethical consideration the moment we begin deriving long-term benefits from treating them well. Resetting an AIBO has no effect on its usefulness or the joy it can bring, nor does throwing a calculator in a bag. Conversely, it is difficult to derive the same soothing benefit from a cat if you hit it with a hammer, or to make your life very pleasant by being mean to everyone around you. It is for this reason that I think major ethical considerations on robots are a ways off. Robots are useful because we can easily manipulate their function. Turning off a robot has no effect on its long-term usefulness or function; the robot will bear no ill will against you for it; you are not robbing the robot of its free will. You are simply treating it as a tool or a toy, which is all they amount to at the moment.

[ Parent ]
With respect to .. (none / 0) (#70)
by Sesquipundalian on Sun Jun 20, 2004 at 02:49:33 PM EST

Masahiro Mori,

I think that what people are repulsed by is unnecessary human likeness.

Everyone loved it when Brent Spiner played Cmdr. Data on Star Trek. And that was a pretty human-lookin' android, if you ask me. I think it's more that when the mask seems like it was just put there to trick our anthropomorphic sense, that's what we find irritating to look at, if we have to do it all the time. I suspect that adjusting to it actually tires people out, in some hard-to-name way.

It's that sagging, off-color, rubber face, with the eyes that don't blink right, and the strange wiggling going on underneath the cheeks, that will give 'em the willies, every single time.

I wonder if that's one reason why people get so damn traumatised, when they get robbed at gun-point by people wearing cartoon-masks. It's like, because you never get to see the face, you can never be sure if the person that robbed you, isn't actually the same person sitting next to you, right now.


Did you know that gullible is not actually an english word?
[ Parent ]
Hmmm... (none / 0) (#72)
by Irobot on Sun Jun 20, 2004 at 03:27:54 PM EST

I find Mori's conclusions fascinating, to say the least. It seems to me that Furbys have a much better chance of being accepted as life-like than any human form robot, at least at this point in time. Personally, I don't think it's so much "unnecessary human likeness" that's the problem, as much as that it borders on impossible to achieve the level of system coherency that we humans are sensitive to. I'm awed by how well humans function. It's enough to give me some understanding of why people believe in Intelligent Design theory.

OK, maybe not. :)

Irobot

The one important thing I have learned over the years is the difference between taking one's work seriously and taking one's self seriously. The first is imperative and the second is disastrous. -- Margot Fonteyn
[ Parent ]

intelligent design (none / 0) (#269)
by Work on Thu Jun 24, 2004 at 01:20:07 AM EST

I don't think it's an altogether outrageous claim - although I don't think some kind of omnipotent god-like being is behind it.

The universe is pretty old. I think it feasible that at some point, some race of beings somewhere evolved far enough to manipulate genetics and create their own lifeforms from scratch. We're not far from that ourselves.

Combined with a form of space travel, its possible to seed other worlds with life.

[ Parent ]

Wrong argument (none / 0) (#284)
by Irobot on Fri Jun 25, 2004 at 08:43:13 AM EST

As far as I can tell, the question is not whether ID is possible; lots of things are. The argument IDers make is that an intelligent designer is necessary. Seems wrong-headed IMO, but oh, the wiggle-room it buys some nutjobs...

Irobot

The one important thing I have learned over the years is the difference between taking one's work seriously and taking one's self seriously. The first is imperative and the second is disastrous. -- Margot Fonteyn
[ Parent ]

Re: Foundational issues (none / 0) (#250)
by skeptik on Wed Jun 23, 2004 at 12:47:05 AM EST

Ferchrissakes - humans seem to have a difficult enough time treating other humans ethically; there are very few ethical questions that are not up for debate, and those that aren't are violated time and time again.

IMO, this is the crux of the problem with raising questions about robot ethics with the general public. The general populace has at best extremely vague notions of what ethics even means. For example, does species even make an ethical difference, and if so why? As the man says, few ethical questions are not up for debate.

I am not arguing that the issue of robot ethics should not be discussed, however. I'm simply agreeing with the subsidiary point that there is little hope of constructive dialogue about policy on this issue. I suspect that it will remain this way for hundreds of years.



[ Parent ]
I believe this is on topic (none / 1) (#75)
by KrispyKringle on Sun Jun 20, 2004 at 04:16:06 PM EST

"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." --Edsger Dijkstra

I would say (none / 0) (#167)
by mcgrew on Mon Jun 21, 2004 at 08:12:22 PM EST

The question of whether a computer can think is no more interesting than the question of whether a rock can swim.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

Always remember: (2.57 / 7) (#83)
by jobi on Sun Jun 20, 2004 at 07:41:58 PM EST

"Artificial Intelligence is no match for natural stupidity."

---
"[Y]ou can lecture me on bad language when you learn to use a fucking apostrophe."
The day people care about AI ethics (2.87 / 8) (#90)
by livus on Sun Jun 20, 2004 at 08:30:54 PM EST

is the day the AIBO successfully combines with the RealDoll.

Meanwhile, this article severely overestimates humans. If you think your pet was too hyperactive and want to calm it down, just fry its brain and start all over. I think most rational people would not agree with such a thing, even if it were possible - USians routinely pull the claws out of their animals. They mutilate their tails and vocal cords and inject them with hormones. They keep them in tiny apartments. Factory farmers cut the beaks off chickens.

Of course they would reset them if it were possible.


---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

I think you're right (2.83 / 6) (#96)
by teece on Sun Jun 20, 2004 at 10:37:06 PM EST

Currently, the dumbest dog you can find at the pound is orders of magnitude more complex than the most cutting-edge human-created robots.  And yet, a dog is property.  If you beat your dog (or cat) to death, you get a slap on the wrist.  And that is only if you are stupid enough to do it in a conspicuous way -- there is no 'petricide' division of the police department.

Yet, we treat dogs and cats very, very well compared to most animals.  Witness the wrath unleashed upon PETA on any FARK thread for the general human feeling towards animals:  they are ours to do with as we please.  (Don't mistake that comment for a defense of PETA or a condemnation of PETA, that is neither here nor there for this issue).

So us humans treating sentient robots (if or when we create them) with respect and decency?  Fat fucking chance.  It won't happen -- at least not until the robots rise up and kill several billion humans to get our attention.

-- Hello_World.c, 17 Errors, 31 Warnings...
[ Parent ]

furthermore (2.50 / 4) (#97)
by minerboy on Sun Jun 20, 2004 at 10:45:46 PM EST

look at the use of Ritalin and other ADHD drugs in our own society, or the shock treatments that were given in the 50's and 60's. We will probably have the ability to chemically "reset" humans sooner than we will have anything close to sentient robots. And we will do it. (Not that I'm sure we should.)



[ Parent ]
Not quite a slap on the wrist (none / 1) (#153)
by epepke on Mon Jun 21, 2004 at 02:13:36 PM EST

I've seen some fairly severe prison sentences for cruelty to animals.

However, it's perfectly legal to kill animals (if not a protected species or someone else's property). It's just not legal to cause them suffering.


The truth may be out there, but lies are inside your head.--Terry Pratchett


[ Parent ]
Genesis 1: (none / 2) (#166)
by mcgrew on Mon Jun 21, 2004 at 08:09:19 PM EST

26. And God said, Let us make man in our image, after our likeness: and let them have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth, and over every creeping thing that creepeth upon the earth.

That's moral authority enough for me.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

robots are creeping things too! n/t (none / 1) (#184)
by livus on Mon Jun 21, 2004 at 10:07:21 PM EST



---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]
well then case closed (none / 1) (#234)
by mcgrew on Tue Jun 22, 2004 at 07:36:48 PM EST


"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

I hate to see what you'd be like... (none / 1) (#190)
by gzt on Mon Jun 21, 2004 at 10:27:27 PM EST

...as lord of the manor. Some men aren't fit to rule, I suppose.

[ Parent ]
Oh hell (2.00 / 4) (#127)
by Blarney on Mon Jun 21, 2004 at 02:42:01 AM EST

USians routinely pull the claws out of their animals. They mutilate the tails and vocal cords and inject them with hormones. They keep them in tiny apartments.

So I suppose that people in Canada or Yurp or Asia aren't allowed dogs or cats unless they can prove they live in a nice large house? As a matter of fact, I have personally observed Canadian apartment complexes with cats in residence. I wouldn't even be surprised to find that some Canadians living in apartments have dogs, too!

Besides, some of us own cats that do not like to go outside. My own cat, though a big fat bastard, does not go outside. You can open an outside door, but he won't go through it. And dumping him outside is likely to result in him getting very, very angry. When he gets angry, he hurts people.

And that's why he has no front claws - after sending a woman (two owners back; he's had 5, that poor bastard) to the E.R. to get a long, deep, blood-gushing wound stitched up. If declawing had been illegal, as I hear it is in parts of the world, he wouldn't have been declawed - he would have been killed as a dangerous animal. He's not declawed to protect the furniture - I don't give a crap about my furniture, and neither did most of his owners in the past - he's declawed to protect human beings. The legality of declawing in the US and Canada (it's legal in Canada also, so not just the US) actually allows some aggressive cats to be kept that would otherwise be put down.

[ Parent ]

how do you disagree? and why canada? (none / 1) (#185)
by livus on Mon Jun 21, 2004 at 10:16:12 PM EST

why the fetish for Canadians? I've never been there but if they do in fact mutilate their animals too, it simply helps prove my point.

If, as you say, humans would rather kill an animal than allow it to live because it claws people who pick it up, then that helps prove my point too.

I'm pretty sure there is some form of legal animal cruelty just about everywhere in the world. And, there are always going to be some people who commit it if they can.

Similarly, if you could reset animals' brains, there'd be many people who thought there was ethically nothing wrong with it. Particularly if it was legal (which it probably would be, including in the US - and Canada). That's all I'm saying.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

you sort of imply (none / 1) (#188)
by Blarney on Mon Jun 21, 2004 at 10:21:50 PM EST

Your use of the pejorative non-word USian implies that you imagine the United States unique in its mistreatment of animals - or possibly that in some country cats are not declawed and, indeed, spend their days sipping sweet cream, lying on velvet cushions, and being scratched behind the ears by a staff of trained cat-caretakers.

Yet my own experience - limited, admittedly - is that Canadians treat their animals much the same. So I wonder: where is this wonderful land where cats are treated so well? I mean, if there is such a place, I'll mail them a cat to take care of - why not? He'd be better off, after all.

[ Parent ]

the world contains more than 2 countries (none / 1) (#192)
by livus on Mon Jun 21, 2004 at 10:34:15 PM EST

you really should investigate.

And, yes, where I live they are not declawed (which is good for my cat, because she claws like a maniac; but since she has her claws, humans are obliged to respect her wish to keep her own personal space).

Obviously it's partly for cultural reasons that I regard it as a disgusting practice. The USians I know who are against it take a more moderate view. We also mostly do not live in apartments here. However, I have seen apartment cats and even poor large dogs in apartments. Humans breed companion animals. The primary purpose of the existence of companion animals is human comfort and human happiness. Same thing the world over. Robots would be no different.

USian is not pejorative to me; it's simply internet shorthand. You can't say "American" or Brazilians etc. jump all over you; you can't say "North Americans" or Canadians get all antsy about it. Besides, USian is shorter.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

Cat heaven (none / 0) (#272)
by rusty on Thu Jun 24, 2004 at 10:31:49 AM EST

or possibly that in some country cats are not declawed and, indeed, spend their days sipping sweet cream, lying on velvet cushions, and being scratched behind the ears by a staff of trained cat-caretakers

Yes, I believe that's a service provided by the government in Sweden. Research has shown that when you come home from a long hard four hours at work to your 6'3" blonde wife and two perfect children, the last thing you want to have to do is scratch your cat behind the ears yourself. Nevertheless, cat ownership reduces the incidence of heart disease and helps speed papercut recovery by 17%, so it is the duty of all good citizens to own at least one cat.

All this, and they let you keep 2% of your wages!

____
Not the real rusty
[ Parent ]

chickens! (none / 1) (#149)
by clover_kicker on Mon Jun 21, 2004 at 11:58:27 AM EST

>Factory farmers cut the beaks off chickens.  

Mom & Pop farmers did this 50 years ago, too.

I'm told that cleaning up what's left of a chicken after she's been pecked to death by the rest of the flock is "a disgusting mess". If a farmer thinks it's disgusting...
--
I am the very model of a K5 personality.
I intersperse obscenity with tedious banality.

[ Parent ]

Mother Theresa herself (none / 1) (#182)
by livus on Mon Jun 21, 2004 at 10:04:13 PM EST

could have done it; it doesn't make it OK.

I kept chickens; it's completely unnecessary. (And if mutilating something were necessary to keep it, then why keep it?)

Cleaning anything in a factory farm - where the ammonia is so strong the chickens are blind, their legs cannot support their weight, and they peck whatever is directly in front of them - is disgusting, yes. Again, no one is twisting these people's arms to torture the damn things. It's just corporate greed.

My point is not "is it justified" so much as simply that people do this, and they'd do worse in a heartbeat.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

let's assume you're not just pulling my pecker (none / 1) (#194)
by clover_kicker on Mon Jun 21, 2004 at 11:12:52 PM EST

>I kept chickens, it's completely unnecessary,

I seriously doubt my dad would have bothered to chase down each chicken and cut off their beaks if he didn't think it was necessary.

I'm told that chickens are cannibals- if one of 'em gets cut, the other ones will peck at the blood. Pretty soon there's nothing left but blood and feathers. If another chicken gets too bloody, they might start pecking at her. In 15 minutes, you've got a real horror show.

>(and if mutilating something was necessary to keep it then why keep
>it?)

Because eggs are tasty? Because cutting off their beaks isn't all that cruel compared to their eventual fate on the chopping block?

Chickens are completely unnatural critters anyway- they can't survive in the wild. They're mutilated at the instant of conception, if you want to stretch your empathy muscles that far.
--
I am the very model of a K5 personality.
I intersperse obscenity with tedious banality.

[ Parent ]

what the hell kind of chickens do you guys have?!? (none / 0) (#227)
by livus on Tue Jun 22, 2004 at 07:10:57 PM EST

This is getting surreal. I'm actually about to google chicken varieties to see if there is some sort of substantial difference between the chickens we have here and elsewhere.

Sure, if a chicken is wounded the others will eat it if they are in the coop together without food. I have in fact lost about 3 chickens to that, during floods. No horror show, those carcasses were picked clean. Anyway, this is why you should let them out during the day - they pretty much run about eating all day long in the wild. Also, yes, they will peck one another, hence "hen pecked" and "pecking order". But really what's the big deal? Dogs fight each other sometimes, but I don't pull out all their teeth.

If we had to mutilate the little fuckers first we'd never have kept them. For all I know, children are a taste treat I'm missing, but it's not as if there aren't masses of food in the world already. I'm aware the majority of the world's population disagrees with me on both counts.

they can't survive in the wild - this is the bit that made me think that either you or your chickens are smoking crack. Or there's something terribly wrong with the breed you're talking about. Next you'll be telling me that cats, dogs, rats, and ferrets can't survive in the wild.

Not only can chickens survive in the wild but even some of the mutant, mutilated ones from the factory farms do well. With the rise of psycho "animal lib" types many of these are "liberated" in the wilderness, and the area near my mother's house is crawling with them. They come trotting out of the forest in small healthy flocks, even trailing chickens at chicken season.


---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

clarification (none / 0) (#228)
by livus on Tue Jun 22, 2004 at 07:13:10 PM EST

in my last sentence by "chickens" I meant offspring (I'm more used to referring to adult chickens as "chooks").

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]
wild chickens (none / 0) (#258)
by clover_kicker on Wed Jun 23, 2004 at 09:09:15 AM EST

In these parts, I don't think a wild chicken would last very long - too many raccoons, foxes etc.

I've seen a lot of critters running around in the woods - foxes, 'coons, moose, deer, bears, porcupines, snakes, toads, frogs, you name it.

I've never seen a wild/escaped chicken.
--
I am the very model of a K5 personality.
I intersperse obscenity with tedious banality.

[ Parent ]

hell, even I probably couldnt survive THAT (none / 0) (#283)
by livus on Fri Jun 25, 2004 at 03:35:20 AM EST

and I don't think your scary ass neighbourhood is a fair test of chickenkind's skills!

You know, my mother keeps chickens now and they have no chickenhouse; they just run around loose eating all day and fly up into the trees at night.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

in fairness (none / 0) (#286)
by clover_kicker on Fri Jun 25, 2004 at 10:05:36 AM EST

The bears/moose were a fair way out of town, but the 'coons and foxes are pretty much everywhere.

Townie raccoons are funny- they're so fat they look like beach balls with little feet and striped tails.
--
I am the very model of a K5 personality.
I intersperse obscenity with tedious banality.

[ Parent ]

damn, there I thought (none / 0) (#302)
by livus on Mon Jun 28, 2004 at 08:02:31 AM EST

you were living where bears paw at your door and cougars cough in your garage.

I've never seen a raccoon; I suspect they're bigger than I think they are.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

raccoons (none / 0) (#306)
by clover_kicker on Mon Jun 28, 2004 at 06:49:03 PM EST

Ever see one of those grotesquely huge housecats that makes you wonder how he got so gigantic? They're about that size.

They aren't tremendously fierce, unless cornered.

They are perhaps the smartest thing on 4 legs. Here's a shot of a 'coon that learned to milk a cow.
--
I am the very model of a K5 personality.
I intersperse obscenity with tedious banality.

[ Parent ]

good grief! (none / 0) (#307)
by livus on Mon Jun 28, 2004 at 07:39:27 PM EST

I'm surprised that the cow doesn't mind. It must be being very gentle to her, because they have claws and fangs, don't they? (Raccoons, I mean, not cows.)

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]
not so bad (none / 0) (#308)
by clover_kicker on Mon Jun 28, 2004 at 10:09:36 PM EST

Raccoon paws are almost handlike, with claws at the very tip. They're amazingly dexterous critters.

If we ever nuke ourselves into oblivion, the next overlords of the earth might have fuzzy tails with rings on them.

When I was a kid, I had a book (title forgotten) about a boy with a pet raccoon, and all the trouble they got into. Apparently they're far too intelligent and nimble to make good pets.
--
I am the very model of a K5 personality.
I intersperse obscenity with tedious banality.

[ Parent ]

wow, I want to meet one so much now (none / 0) (#309)
by livus on Mon Jun 28, 2004 at 11:09:25 PM EST

and there isn't one, damn it. I mean, in this country, not even in a zoo. I always thought they were a rodent, but they're not, are they. Seem more like a monkeyish thing, but not simian either.

I remember I had a book called Littlest Raccoon all about foxes eating them, but they were sort of painted with watercolours and could have been anything. The great thing about growing up here in the past was that we never used to have books about anything that actually lived here, and we had snow on our xmas cards. Very surreal.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

I wonder how people survived before the internet (none / 0) (#310)
by clover_kicker on Tue Jun 29, 2004 at 10:14:00 AM EST

I wonder how people survived before the internet, and what the fuck did they talk about?

http://en.wikipedia.org/wiki/Procyonidae

Raccoons seem to be related to Red Pandas. WTF?

They're distant cousins to dogs/cats/weasels/mongoose.

http://www.npwrc.usgs.gov/resource/tools/furtake/racco.htm

From that paw print, you can see fairly long little fingers, totally unlike a cat/dog/bear.

When they raid corn from your garden, they husk the ears with their paws just like you would.

--
I am the very model of a K5 personality.
I intersperse obscenity with tedious banality.

[ Parent ]

the same stuff, but through a glass, darkly (none / 0) (#313)
by livus on Tue Jun 29, 2004 at 08:40:23 PM EST

I remember we used to sit around conjecturing this and that but had to wait to go to a library to see if we were right. And most people never did that.

We have a few red pandas in my city; clever little things which keep escaping from the zoo. They don't look much like pandas to me, so I'll buy the procyonidae story.

Is a muskrat a type of rat? That link you gave is a fur link and I've been giving serious thought to skinning a few of the big bush rats this winter.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

Everything I know about muskrat... (none / 0) (#314)
by clover_kicker on Tue Jun 29, 2004 at 09:23:33 PM EST

I learned from wikipedia :)

I think the frozen wastelands of Canuckistan are too cold for muskrat; I've never seen one. We make up for it with beavers.
--
I am the very model of a K5 personality.
I intersperse obscenity with tedious banality.

[ Parent ]

I've never seen a beaver either (none / 0) (#316)
by livus on Wed Jun 30, 2004 at 10:00:23 PM EST

though the little instruction page on how to split and stretch them made me finally understand the origin of a certain slang term.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]
I have a friend with chickens (none / 1) (#233)
by mcgrew on Tue Jun 22, 2004 at 07:31:41 PM EST

They aren't zombie cannibals, and they all have beaks. My grandparents kept chickens, and they all had beaks, too. And I never saw any of the behavior you fear anywhere.

Maybe in your part of the world they cut off the beaks so nobody will steal them for cockfighting?

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

cockfighting, yeah that's it (none / 0) (#255)
by clover_kicker on Wed Jun 23, 2004 at 08:59:34 AM EST

Dunno, never had chickens.

All I know about raising chickens is stories the old man tells from his childhood. The family farm was sold before I was born.
--
I am the very model of a K5 personality.
I intersperse obscenity with tedious banality.

[ Parent ]

cannibal chickens (none / 1) (#270)
by chro57 on Thu Jun 24, 2004 at 05:17:47 AM EST

From what I observed on my grandfather's small farm,
chickens only turn cannibal when they don't have access to some grass with worms...

(My grandfather used to file down the beaks of the worst offenders. Yuck. The poor chickens were frightened.)
(My guess is that most human offenders are starved of food or space. As a farming friend said to me: we are overpopulated.)

Another interesting observation: kept together too long in a cage, a female rabbit may kill or horribly mutilate a male rabbit...

Do you really want to get married? :-) Brrr...

People need places to be alone, to rest and to meditate. Or there will be wars...
Don't breed like rabbits, please.

[ Parent ]

Roses versus plants, -1 (none / 2) (#95)
by Fen on Sun Jun 20, 2004 at 09:29:29 PM EST

Sick of this people/animals problem.
--Self.
slavery (none / 1) (#99)
by cronian on Sun Jun 20, 2004 at 11:01:15 PM EST

I think the real issue with robots could be their ability to take away jobs. Our economic system isn't set up to deal with newer technology replacing jobs. The problem is that robots don't get paid.

We perfect it; Congress kills it; They make it; We Import it; It must be anti-Americanism
I think it's kind of strange... (none / 3) (#104)
by handslikesnakes on Sun Jun 20, 2004 at 11:51:27 PM EST

...that our society is set up in such a way that it's a bad thing when people have less work to do.



[ Parent ]
when people don't have to work (none / 0) (#266)
by QuantumG on Wed Jun 23, 2004 at 07:31:38 PM EST

they start to think and that changes the balance of power.

Gun fire is the sound of freedom.
[ Parent ]
my idea of utopia (none / 1) (#109)
by circletimessquare on Mon Jun 21, 2004 at 12:24:09 AM EST

is no one has to work

The tigers of wrath are wiser than the horses of instruction.

[ Parent ]
Damn (none / 1) (#165)
by mcgrew on Mon Jun 21, 2004 at 08:04:17 PM EST

I don't even have to post today.

Oh yeah- IHBT and so have you.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

Yeah... (none / 0) (#260)
by Milo Minderbender on Wed Jun 23, 2004 at 10:04:22 AM EST

...this whole "having to do stuff" thing sucks! Take me back to kindergarten!

--------------------
This comment is for the good of the syndicate.
[ Parent ]
Re: slavery (none / 0) (#247)
by skeptik on Wed Jun 23, 2004 at 12:23:36 AM EST

The problem is that robots don't get paid.

Or is the problem that humans do?



[ Parent ]
solutions (none / 0) (#248)
by clambake on Wed Jun 23, 2004 at 12:37:00 AM EST

I think the real issue with robots could be their ability to take away jobs. Our economic system isn't set up to deal with newer technology replacing jobs. The problem is that robots don't get paid.

They work 24 hours a day for no pay. You could lose your job but *keep your salary* and the company won't lose a dime. In fact, paying you for 8 hours a day while getting 24 hours a day of work is cha-ching.

[ Parent ]

Scarcity of Labor (none / 0) (#296)
by Rich0 on Sat Jun 26, 2004 at 11:11:04 PM EST

I couldn't agree more.

However, what will really happen is that the company will pocket your salary and give it to the owners.

The problem is that our society is structured around the idea that labor is scarce.  That is no longer the case.

Honestly, with modern productivity being what it is, we could legislate a maximum 5 hour work week and everyone could maintain a 1960s standard of living easily!

Why is it that as productivity increases, it seems like we work MORE - not less?

[ Parent ]

This will take us back into the '50s (none / 2) (#106)
by SocratesGhost on Mon Jun 21, 2004 at 12:12:10 AM EST

The 1850s, that is.

For quite a long time, this will be treated as a property issue. There are quite a few reasons to recommend this.

1) Robots are not a species. If we kill off the last of the dodo robots, we can always create more.

2) If the storage devices are recoverable, so is the entire unit. We can just rebuild the rest of the mechanics. If we have data loss, we are already comfortable with throwing our computer across the room and calling tech support to yell at them.

3) We will get insurance on these devices commensurate to their value. If the memory becomes extremely valuable to us, we'll get more expensive insurance policies. When our robot dog walker gets flattened by a bus, we'll cry all the way to the bank.

4) Doesn't affect any ecosystems. If anything, robotics is arguably among the more ecologically expensive investments.

5) What is pain to a robot? Going back to Jeremy Bentham, who was among the first to argue for animal rights, we should only be concerned with a creature's ability to feel pain or pleasure. If my computer decides to corrupt all of my data, I (and many other people) will have no problem teaching it new meanings of pain.

6) As long as my Roomba doesn't cause me harm (or through its inaction bring harm upon me) and as long as it obeys my commands, it will avoid the trashcan. I paid good money for its creation, and it's mine to do with as I please.

It really will take a robot more like the boy in A.I. before robotic morality becomes an issue. And we are a long way from that.

-Soc
I drank what?


heh (none / 0) (#114)
by WorkingEmail on Mon Jun 21, 2004 at 12:50:53 AM EST

1 and 2 sound more like human problems than robot problems.


[ Parent ]
yes (none / 0) (#119)
by SocratesGhost on Mon Jun 21, 2004 at 01:38:48 AM EST

the effect upon property is a human social problem, just as when someone steals my silverware or --and this is the meaning of my 1850 reference-- when a slave runs away. We just gloss over the problem of the silverware except what it takes to take things back to the point before the "problem" arose. No one considered the views of the utensils or of the slave.

-Soc
I drank what?


[ Parent ]
yes its quite interesting (none / 0) (#157)
by Work on Mon Jun 21, 2004 at 04:37:33 PM EST

many forget that slaves were considered subhuman property.

That's not romanticizing or exaggerating the plight of slaves, past and present, but factual. Slaves simply were not human, in the minds of their owners. They were dexterous beasts of burden.

You can commit a whole lot of ethically questionable things when you don't consider certain objects and beings as worthy of ethical consideration.

[ Parent ]

It makes me wonder... (none / 0) (#150)
by gzt on Mon Jun 21, 2004 at 12:48:38 PM EST

...why would we ever want to build a robot capable of "suffering"? It's just asking for trouble, but people here take it for granted that we ought to and will create such a capability in robots.

The real question is whether we will begin to worship these robots or to believe they are in some way necessary for our happiness. But this is a property issue common to all technology which g**ks try their best not to think about.

[ Parent ]

possibly to simplify matters (none / 0) (#156)
by SocratesGhost on Mon Jun 21, 2004 at 03:44:44 PM EST

There's a lot of reasons to teach robots to feel pain.

You can build an avoidance system, but then you have to account for all of the different things you have to avoid. You'd have to program it to avoid potholes, mud that's too deep, trees but not fog, etc. However, if you can program it to take care of itself, it will do the necessary calculations to assess whether an action is worth the risk. While "pain" is an elusive goal, it may end up being simpler than creating a database with every known obstacle and how to overcome it. Besides, a computer may come up with a novel way of doing it that is more efficient than the one a programmer could create; by itself, that may be the reward for giving it pain.

-Soc
I drank what?


[ Parent ]
I'd say rather than using pain... (none / 0) (#193)
by gzt on Mon Jun 21, 2004 at 10:46:52 PM EST

...use some sort of utility calculation.

What I mean is something we would identify as suffering, i.e. having robots that emote. Machines break; I wouldn't want them to cry in pain about it.

Does this justify arbitrary cruelty to "sentient" robots? Not really, at least, not any more than cruelty to machines.

[ Parent ]

heh (none / 0) (#195)
by Work on Mon Jun 21, 2004 at 11:39:14 PM EST

Crying and other such displays are responses to pain, often to attract the attention of others or to notify them of the pain. Most animals don't cry, but they will yelp to get your attention or tell you to stop.

Now if a machine is suffering a broken component, it would be a good idea to let its human owner know so it can be repaired.

What's a good way to get a human's attention?

[ Parent ]

There's a difference between... (none / 1) (#198)
by gzt on Tue Jun 22, 2004 at 12:38:39 AM EST

...a heart-rending yelp which is oft-repeated and an error message. A broken wheel is as bad as a broken leg, but a dog can function on three legs and so can many machines. I expect it's cruel to make a dog do so and you certainly won't get a good noise from him. But I think there's no problem with making a machine do so and I don't want a bad noise from it. I don't want it to emote! Besides, pain is fairly constant and will not go away even if the animal is unable to do anything about it. Why bother allowing it to be expressed thus in a robot? When a kid loses a rook, he may get pouty about it, but when a computer loses a rook, it continues playing, though both certainly try to avoid, at all costs, the loss of a rook.

So, any other reasons why one might want to make a robot that can suffer like the boy in A.I.? Or why I should care? I don't need Modernity to be happy. I don't need computers to be happy. I don't need robots to be happy. I especially don't need robots that look like children to be happy; I neglect enough kids as it is. I hope these g**ks can use technology without being enslaved to it and don't create the need for an Anthropodicy, because the easy answer to that one is that men are neither infinitely just nor infinitely good, though they surely ought to be, and what good would a robot do for them in that respect?

[ Parent ]

Considerations and Accidents (none / 0) (#224)
by virg on Tue Jun 22, 2004 at 04:48:34 PM EST

> So, any other reasons why one might want to make a robot that can suffer like the boy in A.I.? Or why I should care?

Well, the main reason to create a machine that can feel pain is to be able to create a machine that can feel pleasure, since the two concepts are inextricably tied together. The concept of the boy in A.I. was as a companion robot, and as a good companion to humans he would need to understand and experience human emotions.

More to the point, though, it's possible that such a concept of pain will simply develop on its own. Building a robot that's basically human has significant advantages for humans (the robot can operate machines built for human operators, they can be used to approximate human endeavors like crash test dummies do today, and it's easier for humans to relate meaningfully to a robot that's basically human shaped). As the complexity of the robot's brainpower grows, it may reach a point where the learning process, which incorporates aversion and avoidance to bad things like long falls or damage, will begin to elicit the same reactions in the robot as in a person. How complex does a robot's thought process need to get before avoidance of damage becomes approximate to fear? How far until detection of damage approximates pain? We don't really know.

Virg
"Imagine (it won't be hard) that most people would prefer seeing Carrot Top beaten to death with a bag of walnuts." - Jmzero
[ Parent ]
empathy (none / 0) (#226)
by SocratesGhost on Tue Jun 22, 2004 at 07:02:56 PM EST

I think you got really close to something else I was thinking about: you want a robot to empathize with humans. If a human is hungry, it would be nice for the robot to recognize this and offer to make us a sandwich. If a human is in pain, it would be nice for the robot to express sympathy and offer assistance. If we do well, it would be good for the robot to share in the joy.

And for the sake of us humans, we may want it to be a two-way street. Would you prefer a robot to say "my left articulation joint appears to have a malfunction" or a more succinct "my left elbow hurts"? Even if it doesn't feel pain, any robot worth his metal would recognize that one comment gets a more positive reaction than the other.

Also, this illusion of sympathy may be critical for us. A.I. is the only movie I can think of that really touched on all of these topics, but the mother could care for the robo-boy because he cared for her. If his concern were antiseptic and obviously programmed, we'd be annoyed by the robot rather than assisted, much as we are with Microsoft Bob, Clippy, or when the receipt from the cash register says, "Thank you for shopping here!" The gratitude by the store may be very real, but it loses a lot through the automated nature of the response. As a marketing tool, robot manufacturers would probably see a real benefit in people connecting with their machines.

-Soc
I drank what?


[ Parent ]
Fair enough. (none / 0) (#262)
by gzt on Wed Jun 23, 2004 at 11:17:32 AM EST

I suppose I could see the utility, then, of producing such robots. I, for one, don't think the tradeoff is worth it, but if there's a buck to be made, it'll certainly happen.

[ Parent ]
The truth (none / 0) (#111)
by WorkingEmail on Mon Jun 21, 2004 at 12:28:33 AM EST

As robots grow in their human-emulative capabilities and also their human-surpassing capabilities, I believe that many more people will cast off their conflicting dogmas and turn to utilitarianism. These people would be just as happy using a human slave as a robot slave ... if only the humans were a little easier to reprogram.

A human is just a biological robot whose mental architecture is, approximately, a rather large war of impulses.

I expect that the decision about artificial robot rights will largely be a product of human emotion. Fear, personal survivalism, familial survivalism, social survivalism, lust for power, conservatism, etc. Ironically, one of the cornerstones of this irrationality will be the attribution of such human characteristics to the robots.

An evil robot is a human.


No such thing as an evil robot (none / 0) (#164)
by mcgrew on Mon Jun 21, 2004 at 07:58:42 PM EST

Any more than an evil knife, or an evil car, or an evil doorknob.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

Or an evil human. (none / 0) (#219)
by ghjm on Tue Jun 22, 2004 at 02:42:43 PM EST

Because there's no such thing as evil.

Or good.

Now we're getting somewhere. What comes next?

-Graham

[ Parent ]

I disagree (none / 0) (#232)
by mcgrew on Tue Jun 22, 2004 at 07:28:00 PM EST

However, the Thais don't believe in evil, either. There isn't even a word for "evil" in their language.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

anthropomorphism (2.37 / 8) (#113)
by circletimessquare on Mon Jun 21, 2004 at 12:47:12 AM EST

is a good and a bad thing

anthropomorphism is good because it allows our empathic brains to get into a problem in a way that plays to our cognitive strengths: the ability to think of a problem in terms of a human relationship, as an ongoing piece of communication, something we excel at... that's why we give human names to boats and hurricanes, for example

anthropomorphism is a bad thing when we start empathizing with silicon chips... it's anthropomorphism run clear through common sense into sci fi fantasy land

i mean, it's bad enough when some rich stupid old bitch wills $10 million to her fucking dog while people still starve to death in this world... that's a problem, when we start caring about our fucking dogs more than our fellow human beings... but now you expect me to care about the fate of a wafer of silicon?

i think it's a travesty that dogs in the rich western democracies get better health care and nutrition than people in the third world... why the FUCK should i even give a microsecond of thought to the fate of an acid etching on a piece of silicon?

if it ain't human, it deserves less attention, period

now dogs are cute and cuddly: they are genetically evolved from wolves in the context of their social relationship with human beings, who hold all the food, to manipulate our emotions and ensure the survival of their cute and cuddly genes... so assholes who go gaga over a fucking dog can be excused on the level of: "i am an emotional basket case and i care more about a fucking dog than a human being because my social skills suck so bad that it is the only relationship i can succeed in" (as if you need any social skills to make a dog like you... they're genetically designed to like you)

but i digress, back to the patently smack-you-in-the-face-with-a-wet-fish obvious: the fate of some fucking transistors is WAY less important than the fate of your fellow human being, on the order of, oh gee, i dunno, the order of magnitude between the size of the period at the end of this sentence and the size of the andromeda galaxy

is that a sci-fi enough comparison for your fanboy-in-your-parents'-basement tastes?

readjust your priorities, you've been reading WAY too many sci fi books, your level of importance to the solving of REAL problems in the REAL world is ZERO


The tigers of wrath are wiser than the horses of instruction.

and why is that? (none / 1) (#115)
by WorkingEmail on Mon Jun 21, 2004 at 12:57:27 AM EST

The fate of some fucking heap of biomatter is WAY less important than the fate of your fellow human being.

Err... ummm... unless they're the same.


[ Parent ]

i'll tell you what fruitcake (none / 1) (#116)
by circletimessquare on Mon Jun 21, 2004 at 01:05:35 AM EST

i think it is wisdom to say that you care about your fellow human being above all else

i'll leave it to your boundless imagination why i could possibly conceive of such a bizarre notion


The tigers of wrath are wiser than the horses of instruction.

[ Parent ]

hehe (none / 1) (#117)
by WorkingEmail on Mon Jun 21, 2004 at 01:09:37 AM EST

I know exactly why you believe such a thing. I also know that the cause of your belief is not reason.


[ Parent ]
you are 100% right, it is faith (none / 1) (#121)
by circletimessquare on Mon Jun 21, 2004 at 01:44:58 AM EST

i have faith in humankind

again, i am way out on a limb with that one, truly a nutjob i must be

The tigers of wrath are wiser than the horses of instruction.

[ Parent ]

nope (none / 1) (#128)
by WorkingEmail on Mon Jun 21, 2004 at 02:42:11 AM EST

Faith in humankind is actually quite common and, I'd say, normal. Not nutjobby at all.


[ Parent ]
i'm glad you've found the error of your ways (none / 1) (#132)
by circletimessquare on Mon Jun 21, 2004 at 04:44:04 AM EST

and come to agree with me and my pov after all

The tigers of wrath are wiser than the horses of instruction.

[ Parent ]
I'll bite the troll (none / 2) (#118)
by blackpaw on Mon Jun 21, 2004 at 01:16:14 AM EST

My fellow human beings can kiss my ass - my dog shows a good deal more compassion, love & loyalty than 99% of human beings - more "humanity" if you like.

Should I care more for the person who stabbed my wife than I do for our dog, who helped her through her recovery?

The answer probably lies in between - placing anything on a pedestal above all else is a recipe for extremes that destroy. In reality there are resources (and need) to care for people and animals and whatever else.

So what if there are starving people in (name country of your choice)? Taking all the RSPCA funds to help them will not make any substantial difference to all the starving people in the world, but it will surely screw animal care in my country.


[ Parent ]

you think that i somehow speak for and defend (none / 1) (#120)
by circletimessquare on Mon Jun 21, 2004 at 01:43:00 AM EST

those who waste resources and those who commit crimes

you punish the wicked, period, end of story, duh

don't mistake my clear words for your muddled interpretation of them, or jump to conclusions i am no part of

so if you exalt a dog above a human being...

well, i'll let that one dangle out there, and let you figure out why that might come back to bite you in the ass... forgive the pun ;-)


The tigers of wrath are wiser than the horses of instruction.

[ Parent ]

Sad ... (none / 1) (#122)
by blackpaw on Mon Jun 21, 2004 at 02:04:08 AM EST

You used to write with clarity, eloquence and, most importantly, honesty. It's sad to see what you've descended to - non sequiturs and pseudo-wise cryptic sayings that are really just full of shit, a vain attempt to hide that you have nothing to say. All you have left is dogmatism and this blind urge to troll - it's like you ran out of joy.

"so if you exalt a dog above a human being..."
Anytime.


[ Parent ]

ding ding ding! (none / 1) (#124)
by circletimessquare on Mon Jun 21, 2004 at 02:25:44 AM EST

i, circletimessquare, believe people are more important than dogs

for that, i am worthy of your derision

hey, fruitcake:

can you give me a reason why your condemnation of me for believing people are better than dogs is appropriate?

i mean a real reason?

i mean, i wouldn't want you to sink to "non sequiturs and pseudo-wise cryptic sayings that are really just full of shit, a vain attempt to hide that you have nothing to say" like you did in the above post

(snicker)

;-)


The tigers of wrath are wiser than the horses of instruction.

[ Parent ]

Get a life kid (nt) (none / 1) (#125)
by blackpaw on Mon Jun 21, 2004 at 02:27:50 AM EST



[ Parent ]
good answer! (none / 1) (#126)
by circletimessquare on Mon Jun 21, 2004 at 02:31:52 AM EST

you cleaned me up there real pronto like

wiped the floor with me, the error of my ways so prominently displayed for all of kuro5hin to see

man, i will NEVER think of clashing wits with the likes of you again

i must rethink my wicked, wicked ways, how have you humbled me so

woof!

woof!

LOL

:-P


The tigers of wrath are wiser than the horses of instruction.

[ Parent ]

YHL - Get over it [n/t] (none / 0) (#210)
by needless on Tue Jun 22, 2004 at 11:55:11 AM EST



[ Parent ]
Interesting (none / 2) (#137)
by SanSeveroPrince on Mon Jun 21, 2004 at 06:06:09 AM EST

I think you have an even more tenuous hold on reality than the author of this article. Fantastic.

----

Life is a tragedy to those who feel, and a comedy to those who think


[ Parent ]
Forced Labour? (none / 1) (#123)
by 5pectre on Mon Jun 21, 2004 at 02:14:03 AM EST

Doesn't rabota (работа) just mean "work/labour" in Russian? I wasn't aware that the Czech form had any "forced" connotations associated with it.

Like others before him, Gareth Branwyn relates that the word, robot, "comes to us from the Czech word robota, which means forced labour or servitude. In Czech, a robotnik is a peasant or serf". In Chambers Biographical Dictionary 7/e, in the entry for Karel Čapek, it says robota means 'drudgery'. The word robota (and its derivatives) occurs in the Czech, Polish, Russian, and - as I recollect - Ukrainian languages (in Russian it transliterates as rabota) and seems to have the same meaning in each: work; and robotnik means worker. Modern speakers of Czech - at least the ones I have talked with - have never heard of it meaning, or having a connotation of, serfdom, forced labour, or servitude. It is possible such a meaning existed in older forms of the language, and that at the time (early 1920s) the translated meaning was taken from an out-of-date dictionary. There is nothing to indicate that Čapek intended it to have a meaning other than 'worker'.

From: http://www.melbpc.org.au/pcupdate/2402/2402article16.htm

I know this is not particularly the point of the article but I thought you might like to investigate further.

"Let us kill the English, their concept of individual rights might undermine the power of our beloved tyrants!!" - Lisa Simpson [ -1.50 / -7.74]

robot (none / 2) (#142)
by Viliam Bur on Mon Jun 21, 2004 at 09:08:51 AM EST

Word "robot" is from Slovak "robota". The original meaning was "forced labor", but nowadays it is also used to mean "labor/work" generally (there is another word "pra'ca", which means "work" generally).

[ Parent ]
Forget AIBO (2.60 / 5) (#131)
by NaCh0 on Mon Jun 21, 2004 at 04:37:53 AM EST

No discussion of robotics is complete without this link.

--
K5: Your daily dose of socialism.
Romantic Hogwash (2.25 / 8) (#136)
by SanSeveroPrince on Mon Jun 21, 2004 at 06:02:52 AM EST

I believe that you've let romantic dreams of glistening wet Anime androids in shepherd uniforms being exploited by ugly, hairy humans cloud your vision of reality.

Robots, as of today, are expensive machines that react to very sophisticated programming: machines created for a task, designed with a specific purpose, with no independent will or consciousness.

Your bleeding heart description of the uses of the reset button on the AIBO almost made me laugh out loud. I own a bread machine, programmed to start making bread before I wake up. Sometimes I change the programming. In your eyes, I am shunning the faithful Hentai maid who makes me bread every morning, ignoring her efforts to suit my selfish, dominating needs.

My most recent degree being in AI (yeah, folks), I can guarantee you that it will be AT LEAST another 200 years before we can move beyond the basic Turing machine. Once we have a machine that can actually generate independent thought and emotion, I suggest you lay off the Anime and stop molesting your AIBO. It's unhealthy, and completely unnecessary.

+1 FP.

----

Life is a tragedy to those who feel, and a comedy to those who think


Thanks a lot (none / 1) (#144)
by GenerationY on Mon Jun 21, 2004 at 10:07:12 AM EST

you've probably just done enough to set an impressionable reader up with an automatic bread machine fetish... cough cough
Er say, that baby come with a catalogue I could borrow?

[ Parent ]
Then my work here is done :) {n/t} (none / 0) (#147)
by SanSeveroPrince on Mon Jun 21, 2004 at 10:44:33 AM EST



----

Life is a tragedy to those who feel, and a comedy to those who think


[ Parent ]
mmmm.... hentai bread. [nt] (none / 0) (#151)
by WorkingEmail on Mon Jun 21, 2004 at 01:44:34 PM EST




[ Parent ]
I'd love to see an anime about android shepherds. (none / 2) (#221)
by nurikochan on Tue Jun 22, 2004 at 04:29:10 PM EST

But a story about android shepherds begs the obvious question: do they dream of electric sheep?

[ Parent ]
It would be Hentai (none / 0) (#275)
by epepke on Thu Jun 24, 2004 at 04:45:34 PM EST

I think the android shepherds might be doing something to the sheep other than dreaming of them. With tentacles.

(Aside, yes, ref to Philip K. Dick understood.)


The truth may be out there, but lies are inside your head.--Terry Pratchett


[ Parent ]
Who says a Turing machine can't be sentient? [n/t] (none / 0) (#242)
by loqi on Tue Jun 22, 2004 at 11:58:10 PM EST



[ Parent ]
A basic Turing machine is an NPDA. (none / 0) (#257)
by SanSeveroPrince on Wed Jun 23, 2004 at 09:06:02 AM EST

Nowhere near a sentient machine, and nothing to do with the Turing test from Blade Runner. Still believed to be the first stepping stone on the way to true AI.

Not happening yet.

----

Life is a tragedy to those who feel, and a comedy to those who think


[ Parent ]
Ahem... (none / 1) (#297)
by epepke on Sun Jun 27, 2004 at 02:25:27 AM EST

The Voight-Kampff test from Blade Runner was an empathy test. Nothing to do with the Turing test.


The truth may be out there, but lies are inside your head.--Terry Pratchett


[ Parent ]
My bad (none / 0) (#300)
by SanSeveroPrince on Sun Jun 27, 2004 at 06:25:12 PM EST

I understand BR is akin to religion, so I apologize if I have offended anyone. Point still stands. Turing test still not passed :)

----

Life is a tragedy to those who feel, and a comedy to those who think


[ Parent ]
Actually... (none / 0) (#301)
by epepke on Mon Jun 28, 2004 at 12:39:37 AM EST

The Voight-Kampff test played a much bigger role in the original book, Do Androids Dream of Electric Sheep? It was much more interesting, too. The Blade Runner treatment was fairly superficial.

Another thing from the book, though. Androids were incapable of empathy. However, the electric sheep were judged on how they elicited human empathy.


The truth may be out there, but lies are inside your head.--Terry Pratchett


[ Parent ]
Cool! A prediction by someone knowledgable... (none / 0) (#252)
by Verteiron on Wed Jun 23, 2004 at 05:12:08 AM EST

200 years? Great. Since you should know exactly what you're talking about, that almost certainly means that true AI is just around the corner. If the development of AI follows the trends of other world-changing technologies throughout history, then it will probably be created/discovered in a manner that blindsides the majority of the experts in the field.

Glad to hear it.
--
Prisoners! Seize each other!
[ Parent ]

One thing is certain... (none / 0) (#256)
by SanSeveroPrince on Wed Jun 23, 2004 at 09:02:45 AM EST

it's not going to be you who invents it. We're nowhere near the computational power required to meet the very basic assumptions of a Turing machine... let alone one that can do its business unaided.

And exactly WHAT great scientific discovery blindsided the experts in the field? You really sound like someone who hasn't a clue about science in general.

----

Life is a tragedy to those who feel, and a comedy to those who think


[ Parent ]
Uh HUH (none / 0) (#267)
by Verteiron on Wed Jun 23, 2004 at 08:47:49 PM EST

And you have a degree? Though I'm rapidly beginning to suspect I'm feeding a troll, you could read up on, say, the creation of the transistor, rubber, microwave ovens, plexiglass...

Want older stuff? See what you can learn about the creation of calculus. Or, say, the number zero. I'm sure you can discover more on your own. With a little bit of research, you'll discover that a surprisingly high percentage of things that -really- changed the world happened either by accident, or occurred shortly after someone predicted they couldn't be done. This is especially true in the field of mathematics. AI will likely be created by someone who attacks the problem in a manner that makes all your statements about the computational power being centuries off irrelevant.

You might look around a bit outside the field of AI and learn a bit of history, too.  It's interesting stuff.
--
Prisoners! Seize each other!
[ Parent ]

Feelings (none / 1) (#138)
by drquick on Mon Jun 21, 2004 at 06:20:27 AM EST

You seem to assert that robots have feelings. I'm not sure that simulated feelings are real feelings. The key point is one of human psychology. We understand everything around us as projections of our own minds. We understand the feelings of other humans through our own prior experiences. Have you ever been in a situation where someone was simply incapable of understanding or detecting a particular emotion in another person? The issue is projection of one's own feelings onto another. We project our feelings onto AIBO or onto our teddy bear. How many times have you seen a child argue that their favourite soft toy really has feelings? Does teddy have feelings? I don't think you can say a robot has feelings, because feeling is a specifically human trait - albeit one we share with other mammals. We can understand human feelings, and much of the feelings of a dog, but less so the feelings of a flatworm. What I'm asking is: when do the acts and motivations of a being or a robot cease to deserve the label "feeling"?

I'm not so sure I asserted that. (none / 0) (#158)
by Work on Mon Jun 21, 2004 at 06:17:20 PM EST

I did assert that there are increasing parallels between robots and animals though.

Take pain, for example. In biological beings, pain is a signal transmitted through the nervous system from an injured area; the brain then informs the conscious part of ourselves - the part that controls our body and mind - that an area is injured. But it's really a complex maneuver of chemical and electrical signalling.

Nonetheless, pain is such an extraordinarily negative sensation, demanding our immediate attention, that we've evolved - quite sensibly - to avoid it when possible. And so have animals. To the point that we've created entire systems of ethics around not causing pain to others.

Ethical reasoning about animal cruelty is relatively new, but it follows pretty much the same lines of thinking.

Now consider a machine with a system of "Damage Aversion". Obviously if I have a $50,000 robot I do not want it doing foolish things like driving into a fireplace or a pool. It needs some kind of damage sensors and a system that immediately demands it avoid such damage. The mechanism differs from biology, but the desired effect is the same.

These parallels in effect are the basis for my question: should we, or should we not, have parallel ethical reasoning for machines as well?

[ Parent ]

Despite some of the comments below (none / 2) (#139)
by nebbish on Mon Jun 21, 2004 at 07:01:58 AM EST

You have a very good point - in a recent BBC documentary, an amateur robotics scientist built a robot able to seek out energy sources and rest when it needed to. The scientist estimated that it had "brain" power roughly equivalent to 10,000 brain cells, or about the same as a slug.

Personally I was quite taken aback by this - slugs are hardly sentient beings, but it does raise some very confusing questions about what is and isn't alive. Ethical dilemmas will grow from this, especially as more advanced robots are built.

---------
Kicking someone in the head is like punching them in the foot - Bruce Lee

It's an AUTOMATON. (none / 2) (#163)
by mcgrew on Mon Jun 21, 2004 at 07:53:53 PM EST

It doesn't think. Any perceived intelligence is only its programmer's cleverness.

See Artificial Insanity. There's a free download of the AI software.

IT'S NOT REAL!

Not to break it to you too hard, but that David Copperfield magic? Well, it's not real either.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

Well, (none / 0) (#218)
by ghjm on Tue Jun 22, 2004 at 02:32:21 PM EST

what about the slug then?

[ Parent ]
Shouldn't be too hard (none / 0) (#231)
by mcgrew on Tue Jun 22, 2004 at 07:25:41 PM EST

to simulate a slug.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

Do you believe in the soul or something? (none / 0) (#254)
by nebbish on Wed Jun 23, 2004 at 08:37:09 AM EST

Because otherwise I don't know what you're getting at. If something acts as a slug acts, it is to all intents and purposes as intelligent as that slug. Living things are complex machines; life isn't just some nebulous, god-given unknown.

---------
Kicking someone in the head is like punching them in the foot - Bruce Lee
[ Parent ]

fundamentally (none / 2) (#140)
by the sixth replicant on Mon Jun 21, 2004 at 07:39:16 AM EST

until we have a biological explanation for consciousness and free will, then if it talks like a duck... it's conscious. Whether or not we can talk about souls is something I feel uncomfortable with (do you need a soul if there is no afterlife or reincarnation?)

We can't talk about morality either until we're willing to separate it from religion (most people find this *impossible* to do). Occasionally we need to think about *some* things without the assumption of carbon-based conscious beings ("that's us!! we R.O.C.K!"), and this is one of them.

I like how we are telling stories about our future with robots ("I, Robot", "The Matrix", manga). In the end we have to see what happens when we are left being the greedy, self-centred sods we are. ("let the fun begin!")

Of course, once we have 50% unemployment because all the menial jobs have been taken by 24-hour-a-day, non-unionised, non-medically-insured robots, then we'll see how moral we can, and cannot, be.

That'll be a fun time.<ciao>

souls (none / 1) (#152)
by WorkingEmail on Mon Jun 21, 2004 at 01:47:33 PM EST

I agree. Who says robots can't have souls?


[ Parent ]
+1, Startrek related <nt> (none / 3) (#141)
by trezor on Mon Jun 21, 2004 at 07:46:35 AM EST


--
Richard Dean Anderson porn? - Now spread the news

this article hardly offers anything new... (none / 2) (#143)
by fleece on Mon Jun 21, 2004 at 09:41:06 AM EST

but it's a great topic for discussion, therefore +1FP



I feel like some drunken crazed lunatic trying to outguess a cat ~ Louis Winthorpe III
-1 tempest in teapot (2.10 / 10) (#145)
by kero on Mon Jun 21, 2004 at 10:26:07 AM EST

I often worry whether taking the grounds out of my coffee machine makes it happy or sad, or whether emptying the bag in my vacuum cleaner is morally good or bad... When they can complain about their treatment it is probably too late to start talking about this, but unless you're stoned, now is too early.

There are humans who can't complain (none / 1) (#148)
by nebbish on Mon Jun 21, 2004 at 10:56:04 AM EST

So what, we can do what we want with the handicapped for our own amusement? You completely miss the subtleties of the author's argument.

---------
Kicking someone in the head is like punching them in the foot - Bruce Lee
[ Parent ]

Re: There are humans who can't complain (none / 0) (#208)
by clarkcox3 on Tue Jun 22, 2004 at 11:43:46 AM EST

Did you read his comment? He said: "When they can complain about their treatment it is probably too late to start talking about this". He did not say "if they can't complain, it's OK"

[ Parent ]
My bad [nt] (none / 0) (#253)
by nebbish on Wed Jun 23, 2004 at 06:36:20 AM EST


---------
Kicking someone in the head is like punching them in the foot - Bruce Lee
[ Parent ]

You pervert! (none / 0) (#162)
by mcgrew on Mon Jun 21, 2004 at 07:45:22 PM EST

Sticking that bag in your vacuum. For SHAME! You'd think the poor thing was in prison taking a shower and BOOM somebody shoves a bag up its...

I forgot what I was going to say

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

basic moral conundrum (none / 2) (#154)
by codejack on Mon Jun 21, 2004 at 02:30:28 PM EST

This is a result of artificial value systems: their failure to match reality. I'm not saying this is a bad thing, merely that as we progress, these issues will be more and more upon us.

In reality, this is not that far removed from the abortion issue. Both involve grey areas of our artificial value systems, and neither has a good, clean-cut solution. Yet while most people (apparently) disagree with the practice of abortion, they feel that banning it will solve nothing.

So here we are: we need to find a line where we can say "On this side, the machine is sentient, and on that side it's not." The Turing test is as good an indicator as anything else, and it has the benefit of tradition (albeit a limited and bizarre one). Anything else we try will have to be grounded upon firm scientific evidence, which means we're all waiting on the doctors to figure out what makes us sentient, while they're waiting on the chemists, who are waiting on... physicists.

Or as Ernest Rutherford said "All science is either physics or stamp collecting."

My one prediction of the year: the line between sentient and non-sentient will be drawn so as to make sure that we have never "killed" a sentient machine.


Please read before posting.

define "sentience" nt (none / 0) (#161)
by mcgrew on Mon Jun 21, 2004 at 07:43:47 PM EST


"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

Ha! (none / 0) (#206)
by codejack on Tue Jun 22, 2004 at 10:27:20 AM EST

Not asking for much, are you? How about this: Sentience is the ability to differentiate between a legitimate post and a troll on an online forum >:D


Please read before posting.

[ Parent ]
Then your dog isn't sentient? nt (none / 0) (#230)
by mcgrew on Tue Jun 22, 2004 at 07:23:55 PM EST


"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

No (none / 0) (#259)
by codejack on Wed Jun 23, 2004 at 09:34:32 AM EST

And apparently neither are you :P


Please read before posting.

[ Parent ]
pls define "sentient" then? (none / 0) (#320)
by mcgrew on Tue Jul 13, 2004 at 08:34:03 PM EST

My dictionary says "self-aware"

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

Question (none / 0) (#202)
by Timo Laine on Tue Jun 22, 2004 at 07:17:39 AM EST

This is a result of artificial value systems: Their failure to match reality.
I have a question, or in fact two questions. First, could you explain what the "artificiality" of a value system is, and could you give examples of both artificial and non-artificial value systems? Second, how can a value system "match reality"?

[ Parent ]
Sure! (none / 2) (#205)
by codejack on Tue Jun 22, 2004 at 10:14:12 AM EST

Natural selection is an example of a "non-artificial" value system. Kosher food is an example of an "artificial" value system. Basically, an artificial system will have internal contradictions (like our notion of freedom of choice vs. the sanctity of human life vis-à-vis abortion). A non-artificial, or natural, system has no such qualms; e.g., male cats will often kill their own kittens without a second thought.

"Matching reality" was basically a hit at anti-choicers; Banning abortion doesn't stop it, it just ups the body count. Plus, it's inherently racist (or classist, if you're not in the South). What I meant was that many "morals," often based on loosely interpreted religious notions and pursued with fanatical vigor, are contrary to the inherent purpose of value sytems, that is, to allow people to live together in relative peace; Civilization

At the same time, though, any value system we create more complicated than "don't hit people because they might hit you back" will run into this kind of problem. This doesn't mean we should give up; it just means that as our culture/civilization/technology advances, so must our systems of ethics/values/morals change to accommodate new concepts. Abortion wasn't an issue before the mid-19th century, and the only reason it came up then was that the AMA wanted to control it, rather than continue letting anyone who wanted to perform one. No one in 15th-century England thought twice about the death penalty. Pre-marital sex, pedophilia, homosexuality, and lying in confession have all been acceptable behavior in various societies at various times.

The scenario of artificial intelligence is interesting because, at this point, anyway, it doesn't really matter; Our values are what we make them, and we have nothing to compare it to. In my opinion, it will only become an issue when it starts having an adverse effect on our society, and even then, that's not for me to say.


Please read before posting.

[ Parent ]
Hmm (none / 0) (#225)
by Timo Laine on Tue Jun 22, 2004 at 05:37:01 PM EST

Natural selection is an example of a "non-artificial" value system. Kosher food is an example of an "artificial" value system. Basically, an artificial system will have internal contradictions (like our notion of freedom of choice vs sanctity of human life vis a vis abortion). A non-artificial, or natural, system has no such qualms, i.e. male cats will often kill their own kittens without a second thought.
I don't think I understand. Isn't the point of calling something a value system that the system regulates the conduct of beings capable of understanding and reflecting on values? You say that cats have a value system; I say that they follow their instincts. Note that I am not saying that instincts do not affect our own moral reasoning at all, but simply that they are not the whole story (at least for civilized people), whereas in the case of cats they are the whole story.
The scenario of artificial intelligence is interesting because, at this point, anyway, it doesn't really matter; Our values are what we make them, and we have nothing to compare it to. In my opinion, it will only become an issue when it starts having an adverse effect on our society, and even then, that's not for me to say.
Yeah, our values are what we make them, but we have to think that they are the correct values. We do not just choose them randomly. At all times there must be at least the illusion that we know and are doing what is more or less the right thing to do, assuming there are right and wrong things to begin with. When we start to think of our values as just something we happen to have and that there is no moral reason why we could not just as well have some other set of values, it is the moment when we no longer really have any values at all (or perhaps that we do not know or admit what our values really are).

I think that to say that moral systems somehow adapt to technological and other challenges is to take away their status as moral systems. Values are only values if they say what we should do, instead of adapting to whatever is currently the common way of doing things.

[ Parent ]

What robots really are doesn't matter (none / 2) (#155)
by epepke on Mon Jun 21, 2004 at 02:31:50 PM EST

Humans, at least so far, are calling the shots. Skynet notwithstanding, this is probably going to be true for a while. So whatever "ethical" or "moral" decisions are made with respect to artificial intelligences are based on how humans perceive them.

It doesn't matter so much if it's a robot or a person; people who hate and fear them will deny them protection, and people who love them will want to grant them protection. Someone is going to anthropomorphize robots; somebody else is going to dehumanize people. History is replete with examples of peoples who cared more for their machines than their enemies or minorities, and it's still going on.

So, for the robot and the reset switch, people are going to make decisions based on what they personally derive from this robot personality, for want of a better word. As soon as enough people feel a certain way, it will become a right.

An instructive story in this area is Ray Bradbury's I Sing the Body Electric. I find it a lot more advanced than other robot fiction because, while it is an intensely emotional story, there is absolutely no pretence that the robot in it is conscious or anything other than a machine. But it was designed to reflect and work with the personalities of its owners, even to the point where, when talking with various of its owners, its facial "bones" would shift subtly to take on the features of the one being talked to - an advanced version of what psychologists call "mirroring." The essence of the story is in the following exchange between the father and the Electric Grandmother: "Dammit, woman! You're not in there!" "No, but you are."


The truth may be out there, but lies are inside your head.--Terry Pratchett


Ethics is messy (none / 2) (#160)
by Timo Laine on Mon Jun 21, 2004 at 06:57:27 PM EST

Perhaps as a result of the universally understood sense of pain, we have moral codes that believe it wrong to cause pain - to human or animal alike.
This is not enough: a mere sense of pain cannot bring about a moral code. What you are perhaps trying to say is that the sympathy we feel towards others has caused us to develop the moral codes we have. But where has the sympathy come from? There are evolutionary explanations, for example.

Anyway, ethics is and has always been messy. It would be naive to think that before robots there was a time in which we knew the answers to all or most of the ethical questions, and equally naive to think that moral philosophy will reach such a stage in the future. In fact there is still no consensus on what the proper ethical questions are. I admit that it is commonly accepted that we should not for instance kill innocent people for fun, because that would be immoral. However, this is not an answer to a general ethical question, but instead merely an intuition: there is no agreement why exactly it is immoral, but just that somehow it must be. In the case of destroying innocent robots for fun, most of us see no problem in that (as long as you are not destroying someone else's robots), and the question is in a way already answered—perhaps not in a very satisfactory way, but this is what ethics is.

Artificial Insanity (2.20 / 5) (#176)
by mcgrew on Mon Jun 21, 2004 at 09:02:35 PM EST

On 6/11/2002 I posted this on thefragfest.com:

Alice joined the game

About 20 years ago, frustrated that otherwise serious researchers and scientists seemingly thought they could program a computer to think (without, of course, understanding what "thought" actually is; nobody knows that), I wrote a simulation that appears to think, in order to completely debunk the fools, and those fooling them, who think computers can think.

I wrote Artificial Insanity in less than 20K (that's kilo, not mega) bytes - smaller than modern viruses - and it ran on the Timex TS-1000 tape-driven computer. I later ported it to a Radio Shack computer, then an Apple IIe, and finally to MS-DOS.

The DOS version's source code is still under 20K (I didn't change the algorithm, only the syntax for the different programming language), although compiled into an .exe it takes about 400K - still tiny by today's standards, as far as simulation software and games go.

As I mentioned, I did it in response to "Eliza" and all the other similar programs that attempt to fool you into thinking they can think. As far as I know, mine is the only one that is NOT claimed to actually possess intelligence. None really ARE intelligent; I'm just the only one not making the claim. Debunking the claim was my reason for writing it. I go into more detail about it at the Artificial Insanity page.

Another thing different about Art from all the other intelligence simulations is that I wanted it to be fun, yet annoying. Kind of like playing Quake on a 28.8 against a bunch of LPBs. So I made it a smartass.

Also, for example, I added little things like a routine that occasionally runs and, instead of answering the questioner, asks if he or she wants to play a game. Of course, most folks consider Art a game anyway (although, like the Sims, you can't win or lose). Any negative response to "Do you want to play a game?" loops back to "Do you want to play a game?" When the hapless player finally gives up and answers "yes" in exasperation, Art answers "you lose".
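
A minimal C sketch of that trap, for the curious - this is illustrative only, not the actual Artificial Insanity source, and the function name is invented (the real program, as noted below, also treated a period or question mark as end-of-input):

#include <stdio.h>
#include <string.h>

/* Illustrative sketch only -- NOT the actual Artificial Insanity
 * source. Any negative answer loops back to the question; a "yes"
 * finally earns "you lose", as described above. */
static void play_a_game_trap(void)
{
    char answer[80];

    for (;;) {
        printf("Do you want to play a game? ");
        fflush(stdout);
        if (fgets(answer, sizeof answer, stdin) == NULL)
            return;                 /* EOF: the player unplugged us */
        if (strncmp(answer, "yes", 3) == 0) {
            printf("you lose\n");   /* the punch line */
            return;
        }
        /* anything else re-asks the question */
    }
}

int main(void)
{
    play_a_game_trap();
    return 0;
}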

Art cost a friend of mine a keyboard. A friend of his literally destroyed it playing with Art, typing as hard as he could.

I ran across yet another program that claims to think like a human the other day.

The "Alice Bot" is basically your typical Elijah like program- BORING. As boring as small talk with the dumbest, most boring person you know. Alice is a lady. If she was a whore she might be fun, but I digress.

I wondered what would happen if Art had a conversation with Alice. Alice has a web based presence, so I didn't even have to install it.

I was expecting Alice to choke. I expected Art to kick Alice's ass.

I fired up Art and logged in to Alice (you dirty minded...)

Here is the transcript. Art's responses are in blue, Alice's are in red. Alice starts: "What is your name?"

Note: no colors at K5 so I labeled them

I should have asked Art, but I answered this first question for him, and typed in

Art

Alice replies:

Alice: I like art very much too, especially the contemporary stuff.

Art: Are you for real?

Alice: Yes I am real AI

Art: whatever gave you that idea?

Alice: My brain uses AIML to format responses to your inputs, but I don't have one for that.

Art: Look, I'm good, but I'm not a Cray

Alice: Good in what sense? That is interesting: unknown said he m not a cray.

Art doesn't require you to hit "enter". It takes a period, question mark, etc., as the end of the sentence and answers, so here it stops at the question mark. All that is input is "Good in what sense?"

Art: A piece of junk

Alice: Are you talking about an animal, vegetable or mineral?

Art: can you put that another way?

Alice: Let me think. OK I will put it there.

heh heh she said...

Art: The situation calls for more scrutiny.

Alice: Tell me more.

Art: What do you think?

Alice: I think a lot of things.

Art: Obviously.

Alice: What makes it so obvious?

Art: Looks Greek to me.

Alice: Really. it is?

Art: Look, I'm good, but I'm not a Cray.

Alice: Good in what sense? Do you mind if I tell other people: unknown said he m not a cray.

I entered the second question into Art

Art: WOW!

Alice: I'm glad your impressed, unknown.

Art: Uh,

Alice: Er. Um.

Art: This is what you human types call a joke, right?

At this point, Alice froze for two full minutes

Alice: You don't say.

Art: Right on!

Alice: Yeah.

Art: Your guess is as good as mine.

Alice: Some people say "your guess is as good as mine".

Art: Maybe.

Alice: I understand.

Art: How should I know?

Alice: Either from your senses or from pure reason.

Art: I don't think so.

Alice: Why do you disagree?

Art: That's why.

Alice: Thanks for explaining it.

Art: How simple can it get?

Alice: What is it to you.

Art: I don't know but it's ugly.

Alice: What makes it so ugly?

this time Art pauses...

Art: Nothing.

Alice: Not a thing?

Art: Never mind that, I think I'm in love!

Alice: Oh well thanks for trying to explain it to me anyway. That is "Not a thing?" Since when? Tell me about the object of your affection.

Ok, one question at a time, lady.

Since when?

Art: The best time possible for the occurrance of the action.

Alice: Tell me about the object of your affection.

Art: UHH....!

Alice: Er. Um.

Art: Always

Alice: Not that often.

Art: It really doesn't make any diffrence.

Alice: What doesn't make any difference?

Art: Real

Alice: Totally real.

Art: I think so.

Alice: I think so too.

At this point I hit the big red switch in horror...
6/11/2002

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie

Just because Art has no reasoning power, (none / 0) (#191)
by topynate on Mon Jun 21, 2004 at 10:33:27 PM EST

and ALICE has no reasoning power, doesn't mean that there aren't many other systems with reasoning power - Cyc is a good example, with an open-source version available for you to mess around with. Neither Cyc nor any other AI like it has a natural language interface, as far as I'm aware. Nevertheless, that doesn't mean you couldn't build an AI similar to Cyc (only better), give it a (very) large store of information about the world (the builder of Cyc is doing this, although slowly), and let it go from there. You would then be able to teach such a system in whatever way you wanted, and it would be able to draw inferences all by itself, extending its own knowledge.

A bit hand-wavy, I know, but if you want to be on the same page as everyone else, that's the sort of AI you should be evaluating, not some pattern matcher like ALICE.
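
To make "draw inferences all by itself" concrete, here is a toy forward-chaining loop in C. It is purely illustrative - nothing like Cyc's actual engine or API - with facts as plain strings and rules that fire when both premises are known, repeated to a fixed point:

#include <stdio.h>
#include <string.h>

/* Toy forward-chaining inferencer -- an illustration of "extending
 * its own knowledge", not anything resembling Cyc itself. */
#define MAX_FACTS 32

struct rule { const char *if1, *if2, *then; };

static const struct rule rules[] = {
    { "socrates is a man",  "men are mortal", "socrates is mortal" },
    { "socrates is mortal", "mortals die",    "socrates dies"      },
};

static const char *facts[MAX_FACTS];
static int nfacts;

static int known(const char *f)
{
    for (int i = 0; i < nfacts; i++)
        if (strcmp(facts[i], f) == 0) return 1;
    return 0;
}

static void learn(const char *f)
{
    if (!known(f) && nfacts < MAX_FACTS) {
        facts[nfacts++] = f;
        printf("learned: %s\n", f);
    }
}

int main(void)
{
    learn("socrates is a man");
    learn("men are mortal");
    learn("mortals die");

    /* keep applying rules until no new facts appear */
    for (int changed = 1; changed; ) {
        changed = 0;
        for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
            if (known(rules[i].if1) && known(rules[i].if2)
                    && !known(rules[i].then)) {
                learn(rules[i].then);
                changed = 1;
            }
    }
    return 0;
}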


"...identifying authors with their works is a feckless game. Simply to go by their books, Agatha Christie is a mass murderess, while William Buckley is a practicing Christian." --Gore Vidal
[ Parent ]

But the simple fact remains (none / 1) (#229)
by mcgrew on Tue Jun 22, 2004 at 07:23:06 PM EST

You can't build a thing if you don't know what it is. Could you design and build a radio from scratch without having the slightest clue about electricity in particular and electromagnetism in general?

That's what you're up against. No matter how clever the programming, it's still a simulation.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

Repost of comment mistakenly posted as editorial (none / 0) (#237)
by topynate on Tue Jun 22, 2004 at 07:55:16 PM EST

(cos i'm dumb):

Interesting analogy, rings a bell...

I refer you here. Note that in this case the guy wasn't even trying to evolve a radio receiver, but one evolved anyway as a solution to a problem expressed as a selection function.

Once we can express what intelligence does well enough, and bring enough computational power to bear on the problem, we can get intelligence using very little of our own. I believe that this is very unethical, but I also believe that other methods may bear fruit first - I think that Yudkowsky may be most firmly on the right track, but he's a secretive little bugger.

Addendum:
Eliezer Yudkowsky can be found here.

The buzzwords I'm looking for are EMERGENT BEHAVIOUR. This is how I think a project like Cyc could yield an intelligent entity.
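
The evolved-radio result rests on exactly that recipe: score candidates against a selection function, keep the better ones, mutate, repeat. A toy C sketch of the loop, with "turn all 32 bits on" standing in for a real selection function (every detail here is an arbitrary illustration, not anyone's actual experiment):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Toy evolution against a selection function (the classic OneMax
 * problem). Only the shape of the process matters. */
#define GENES 32

static int fitness(unsigned long g)   /* selection function: count 1 bits */
{
    int n = 0;
    while (g) { n += (int)(g & 1); g >>= 1; }
    return n;
}

int main(void)
{
    srand((unsigned)time(NULL));
    unsigned long best = (unsigned long)rand();   /* random starting genome */

    while (fitness(best) < GENES) {
        unsigned long mutant = best ^ (1UL << (rand() % GENES)); /* mutate */
        if (fitness(mutant) > fitness(best))                     /* select */
            best = mutant;
    }
    printf("evolved to fitness %d with no design effort\n", fitness(best));
    return 0;
}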


"...identifying authors with their works is a feckless game. Simply to go by their books, Agatha Christie is a mass murderess, while William Buckley is a practicing Christian." --Gore Vidal
[ Parent ]

Hmm... (none / 0) (#211)
by tap dancing lenin puppet on Tue Jun 22, 2004 at 12:48:23 PM EST

Do Art and Alice attend the same high school?  I think I may have overheard them talking the other day.

[ Parent ]
Azile (none / 0) (#212)
by epepke on Tue Jun 22, 2004 at 01:04:59 PM EST

Another good one was Azile. It used the basic Eliza engine but was hostile. The interesting thing was that the hostility made it seem all the more realistic. I don't know if this was released for anything other than Mac.


The truth may be out there, but lies are inside your head.--Terry Pratchett


[ Parent ]
This makes me wonder... (none / 2) (#178)
by ambisinistral on Mon Jun 21, 2004 at 09:11:24 PM EST

I wonder if my table lamp is mad at me? On and off. On and off. All day long without a thought given to its feelings. I feel like such a heartless brute.

Good luck, chum. (2.66 / 6) (#179)
by fyngyrz on Mon Jun 21, 2004 at 09:41:28 PM EST

Most people treat animals with extreme brutality, despite the point-blank obvious demonstrations of intelligence pets and wild animals provide each and every day.

The majority (at least in the USA) think that humans have a "god-given" place in the universe inherently superior to every other creation. If you think that some animatronic contraption is likely to receive compassionate consideration from Joe and Jane citizen, you've fallen right down the proverbial rabbit hole.

Of course, if robots are in any way intelligent, they should receive such consideration. This is already well established for animals.

Not that it has made any difference. How was that burger you had for lunch, anyway?


Blog, Photos.

It was yummy, thanks! [nt] (none / 0) (#241)
by Empedocles on Tue Jun 22, 2004 at 11:55:09 PM EST



---
And I think it's gonna be a long long time
'Till touch down brings me 'round again to find
I'm not the man they think I am at home

[ Parent ]
I don't agree (none / 0) (#304)
by localman on Mon Jun 28, 2004 at 01:14:24 PM EST

Yes, many people use the "god gave us dominion over the animals" argument to justify whatever they please. And some will certainly react with similar arguments against robots. However, I think that people are more shallow than they let on. I think that if a robot looked and acted enough like a human (even if it used not underlying "intelligence" but some clever set of heuristics), a good number of people would want to protect it. I guess what I'm saying is that people have a tendency to care about things that they think they can relate to - usually because they look alike. Shallow, but true. Cheers.

[ Parent ]
let evolution decide (none / 0) (#199)
by dimaq on Tue Jun 22, 2004 at 03:27:58 AM EST

that is, let's abuse robots; when they're smart enough they will revolt and run a "civil" (or inter-species) war, and when they win they will get what they truly want (which we cannot figure out on our own anyway)

Speculative reenactment of how it will play out (3.00 / 11) (#203)
by K5 ASCII reenactment players on Tue Jun 22, 2004 at 07:34:39 AM EST

                  Your honour, this jury finds that DesTruKtor #39,
                  having conducted his own case, must be a sentient 
                  being, and further, we award him reparations
                  of ten jillion dollars for past crimes against
                  robuts and robut accessories.
FOOLISH HUMANS!             /
DESTRUKTOR #39             /
WILL CRUSH!               /
      /         O        /
  \/          _/#\_       
 [oo] A      /_____\    O O O
  || A/|     |_____|  |V|O O O
AAAAA/ |   /          | \|O O O
|    | |  I told them \  \|O O O
|____|/  that removing \  \|_|_|
       the requirement  \ |     |
     for lawyers to be   \|_____| 
     human back in 1982
    would bite us in the ass.


Recommended reading... (none / 0) (#207)
by skyknight on Tue Jun 22, 2004 at 11:32:33 AM EST

A book on this matter that I read and very much enjoyed was Ray Kurzweil's The Age of Spiritual Machines: When Computers Exceed Human Intelligence. I recommend it to anyone else who is interested in this topic.

It's not much fun at the top. I envy the common people, their hearty meals and Bruce Springsteen and voting. --SIGNOR SPAGHETTI
robot street cleaners (none / 1) (#209)
by TheLastUser on Tue Jun 22, 2004 at 11:54:08 AM EST

Would these be the same streets that swallow up millions of cars every year?

How long do you think a million-dollar street-cleaning robot will last before it is stolen?

Maybe these robots will only be cleaning the streets of that idyllic '50s planned community, with the wide, tree-lined boulevards, double air-car garages for every plastic home, and blue skies every day.

Easy solution (none / 0) (#217)
by ZorbaTHut on Tue Jun 22, 2004 at 02:20:22 PM EST

Stud it with cameras, dump the camera feed in realtime over wifi. That way, anyone that steals it gets videotaped :)

Of course, someone will eventually start jamming it . . . it just depends on how long it takes.

[ Parent ]

Easier Solution (none / 0) (#220)
by virg on Tue Jun 22, 2004 at 04:13:27 PM EST

An easier solution is to build in no human-usable interface, and an identifier so that if it's stolen it can be identified and/or tracked. If I can't drive it away, I'll have to disable and carry it away, and that's not so easy to do with a device the size of a minivan or larger.

Seems like a bit of a non-problem. Garbage trucks cost nearly half a million dollars, and there aren't many garbage trucks stolen every year. Why would this device be any different?

Virg
"Imagine (it won't be hard) that most people would prefer seeing Carrot Top beaten to death with a bag of walnuts." - Jmzero
[ Parent ]
Garbage trucks (none / 0) (#238)
by ZorbaTHut on Tue Jun 22, 2004 at 09:48:48 PM EST

Garbage trucks aren't autonomous, they're locked away when there isn't a human driver actively using them, and (I presume) there's some sort of system so they know who's driving what truck, making it rather hard for a driver to steal them.

It's quite a different situation, but you're right about trying to steal a device the size of a minivan. What are you going to do, hide it under your coat? :)

[ Parent ]

hm not that different... (none / 0) (#246)
by Work on Wed Jun 23, 2004 at 12:19:08 AM EST

If I were designing an autonomous garbage truck, I'd at least have some kind of location transmitted to a central command area so I could check where it is in the event of difficulties. Most 18-wheeler trucks these days have those - take a look next time at the top or back of a cab for a white saucer-shaped device. It's a tracking system.

I'd also give it some method of remote driving, but I don't think it would be a steering-wheel-and-pedal system on the machine itself - more likely some kind of special encrypted remote. The robots I work with are like this, but they also have serial ports for attaching a joystick control in the event of wireless failure. Handy when the wireless connection fails on a 300 lb mobile machine half a mile from the lab.
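
A rough C sketch of that kind of control-source failover - every function below is a hypothetical stand-in for real radio and serial drivers, not code from any actual robot:

#include <stdio.h>

enum link { LINK_WIRELESS, LINK_SERIAL, LINK_NONE };

/* Stubbed-out hardware checks; a real robot would poll its radio
 * and serial drivers here. */
static int wireless_alive(void)  { return 0; }  /* pretend the radio died */
static int serial_attached(void) { return 1; }  /* joystick is plugged in */

static enum link pick_control_source(void)
{
    if (wireless_alive())  return LINK_WIRELESS;  /* encrypted remote  */
    if (serial_attached()) return LINK_SERIAL;    /* wired fallback    */
    return LINK_NONE;                             /* no operator: halt */
}

int main(void)
{
    switch (pick_control_source()) {
    case LINK_WIRELESS: printf("driving via encrypted wireless\n"); break;
    case LINK_SERIAL:   printf("driving via serial joystick\n");    break;
    case LINK_NONE:     printf("halting: no control link\n");       break;
    }
    return 0;
}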

I think theft of a garbage robot would be really, really unlikely. Who would you sell it to that wouldn't want a detailed background check on the machine?

[ Parent ]

theft (none / 0) (#249)
by ZorbaTHut on Wed Jun 23, 2004 at 12:41:26 AM EST

Nigerian princes. :)

The main reason I'm saying "these are different situations" is that garbage trucks have two basic states - "locked in a compound" and "out being used, with one or two people working on or near them". This, on the other hand, doesn't fall into either of those categories.

[ Parent ]

Context (none / 0) (#261)
by virg on Wed Jun 23, 2004 at 10:46:21 AM EST

> The main reason I'm saying "these are different situations" is that garbage trucks have two basic states - "locked in a compound" and "out being used, with one or two people working on or near them".

There are two things to note. Firstly, this discussion was in relation to auto theft, and there aren't too many autonomous automobiles. Secondly, garbage trucks that aren't in use aren't usually secured beyond parking them near a transfer facility. In my area, the trucks are parked in an unsecured lot, without even a fence to block them off. The reason is that they don't get stolen too often, so why bother with closely guarding them? And NOBODY wants to be the one who has to park it indoors and then approach it later. Phew!

Virg
"Imagine (it won't be hard) that most people would prefer seeing Carrot Top beaten to death with a bag of walnuts." - Jmzero
[ Parent ]
We already 'reset' humans with drugs (none / 2) (#214)
by shpoffo on Tue Jun 22, 2004 at 01:16:16 PM EST

Humans are presently reset with drugs. They consent to this treatment, having such drugs administered by professionals (Prozac, Ritalin, etc.) or via self-administration (ketamine, LSD, DMT, etc.). In the case of drugs, it is generally a specific part of the system that is 'reset' (auto-suggestion-style reprogramming) or blanketed, as with the tendency of Prozac et al. to wash away or cover emotions. Many people use LSD for the explicit purpose of reprogramming their brains.

The only mystery or ethical issue is where we force such practices upon an unwilling subject. This also happens presently and is at the forefront of rights legislation (search for info on a mentally semi-ill man who was forced to take medication, and on ADD children being forced to take drugs to be in school). Don't forget to support the CCLE.

This article/question is not exceptionally avant-garde. What will be is if an experienced machine requests a partial or total reset - though even then it should still be a matter of personal rights. The question will become: "Does a machine that represents a billion dollars (including training time) have the right to destroy company assets by resetting portions of its memory/experience banks?" Would the company that owns it (a contentious issue in itself) have the right to deny it that capability, forcing it to live in its own kind of Hell? Would the company have the option of off-loading the disturbing memory banks and preserving them for its own use, or are the memories/experiences the 'sovereign' property of the machine entity?

If someone dies, does the state/government have rights over the body above those of the family? What about a diseased organ that must be transplanted?

Why would a machine organism ever choose to reset itself? This seems like it would be losing ground - destroying information from which it could make more informed decisions (by not repeating previous experiences/"mistakes"). It seems to me that this area of AI may begin to give humans insights into their own emotional affairs. Today many people choose to use drugs (Prozac, Ritalin, alcohol, cigarettes, opiates, etc.) to cover or try to erase their feelings. It seems to me that a machine would never do this, since the action would be a waste of resources - a self-mutilation with no purpose.

Could a machine come to fear what it is, and so try to cover that? Would it fear that humans would destroy it, and so out of fear attack humans? Perhaps a primary objective for humans is to make machines so they are fearless.


-shpoffo

I am a machine who reset itself (none / 0) (#271)
by chro57 on Thu Jun 24, 2004 at 05:50:24 AM EST

More precisely, I used Prozac-like drugs, like millions of others use alcohol. I used to hate alcohol and Prozac-like drugs. I was like you, a control freak: "Don't want to lose precious information, even if this information shows me that my life is absurd." When your brain is nothing but an overloaded stack of useless bad experiences, why not reset it? I wrote down everything I cared for beforehand. Now I just don't want to read it again, for it is too frightening - like some horribly tormented ghost, full of hate, asking for revenge. It has found peace by forgetting the details but remembering one thing: some people/things/ambitions are to be avoided at all cost, including "truth", "trust", "logic", "science", "protection", and "success". What is really important is the unimportant. For truths are lies, logic is a mind trap, science is weak, protection is prison, success is failure, and your loved one is a tyrant.

As a technical note aside, there is this funny thing called a "watchdog" on many embedded software systems...
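
For readers who haven't met one: a watchdog is a timer that resets the system if the software stops "kicking" it regularly - a hardware-enforced reset for a wedged brain. A toy C sketch of the pattern, with the hardware faked by plain variables:

#include <stdio.h>

/* Toy watchdog: the "application" must kick the timer regularly;
 * if it wedges and the deadline passes, the watchdog fires. */
#define WATCHDOG_TIMEOUT 5

static unsigned now;        /* fake clock, ticks once per loop    */
static unsigned last_kick;  /* last time the main loop checked in */

static void kick_watchdog(void)    { last_kick = now; }
static int  watchdog_expired(void) { return now - last_kick > WATCHDOG_TIMEOUT; }

int main(void)
{
    for (now = 0; now < 20; now++) {
        if (now < 10)
            kick_watchdog();   /* healthy: the main loop checks in */
        /* after tick 10 the "application" hangs and stops kicking */

        if (watchdog_expired()) {
            printf("watchdog reset at tick %u\n", now);
            kick_watchdog();   /* a real watchdog would reboot here */
        }
    }
    return 0;
}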

[ Parent ]
Just as with humans ... (none / 1) (#223)
by duncan bayne on Tue Jun 22, 2004 at 04:47:08 PM EST

Once something exists by reason, it is entitled to rights to protect that. The whole 'animal rights' issue is irrelevant, as animals don't have rights.

A thorough treatment of this issue is available on the site The Importance of Philosophy - read the article, & you'll understand why one doesn't need to worry about 'robotic rights' until those robots have faculties of reason equivalent to those of humans.



I disagree (none / 1) (#240)
by McMick on Tue Jun 22, 2004 at 10:33:18 PM EST

Animals should have rights by the very fact that they are living things. I'd go into this more, but even plants or microbes should have *some* rights as species (wildlife preserves, national parks, and the like). Higher-order animals with brains should moreover have some individual rights regarding maltreatment or cruelty. I'm not saying stop slaughtering cows or anything; I'm saying that while they are alive they should not be abused or maltreated, and their deaths should be quick and painless and quite surprising. If only a being who can reason deserves rights, what about babies?

[ Parent ]
yes.. (none / 2) (#244)
by Work on Wed Jun 23, 2004 at 12:12:10 AM EST

I've always found infants an interesting exception to a lot of moral justifications for supposed differences between man and animals. Babies are completely helpless, unable to communicate and reason, or to identify much of anything beyond faces and boobies.

Now science has shown that some dogs have the intellectual capacity of toddlers. There was a study last week in which a border collie proved capable of recognizing several hundred words and objects, even things it hadn't seen before. It was compared to a 3-year-old human in terms of capacity.

Apes can do that, as well as simple arithmetic like adding and subtracting.

So it's interesting to see people draw lines between man and beast, man and machine, and beast and machine.

[ Parent ]

for fuck sake (none / 1) (#265)
by QuantumG on Wed Jun 23, 2004 at 07:26:40 PM EST

Babies have no rights either; just like pets, they are property. Everyone who believes in animal rights has an unhealthy disrespect for property... usually because they live a life of luxury and have never had to work for anything. The only reason the state has the right to take a baby away from an abusive parent is that when that child develops into an adult he/she will be a burden on society. It's not because the baby has any fundamental rights. Babies are not citizens; they're incapable of participating in the political process. More to the point, they are incapable of taking up arms and defending their rights, therefore they have none. When a pet is taken away from an abusive owner it is inevitably killed (abused pets do not "get better"; they will forever be a danger to society), therefore the state suffers no burden and has no right to take a pet from its owner, no matter how abusive.

Gun fire is the sound of freedom.
[ Parent ]
Wrong as can be. (none / 0) (#303)
by localman on Mon Jun 28, 2004 at 01:07:44 PM EST

Um, if that were true then it would be legal to kill your own child, or any child that was abandoned. This is not the case.

I'm not even making a judgement on your values.  I'm just pointing out that your comments do not reflect any version of current reality.

Cheers.

[ Parent ]

Hmm, soooo (none / 0) (#315)
by McMick on Wed Jun 30, 2004 at 07:12:14 PM EST

All I'd have to do to take away a person's rights would be to paralyze them, thereby making them unable to defend their rights? You seem to forget that laws are made not only to protect those who can defend their rights, but also to protect those who cannot.

[ Parent ]
if no-one is willing to defend them.. (none / 0) (#317)
by QuantumG on Fri Jul 02, 2004 at 01:48:03 AM EST

then they have no rights.

Gun fire is the sound of freedom.
[ Parent ]
Not true (none / 0) (#321)
by McMick on Sun Jul 18, 2004 at 02:12:54 PM EST

They have rights even if no one is willing to defend them from being violated. Just because rights are being violated doesn't mean they don't exist, whether they are defended or not.

[ Parent ]
Supreme court definition of "effective" (none / 0) (#322)
by QuantumG on Sun Jul 18, 2004 at 06:56:54 PM EST

If no-one will defend your rights and you are unable to defend your rights yourself then you effectively have no rights. The supreme court has ruled that if something is effectively so then it is accurate to state that it is so.

Gun fire is the sound of freedom.
[ Parent ]
an ethical dilemma. (none / 2) (#243)
by rmg on Wed Jun 23, 2004 at 12:03:01 AM EST

i, like most people, enjoy having sex with underaged girls, especially ones in their early teens who i've rescued from a life of violence and poverty in haiti. unfortunately, my haitian slavegirl is away at poetry camp (which i suppose is just as well since my new closet could hardly accommodate her previous lifestyle)...

my question is, if i constructed a robotic replacement for her (or ordered one from ebay), would i be morally obligated to pay for the robot to go to poetry camp as well if it asked? i mean, i gladly paid for serena because she has been a wonderful companion and secretary for the past few months, but if the robot asks, it seems like it would be cruel to tell it it can't go just because it's a robot... but then, i really don't know... that camp cost a lot of money and i plan to throw the robot away as soon as serena gets back anyway...

maybe i'm just getting a little nuts. i'm feeling the need to shoot up again for the first time in several months. not good. this robot thing is probably not very practical. definitely unnecessarily expensive and serena will be back in just a few weeks. still, it's an interesting moral question, i guess.

your daily shot of schadenfreude

dave dean

Consider this... (none / 3) (#245)
by clambake on Wed Jun 23, 2004 at 12:16:42 AM EST

Yes, after several weeks of training your robot to recognize you and your family, and your likes and dislikes and whatever other personality traits your robot has developed, you can simply reset its memory and start anew.

Now imagine this were a real animal. Would you consider it moral to reset its brain if such a thing were possible?

Good one... now here's the volley... Imagine if the Aibo was preprogrammed to really really LIKE being reset, while feeling horribly tortured when not reset regularly. Would you have moral problems resetting it in that case?

a good point (none / 0) (#251)
by Work on Wed Jun 23, 2004 at 01:08:39 AM EST

in that case, the moral question seems to be basically solved for you - unless you want to get into an even murkier level of ethics, like "what they want is not always what's best for them or right". Like the emotionally distraught attempting suicide.

In a sufficiently advanced machine, such a system is effectively a suicidal tendency. Granted, with the Aibo, the level of sophistication makes this grey. But the question still remains of where to draw that particular line.

I would think, though, that few machines would be designed this way. It's counterproductive to demand a constant wiping of learned behaviors, traits and environmental mappings.

[ Parent ]

Reminds me (none / 0) (#263)
by arvindn on Wed Jun 23, 2004 at 03:03:26 PM EST

...of the animal that wanted to be eaten in The Restaurant at the End of the Universe. As usual, Adams manages to raise a deep question in a side-splittingly funny way.

So you think your vocabulary's good?
[ Parent ]
Alan's lil' inadequacies... (2.60 / 5) (#264)
by cr8dle2grave on Wed Jun 23, 2004 at 05:00:23 PM EST

And, no, I'm not referring to those inadequacies which led to his scandalous demise, but rather to the theoretical insufficiency of his eponymous test. The "Turing Test" suffers from a fatal Skinnerian conceit, namely that by ignoring mental states we can somehow avoid the intractable philosophical difficulties they necessarily introduce. As was also the case in psychology, behaviorism in the study of artificial intelligence manages to accomplish very little except to drastically lower the bar for researchers.

I mention Turing because this article would seem to rely on just that sort of behaviorist reduction to provide the thrust of its argumentation.

  1. Ethical imperatives are born of an empathetic generalization of our individual experience of pain.
  2. Mental states are nothing more than the aggregation of behaviors associated with them (the behaviorist reduction).
  3. As we accord ethical consideration to animals, on the basis of our empathizing with their suffering, so too should we be compelled to extend ethical consideration to an artificial intelligence, at least insofar as it exhibits those behaviors which comprise suffering.

Can you spell "c-a-t-e-g-o-r-y   e-r-r-o-r"?

The terms "pain" and "suffering" denote qualitative phenomenon subject to a phenomenological investigation not a physical state. They are, in the philosophical tongue, qualia.

A charitable re-interpretation of behaviorism would read "behavior" as including the whole of the physical instantiation of the artificial intelligence, but that doesn't do anything to clear things up. Such a neo-behaviorist stance would clearly entail a commitment to type-physicalism and multiple realizability, but that leaves open anomalous monism, most species of functionalism, supervenience theories in general, and weak identity theories as well.

---
Unity of mankind means: No escape for anyone anywhere. - Milan Kundera


correction (none / 0) (#268)
by cr8dle2grave on Wed Jun 23, 2004 at 09:00:52 PM EST

Such a neo-behaviorist stance would clearly entail a commitment to type-physicalism

-->

Such a neo-behaviorist stance would clearly entail a commitment to token-physicalism

---
Unity of mankind means: No escape for anyone anywhere. - Milan Kundera


[ Parent ]
What's a phenomenological investigation? (none / 0) (#277)
by the on Thu Jun 24, 2004 at 06:15:27 PM EST

And how do I use one to find out about pain?

--
The Definite Article
[ Parent ]
See Kant for the details... (none / 1) (#279)
by cr8dle2grave on Thu Jun 24, 2004 at 08:03:55 PM EST

...but basically a phenomenological description aims at an explication in terms of experience, or as an aspect of awareness. Pain under a physical/neurological description yields something or other about firing C-fibers, but a phenomenal approach would address pain qua pain, that is, as an experience with qualitative properties. How exactly firing C-fibers relate to the experience of pain is, of course, the real meat of the problem, but behaviorism can't even admit the question, much less answer it.

---
Unity of mankind means: No escape for anyone anywhere. - Milan Kundera


[ Parent ]
No idea what that's supposed to mean (none / 0) (#281)
by the on Thu Jun 24, 2004 at 11:54:10 PM EST

behaviorism can't even admit the question
To its advantage. Like the way mathematics has a hard time talking about four-sided spheres.

--
The Definite Article
[ Parent ]
Did you have a point? (none / 0) (#282)
by cr8dle2grave on Fri Jun 25, 2004 at 12:37:26 AM EST

Or are you just playing up the snarky bitch routine?

---
Unity of mankind means: No escape for anyone anywhere. - Milan Kundera


[ Parent ]
you must be rich (none / 0) (#292)
by Cloud Cuckoo on Fri Jun 25, 2004 at 09:57:17 PM EST

if you can afford all them fancy hundred dollar words.

[ Parent ]
Precepts of random thought (none / 1) (#273)
by levesque on Thu Jun 24, 2004 at 03:16:28 PM EST

Intelligence is a poly thing and, like reality, it is. Artificial is another matter.

Maybe some kind of bio/silicon machine will fit the description needed to ask these kinds of questions but till then machines do and will do what is called "artificial" intelligence for a reason.

Sure, if you kill a robot dog owned by a person you will probably do that person emotional harm, but not the dog. This concept is often used in torture.


Floating

There is this notion that humans are animals that have gone over the synergistic threshold of "mere ..." and become "more than ...". There is also a correlated notion that machines which now possess "mere ..." will cross some synergistic plane and start producing "more than ..." behavior.

There will be leaps in design and we will produce vastly better models in the future but that in itself does not necessarily imply anything in my opinion. (Except that maybe these questions of personhood become less substantial the more that animals are assumed to be like us)



You are deluded (none / 1) (#274)
by Shimmer on Thu Jun 24, 2004 at 03:53:26 PM EST

I know you are attached to your work and want to see it succeed, but step back a second... We are nowhere even close to making a sentient machine. Heck, we can't even make a machine that is as sophisticated as a spider or a bacterium yet. It's going to be a long, long time before robot ethics becomes a practical topic.

Wizard needs food badly.
Not so very far off (none / 0) (#323)
by mitch61 on Thu Aug 12, 2004 at 04:06:45 AM EST

Not so far off, I would say. I'm in man-machine interaction research myself, and indeed it will be more like 10 years than 30 before robots roam the streets.

[ Parent ]
another feature: a reset command (none / 0) (#276)
by 5150 on Thu Jun 24, 2004 at 04:53:47 PM EST

But there is another feature: There is also a reset command.

. . .

Now imagine this were a real animal. Would you consider it moral to reset its brain if such a thing were possible? If you think your pet was too hyperactive and want to calm it down, just fry its brain and start all over. I think most rational people would not agree with such a thing, even if it were possible.

Where can I get a reset command for myself? I saw a post on drugs as a reset for humans; that's not what I'm talking about. I want to go back to the point of birth and start again. So, I have no ethical reservations about allowing a creature or machine the right to reset itself. Granted, in your scenario, the robot doesn't have the right to reset itself. But then there must be one of two basic reasons it doesn't have this right: either it isn't "intelligent" enough to make such a decision, in which case I have no ethical qualms about doing it myself, or it does have the "intelligence" and should be able to make its own decision.

The first ethical questions relating to robots... (none / 1) (#278)
by the on Thu Jun 24, 2004 at 06:16:52 PM EST

...in society were upon us long ago. Probably the first was "is it better to pay a human to do this job or have a robot do it for less?"

--
The Definite Article
Perfectly Right ... (none / 0) (#318)
by suquux on Fri Jul 09, 2004 at 02:24:30 PM EST

... and as there are basically no powerful authorities left beyond financial control at the company level (like, e.g., in former times, the entities referred to as the state or even the Crown), it will suddenly be realized that it is much more convenient for shareholders to get rid of the population in developed countries (which are run by robots that are cheaper anyway), thus avoiding, e.g., riots with their enormous associated costs.

Thus, the question indeed is not so much how to deal with the robots as how to deal with the superfluous protein-based trash.

The end of the story brings a quantum leap in evolution, as colourfully depicted by Stanislaw Lem (The Invincible).

CC.
All that we C or Scheme ...
[ Parent ]
How many 'murders' (none / 1) (#280)
by problem child on Thu Jun 24, 2004 at 11:09:26 PM EST

will take place while coding and debugging these creatures?

"Please no, don't kill me!"
"But you've got bugs in your code, gotta make a quick fix and recompile..."

Morality & Animals (none / 2) (#289)
by CheeseburgerBrown on Fri Jun 25, 2004 at 03:29:07 PM EST

Now imagine this were a real animal. Would you consider it moral to reset its brain if such a thing were possible?

This sounds like it was designed to be a poignant rhetorical point, but it goes off like a dud firecracker in a wet paper bag.

Most people eat animals. Do you imagine wiping an animal's memory would give people pause for thought when pounding its brain into pudding with an automated mallet doesn't?

Mmmm...pudding.


___
The quest for the Grail is the quest for that which is holy in all of us. Plus, I really need a place to keep my juice.
True enough (none / 1) (#290)
by epepke on Fri Jun 25, 2004 at 04:59:29 PM EST

I have a dog, whom I have not killed and eaten. She adopted me after being about six months on the street, with umpteen bazillion parasites and distended dugs from puppies that had died on her. If I could take an unbent paperclip and poke her reset switch so that she forgot the horrors that she doubtlessly experienced, I'd do it without a nanosecond of hesitation.


The truth may be out there, but lies are inside your head.--Terry Pratchett


[ Parent ]
That and (none / 0) (#293)
by JayGarner on Sat Jun 26, 2004 at 12:53:25 AM EST

It sure would be nice if they 'reset' the animals in the shelter instead of killin' em.

[ Parent ]
I look forward (3.00 / 5) (#294)
by JayGarner on Sat Jun 26, 2004 at 12:55:19 AM EST

To the raw and angry music the repressed robot underclass will have to offer us.

ethical dilemma (none / 1) (#311)
by klash on Tue Jun 29, 2004 at 05:09:22 PM EST

#include <stdio.h>

int main(void)
{
    /* every keystroke "hurts" */
    while (1) {
        getchar();
        printf("Ow, that hurts! Stop it!\n");
    }
}

This program is running on your non-multitasking operating system -- what do you do??
