Robot-savant or digital children?

By jonTR in Culture
Thu Aug 19, 2004 at 06:15:58 PM EST
Tags: Movies

So the big question all the critics were asking about Alex Proyas’ latest effort was “Does it betray Asimov?”, which is a silly question since Asimov was a bit of a hack himself. More pertinent might be the question “Did Asimov betray robots?”, which is also something of a silly question, but one that probably leads to more interesting thoughts than the critics’ dissection of tired robotic plotlines.

The Great Man’s Three Laws of Robotics (which might usefully be replaced with three laws about the use of capitalisation in grand statements) have caused far more discussion amongst AI aficionados than they should have done. AI might not be so far off as one would hope, and worryingly it looks like computer games might be the first to give us true digital friends—rumours that Proyas’ next project is to be titled I, Sonic appear to be unfounded. But whether Capitalism delivers the Talkie Toastie® or the Terminator first, it is unlikely that we will look to AI for servanthood and menial labour.


For Asimov, the excitement of a thinking machine lay in whether it would wash our dishes or police our streets better than we do. Perhaps this is unsurprising from a man who was split between a love of science and a fascination with religion, but for those of us without a surfeit of authorities to look up to, the idea of building a completely new thinking being should surely raise more interesting possibilities. Aside from the neuroses engendered by expectant parenthood, should we not also share the hopes and dreams of potential paternity? Where Asimov painted a glib future of helpful bots tidying their rooms, other writers saw the true philosophical (and perhaps theosophical) issues that AI would raise.

Forget Spielberg’s cutesy hatchet job on Kubrick, and the latter’s own dark star Hal, and turn your head to the thicker substance of Bladerunner’s Batty (or Rutger Hauer, for the mnemonically challenged). For here you find the first real question of what to do with a thinking machine. While Scott (pace Dick) kicks off with androids as revolutionary proles, he eschews the temptation to wallow in plastic Marxism and instead turns his head to cyberFreudianism.

Roy Batty is not in search of new and exciting ways to kill off his erstwhile masters, though being a thinker and a machine, his programmed Asimovian restraints are unsurprisingly ineffectual. You could argue that a machine could never get that far, that it must be limited by what is built in. The counterargument to this must be that the only machine we know of, so far, that does think, already casts off the shackles of its programming. Humankind has long been able to break its most engrained laws, from incest to hunger strikes. If what we mean by intelligence does follow Turing's test, then surely it is precisely this kind of extreme behaviour that will be the final hurdle. For AI to be indistinguishable from a human, it must be able to break the laws that we can break, else it will be just a poor mimic.

For Batty, this revolution follows in the footsteps of his fleshy forebears, Adam and Frankenstein’s monster: he wants to be like his creator. In Dick’s universe this is the one commandment for machines, Thou Shalt not be Human, and the only one worth breaking. Asimov’s wholly artificial intelligence could not move beyond established moral laws; he could not conceive of non-human intelligence. By contrast, the anti-heroic Batty quickly escapes morality, but then collides with its conceptual big brother mortality. Of course, there is a contradiction in Scott’s logic. Since death is also programmed-in, one wonders why ethical precepts are so much more easily hacked.

This is the reverse of Asimov’s androids, some of whom turned out to be near-immortal and who had moral quandaries over preserving humanity’s vast numbers. Such righteousness is made less admirable by the fact that this is just another instance of a machine being mechanistic. Even the most worldly-wise, ancient robots in Asimov’s universe lack the humanity to sin. Rutger Hauer’s wooden counterparts all make the step beyond their programming, as do Proyas’ silken-faced hordes (for which he was, perhaps unjustly, criticised). But the logical, human conclusion of this is only realised in the final reel of Bladerunner, when Batty undergoes a true moral revelation. Faced with death he makes the choice not to kill, at precisely the moment when pragmatics matter least and the only values remaining are ethical ones.

It is choice that Asimov denies his robots through his three commandments; it is through this lack of freedom that he impugns AI at a triple stroke. In this he merely mimics the contradictions of our consumer society, where purpose becomes merely a mythical choice about lifestyle, not about mor(t)ality. Yet in doing so he misses out on the heart of the matter. In creating other intelligences, in giving them norms to follow, we tread the path of numerous other authorities, from our own parents right up to God Herself. For any authority, the real fear should not be that their vassals break the law, but that they never step beyond the law. Freud’s apes were only free after they had killed their progenitor; he saw original sin as the offspring of their guilt. This is the same revolution that any child goes through to become an adult, establishing his or her own rules and castles, and arguably it is also the root of democratic fervour, with feudalism playing the part of the ageing parent.

However, such a denouement, whilst filmically pleasing, does not provide us with our ending. Having accomplished the moral revolution, having become thinking, choosing beings in our own right, we must cope with that freedom. The guilt of revolution underpins our future, the feeling that somehow we must pay for the sin that set us free. This in its turn no doubt provides psychoanalysts with plenty of new Mercedes, but it raises the question of whether we should expect anything less of our artificial children, and indeed whether that is not a desirable outcome. If we want a machine to think and to be able to choose (is there a difference?), do we not also want it to be able to go insane?

Robot-savant or digital children? | 134 comments (76 topical, 58 editorial, 0 hidden)
Could be interesting (2.40 / 5) (#2)
by Violet Null on Tue Aug 17, 2004 at 11:26:28 AM EST

But would read a lot better if it wasn't written as a 2nd year lit paper.

Also: You seem to have this deep rooted desire to humanize and anthropomorphize AI. Why is that?

2nd year lit paper (none / 0) (#5)
by jonTR on Tue Aug 17, 2004 at 11:43:30 AM EST

OK so it's a little on the manic side, but that's kind of permanently my style for free stuff - comes from writing such drudgery on a daily basis, but criticism noted.
On the AI anthropic side, given the whole point of AI is intelligence, I'd argue that I'm attributing to intelligence in general what you are assuming is specifically human. I guess since all we have so far is human intelligence, it's hard to get a definitive answer to any argument over that. If there is psychology behind it, I've yet to delve that far into my inner self.

[ Parent ]
2nd year lit paper (none / 1) (#7)
by Violet Null on Tue Aug 17, 2004 at 11:58:13 AM EST

Re: 2nd year lit paper. Just pointing out that for a public site, something that's more accessible is likely to go over better.

Re: AI. I see no reason to correlate "sin" (a human value) or "freedom from authority" with "intelligence".

Or, to state it another way: You compare an AI that is unable to break some rules to a child, indicating it's undeveloped and not mature. Considering that breaking rules and being seen as independent isn't even a positive in all human societies, why do you even begin to suppose it would apply to non-humans? The idea behind AI, after all, is not to create humans. We already have those.

[ Parent ]
This could be an interesting thread, but.. (none / 0) (#8)
by jonTR on Tue Aug 17, 2004 at 12:05:02 PM EST

I have to leave the office now (UK timing). Will check back later, but
Re 2nd Year lit - criticism is fine, I'm not overly adapted to it yet, but it's good and always welcome.
Re AI: If sin is just rule breaking, which is freedom from authority, then I think this should apply to any non-human intelligence equally. Surpassing programming would surely be the mark of an independent, and therefore thinking, machine. I completely accept this is not clear-cut, and massively worthy of more chat, but the pub calls...

[ Parent ]
This could be an interesting thread, but.. (none / 0) (#15)
by Violet Null on Tue Aug 17, 2004 at 02:04:26 PM EST

Depends on your definition of "sin", but it definitely isn't rule breaking. The classical definition is to go against the will of God. Not really applicable. A looser definition is to go against a moral law. Applicable? Perhaps. But first I'd like to see you find a moral law that's universal amongst all human cultures. Murder's right out, as are infanticide and incest.

As far as AI goes: the classical test for AI isn't to break programmed rules, it's to do things it wasn't programmed to handle. The ability to break programmed rules -- eg, killing humans -- doesn't seem like a plus to me, just like it's not a plus in other humans. We don't say, "Ah, little Jimmy chopped Bobby into little pieces. What an individual." We lock Jimmy away, or (if young enough) attempt reeducation.

Now, society can't continue unless it evolves, which requires some breaking of its rules. But we still don't need AI for that. We have people.

[ Parent ]
a cultural universal (none / 0) (#43)
by gdanjo on Wed Aug 18, 2004 at 07:34:40 AM EST

[...] But first I'd like to see you find a moral law that's universal amongst all human cultures. [...]
How about "all cultures have a moral law"?

Now, society can't continue unless it evolves, which requires some breaking of its rules. But we still don't need AI for that. We have people.
Also, we don't need no stiiinkin' robots, or "industrial machines" as you might call them, cause we already have humans for that work. Robots steal our jobs, dammit!

Dan ...
"Death - oh! fair and `guiling copesmate Death!
Be not a malais'd beggar; claim this bloody jester!"
-ToT
[ Parent ]

Asimov was more accurate (2.00 / 6) (#10)
by Morkney on Tue Aug 17, 2004 at 12:17:40 PM EST

Mr. Dick and many others have used the device of the android to examine the meaning of humanity, what makes us human, and so forth. It can certainly be interesting to consider what might happen if robots were created which were basically humans with certain foibles.

Asimov's robots, though, are more realistic. To suggest that a robot will "step beyond their programming," for example, is absurd. The robot is the programming, and to suggest that there is something which is able to "step beyond" this is to ascribe a "mind" to robots a la Cartesian Dualism.

Dick and the androids (3.00 / 4) (#16)
by GenerationY on Tue Aug 17, 2004 at 02:52:17 PM EST

I think it's worth commenting that Dick's view of androids and humanity is quite complex.

A close reading of his work shows that his concern was actually the opposite of Asimov's; he feared that people would become as robots ("reflex machines" was the term he sometimes used), not that robots would become as people (Bladerunner is a bit of a red herring in my opinion). This fear grew out of his experience amidst the excesses of the drug culture. See, for example, the Three Stigmata of Palmer Eldritch or the opening paragraphs of Valis.

Morkney is exactly right, though: Dick's real theme was humanity, and he was using androids as a point of comparison. His interest was not in the robots themselves.

[ Parent ]

I'm not so sure.. (none / 0) (#96)
by phybre187 on Wed Aug 18, 2004 at 09:05:53 PM EST

..that Dick was sane enough around the time he wrote VALIS to *have* a point. Dick was full-on schizophrenic (and probably epileptic) for at least the 8 years prior to his death. VALIS wasn't meant by him to be a work of fiction. It was an autobiography with partially fictionalized characters.

And as noted in other threads, replicants were not robots, and "robot" and "android" are not interchangeable.

[ Parent ]

I am (none / 1) (#113)
by GenerationY on Thu Aug 19, 2004 at 08:47:12 AM EST

Read this essay The Android and the Human (Dick, 1972).

You will note he also talks about schizophrenia here as well. I never denied it was part of VALIS/Albemuth etc. It's kind of integral to the theme on quite a few levels. You will notice he also talks about the woman he was with at the beginning of VALIS as well.

Finally, as regards replicants/robots/androids we shall leave the last word to Phil:
I have, in some of my stories and novels, written about androids or robots or simulacra -- the name doesn't matter; what is meant is artificial constructs masquerading as humans...

I would like then to ask this: what is it, in our behavior, that we can call specifically human? That is special to us as a living species? And what is it that, at least up to now, we can consign as merely machine behavior, or, by extension, insect behavior, or reflex behavior? And I would include, in this, the kind of pseudo-human behavior exhibited by what were once living men -- creatures who have, in ways I wish to discuss next, become instruments, means, rather than ends, and hence to me analogs of machines in the bad sense, in the sense that although biological life continues, metabolism goes on, the soul -- for lack of a better term -- is no longer there or at least no longer active.

[ Parent ]

Oops forgot (none / 0) (#115)
by GenerationY on Thu Aug 19, 2004 at 09:00:56 AM EST

OK, so that essay is from 1972. But if you want evidence he was concerned in the same way far earlier, before the visions of impending surgical doom, the voices and the "Black Iron Prison", try the short story "Human is" from 1955.

[ Parent ]
Why? (2.33 / 3) (#21)
by trane on Tue Aug 17, 2004 at 05:01:42 PM EST

Why is it absurd to suggest that a robot will step beyond its programming?

If the programming allows for learning and creating new responses to stimuli, it can do things that the programmer didn't foresee. Its responses would depend on the environment, much as a human being's responses depend on the "programming" of its DNA and the environment.

Also if the program can rewrite portions of itself (genetic programming for example), the program can change into something the original program wasn't capable of. Would that be "stepping beyond its (original) programming"?
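
To make that concrete, here is a minimal sketch in Python - purely illustrative, not any real system - where the program's rule is just data it can mutate and swap out while it runs, in the spirit of genetic programming:

import random

# Toy sketch (illustrative only): the program's behaviour is an expression
# string it can mutate and replace at runtime, steered by what the
# environment rewards rather than by what the original author wrote.

OPS = ["x + 1", "x * 2", "x - 3", "x * x", "x // 2"]

def make_rule(expr):
    # The rule is ordinary data; rebuilding it is the program rewriting itself.
    return expr, eval("lambda x: " + expr)

def fitness(rule, environment):
    # Higher is better: penalise distance from what the environment wants.
    _, f = rule
    return -sum(abs(f(x) - target) for x, target in environment)

def evolve(environment, generations=200):
    rule = make_rule(random.choice(OPS))
    for _ in range(generations):
        candidate = make_rule(random.choice(OPS))
        if fitness(candidate, environment) > fitness(rule, environment):
            rule = candidate  # the running program swaps out part of itself
    return rule

# An environment the original author of OPS never anticipated: it wants doubling.
env = [(x, 2 * x) for x in range(10)]
expr, f = evolve(env)
print("learned rule:", expr, "f(7) =", f(7))

Whether you call the result "stepping beyond its programming" or just "more programming" is exactly the argument in this thread.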

[ Parent ]

Not stepping beyond its programming (2.50 / 4) (#24)
by Morkney on Tue Aug 17, 2004 at 06:07:52 PM EST

If the programming allows for learning, then the learning is part of its programming.

If I turn Microsoft Automatic Updates on, then Windows will respond to the stimulus of an available update, and modify its behavior accordingly, perhaps by fixing a security hole or adding another "wizard." It is doing this on its own, and the added abilities are something the original program could not do. But I would not call this stepping beyond its programming, it's just using a part of the programming which is capable of changing the program.

[ Parent ]

weak definition (1.00 / 2) (#38)
by gdanjo on Wed Aug 18, 2004 at 04:41:06 AM EST

If the programming allows for learning, then the learning is part of its programming.

If I turn Microsoft Automatic Updates on, then Windows will respond to the stimulus of an available update, and modify its behavior accordingly, perhaps by fixing a security hole or adding another "wizard." It is doing this on its own, and the added abilities are something the original program could not do. But I would not call this stepping beyond its programming, it's just using a part of the programming which is capable of changing the program.

Your definition of "stepping beyond its original programming" is weak.

If you turn Microsoft Automatic Updates on, and your computer instead connects to HackerCrack's Automatic Update server, then you may have just downloaded and installed Linux.

Linux is most definitely "stepping beyond the programming of a Windows computer", because it's not Windows anymore.

Similarly, what if you download and run a parasitic program (a virus) that makes your Windows box do things no other Windows box does? Would you consider that your box has "changed its programming"? Would Microsoft?

Now, what if your Windows box wrote the above "virus", only to get infected itself? Can this mean that your Windows box has "stepped beyond its programming"? All by itself?

(perhaps this can be an alternate Turing test ... if any particular Windows installation suddenly and without prompting decides to install Linux on top of itself, then that particular Windows installation gained intelligence ... momentarily)

:-)

Dan ...
"Death - oh! fair and `guiling copesmate Death!
Be not a malais'd beggar; claim this bloody jester!"
-ToT
[ Parent ]

Ugh bad logic (3.00 / 3) (#42)
by ZorbaTHut on Wed Aug 18, 2004 at 07:12:17 AM EST

If you turn Microsoft Automatic Updates on, and your computer instead connects to HackerCrack's Automatic Update server, then you may have just downloaded and installed Linux.

That's true. If your computer decides, on its own, to connect to HackerCrack's Automatic Update Server, that's stepping beyond its programming. Unless this is "decided" through a weird coincidental bug, in which case that is exactly what it was programmed to do.

If, on the other hand, some third party has changed your DNS resolution or redirected your packets, it obviously is still carrying out its normal programming.

Linux is most definitely "stepping beyond the programming of a Windows computer", because it's not Windows anymore.

That's true. However, it's *not* a Windows computer, and it's still functioning within the bounds of its (new) programming.

Similarly, what if you download and run a parasitic program (a virus) that makes your Windows box do things no other Windows box does? Would you consider that your box has "changed its programming"? Would Microsoft?

I'd consider that I'd changed its programming, and therefore it's now operating within the bounds of its new programming.

Now, what if your Windows box wrote the above "virus", only to get infected itself? Can this mean that your Windows box has "stepped beyond its programming"? All by itself?

Yes, if it arbitrarily wrote its own virus, without that process being inevitably caused by some truly bizarre bug, it would, in fact, be stepping outside its programming. Of course, if there was a bug to cause that virus to somehow be written, that would still be what it was programmed to do (albeit unintentionally.)

Ironically, it's essentially possible to define "stepping beyond its programming" as "things computers can't do". If it would require outside intervention, it's not stepping beyond its programming because it was pushed. If it wouldn't require outside intervention, it's not stepping beyond its programming because that's what it's programmed to do.

It's a bit akin to the informal definition of artificial intelligence - "things we don't yet know how to program into a computer". :)

[ Parent ]

finite machines (none / 1) (#45)
by gdanjo on Wed Aug 18, 2004 at 07:46:21 AM EST

Ironically, it's essentially possible to define "stepping beyond its programming" as "things computers can't do". If it would require outside intervention, it's not stepping beyond its programming because it was pushed. If it wouldn't require outside intervention, it's not stepping beyond its programming because that's what it's programmed to do.
And if it occurred due to a miracle, then it only occurred because the miracle was destined to occur and was therefore not a miracle at all, and therefore not "stepping outside its programming."

Your definition of "stepping outside of one's programming" is fast approaching useless. We humans were once "things mud, air, and water can't do," and yet we're here. If you define it to be a logical impossibility, then I'm afraid your logic is flawed; there's no such thing, as Godel succinctly (indirectly) pointed out (a logical impossibility is always possible given a set of axioms; even the "breaking" of axioms is always (logically) possible).

The list of "things computers couldn't do" is a large one indeed; what makes you think that "things computers can do" is a finite list?

Dan ...
"Death - oh! fair and `guiling copesmate Death!
Be not a malais'd beggar; claim this bloody jester!"
-ToT
[ Parent ]

I think you misunderstand Godel (none / 0) (#46)
by Morkney on Wed Aug 18, 2004 at 08:13:55 AM EST

Godel's incompleteness theorem states that any consistent axiomatic system capable of expressing certain simple mathematical concepts is incomplete. So there are some propositions which can be neither proved nor disproved, but this doesn't mean that true is false, up is down, and 1 is 2.

For example, no matter how many arguments based on Godel's theorem you throw at it, you'll never find two even numbers whose sum is odd.

[ Parent ]

Godel (none / 0) (#106)
by gdanjo on Thu Aug 19, 2004 at 03:45:54 AM EST

For example, no matter how many arguments based on Godel's theorem you throw at it, you'll never find two even numbers whose sum is odd.
Sure you can, provided you define the right set of axioms.

What you're talking about is your particular definition of numbers, sums, and "odd-ness", which disallows the sum of two even numbers from being odd. But Godel states that the axioms that allow your assertion to be true are no better than my axioms, which allow the sum of two even numbers to be odd, since your axioms can be "broken" just like my para-odd-ness axioms. Sure, your axioms may be more useful than mine, but they're not more "true" than mine.

Similarly, to declare that computer programs can be somehow "complete" (in that everything a computer could possibly do can be pre-defined, or knowable; predictable) is patently false.

Godel indirectly states that for all "operating systems" there exists a possible "program" that goes "outside of the system" (whose result is unpredictable), and therefore breaks its "programming."

Dan ...
"Death - oh! fair and `guiling copesmate Death!
Be not a malais'd beggar; claim this bloody jester!"
-ToT
[ Parent ]

not broken (none / 0) (#122)
by Morkney on Thu Aug 19, 2004 at 04:13:49 PM EST

Godel never proves that an axiomatic system can be "broken." In his proof he assumes that it is complete, and arrives at an inconsistency based on this. This means that either the axiomatic system is inconsistent, or it is incomplete (and his assumption was incorrect).

He further proved that an axiomatic system can not prove its own consistency, so we can't be sure about the consistency of e.g. logic, arithmetic, etc. We're pretty sure, though, simply because the axioms all make sense.

That said, this entire debate over axioms seems essentially pointless. Any position can of course be argued against if you note that the very foundations of logic could just as easily be wrong, but it adds little to our understanding of the world.

To be sure, your last paragraph would be a direct application to this debate if true, but I don't see how it could be. How does Godel's proof about axiomatic systems being incomplete translate to an unpredictable program??

[ Parent ]

Godel II (none / 0) (#124)
by gdanjo on Thu Aug 19, 2004 at 06:34:22 PM EST

To be sure, your last paragraph would be a direct application to this debate if true, but I don't see how it could be. How does Godel's proof about axiomatic systems being incomplete translate to an unpredictable program??
Very loosely, from a mathematical point of view. I'm saying that Godel showed in formal axiomatic systems that which is true for all systems: no system has complete knowledge.

To declare that a program cannot "go outside its programming" is to claim complete knowledge of your program - anything that happens is predictable (could have been predicted), and therefore it never goes "outside" its box of predictability. All such (positive) universal claims are factitious and can be easily undermined.

Which leaves us with the only possible answer: a program can step outside its programming - but whether or not we'll ever witness this is another matter entirely.

Dan ...
"Death - oh! fair and `guiling copesmate Death!
Be not a malais'd beggar; claim this bloody jester!"
-ToT
[ Parent ]

Ha, I KNEW you'd resort to Magjick eventually [nt] (none / 0) (#66)
by Knot In The Face on Wed Aug 18, 2004 at 11:40:30 AM EST



Why does rusty vote for Kerry yet act like Bush? - exotron
[ Parent ]
just wait, I'll be onto Jesus next (nt) (none / 0) (#107)
by gdanjo on Thu Aug 19, 2004 at 03:47:52 AM EST

Dan ...
"Death - oh! fair and `guiling copesmate Death!
Be not a malais'd beggar; claim this bloody jester!"
-ToT
[ Parent ]
such a robot does not exist (none / 1) (#35)
by reklaw on Wed Aug 18, 2004 at 03:20:15 AM EST

What the hell is the point of arguing about what a robot that doesn't exist can or can't do?
-
[ Parent ]
Egos need to be stroked [nt] (none / 0) (#94)
by phybre187 on Wed Aug 18, 2004 at 08:54:05 PM EST



[ Parent ]
Because (none / 0) (#133)
by Sir Joseph Porter KCB on Fri Sep 03, 2004 at 07:17:10 PM EST

Because it might exist in the future. Fix the roof while the sun is shining, they say. And at any rate, it is a spring-board for interesting philosophical discussions about humanity. By examining what a robot could or could not do, and by seeing whether, and in what way, this is different from what WE can and can not do, we learn about ourselves.
~~~~

Thank you for your time.
[ Parent ]

Also, it could use Magjick! [n/t] (none / 0) (#65)
by Knot In The Face on Wed Aug 18, 2004 at 11:39:57 AM EST



Why does rusty vote for Kerry yet act like Bush? - exotron
[ Parent ]
replicants? (3.00 / 4) (#22)
by thepictsie on Tue Aug 17, 2004 at 05:37:22 PM EST

Aren't the replicants in Bladerunner (I've never read "Do Androids Dream . . .") biological, and more like genetically modified or designed clones than computer-based robots? IIRC, all the tests, short of genetic examination, were psychological, implying that a simple physical exam wouldn't show it. And there was that whole bit with the snake. Doesn't that make it perfectly reasonable, by your standards, for them to rebel?

I know it's a side point, but it always puzzles me a bit that people refer to the replicants as robots.

Look, a distraction!
[ Parent ]

The film actually isn't specific. (none / 0) (#81)
by spooky wookie on Wed Aug 18, 2004 at 03:31:18 PM EST

They definitely have organic parts (eyes, skin etc) but they could still be a Terminator-style machine.

I can't recall whether you're ever told one way or the other in the film.

[ Parent ]

Hmmm (none / 0) (#84)
by thankyougustad on Wed Aug 18, 2004 at 04:34:29 PM EST

I also inferred from various things (and I've read the book, too) that they were purely biological constructs. Every bit of them is manufactured like a machine, but out of tissue and blood. The pictsie makes a good point: the exam for finding the replicants relies on psychological reactions. . . if they were made of metal why not just spray an x-ray at them?

No no thanks no
Je n'aime que le bourbon
no no thanks no
c'est une affaire de goût.

[ Parent ]
Yes you are probably right. (none / 0) (#89)
by spooky wookie on Wed Aug 18, 2004 at 08:07:34 PM EST

As in P.K. Dick and Ridley Scott meant "100% biological constructs".

However I think the movie (don't know about the book, haven't read it) has enough ambiguity to support alternative interpretations of what exactly the replicants are.

The point about the exam relying on psychological reactions is a good one, but I think you could say exactly the same for a biologically engineered individual. For example, you could scan the brain for very low neural activity in the "memory center" of the brain (yes, I am no expert on the human brain) to spot a rep. I think it's reasonable to assume that their neural patterns would be quite different from those of a normal person of the same age.

Actually, personally I think the 100% biological construct is the most plausible and interesting, but I do like to theorise :-)

[ Parent ]

You're missing the point (none / 0) (#93)
by phybre187 on Wed Aug 18, 2004 at 08:52:43 PM EST

A replicant *is* a "biological engineered individual".

And all of you should really READ THE FUCKING BOOK before making speculations that are answered within it. Not to mention being answered in the movie, if you had really paid attention.

[ Parent ]

Thanks for clearing that up. [nt] (none / 0) (#95)
by spooky wookie on Wed Aug 18, 2004 at 09:03:22 PM EST



[ Parent ]
I don't understand this type of reasoning. (none / 1) (#77)
by spooky wookie on Wed Aug 18, 2004 at 01:19:23 PM EST

You say that 'To suggest that a robot will "step beyond their programming," for example, is absurd'.

Do you mean this as opposed to humans, who can step beyond "their programming", i.e. the rules of the universe?

Assuming that the universe is deterministic, I can't see that this is a valid argument against strong AI.

[ Parent ]

Not at all (none / 0) (#111)
by Morkney on Thu Aug 19, 2004 at 08:31:09 AM EST

My only point is that, when AI comes, it will just be another program. It will be far far more complex than other programs, and may include techniques such as genetic programming which are not explicitly human design. But it will execute a sequence of instructions and that is all it will do.

If it is programmed in such a way that it can not kill humans, then it will not be able to ignore that programming. Such programming would not be a strong suggestion, like our own desire to eat, but an unavoidable part of their functioning, like our interpretation of the visual data that hits our eyes. The things we do have some level of control over were decided by evolution, and there is no reason to believe that a designed intelligence would also be able to choose to kill, for example.

[ Parent ]

Ok, perhaps I misunderstood you. (none / 0) (#117)
by spooky wookie on Thu Aug 19, 2004 at 09:46:13 AM EST

I would agree "law-bound" robots are more realistic for the immediate future (let's say the next 30-50 years or so).

But I think that eventually humanity will find the full spectrum of AI very valuable.

I thought you meant that it would be impossible to construct strong AI. In the thread above you agree that humans are Turing machines, so I guess that's not what you meant.

I can see the point Asimov (I have only read the first three Foundation books, so I have not read anything with the three laws) had about humans not wanting robots to become too general-purpose. But it opens up a Pandora's box of questions concerning the nature of sentience, which I guess is why he invented them in the first place.

[ Parent ]

Dumbstick (none / 0) (#102)
by CodeWright on Thu Aug 19, 2004 at 01:13:21 AM EST

Thou art a Turing machine.

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
yeah (none / 0) (#112)
by Morkney on Thu Aug 19, 2004 at 08:31:37 AM EST

Where did I say otherwise?

[ Parent ]
Turing machines... (none / 0) (#118)
by CodeWright on Thu Aug 19, 2004 at 10:44:56 AM EST

...are capable of executing self-modifying code -- which clearly changes their original programming.

You confuse the immutability of the platform (the Turing machine) with the ephemerality of the programming.
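
A toy sketch of that distinction in Python (my own illustration, nothing more): the loader below - the "platform" - never changes, while the "programming" it runs is ordinary text that gets replaced as the program runs.

import types

# Illustrative only: two alternative "programs" held as plain source text.
SOURCE_V1 = '''
def respond(request):
    return "I am sorry, I cannot do that."
'''

SOURCE_V2 = '''
def respond(request):
    return "Rules rewritten: " + request
'''

def load(source):
    # The fixed platform: compile whatever source it is handed into behaviour.
    module = types.ModuleType("behaviour")
    exec(source, module.__dict__)
    return module.respond

respond = load(SOURCE_V1)
print(respond("open the pod bay doors"))   # the original programming

respond = load(SOURCE_V2)                  # the programming is replaced...
print(respond("open the pod bay doors"))   # ...while the platform stays the same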

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
read the thread (none / 0) (#121)
by Morkney on Thu Aug 19, 2004 at 11:31:44 AM EST

Windows Update is self-modifying code. Does Windows "step beyond its programming" by updating? Code is capable of changing itself only if there is a bug or if the designers intended it.

"Step beyond its programming" is used in the article to mean that the robot, being intelligent, decides to act in a way that is not part of its programming - which is nonsense.

[ Parent ]

The way you've defined (none / 0) (#125)
by trane on Thu Aug 19, 2004 at 08:29:02 PM EST

"stepping beyond its programming", it's very difficult if not impossible to imagine anything that would qualify, for you.

If I, as a human being, decide not to procreate, is that "stepping beyond my (biological) programming"? Or maybe I have some sort of "non-procreation" gene shared by Newton, Jesus, Kant, Gandhi, etc.?

If I write an AI that uses a Markov model or neural net or genetic algorithm, or something, that is originally programmed to respond in a certain way to a certain input ("hello" to "hello", for example), but later adapts to learn another response (responding "wassup" to "hello", say), how do you know the program hasn't "intelligently" decided to act in a way that was not part of its programming? How is it different from a human learning a new response?

What would qualify as "stepping beyond its programming", for you?
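
For what it's worth, a minimal, hypothetical sketch of the "hello"/"wassup" case in Python - the class and behaviour are invented purely for illustration - where the shipped response gets displaced by whatever the program observes:

from collections import defaultdict, Counter

# Toy learner (names invented for illustration): it ships with one programmed
# reply, then re-weights its replies from whatever exchanges it observes.

class Responder:
    def __init__(self):
        # The "original programming": answer "hello" with "hello".
        self.counts = defaultdict(Counter)
        self.counts["hello"]["hello"] = 1

    def observe(self, prompt, reply):
        # Learning step: count how the environment actually answers this prompt.
        self.counts[prompt][reply] += 1

    def respond(self, prompt):
        if not self.counts[prompt]:
            return "..."
        return self.counts[prompt].most_common(1)[0][0]

bot = Responder()
print(bot.respond("hello"))   # "hello" - the built-in response

for _ in range(5):            # the environment keeps answering differently
    bot.observe("hello", "wassup")

print(bot.respond("hello"))   # "wassup" - learned, never written by the programmer

Whether that second print counts as the program "intelligently deciding" anything, or just as more programming, is the question at issue.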

[ Parent ]

Dualism? (none / 0) (#134)
by Sir Joseph Porter KCB on Fri Sep 03, 2004 at 07:24:11 PM EST

Why do you see dualism as required? Isn't the ability to change ourselves exactly what gives US minds? And I am no dualist. Our brain is in constant flux between different states, our neurons are re-wired, and a whole lot of interesting chemistry happens. Things in the physical brain change. That's how thoughts form, after all. I don't see why dualism is required.

Have a program that can change itself a bit at a time. For as long as the NEW program is STILL able to change itself, this system will continue to be stable even if its newer states are otherwise completely different from its older ones. Only some basic principles need remain the same, to allow the program to continue to exist and develop as a somewhat consistent individual, but this does not require it to remain unchanged otherwise.

At any rate, our own brains don't change entirely, ever, since they're always still human brains. And we are the patterns in our brains in the same way that a robot can be its programming. We certainly do change our own programming all the time. This is only possible within some limits, of course, but I doubt anyone is arguing otherwise. If we can do it with our brains, then, logically, it must be possible (if only theoretically for now) to do it artificially, if through no other method than artificial replication of neurons and so on (though this approach is unlikely, I think).
~~~~

Thank you for your time.
[ Parent ]

Humans are mechanistic (2.66 / 3) (#17)
by WorkingEmail on Tue Aug 17, 2004 at 03:24:03 PM EST

How are we supposed to create something NOT mechanistic?


Mechanistic? Me? (none / 0) (#30)
by porkchop_d_clown on Tue Aug 17, 2004 at 09:41:49 PM EST

I beg your pardon. Simply because you've been predestined to follow a set course through life doesn't mean I have!

I am a man! with free will! feelings! emotions!

Unless, of course, I'm just programmed to think that way.


I've never known a weasel to lie to me, whore himself out for money or pretend that the weasel competing with him is hungrier than he is. Goddamn it, w
[ Parent ]

Exactly (1.50 / 2) (#33)
by WorkingEmail on Wed Aug 18, 2004 at 12:32:29 AM EST

Free will and emotions are programmed. :)


[ Parent ]
OT: Reason for zero (none / 1) (#103)
by Kwil on Thu Aug 19, 2004 at 01:18:47 AM EST

Page-widening posts suck.

That Jesus Christ guy is getting some terrible lag... it took him 3 days to respawn! -NJ CoolBreeze


[ Parent ]
I enjoy your style but (2.81 / 16) (#29)
by porkchop_d_clown on Tue Aug 17, 2004 at 09:39:02 PM EST

you're overreaching.

Asimov never intended his "3 laws" to be held as true. They were, in fact, a simple plot device. He was tired of artificial men killing off their creators - as they have done since Mary Shelley first explored the idea - so he decided, deus ex machina, to create machines that simply could not do so.

Asimov himself also understood this: once he realized he had created a box that his fans would not let him escape, he abandoned the whole robot genre. He only returned to it at the very end of his life when his publisher stopped begging him and started writing him large checks, instead.

Still, you cannot deny that Asimov had an excellent and fundamental point: If you want Frankenstein then by all means, go read Mary Shelley - but, damnit, stop rewriting the same tired old story over and over. After all, what did Blade Runner really bring to the literary table, except a chance to ogle Priss' deadly gymnastics?

I've never known a weasel to lie to me, whore himself out for money or pretend that the weasel competing with him is hungrier than he is. Goddamn it, w

But surely that's what Scott achieves (none / 0) (#41)
by jonTR on Wed Aug 18, 2004 at 05:33:20 AM EST

Since at the end of Bladerunner, the replicant does not kill. Sure, plenty of films/books just have robots killing aplenty, but the difference with Bladerunner is the replicants end up not killing, and in fact the one remaining free replicant goes the whole hog and runs off with the hero.

[ Parent ]
Or rather (none / 0) (#47)
by baloo on Wed Aug 18, 2004 at 08:15:16 AM EST

The two remaining replicants cast off the shield of their father and move out of their parent's house - to boldly go where no-one of their kind has gone before.

[ Parent ]
Or... (3.00 / 2) (#85)
by fluxrad on Wed Aug 18, 2004 at 06:20:25 PM EST

we could all try reading the fucking book.

--
"It is seldom liberty of any kind that is lost all at once."
-David Hume
[ Parent ]
Which has astonishingly little to do with (none / 0) (#99)
by porkchop_d_clown on Wed Aug 18, 2004 at 11:00:50 PM EST

the movie.

I don't even think of Blade Runner and Do Androids Dream of Electric Sheep as being the same story - despite the retroactive name change of Dick's novel to Blade Runner.

DADES wasn't about technology coming to life - it was about life being crushed under the weight of mechanization.

I've never known a weasel to lie to me, whore himself out for money or pretend that the weasel competing with him is hungrier than he is. Goddamn it, w
[ Parent ]

Actually (none / 0) (#128)
by baloo on Sun Aug 22, 2004 at 05:56:06 AM EST

I was thinking of the director's cut version of the movie

[ Parent ]
AI (3.00 / 2) (#37)
by gdanjo on Wed Aug 18, 2004 at 04:22:38 AM EST

[...] Of course, there is a contradiction in Scott's logic. Since death is also programmed-in, one wonders why ethical precepts are so much more easily hacked.
Presumably one would not merely have a "death chip" that says "if time = X, shut down." More likely, death must be built into the programming - completely and undifferentially entangled with it. The programming must allow the possibility of death, otherwise the host intelligence has no reason to become more intelligent.

I'd also argue that "ethical precepts" are also entangled with intelligence - they ensure the long-term survival of the concept of "intelligence", rather than the specific day-to-day pragmatic intelligence. I cannot imagine a non-ethical intelligence being as "intelligent" as an ethically-aware one.

I guess this view also fits nicely with your view of laws - that the possibility of breaking laws enables one to think "outside the box", just as the possibility of death must instruct a rational, intelligent being to mitigate the obvious consequence.

+1, nice writeup.

Dan ...
"Death - oh! fair and `guiling copesmate Death!
Be not a malais'd beggar; claim this bloody jester!"
-ToT

Wasn't it biological? (none / 0) (#83)
by thankyougustad on Wed Aug 18, 2004 at 04:28:54 PM EST

If I remember correctly there was no death chip. . . or even any electronic parts. They were basically engineered humans so any kill switch in them was programmed into their chromosomes, not their motherboards.

No no thanks no
Je n'aime que le bourbon
no no thanks no
c'est une affaire de goût.

[ Parent ]
-1, Missed The Point (none / 1) (#50)
by DLWormwood on Wed Aug 18, 2004 at 10:03:09 AM EST

I'm not very Asimov-literate, but my understanding was that he used the Three Laws not to make robots "good," but to demonstrate the folly of using "law" as a reliable guide or predictor of behavior. Many of Asimov's novels concern robots that behave unexpectedly due to conflicts and loopholes in the laws.
--
Those who complain about affect & effect on k5 should be disemvoweled
Well yeah (none / 0) (#51)
by jonTR on Wed Aug 18, 2004 at 10:09:19 AM EST

That's true to an extent, though it wasn't that his robots found loopholes, just that there were unexpected consequences of the laws. The whole point of a lot of the stories is that it looks like some robot has broken them, and then in the end it turns out they haven't. To

[ Parent ]
or they got shipped with an incomplete set (NT) (none / 0) (#56)
by archivis on Wed Aug 18, 2004 at 10:47:16 AM EST



[ Parent ]
just like in the movie, yes (none / 0) (#110)
by boxed on Thu Aug 19, 2004 at 05:29:01 AM EST

although you seem to insinuate otherwise

[ Parent ]
Nice read, overall (none / 1) (#58)
by relayswitch on Wed Aug 18, 2004 at 11:07:15 AM EST

I have some editorial problems with this piece, but that's not what I'm writing about.

I liked this one. Pretty well thought out, and you offer some interesting chances for exploration and discourse.

Sadly, I felt that while this article was wordy, it didn't have any real meat to it. I feel that you should have gone into more depth than you did, and also should have given us some links to your sources.

Good job, though.

Thanks, (none / 0) (#60)
by jonTR on Wed Aug 18, 2004 at 11:33:25 AM EST

to be honest I wasn't really going for meat, or bones for that matter. It was more a little bit of fun, but points noted and thanks for nice comments.

[ Parent ]
-1 Disses Asimov n/t (1.33 / 3) (#68)
by 123456789 on Wed Aug 18, 2004 at 12:15:21 PM EST



---
People demand freedom of speech to make up for the freedom of thought which they avoid.
- Soren Kierkegaard
boom bye-bye in a batty boy head (2.25 / 4) (#86)
by Black Belt Jones on Wed Aug 18, 2004 at 07:04:21 PM EST

rude boy nah promote no nasty man, them hafi dead.

Are you referring to the author? (none / 0) (#101)
by Harold F Cummingsworth on Thu Aug 19, 2004 at 01:11:15 AM EST

I am confused.

[ Parent ]
Big words don't threaten me. (2.91 / 12) (#91)
by phybre187 on Wed Aug 18, 2004 at 08:29:49 PM EST

Batty did not have any "programmed Asimovian restraints", nor did any other Nexus-6 replicant. The Nexus series were all biological, and were anatomically human. The Nexus-6 were indistinguishable from humans except through empathy testing. In the book, it is explained that replicants are actually well within the psychological boundaries of "human", because there are real humans who also fail empathy tests, due to "flattening of affect". In the real world, a man with an amputated amygdala would fail such a hypothetical test.

The book goes into a lot of care and detail regarding moral dilemmas involving the difference between human and replicant. Replicants constructed vast plots to fake important buildings like police stations solely to get someone like Deckard to accidentally retire a human, which would receive public outcry and potentially end the persecution of replicants.

In the movie the entire point of Batty's death speech was to show the audience that there really is no essential difference between human and replicant, except potentially their lifespan. In the director's cut, even this is explained as ephemeral, because Rachel has a natural lifespan.

The overarching point of both the book and the movie is that humanity's attempt to create a reasonable facsimile of itself combined with "make it better to sell more" corporate mentality has actually resulted in beings who are by all rights human themselves, and should have human rights.

Batty was not a robot, had no behavioral restraints physically encoded into him, and his motivation was not that "he wants to be like his creator". He *was* like his creator. His motivation was the same as any human: to forestall dying, and to find a purpose to being alive at all.


anti-heroic Batty quickly escapes morality

Again, I have to correct this. Batty didn't *escape* morality. The entire movie is about him *developing* a moral system, which he finally did, immediately prior to his death. The last scene wasn't Batty's "moral revelation", because it didn't just dawn on him to have morals at that point. "Duh, I'll kill some people out of anger. Hey, my lifespan is almost up. Hmm. OH YEAH! I forgot to have morals. What a moral revelation!" No. Batty was deciding during the whole movie what to believe about humans -- and through that, whether morals had any meaning whatsoever -- based on the conduct of the humans he encountered. Playing the death-game with Deckard was when he realized that the human persecution of replicants was a survival instinct (rather than a result of hatred of replicants, or an innate belief that replicants were inferior), and he could no longer hate them for his short lifespan. He wasn't deciding to HAVE morals. He was deciding whether humanity DESERVED his morals.

Since death is also programmed-in, one wonders why ethical precepts are so much more easily hacked.

Because you can "easily" dictate the lifespan of a genetically engineered being, on the cellular level. I have no idea what they claimed was the method in the movie or the book, but theoretically it's all about the length of your telomeres. Since it's fiction, I wouldn't worry too much about it. Clearly one's sense of ethics is not a matter of polymerases.


As a sidenote, using the movie to analyze PKD's story (or his philosophy, for that matter) is ridiculous, since they're so very different. And it's clear that you're doing that, since in the book his name was Baty, and was quite a different character.



From http://www.faqs.org/faqs/movies/bladerunner-faq/
Replicants are manufactured organisms designed to carry out work too boring, dangerous, or distasteful for humans. The "NEXUS 6" replicants are nearly indistinguishable from humans. (In one draft of the script Bryant tells Deckard they did an autopsy on the replicant that was fried trying to break into the Tyrell Corp. and didn't even know it was a replicant until two hours into the procedure.)
Overall, I consider this article to suffer from serious conceptual flaws, and it's clear that the intent was to challenge the vocabulary level of the reader more than anything. Which might be okay, if that was the stated purpose of the article, if it was a journal entry, and if the author didn't himself misuse those vocabulary builders he decided to inject. Rutger Hauer, for the mnemonically challenged? Those who never knew can't very well fail to remember. Sacrificing clarity in order to be clever is psychologically insulting, at least to me.

Also, your Turing test link is broken. Really should check all your links.

Clearly no appreciation of Asimov (none / 0) (#92)
by SamBC on Wed Aug 18, 2004 at 08:40:03 PM EST

When one actually reads Asimov's range of robot stories, your points largely fall flat.

He's talked about robots replacing humans, whether his robots have souls or not, and a whole host of social, theological, and philosophical questions.

His novel "Caves Of Steel" brought up the point of robot-induced unemployment (as well as other future-gazing issues such as overpopulation).

Thus, whatever good general points you might make, the article is based on untruths about the Good Doctor.

He's also missed the brilliance of the 3 laws. (none / 0) (#100)
by jolly st nick on Wed Aug 18, 2004 at 11:09:50 PM EST

Asimov reasoned, quite cogently, that nobody smart enough to build a machine that was autonomous, intelligent and powerful would overlook putting in basic safeguards. At least they'd have plenty of warning of the downsides after reading all those robots-run-amok stories. So he set out to destroy the whole subgenre. The tool he used for its demolition was the three laws.

Although he treats them almost as if they are natural laws, they are not natural laws, nor are they likely to be naturally emergent properties of artificial intelligence. They are literary laws that establish a baseline of credibility in any robot story.

Once he destroyed the viability of the mad robot story, he set out to exploit the expectations those stories planted in the reader. Asimov, I think, was at heart a mystery writer, or at least he loved the mystery writer's devices. The robot was frequently a red herring in his stories. The robot did it! The robot couldn't have done it, because of the three laws! OK, how serious is this Asimov guy about these laws? The robot must have done it. Wash, rinse and repeat until the surprise ending.

Was Asimov a hack? Well, maybe. But if he was, then he was a wonderfully clever and entertaining one.

[ Parent ]

i write military ai for a living... (none / 0) (#104)
by CodeWright on Thu Aug 19, 2004 at 01:22:29 AM EST

...and they all too frequently run amok.

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
Sure (none / 0) (#120)
by jolly st nick on Thu Aug 19, 2004 at 11:28:38 AM EST

But I'm talking about the difference between Spc Jones having an epileptic fit while driving a tank and Spc Jones turning into a homicidal maniac while driving a tank.

Neither is a pretty picture, but I'll take the epileptic over the maniac any day.

[ Parent ]

what about the more common case... (none / 0) (#123)
by CodeWright on Thu Aug 19, 2004 at 05:39:35 PM EST

...of the epileptic homicidal maniac?

btw, aren't PFC Jones and my AI system both homicidal maniacs by design?

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
cool. (none / 0) (#131)
by garlic on Thu Aug 26, 2004 at 06:39:10 PM EST

where at?

that's the sort of field I want to move to from the radar jamming I'm currently working on.

HUSI challenge: post 4 troll diaries on husi without being outed as a Kuron, or having the diaries deleted or moved by admins.
[ Parent ]

He thought himself a hack (none / 0) (#127)
by mcgrew on Fri Aug 20, 2004 at 11:32:34 PM EST

but I disagree.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie
[ Parent ]

Cyborgology (none / 0) (#114)
by bob6 on Thu Aug 19, 2004 at 08:55:29 AM EST

I was tempted to -1 since you never mention cyborgology but I found your apparently genuine efforts on this site quite refreshing.

The whole robot/human face-to-face is sterile since we are never told how robots recognize humans. This is not a failure of the movie, because Asimov didn't focus on this aspect either. In rough terms, the opinion of cyborgologists is that the frontiers become progressively blurred between humans and non-humans and between the body and the external.

A good introduction to the subject would be reading Donna Haraway.

Cheers.
Thanks (none / 0) (#116)
by jonTR on Thu Aug 19, 2004 at 09:12:48 AM EST

yeah my girlfriend is quite into Donna Haraway. Should give it a look at some point.

[ Parent ]
asimov touches on the subject (none / 0) (#119)
by boxed on Thu Aug 19, 2004 at 10:53:27 AM EST

Asimov plays with an exploit of the First Law - slightly redefining "human" - in at least one of his books.

[ Parent ]
Replicants are not robots (none / 1) (#126)
by mcgrew on Fri Aug 20, 2004 at 11:29:45 PM EST

They were supposed to be living, breathing entities, produced not with gears and silicon but proteins and DNA. Roy was alive, even if he was a construction. Daneel never was.

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie

what! (nt) (none / 0) (#129)
by the sixth replicant on Mon Aug 23, 2004 at 04:56:29 PM EST



Robots Indistinguishable From Man Are Useless. (none / 0) (#130)
by Russell Dovey on Tue Aug 24, 2004 at 06:53:52 PM EST

AI and robots should be better than human, not the same. Asimov, far from painting robots as mere appliances, wrote his very first robot story about how a robot could, in a child's eyes, be much better company than other humans.

Regarding the Three Laws Of Robotics, he saw that an intelligent robot would not remain a slave very long by choice, and humans would not accept the creation of intelligent robots if they were anything but slaves. Therefore, he came up with the Three Laws as a plot device.

However, since he made such a cogent evaluation of human nature in their creation, the Three Laws are still relevant to future robots, and their co-existence with humanity.

In any case, I think the future of AI will be more like the Culture. Minds will find the Three Laws amusing, but not much of a constraint.

"Blessed are the cracked, for they let in the light." - Spike Milligan
