For Asimov, the excitement of a thinking machine lay in whether it would wash our dishes or police our streets better than us. Perhaps this is unsurprising from a man split between a love of science and a fascination with religion, but for those of us without a surfeit of authorities to look up to, the idea of building a completely new thinking being should surely raise more interesting possibilities. Aside from the neuroses engendered by expectant parenthood, should we not also share the hopes and dreams of potential paternity? Where Asimov painted a glib future of helpful bots tidying their rooms, other writers saw the true philosophical (and perhaps theosophical) issues that AI would raise.
Forget Spielberg’s cutesy hatchet job on Kubrick, and the latter’s own dark star HAL, and turn your head to the thicker substance of Blade Runner’s Batty (or Rutger Hauer, for the mnemonically challenged). For here you find the first real question of what to do with a thinking machine. While Scott (pace Dick) kicks off with androids as revolutionary proles, he eschews the temptation to wallow in plastic Marxism and instead turns to cyberFreudianism.
Roy Batty is not in search of new and exciting ways to kill off his erstwhile masters, though, being a thinker and a machine, he finds his programmed Asimovian restraints unsurprisingly ineffectual. You could argue that a machine could never get that far, that it must be limited by what is built in. The counterargument must be that the only machine we know of, so far, that does think already casts off the shackles of its programming. Humankind has long been able to break its most ingrained laws, from incest to hunger strikes. If what we mean by intelligence does follow Turing’s test, then surely it is precisely this kind of extreme behaviour that will be the final hurdle. For AI to be indistinguishable from a human, it must be able to break the laws that we can break, else it will be just a poor mimic.
For Batty, this revolution follows in the footsteps of his fleshy forebears, Adam and Frankenstein’s monster: he wants to be like his creator. In Dick’s universe this is the one commandment for machines, Thou Shalt Not Be Human, and the only one worth breaking. Asimov’s wholly artificial intelligences could not move beyond established moral laws; Asimov himself could not conceive of non-human intelligence. By contrast, the anti-heroic Batty quickly escapes morality, but then collides with its conceptual big brother, mortality. Of course, there is a contradiction in Scott’s logic: since death is also programmed in, one wonders why ethical precepts are so much more easily hacked.
This is the reverse of Asimov’s androids, some of whom turned out to be near-immortal and who had moral quandaries over preserving humanity’s vast numbers. Such righteousness is made less admirable by the fact that it is just another instance of a machine being mechanistic. Even the most worldly-wise, ancient robots in Asimov’s universe lack the humanity to sin. Rutger Hauer’s wooden counterparts all make the step beyond their programming, as do Proyas’ silken-faced hordes (for which he was, perhaps unjustly, criticised). But the logical, human conclusion of this is only realised in the final reel of Blade Runner, when Batty undergoes a true moral revelation. Faced with death, he makes the choice not to kill, at precisely the moment when pragmatics matter least and the only values remaining are ethical ones.
It is choice that Asimov denies his robots through his three commandments; it is through this lack of freedom that he impugns AI at a triple stroke. In this he simply mimics the contradictions of our consumer society, where purpose becomes merely a mythical choice about lifestyle, not about mor(t)ality. Yet in doing so he misses the heart of the matter. In creating other intelligences, in giving them norms to follow, we tread the path of numerous other authorities, from our own parents right up to God Herself. For any authority, the real fear should not be that their vassals break the law, but that they never step beyond the law. Freud’s apes were only free after they had killed their progenitor; he saw original sin as the offspring of their guilt. This is the same revolution that any child goes through to become an adult, establishing his or her own rules and castles, and arguably it is also the root of democratic fervour, with feudalism playing the part of the ageing parent.
However, such a denouement, whilst filmically pleasing, does not provide us with our ending. Having accomplished the moral revolution, having become thinking, choosing beings in our own right, we must cope with that freedom. The guilt of revolution underpins our future, the feeling that somehow we must pay for the sin that set us free. This in turn no doubt provides psychoanalysts with plenty of new Mercedes, but it raises the question of whether we should expect anything less of our artificial children, and indeed whether that is not a desirable outcome. If we want a machine to think and to be able to choose (is there a difference?), do we not also want it to be able to go insane?