A.I. is the latest in a long line of films to tackle the subject of interactions between humans and machines with intelligence and/or emotion, making it a good time to bring up some of the moral and ethical issues that revolve around the premise.
In A.I., David is the first child programmed to be capable of experiencing love. His creator(s) believe that emotion will be the key to unlocking a deeper level of intelligence. It appears they were right, because once David's love-function is activated he slowly becomes more human. He develops survival instincts, he has desires, and in the end he even has dreams.
Could humanity ever learn to feel emotion towards a machine?
A.I. seems to answer no to this question. David's mother abandons him because, though she feels strongly for his childlike qualities, she cannot fully forget the fact that he is not, like her, organic, no matter how much humanity he exhibits. His creator loves David in the way that I might love a particularly brilliant comment I've made. His care for David is simply ego-stroking. Joe and Teddy actually seem to genuinely care for David more than anyone else in the film, and they aren't even programmed for emotion.
If we as a society were ever able to advance A.I. research to the point of a creation like David, I find it unlikely that our reactions would be any different. Humans have consistently shown fear and prejudice against things that are new and different, as portrayed by the Flesh Fair in A.I. Regardless of how realistic and accurate the simulation of humanity may be, as long as we're aware that the entity is not organic, that recurring fear of the unknown will haunt us.
A topic that has been less explored, in my opinion, and which A.I. also tackles, is how such an intelligence would react to a world that refuses to accept it.
The actions of David, which probably make up the strongest point of criticism of A.I., are decidedly mechanical. He acts with single-minded devotion and idealism to regain the love that dominates his existence, pursuing it even in the face of what we would consider impossible odds. Only for a brief moment does David give in, once he sees the many copies of himself. At this point he immediately tries to destroy himself, not because the loss of uniqueness matters in itself, but because it shatters his illusion that he is special and that his specialness could somehow eventually lead to gaining the love he so greatly desires.
Sci-fi in general usually takes a slightly different approach to this issue. The most common story seems to be of the robots resenting humanity and eventually turning on them. I find this an unlikely situation in reality if machines truly gain emotion and independent thought. The reason is not some sort of Asimovian restrictions (undoubtedly the system would be far too complex to apply such blanket concepts, especially since such entities blur the lines of humanity), but fear. If a being is capable of self-preservation and emotion, then it will naturally fear its own death or the electronic mind's equivalent of pain. Fear of punishment is what I'm talking about, of course. Law enforcement for robotic beings would undoubtedly be highly effective as long as they were considered such and not human, which brings me to the most relevant issue.
What does it mean to be alive?
If a being (mechanical or otherwise) thinks for itself, feels, and fears its own demise, is it not more or less alive? Does a sentient, feeling machine have the right to be considered human, or at least a citizen as opposed to property?
A.I. portrays my opinion best at the Flesh Fair, when they bring David forth to be destroyed and the crowd protests, because he fears his own death and pleads for his life. I believe strongly that anything that fears its own demise (if it's true emotion and not just a script) can be considered living. If such a being has an intelligence level up to or beyond that of humanity, then I believe it should be given all the rights associated with being human. To not do so would be another form of slavery.
Sadly, if such technology ever develops, this is the way I see reality going. Fear of the unknown. Fear of a robot revolt. Reluctance to accept the new. All of these things add up to a society unwilling to accept what I would consider its newest members. So these beings, capable of thought along the same lines as you and I, are bought and sold as property, nothing but objects. They can be destroyed, killed, on a whim. And hundreds of years later everyone will regret it terribly.
History does repeat itself.