I am writing my master's thesis on open-ended, self-enhancing AI and have been studying the field for a good many years. Prepare for a long rant. (I am from Sweden, so please be patient with any poor grammar and spelling.)
The danger of extremely advanced AI is hardly that it turns "evil" (as one poster suggested) and goes on a killing spree. Remember that we live in a capitalistic society. When a $1,000 computer can do the same job that an expensive employee is currently doing, guess what happens? Obviously the same thing that happened when robots invaded the factories: the country as a whole gets much richer, but individuals may not easily adjust to being unemployed.
Life is a competition for resources. It always was and always will be, since resources have theoretical limits. If a species (or whatever) can do all the productive things that another can, but cheaper, faster and better, it is simply a matter of time before that species takes over the show. The company, nation, clan, etc. with the most efficient deployment of resources beats the others in the competition, and will thus favour AI.
Well then... Can you engineer such an AI? Remember, it does not have to be sentient or pass the Turing test or behave like H.A.L. to compete with humans. AIs can already monitor crowds for known criminals, play the stock market, serve as telephone receptionists (when you order your plane tickets) and a zillion other things. Each new task that is accomplished could potentially make an entire profession obsolete. Initially people can just migrate to more complex (and quite possibly more fulfilling) areas of work. But what happens when the last bastions are being threatened? OK, I know, there are times when people want human contact. But how many professions are there that really need people? Psychotherapist? Prostitute?
Hans Moravec has an estimate of the difference in processing power between a brain and current desktops which is very hard to argue with. He basically looks at the number of neurons involved in the well-documented first stages of eyesight. These stages (edge detection, etc.) have had to be simulated in robots for them to be able to navigate, so we know how much computing power they require. Since vision is so important, one can assume that this part of our brain has been tightly optimized (there is not enough room in our DNA to optimize every area of the brain). In other words, we have an upper bound on MIPS per neuron, loosely speaking. Just multiply the number of neurons in the brain by this number and voilà: an upper bound of roughly 100 000 000 MIPS for the entire brain.
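The multiplication can be sketched in a few lines. Note that the two input numbers below are illustrative placeholders chosen to reproduce the rough total, not Moravec's exact published figures:

```python
# Sketch of the Moravec-style scaling argument (illustrative numbers only).
# Assumption: simulating early-vision stages (edge detection etc.) in robots
# costs about this much compute.
RETINA_MIPS = 1_000

# Assumption: the whole brain has roughly this many times the neurons
# (or neural volume) of the early-vision circuitry being simulated.
BRAIN_TO_VISION_RATIO = 100_000

# Scale up: compute-per-neuron from vision, times the whole brain.
brain_mips = RETINA_MIPS * BRAIN_TO_VISION_RATIO

print(f"Upper bound: {brain_mips:,} MIPS")  # roughly 100,000,000 MIPS
```

The point is only that this is an upper bound derived from a well-measured subsystem, not a guess about the whole brain at once.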
Eliezer Yudkowsky has a thorough analysis of a possible path to smarter-than-human AI.
One important point that he raises is that once AIs are as good at programming as humans, they can improve on their own design. The improved versions can then improve themselves faster still, and so forth. Super-exponentially!
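A toy model of this feedback loop (my own illustration, not Yudkowsky's): if each system's capability sets the rate at which it can improve itself, the growth rate itself keeps growing, which is exactly what makes the curve steeper than any fixed exponential.

```python
def self_improvement(steps, capability=1.0, rate=0.1):
    """Toy model: capability feeds back into its own improvement rate.

    Each step, the system improves itself in proportion to how capable
    it already is -- so the per-step growth factor keeps increasing.
    All numbers are illustrative.
    """
    history = [capability]
    for _ in range(steps):
        capability = capability * (1 + rate * capability)
        history.append(capability)
    return history

trajectory = self_improvement(10)
```

With a fixed exponential, the ratio between successive steps stays constant; here it climbs every step, which is the signature of the "and so forth" the argument relies on.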
What can we do? Should AI research be banned? Hardly. It would just continue illegally, but then it would be dangerous groups (or "rogue nations" or whatever) that get the technology first. No good. My own view coincides with Hawking's. There has to be a re-evaluation of what constitutes the "I". We must be prepared, when the time comes, to let ourselves grow, not just through the comparatively weak art of genetic engineering, but through real interfacing with CPUs and communication devices.
We are already cyborgs (Hawking especially), with mobile phones and palmtops connected to the internet, giving instant access to the entire world (consider how outrageously SciFi these things would have appeared 20 years ago). They are not wired into our neurons yet. But just wait. I will be standing first in line to upgrade.