
Physicist Stephen Hawking Contributes to AI Debate

By joegee in News
Sun Sep 02, 2001 at 04:51:43 PM EST
Tags: Technology

From The Associated Press comes a story about physicist Stephen Hawking, who warns that humans must be prepared either to enhance the species through genetic engineering in order to compete with future machine intelligences, or to perish.


In Dr. Hawking's opinion, with machines doubling their raw processing power every eighteen months, "the danger is real that they could develop intelligence and take over the world."

Hawking joins the growing list of public figures (including Sun Microsystems' Bill Joy) who warn of dangers associated with the development of machines that are smarter than us.

As I was researching links I found an older article with an interesting alternative perspective on artificial intelligence, Salon.com's take on Jaron Lanier's "One-Half a Manifesto", entitled Artificial Stupidity.

Do we stand at the brink of an AI holocaust, or are futurists off the mark in their predictions that mankind will create a machine that possesses the complexity and nuance of a human intelligence? Even if a machine is intelligent, will it be self-aware?

Poll
Will a machine ever be sentient?
o Yes, undoubtedly within twenty-five years. 17%
o Possibly, but we have major obstacles to overcome. 32%
o I don't believe we know enough about intelligence and self-awareness to answer this. 23%
o I doubt it, no digital system will ever be able to approach the variability of the human mind. 3%
o Never, machine sentience is impossible. 4%
o Why worry about machines? We have yet to prove that people are sentient. 18%

Votes: 64

Related Links
o The Associated Press
o enhance the species through genetic engineering
o Bill Joy
o Salon.com
o Jaron Lanier's "One-Half a Manifesto"
o Artificial Stupidity


Physicist Stephen Hawking Contributes to AI Debate | 43 comments (42 topical, 1 editorial, 0 hidden)
Man vs. Machine (3.00 / 9) (#1)
by mmcc on Sat Sep 01, 2001 at 08:43:49 PM EST

Stephen Hawking seems to be totally out of touch with reality.

the danger is real that they could develop intelligence and take over the world.
Hmmm, let's see:

  • Machines cannot self-assemble; they are built by humans.

  • Machines are not self-sufficient; they rely on humans for their energy.

  • Machines cannot repair themselves. They need maintainers.

  • Machines are (generally) not fault tolerant.

  • Machines are highly inefficient (compared to biological organisms).

  • Machines are not built from commonly available material.

    Machines cannot dominate the world without dominating the real world.

    Hawking seems to be forgetting that no machine has anywhere near the flexibility of the human body.

    Evolution has been tailoring our bodies (and minds) to suit our environment forever and a day. Are we really so arrogant as to believe that we can better evolution in the measly amount of time we have left on this planet?

    Nobody has even come close to building any self-reproducing machine. Think about it. A male and a female human (like other animals) can survive alone, independently of civilization, and produce offspring without any additional technology.

    Which machine can do that?

    Even if you make machines that are intelligent like humans (which I seriously doubt) they still have to overcome all the above problems to dominate the world.

    Any kind of human-machine hybrid would still be subject to the same problems.

    Machines aren't flexible. Things that aren't flexible are doomed to fail in nature.

    The idea of machines dominating the world is the domain of science fiction, not reality. Perhaps Hawking could get a job as a Hollywood script writer? Matrix IV here we come :-)



  • It is possible (3.50 / 6) (#2)
    by cunt on Sat Sep 01, 2001 at 09:43:02 PM EST

    All you have managed to argue is that the machines that currently exist will not take over the world. I am certain that everyone already knew that. None of your objections address what machines could be, but merely what they are.

    Of course, we do not know if we will ever create self-sufficient machines, but the idea of such a machine is certainly tenable. If you doubt this, look at the biological machines that inhabit much of the planet.

    [ Parent ]
    Skynet (2.00 / 2) (#5)
    by Leadfoot180 on Sun Sep 02, 2001 at 02:05:45 AM EST

    Yeah, didn't you ever see Terminator 2? :-P

    [ Parent ]
    Nonsense (3.25 / 4) (#7)
    by Tezcatlipoca on Sun Sep 02, 2001 at 05:56:44 AM EST

    What machines will do and look like in 10000 years is beyond our imagination.

    All the objections you raise assume technology stalls today in its current primitive state.


    ------------------------------------
    "They only think of me as a Mexican,
    an Indian or a Mafia don"
    Mexican born actor Anthony Quinn on
    Hol
    [ Parent ]
    Biological machines (4.33 / 3) (#9)
    by delmoi on Sun Sep 02, 2001 at 07:58:04 AM EST

    Human beings and all other life on earth are already "machines". Artificial machines are a subset of those. There certainly is no physical reason that artificial machines can't be as complex as non-artificial 'natural' ones. Or do you know something I don't?
    --
    "'argumentation' is not a word, idiot." -- thelizman
    [ Parent ]
    Re: biological machines (none / 0) (#25)
    by mmcc on Sun Sep 02, 2001 at 09:49:22 PM EST

      Human beings and all other life on earth are already "machines". Artificial machines are a subset of those.
    Artificial machines are not a subset of "biological machines". No artificial biological machines exist.

      There certainly is no physical reason that artificial machines can't be as complex as non-artificial 'natural' ones. Or do you know something I don't?
    Well, if they are biological, then they compete in the same arena as natural life on earth, and natural life has been refining itself for a long time.

    If they are mechanical or hybrid, then they are subject to the limitations I originally proposed.

    Either way you're going to have a hard time improving on evolution. Unless, of course, you think you're some kind of god...



    [ Parent ]

    what constitutes god? (none / 0) (#28)
    by rebelcool on Mon Sep 03, 2001 at 01:52:07 AM EST

    The question has always amused me. Given that we can clone animals now, I'd say we're within a century or two of actually designing new life forms. If one of the main requirements for being a god is the ability to create life, man would indeed become that god.

    Anyways, you're assuming that evolution would start over from the beginning. That seems rather ludicrous. We already have the blueprints for lifeforms on earth... why begin again? Take the plans we have now and put in our own improvements. Perhaps that is how life began on earth. Another species became technologically advanced enough to create life and seed it on other planets. The universe is certainly old enough for that to have happened - many times over, in fact.

    COG. Build your own community. Free, easy, powerful. Demo site
    [ Parent ]

    where do people come up with ideas like this? (none / 0) (#36)
    by delmoi on Tue Sep 04, 2001 at 06:19:29 AM EST

    Artificial machines are not a subset of "biological machines". No artificial biological machines exist.

    No. Both 'artificial' and 'biological' machines are subsets of the same set. There is no reason why they can't overlap. And there is no reason why machines in the 'artificial' subset cannot possess abilities of the 'biological' subset. And you don't have a shred of evidence that that isn't true.

    Your own assertions don't mean anything if you can't back them up. There is no law of physics that says that humans cannot produce things that biology cannot; in fact, quite the opposite. Humans have created motors the size of molecules. There is no theoretical limit to human endeavor. Only practical. And what was impractical 100 years ago is easy today.

    Perhaps you should examine your own biases.
    --
    "'argumentation' is not a word, idiot." -- thelizman
    [ Parent ]
    Artificial biological machines exist? (none / 0) (#39)
    by mmcc on Tue Sep 04, 2001 at 08:00:38 PM EST

    Please show me one.

      Perhaps you should examine your own biases.
    Against what? Reality?



    [ Parent ]

    How short-sighted (3.00 / 2) (#10)
    by RangerBob on Sun Sep 02, 2001 at 08:54:15 AM EST

    The problem with your argument is that you're assuming that technology will stay at current levels and never evolve. I'm pretty sure that I can look around my house and see that I'm a lot better off than the cavemen were, so it's safe to say that this is an invalid argument.

    Your very argument seems to be based on the fact that no one has done it YET. Let's not assume that mankind has discovered all there is to know, because you're only fooling yourself if you do. People once said that man would never fly. They said that man would never leave the Earth's atmosphere. Sorry, but history has shot ya down on this one :)

    We can't predict what will happen in the future with technology. Discoveries will be made long after we've turned to dust, and I think that technology will someday take far different forms than we currently have. Who can say that machines will always be made out of plastics, composites, and metal? None of us can say with any certainty that there will NEVER be things like organic machine technology. Work is already being done on making machines that can repair themselves.

    Lacking the imagination and vision that Hawking possesses is not a valid reason to go on the attack. The people with visions and open minds are the ones who create new things, make new discoveries, and better themselves. Which is good, because this means that the majority with closed minds can sit back and enjoy the fruits of their labor.

    [ Parent ]
    I am not worried (3.14 / 7) (#3)
    by cunt on Sat Sep 01, 2001 at 09:51:25 PM EST

    If Robot Wars is any indication, machines will dominate the earth by boring all the humans to death.

    An old, dead perspective (4.40 / 5) (#4)
    by slaytanic killer on Sat Sep 01, 2001 at 10:56:49 PM EST

    Maybe I'm too taken with what Claude Shannon says, but he also believed that machines would take over the world -- and found that a better future than human world domination.

    We are not clever enough.. (4.75 / 4) (#6)
    by Weezul on Sun Sep 02, 2001 at 04:35:31 AM EST

    ..to invent a replacement for ourselves overnight. We will eventually invent replacements for ourselves, but it will take a *long* time. We will likely have many many years of debates about "car/toaster/comp rights" as we create intelligent, but not very smart, computers to be our slaves. Hell, we may just use nerve cells to run various pieces of equipment.

    Regardless, we will have many many years of experience living with intelligent machines before we make ourselves any real competition. This makes it likely that any replacement of humans with machines will be slow and bloodless (a peaceful evolution).

    "Fascism should more appropriately be called Corporatism because it is a merger of state and corporate power." - Benito Mussolini
    I don't know (none / 0) (#19)
    by MicroBerto on Sun Sep 02, 2001 at 03:32:57 PM EST

    If you ask me, all it takes is the creation of a self-conscious system, and then it CAN happen overnight. That system can easily learn and build new things faster than us, learn greed on the way, and before you know it, it's learned its way to domination. Perhaps it will take a long time to make a self-conscious system, but once that's done, it could be over quite quickly.

    Berto
    - GAIM: MicroBerto
    Bertoline - My comic strip
    [ Parent ]

    You forget one important thing... (none / 0) (#29)
    by simon farnz on Mon Sep 03, 2001 at 07:05:35 AM EST

    Assuming we're bright enough, we just have to pull the power. Design the machine dependent on either mains or regular recharges, then as soon as it becomes trouble, starve it; it works well enough on humans.

    So long as the machine is unable to get power any other way, it is harmless, because we can kill it.
    --
    If guns are outlawed, only outlaws have guns
    [ Parent ]

    Why? (none / 0) (#34)
    by ucblockhead on Mon Sep 03, 2001 at 07:39:47 PM EST

    What makes you think that "conscious" == "smarter than us"?


    -----------------------
    This is k5. We're all tools - duxup
    [ Parent ]

    The Kids (4.00 / 5) (#8)
    by brettjs on Sun Sep 02, 2001 at 07:03:15 AM EST

    Aren't intelligent machines just a natural step in the evolution of humans? We are, after all, imbuing them with our own collective intelligence and methods of interpreting the world. If I'm correct, then most advanced AI systems are modelled after human thought patterns/organization (neural nets, etc).
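
    For readers unfamiliar with the term, a software "neural net" in its simplest form is just weighted sums pushed through a squashing function, loosely inspired by how neurons fire. A minimal sketch, with made-up weights chosen purely for illustration:

     import math

     def neuron(inputs, weights, bias):
         # weighted sum of inputs followed by a squashing nonlinearity,
         # loosely analogous to a neuron firing more or less strongly
         total = sum(x * w for x, w in zip(inputs, weights)) + bias
         return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

     # a tiny two-layer "network" with arbitrary, illustrative weights
     hidden = [neuron([0.5, 0.2], [1.0, -2.0], 0.1),
               neuron([0.5, 0.2], [0.3, 0.8], -0.5)]
     output = neuron(hidden, [1.5, -1.0], 0.0)
     print(output)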

    So, by fighting against the emergence of intelligent or (dare I say it) conscious machines, aren't we sort of, you know, doing the "guppy" thing and killing our own kids?

    Guppies aren't known for their huge leaps upward on the evolutionary ladder in the past few million years, by the way.

    Bleh (4.50 / 4) (#11)
    by DJBongHit on Sun Sep 02, 2001 at 09:12:56 AM EST

    Stephen Hawking may be a brilliant physicist and all, but unless he was misquoted here, he seems to be rather misguided in the field of computer intelligence/psychology.

    Ok, for computers to be able to enslave us, one of 2 things must happen - either we have to program them to enslave us, or they'd have to "evolve" to do that through some genetic algorithm.

    Neither of these is terribly likely to happen. The first one won't because, well, that would be a pretty fucking stupid thing to do. The second won't because it assumes that the natural progression of intelligence is to become an "evil overlord." This may be true in the case of humans, but we live in the physical world and our psychological makeup is the result of eons of evolution trying to find the best way for us to survive in the physical world, where we had to compete with other animals for food and other resources. In this situation, emotions such as greed were beneficial. Why would computers (using a genetic algorithm where the most beneficial traits stay and the less beneficial ones are weeded out) evolve such an emotion? It simply doesn't make sense. Artificial intelligence doesn't mean "emulate the human mindset."
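
    A minimal genetic-algorithm sketch makes that point concrete (the fitness function and every number below are made up purely for illustration): traits only spread if the scoring rewards them, so nothing resembling greed shows up unless someone writes it into the objective.

     import random

     def fitness(genome):
         # hypothetical objective: reward genomes whose values sum close to a target
         return -abs(sum(genome) - 42)

     def evolve(pop_size=50, genome_len=8, generations=100):
         pop = [[random.uniform(0, 10) for _ in range(genome_len)]
                for _ in range(pop_size)]
         for _ in range(generations):
             pop.sort(key=fitness, reverse=True)
             survivors = pop[:pop_size // 2]            # keep the fitter half
             children = []
             for _ in range(pop_size - len(survivors)):
                 a, b = random.sample(survivors, 2)
                 cut = random.randrange(genome_len)
                 child = a[:cut] + b[cut:]              # crossover
                 i = random.randrange(genome_len)
                 child[i] += random.gauss(0, 0.5)       # mutation
                 children.append(child)
             pop = survivors + children
         return max(pop, key=fitness)

     print(fitness(evolve()))   # best score found; nothing "greedy" ever emerges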

    However, if Hawking was speaking more metaphorically, he may have a point - after all, one could say that we're already enslaved by machines. Look at the near-panic that ensued at the thought of the power going out for an extended period of time at midnight, 01.01.00. We've gotten to the point where humanity can't survive without all our electrical toys. But that's beside the point, and it's a far cry from Matrix-esque apocalypse scenarios.

    ~DJBongHit

    --
    GNU GPL: Free as in herpes.

    That's a very simplistic view. (3.00 / 1) (#23)
    by Rainy on Sun Sep 02, 2001 at 05:38:07 PM EST

    A complete AI will be a wholly different beast - imagine that you could poke into your brain and change the neurons that provide your motivational goals. What if a faction develops an AI that hates another faction (and it becomes smarter than they hoped for and nukes the hemisphere with the faction it hates)? What if an AI is programmed with Asimov's 0th law - which, if you remember, says that a robot must help humanity survive and prosper - and decides that the best way to help humanity survive is to enslave it? Not as crazy as it sounds - consider the whole nuclear stockpiling thing. And apart from all the things I thought of and you thought of and Hawking and Joy thought of, there's any number of things that we couldn't think of because nothing of the sort was ever done.
    --
    Rainy "Collect all zero" Day
    [ Parent ]
    Machines won't but designed man may (4.00 / 2) (#12)
    by FlinkDelDinky on Sun Sep 02, 2001 at 09:45:53 AM EST

    I don't think machine AI will take over the world. I've got some, I think good, thoughts on why this is so.

    First of all, we don't know what intelligence is, and I'm pretty sure that no machine will accidentally evolve intelligence unbeknownst to us. I think we'll have to create/discover a theoretical basis of intelligence and then create a 'real' AI.

    Second, once we've got an AI it may be intelligent but not very 'smart'. It may be nothing more than a system that solves problems given to it. It just won't care about anything because it's got no emotional component to it. It's got no motivations (other than what we specify).

    Third, while silicon has been improving at phenomenal rates, materials (for robot bodies) and power generators (so they don't need extension cords) evolve much more slowly.

    I believe the above points are cogent :-)

    However, genetics is improving too. The thing is there's lots of good and ethical reasons to fiddle with human DNA. And in fact we are doing just that.

    I suspect that the good reasons will eventually result in designer babies. At first this won't really mean anything, except that we'll have a bunch of really hot looking, hyper artistic, super geniuses running around that could, if they wanted, procreate with regular humans. After all, who doesn't want a good looking smart kid; won't that make for a more enjoyable life?

    And so the competition begins. I don't think it's unreasonable that eventually the designer babies will have completely engineered DNA designed for easy manipulation of attributes but not compatibility with homo sapiens sapiens.

    It seems far-flung to me, but something gives me the chills about it.

    Yeah, right... (3.66 / 3) (#13)
    by ucblockhead on Sun Sep 02, 2001 at 11:01:22 AM EST

    Another expert in one field thinks he's an expert in another.

    The idea that machines might develop intelligence on their own is about the same as the idea that locomotives might develop legs on their own. It ain't gonna happen.

    If intelligent machines get built, it will be because people figure out how to build intelligent machines and then purposefully build intelligent machines.

    Is a 1 GHz Pentium IV any "smarter" than a .477 MHz 6502? No. It is faster. It has a bigger instruction set. But "smarter"? That word isn't even a word we can really apply because they are just dead dumb hunks of silicon.

    And given the state of AI research, we are at the very least a half-century from being able to purposefully build anything with any sort of real intelligence, probably much longer.


    -----------------------
    This is k5. We're all tools - duxup

    Runaway intelligence (none / 0) (#24)
    by sigwinch on Sun Sep 02, 2001 at 08:50:12 PM EST

    If intelligent machines get built, it will be because people figure out how to build intelligent machines and then purposefully build intelligent machines.
    You are totally missing the point. The question is not who will build the first few generations of intelligent machines, but how fast will the machines change when they become smart enough to redesign themselves.

    The technological singularity theory goes like this:

    1. Human-equivalent AIs are *impossible* today because no team interested in making one can afford enough CPU power or memory to support the necessary software.

       

    2. Computational power is increasing at a geometric rate (Moore's law).

       

    3. At some point in the not so distant future, the price of hardware needed for human-equivalent AI will fall below the research budget a small group of people can easily assemble. The transition from oh-my-god-too-expensive computers to free-AI-in-every-box-of-Cracker-Jacks will occur over a period of two to ten years.

       

    4. When the hardware gets cheap enough, people will develop AIs, out of sheer hacker curiosity if nothing else.

       

    5. The human brain, though it can accomplish wonderful things, is a very limited organ. It is not good at discovering or understanding complex phenomena. For example, high-temperature superconductors were discovered by random trial and error, and hundreds of man-years of effort still have not uncovered their secrets. Or consider the Ranque-Hilsch Vortex Tube, one of the oddest and most astonishing machines ever invented, and which is still poorly understood.

       

    6. At some point, the AIs will become smart enough to build new machines, which will allow the next generation to become even smarter, which means it will be even better able to discover ways to improve itself, and so forth.

       

    7. If a runaway cycle of self-improvement starts, it is likely to asymptotically approach the fundamental limits of computation/thought in this universe.

       

    8. Nobody knows exactly what the fundamental upper limit on computation speed is, but there are reasons to believe that it is very, very, very high. For example, the basic signal processing elements of the human brain take milliseconds to take action. It is reasonable to believe that the signal processing system could be reimplemented with superconductive signal processing elements, which take tens of picoseconds to take action (although interconnect delays slow it down to the nanosecond range). A person whose brain had been implemented with superconductive circuits would think a million times faster than you do. You'd have to think for 10 days straight to have as many thoughts as they have in one second. They would have more thoughts in an hour than you have in a lifetime. (A quick arithmetic check of these numbers appears after this list.)

       

    9. But the runaway AIs wouldn't just think fast; they'd think broadly too. Their speedy minds wouldn't be restricted to the pitiful seven-item working store of the human brain, and their long-term memory would contain vast libraries of knowledge.

       

    10. Such machines would be utterly alien to us. Their thoughts would be of such scope and grandeur that we could only perceive a minute portion, as if trying to do astronomy by looking at the night sky through a pinhole. Their great theories would have such subtlety and complexity that we could barely begin to fathom one of them through a lifetime of study. For one of them to try to teach a human something would be like a human punching holes in a card for a Jacquard loom. Even if they are benevolent, living with them will require tremendous adjustments.

       

    11. But they are not likely to be benevolent. There is a good chance that they will ignore us, as we ignore anthills, which would be a Bad Thing. Moreover, if they remain aware of the universe (i.e., not turning their thoughts inward to abstract introspection), they will probably perceive us as a mortal threat. Humans might be comparatively dumb, but that's still plenty smart to annihilate a planet with nuclear weapons, or even detonate the sun if we get really ambitious.

       

    12. Of course, if there are AI factions, they are likely to regard each other as even worse threats, and even a limited war between gods would be very bad for humanity.
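
    A back-of-the-envelope check of the speed-up figures in point 8, using the rough numbers given there (about a millisecond per biological signalling event, about a nanosecond once interconnect delays are included):

     biological_event = 1e-3       # seconds per neural signalling event, roughly
     superconductive_event = 1e-9  # seconds per event once interconnect delays dominate
     speedup = biological_event / superconductive_event
     print(speedup)                   # 1,000,000 - the "million times faster" figure

     # time a human would need to match one second of such a mind
     print(speedup / 86400, "days")   # ~11.6 days, i.e. on the order of the 10 days claimed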
     

    And given the state of AI research, we are at the very least a half-century from being able to purposefully build anything with any sort of real intelligence, probably much longer.
    I strongly suspect that the slow progress in AI research is because the machines are so pitifully slow and have such little memory. If you can only afford a thousandth of the computation needed for thinking, your AIs are gonna be pretty pathetic. Besides which, 50 years is but an instant. 50 years ago (more or less), there were no transistors or nuclear bombs. 50 years before that, no airplanes, electricity, or internal combustion engines.

    --
    I don't want the world, I just want your half.
    [ Parent ]

    There's a problem... (none / 0) (#26)
    by physicsgod on Sun Sep 02, 2001 at 10:41:21 PM EST

    Computational power is increasing at a geometric rate (Moore's law).
    Moore's law is a descriptive law, not a prescriptive law. Just because transistor size has been halving every 18 months for the past 30 years doesn't mean it's going to continue indefinitely. In fact, the prescriptive laws of quantum mechanics dictate that you can't build a transistor smaller than a certain size. So there will be a limit to how much you can pack onto a chip, unless there's a paradigm shift away from transistor logic.
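
    As a rough sketch of how near that limit is (the 180 nm starting point and the 0.5 nm atomic-scale floor are assumed, ballpark figures, and the halving-every-18-months premise is taken at face value):

     import math

     feature_size_nm = 180.0   # assumed: a typical process size circa 2001
     atomic_limit_nm = 0.5     # assumed: roughly a couple of silicon lattice spacings
     halvings = math.log2(feature_size_nm / atomic_limit_nm)
     print(halvings)                  # ~8.5 halvings
     print(halvings * 1.5, "years")   # ~13 years before the premise runs into atoms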

    Another thing to bear in mind is that while biological components are slow, they're much more flexible. The neurons in your brain are connected to more than one neighbor, so even if you get a number of transistors equal to the number of neurons, you probably won't have a brain.

    And then there's the very nature of intelligence. Is the speed of thought governed by the speed of the components, or is there a point of diminishing returns where faster switches don't lead to faster thought?

    --- "Those not wearing body armor are hereby advised to keep their arguments on-topic" Schlock Mercenary
    [ Parent ]

    Neurons != Transistors (5.00 / 1) (#33)
    by ucblockhead on Mon Sep 03, 2001 at 07:34:56 PM EST

    A neuron is far more complex than a transistor. (Just a comment, I agree with everything you said.)

    Human brains are fundamentally parallel while today's computers (even the "massively parallel" ones) are fundamentally serial. Does that matter? Can a silicon computer that is as parallel as the brain be made? No one knows...

    And I'd be amazed if I (at 36) lived to see the question answered.
    -----------------------
    This is k5. We're all tools - duxup
    [ Parent ]

    Wrong question. (2.50 / 2) (#37)
    by kelkemesh on Tue Sep 04, 2001 at 11:56:06 AM EST

    Ask instead, is what a neuron does far more complex than what a transistor does.

    [ Parent ]
    My point (none / 0) (#32)
    by ucblockhead on Mon Sep 03, 2001 at 07:31:28 PM EST

    My point is that all of that presumes that creating an AI is just a matter of collecting enough CPU power and then doing a bit of hacking. Nothing could be farther from the truth. The fundamental reason for the slow progress in AI research is because the problem is hard. Fifty years ago, they said we'd have AI in twenty years. Twenty years ago, they said we'd have it in fifty.

    I got a degree that concentrated on AI in 1987. Since then, Moore's law has increased computing power by a factor of 1000. The machine I am typing this on is over a thousand times faster than the Vax I used to do Neural Net simulations on as a Senior. Despite that, AI research has only trudged along, as it has since they started in the fifties. Much of the AI research now is close to what it was in the early eighties, aside from 15-20 years of Moore's law. This is not to say that they are doing pointless stuff, merely that the journey is far longer than most people realize. But then, people always seem to underestimate how complex intelligence is. In the fifties, people thought we'd have a chess-master computer in the early sixties.

    Anyway, I've read estimates of the processing power of the human brain that are on the order of one trillion times faster than today's PCs. Even with Moore's law (assuming Moore's law holds, which is pure assumption further out than fifteen years), it will be sixty years before the hardware you speak of exists. Then, it has to be programmed. No simple task.
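
    For what it's worth, the sixty-year figure follows directly from the two assumptions in that paragraph (a trillion-fold gap and one doubling every 18 months):

     import math

     gap = 1e12                       # assumed brain-vs-PC gap, per the estimates cited above
     doublings = math.log2(gap)       # ~39.9 doublings needed
     print(doublings * 1.5, "years")  # ~60 years at one doubling every 18 months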
    -----------------------
    This is k5. We're all tools - duxup
    [ Parent ]

    CPU vs. Algorithm (none / 0) (#42)
    by sigwinch on Wed Sep 05, 2001 at 08:41:47 PM EST

    My point is that all of that presumes that creating an AI is just a matter of collecting enough CPU power and then doing a bit of hacking. Nothing could be farther from the truth.
    I agree completely. I just think that having the CPU power is a necessary prerequisite. Trying to do a human-equivalent AI on a PDP-11 is a pipe dream.
    The fundamental reason for the slow progress in AI research is because the problem is hard.
    Not just simple hard, but complex hard. I.e., factoring the product of two unknown 500-digit primes is simple hard. Designing a fully automated air traffic control system is complex hard. That's why it is taking so long. (That, and the fact that very few people are working on cognitive AI.)
    Anyway, I've read estimates of the processing power of the human brain that are in the order of one trillion times faster than today's PCs. Even with Moore's law (assuming Moore's law holds, which is pure assumption further out than fifteen years), it will be sixty years before the hardware you speak of exists.
    I always take those estimates with a grain of salt. The brain does perform many operations, but its symbols are very complex and redundant. For example, when you think of "frozen lake", gigabits of information flow through your brain. Even including all the related ideas and memories that rise unbidden when you think of "frozen lake", a computer could do the same thing with tens of kilobits of information. For pure symbol processing, I think a modest cluster of 1 GHz CPUs could hold an AI, if anybody had the software.

    --
    I don't want the world, I just want your half.
    [ Parent ]

    How does this (one step) follow... (none / 0) (#40)
    by eightball on Wed Sep 05, 2001 at 01:20:20 PM EST

    6. At some point, the AIs will become smart enough to build new machines, which will allow the next generation to become even smarter, which means it will be even better able to discover ways to improve itself, and so forth.

    Okay, so maybe intelligent machines will be capable of building new machines, but why would they be left to their own devices with the raw materials to do this?
    How does it follow that they will be able to create new machines that are smarter? We are having enough trouble coming up with intelligence that is not self-aware, much less intelligence that can create even higher intelligence. I can understand being able to create the same thing that runs faster, but that is about the limit of it.

    [ Parent ]
    Don't panic folks (3.60 / 5) (#14)
    by spacejack on Sun Sep 02, 2001 at 12:46:11 PM EST

    It's already happened -- machines took over the world in the form of corporations. I'm pretty convinced they're bordering on some definition of alive, and they certainly do run most things nowadays.

    Hawking said it, so it must be true. (4.00 / 2) (#16)
    by John Milton on Sun Sep 02, 2001 at 01:42:38 PM EST

    What particular qualifications does Hawking have in AI or Biology? None. He's a physicist. I'll just chalk this up to another brilliant scientist turned crackpot.


    "When we consider that woman are treated as property, it is degrading to women that we should Treat our children as property to be disposed of as we see fit." -Elizabeth Cady Stanton


    Hawking (2.00 / 1) (#17)
    by Refrag on Sun Sep 02, 2001 at 02:01:18 PM EST

    If this isn't a hoax, someone needs to tell Stephen to get off the crack! First, he starts a rap career and now this!

    Seriously though, one would think that someone with his mental acuity would realize that humans shouldn't tamper with our own makeup because there is no way our minds can fathom the possible consequences.

    Refrag

    Kuro5hin: ...and culture, from the trenches

    Interesting (5.00 / 1) (#18)
    by Verminator on Sun Sep 02, 2001 at 02:56:10 PM EST

    I'd like to see the original interview; for some reason I doubt the AP got the full meaning of the interview across in this half-page writeup. I highly doubt that Hawking is talking about superintelligent robots running around as rulers of some "Planet-of-the-AIs"-esque world of the future, with us pitiful humans living in cages and forming underground "resistance movements" if we don't improve ourselves. More likely we'll become increasingly reliant on increasingly powerful computing systems that we understand less and less about. Actually I can nearly guarantee that last sentence will describe our near future.

    The idea of humans being "improved" isn't all that farfetched. There's an excellent writeup about this concept here (as well as a half-baked story of mine over at lit.hatori42); it's a little long but well worth the read (his story, not mine). The difference here being that Yudkowsky feels that it will be the benevolently-ruling superintelligent computers doing the improving. While I don't agree about us all ultimately being transferred from the real world into some utopian digital construct, the idea of a singularity of intelligence infinitely more powerful than anything we know or can possibly understand is intriguing and fairly plausible.
    If the whole country is gonna play 'Behind The Iron Curtain,' there better be some fine fucking state subsidized alcohol! And our powerlifting team better kick ass!

    interesting (4.00 / 2) (#20)
    by rebelcool on Sun Sep 02, 2001 at 03:37:07 PM EST

    I watched "Colossus: The Forbin Project" last night and it got me thinking (for those who dont know, its about a computer that begins to enslave humanity because it launches nukes at big cities when it doesnt get what it wants).

    For centuries, man has made tools out of the lesser intelligent animals. Putting them to use to serve us. Same with computers. We use them as a tool, because we know more than they do.

    Were a computer (or any entity) smarter than humans to come along, perhaps it would use us as tools to its own ends. Sound ridiculous? Well, that's what we humans do.

    If you're going to disagree with Mr. Hawking, at least have a rational argument against it.

    COG. Build your own community. Free, easy, powerful. Demo site

    Analysis, not the name (none / 0) (#22)
    by slaytanic killer on Sun Sep 02, 2001 at 05:29:26 PM EST

    Hmm, I am curious why people are dismissing it. I'm sure that many people take this seriously just because Hawking said it; but of course those people don't matter. What is more important is the analysis behind it, which seems sound.

    We have few ways of sanely testing biological interfaces with machines, and right now it won't net much, but it is something worth considering sooner rather than later. If only to develop some standards for safety.

    One wonders if our minds will eventually be the tail to the machine's dog...

    [ Parent ]
    Luddite paranoia (4.00 / 1) (#21)
    by 0xdeadbeef on Sun Sep 02, 2001 at 04:06:34 PM EST

    Why do people believe that an agenda automatically comes out of having intelligence? Perhaps it is necessary to create a useful intelligence (an autonomous machine has to want to do what it does), but when we control that agenda, how is there any threat?

    Only if somebody makes a machine to enslave humanity will there be a machine to do so. And if that is possible, one can just as easily make a machine that just wuvs us to pieces, and will destroy any errant machine that could hurt us. Asimov got this right over fifty years ago.

    Singularitarians (none / 0) (#27)
    by MTremodian on Sun Sep 02, 2001 at 11:50:54 PM EST

    Singularitarians are a particularly odd group of people who want this to happen, are convinced that it will happen in our lifetimes, and are dedicated to making sure of this. "Group" of "people" might be exaggerating, since there is, as far as I know, only one.


    ...speed overcomes the fear of death.

    So what's the problem? (4.00 / 1) (#30)
    by kvan on Mon Sep 03, 2001 at 08:53:20 AM EST

    We need to improve the human race? Well, duh! We should be doing that anyway, as soon as we have safe and reliable technology to do so. Keeping up with AI doesn't even enter into it.

    Computers will eventually take over? Good for them. We "took over" from apes too, and to think that we're the ultimate in intelligence is, well, rather sad, really. I sincerely hope that intelligence will evolve further than to Homo sapiens sapiens; it would be a boring universe if we're the pinnacle of evolution!


    "Many people would sooner die than think; in fact, most do." - Bertrand Russell


    Assimilate (4.00 / 1) (#31)
    by cyberdruid on Mon Sep 03, 2001 at 04:25:01 PM EST

    I am writing my masters thesis on open-ended, self-enhancing AI and have been studying the field of AI for a good many years. Prepare for a long rant. (I am from Sweden, so please be patient with possibly poor grammar and spelling)

    The danger of extremely advanced AI is hardly that it turns "evil" (as one poster suggested) and goes on a killing spree. Remember that we live in a capitalistic society. When a $1000 computer can do the same job that an expensive employee is currently doing, guess what happens? Obviously the same thing that happened when the robots invaded the factories. The country, as a whole, gets much richer, but individuals may not easily adjust to being unemployed.

    Life is a competition for resources. Always was and always will be (since the resources have theoretical limits). If a species (or whatever) can do all the productive things that another can, but cheaper, faster and better it is simply a matter of time before that species takes over the show. The company, nation, clan, etc, with the most efficient deployment of resources beats the others in the competition and will thus favour AI.

    Well then... Can you engineer such an AI? Remember, it does not have to be sentient or pass the Turing test or behave like H.A.L. to compete with humans. They can already monitor crowds for known criminals, play the stock market, be used as telephone receptionists (when you order your plane tickets) and a zillion other things. Each new task that is accomplished could potentially make an entire profession obsolete. Initially people can just migrate to more complex (and quite possibly more fulfilling) areas of work. But what happens when the last bastions are being threatened? OK, I know, there are times when people want to have human contact. But how many professions are there that need real people? Psychotherapist? Prostitute?

    Hans Moravec has an estimate of the difference in processing power between a brain and current desktops which is very hard to argue with. He basically looks at the number of neurons involved in the well-documented first stages of eyesight. These stages (edge detection, etc.) have had to be simulated in robots for them to be able to navigate, so we know how much computing power is required. Since vision is so important, one can assume that this part of our brain has been tightly optimized (there is not enough room in our DNA to optimize every area of the brain). In other words, we have an upper bound on MIPS/neuron, loosely speaking. Just multiply the number of neurons in the brain by this number and voilà, we get an upper bound of roughly 100 000 000 MIPS for the entire brain.
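
    The same kind of scaling argument can be run in a couple of lines. The numbers below are illustrative stand-ins, not Moravec's own figures: an assumed cost of simulating early vision, scaled up by an assumed ratio of the whole brain to that circuitry, chosen only to show how the quoted ~100 000 000 MIPS ballpark falls out.

     retina_mips = 1_000           # assumed: compute needed to simulate early vision in a robot
     brain_to_vision_ratio = 1e5   # assumed: how much bigger the whole brain is than that circuitry
     brain_mips = retina_mips * brain_to_vision_ratio
     print(f"{brain_mips:,.0f} MIPS")   # 100,000,000 MIPS, the ballpark quoted above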

    Eliezer Yudkowsky has a thorough analysis of a possible path to smarter-than-human AI. One important point that he raises is that once the AIs are as good at programming as humans, they can improve on their own design, and then they will be able to improve themselves faster, and so forth. Super-exponentially!
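
    A toy model of that feedback loop (the 0.5 improvement factor is made up purely for illustration): each generation's design skill grows in proportion to the skill of the generation that built it, so growth is exponential in generations; if faster machines also shorten the time a generation takes, growth in calendar time outpaces even that.

     skill = 1.0
     for generation in range(10):
         skill += 0.5 * skill        # better designers make proportionally bigger improvements
         print(generation, round(skill, 2))
     # skill multiplies by 1.5 each generation: 1.5, 2.25, 3.38, ... roughly 57x after ten steps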

    What can we do? Should AI research be banned? Hardly. It will just be continued illegally, but this time it will be dangerous groups (or "rogue nations" or whatever) that get the technology first. No good. My own view coincides with Hawking's. There has to be a reevaluation of what constitutes the "I". We must be prepared, when the time comes, to let ourselves grow, not just through the comparatively weak art of genetic engineering, but through real interfacing with CPUs and communication devices.

    We are already cyborgs (Hawking, especially) with mobile phones and palmtops connected to the internet, with instant access to the entire world (consider how outrageously SciFi these things would have appeared 20 years ago). They are not implanted in our neurons yet. But just wait. I will be standing first in line to upgrade.

    David Fendrich

    A question... (none / 0) (#41)
    by eightball on Wed Sep 05, 2001 at 01:31:31 PM EST

    Something I have wondered about, given the claims that companies want to replace everyone with machines (I do not mean to sound prejudiced about this; please do not take it that way)...
    I don't understand how that economy is supposed to work. Sure, the machines can produce 'product' at a reduced price, but if the humans aren't making money, how do they afford anything? I suppose you could continue an 'economy' by having mega-corporations trade things back and forth, but that seems a little pointless (even more so than capitalism ;)

    Can you tell me what you think will happen in this situation?
    Thanks..

    [ Parent ]
    Economic Theory (4.00 / 1) (#43)
    by vectro on Thu Sep 06, 2001 at 12:56:40 PM EST

    This is the same thing as happened in agriculture. Used to be that the majority of the population worked in agriculture in the US; that number has been reduced to something like 3%. What are the consequences of this automation?

    Well, so obviously this had a severely negative impact on the agricultural labor market. As we saw in The Grapes of Wrath, this led to a migratory effect. But ultimately these people (or their descendants) were able to find jobs in other places. These jobs, in all likelihood, would not pay as well, due to the depressed labor market.

    On the other side of the equation, food is now cheaper. This lowers the cost of living.

    The net effect of all this automation is that the same number of people now can produce much more. So ultimately, you should have more production going on, which should generally mean a higher standard of living. The problem is that as part of this process, you end up with the people who own the equipment gaining an advantage. This is what happened during the industrial revolution, and resulted in the whole union movement.

    It may be that widespread automation of labor will result in social problems such as a further divide between the wealthy and the poor. But in the long term, if conditions get bad enough, the poor will take some drastic action (revolt, in the worst case) and restore some modicum of equality. And even after this is done, we should continue to have the increased production from automation.

    Note that I'm no economist, but it's an interest of mine.

    “The problem with that definition is just that it's bullshit.” -- localroger
    [ Parent ]
    Human brain (none / 0) (#35)
    by Harakh on Tue Sep 04, 2001 at 12:23:32 AM EST

    IMHO we cannot make a computer that would work like a human brain because we don't really know how it works. Yes, there are neurons, etc., but why do they work like they do? I once heard a good phrase somewhere that said "If a human could understand the human brain, the brain wouldn't be complex enough for us to understand it" - it loses some with my poor translation skills, but the point is pretty clear, I think. Computers will evolve into fuzzy logic and partial AI that makes them do more advanced tasks without human intervention, but if a computer were to become an "evil overlord" - which I doubt, btw - it wouldn't have the special traits that make the human race go forward in leaps. That's the feelings and the motivations that drive us, and the sudden, maybe illogical, insights that make us special. Personally I'm all for smarter computers, because that'll make stuff easier for me :).

    My take - Visual Basic will rule the world (none / 0) (#38)
    by dbc001 on Tue Sep 04, 2001 at 04:05:02 PM EST

    I think that assuming pure, academic AI research is the only path to a computers-run-the-world scenario misses a lot of the point here. Hasn't anyone noticed that it's becoming easier to do increasingly complex things? For example, with Visual Basic you can write an entire program without writing very much code at all.

    At some point we will develop simpler interfaces for doing complex tasks - maybe you will be able to design your car and its engine without understanding how any of the components work. So when we develop simple interfaces to design anything that can self-replicate (which will probably be pretty soon), we are in a situation where only a select few truly understand what is going on in the world.

    Further, there seems to be a finite limit to what humans can know - I would say that humans are limited to having either broad knowledge on a lot of subjects or expert knowledge on a few. We will be able to develop systems that can use all available knowledge on all subjects and put that to use - far beyond any human's understanding. So again, only a select few highly intelligent people will have the capacity to understand everything that is going on.

    -dbc
