
What could you do with infinite computing power?

By SIGFPE in Technology
Mon Nov 20, 2000 at 03:24:00 PM EST
Tags: Culture

As many readers will be aware, there have been many articles in recent years about how Moore's law will result in computers having power similar to brains in about 30 years, meaning that we will be able to emulate humans and thus bring about a 'singularity'. Of course this rests on the assumption that CPU clock speed equates with the power you need for intelligence. I'm interested in questioning this assumption, and one way to do this is to look into what we could do if we had infinite computing power.


I've defined what I mean by infinite power in a footnote below (*).

Many people would instantly take it for granted that we would have machine intelligence at this point. But on further reflection one needs to have some code to run on this machine. How easy would it be to program something like human intelligence? One might argue that you don't need to - you could just enumerate programs until you find one that does the job. But then you need to write some code to recognise intelligence, and that might be just as hard. So given infinite computing power, just how long do you think it would take to accurately emulate humans? Or do you think it would not be possible at all?
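To make the enumeration idea concrete, here is a minimal sketch in Python. The "programs" are just strings over a toy alphabet, and looks_intelligent() is a toy stand-in that matches a fixed string; for real intelligence, writing that recognizer is precisely the open question above.

    # Brute-force enumeration of "programs" in order of length,
    # testing each candidate against a recognizer.
    from itertools import count, product

    ALPHABET = "ab"

    def looks_intelligent(program):
        # Toy stand-in: accept the "program" spelled "abba". A recognizer
        # for real intelligence might be as hard to write as the program
        # it is supposed to find.
        return program == "abba"

    def search():
        for length in count(1):                     # lengths 1, 2, 3, ...
            for chars in product(ALPHABET, repeat=length):
                candidate = "".join(chars)
                if looks_intelligent(candidate):
                    return candidate

    print(search())  # prints "abba" after trying every shorter string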

We make the assumption every day that if we just had a little more computing power we'd be able to solve this or that problem. If we went all the way would infinite computing power translate into infinite power in the real world? I think it's a fundamental geek assumption that it does.

(*) I want to define infinite power in such a way that we don't get paradoxes with things like the halting problem. So let's assume this computer can be programmed in Douglas Hofstadter's Bloop and that such programs always produce output in under a second. Basically programs must terminate in a finite time - but that time can be as long as you like! In particular you can't use Bloop to prove Fermat's Last Theorem by testing all of the integers (though you could search for all proofs less than 10000 pages long, say).
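(For the curious, here is roughly what that restriction looks like in practice - a sketch in Python rather than actual Bloop, with every loop given an explicit finite bound up front. You can search as far as you like, but you must name the bound, so a blind search over all the integers is ruled out.)

    # Bounded, Bloop-style search: always terminates, but only ever
    # checks finitely many cases. Finding no counterexample below the
    # bound proves nothing about the integers beyond it.
    def fermat_counterexample(bound, n=3):
        """Look for a**n + b**n == c**n with 1 <= a, b, c <= bound."""
        for a in range(1, bound + 1):
            for b in range(1, bound + 1):
                for c in range(1, bound + 1):
                    if a**n + b**n == c**n:
                        return (a, b, c)
        return None

    print(fermat_counterexample(50))  # None - but only up to 50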

What could you do with infinite computing power? | 68 comments (68 topical, 0 editorial, 0 hidden)
... (2.25 / 8) (#1)
by Bad Mojo on Mon Nov 20, 2000 at 01:06:08 PM EST

If you could make a large battleship with the press of a button, why would you need to?



-Bad Mojo
"The purpose of writing is to inflate weak ideas, obscure pure reasoning, and inhibit clarity. With a little practice, writing can be an intimidating and impenetrable fog!"
B. Watterson's Calvin - "Calvin & Hobbes"

... (2.28 / 7) (#2)
by Signal 11 on Mon Nov 20, 2000 at 01:16:44 PM EST

If you could make a large battleship with the press of a button, why would you need to?

If you've ever played combat games with a friend on a PlayStation or a Nintendo, you'd know that it's not the mere presence of the button, but rather how fast you can push it!


--
Society needs therapy. It's having
trouble accepting itself.
[ Parent ]

Intelligence != Sentience (3.54 / 11) (#3)
by titivillus on Mon Nov 20, 2000 at 01:17:41 PM EST

I don't share the central assumption of those who accept the Singularity. It takes more than having a machine blurt out "When's tea time?" and "So that's it, we're all going to die." on occasion to mimic human sentience. I suspect that computing will have changed by leaps and bounds by 2100, but the AI community will still be saying they'll have Artificial Intelligence in 2110.

WWS11D ? (What Would Signal 11 Do?) (1.75 / 16) (#4)
by Signal 11 on Mon Nov 20, 2000 at 01:23:08 PM EST

Well even with infinite computing power Daikatana would still suck.

Let's turn the tables a little bit - let's assume you could perform any computing operation infinitely fast. What good would that do us if we don't have the programming tools and methods to actually do work with it?

We need to advance the state of "software engineering" to *real* engineering. As it is right now you change one variable and the program blows up. A minor error message usually aborts programs... I mean, come on. Even if the hardware were theoretically perfect in every respect, we'd still write buggy code... it would just crash faster.


--
Society needs therapy. It's having
trouble accepting itself.

Prohibitive Error Checking (4.00 / 3) (#5)
by vinay on Mon Nov 20, 2000 at 02:11:18 PM EST

The problem is, it's incredibly difficult to take into account every single error situation. For any problem of sufficient complexity, it's mathematically difficult to take into account every possible error, especially when you include user input and hardware limitations. Compound that with the fact that there are abstractions: we typically write software to a generalized platform (e.g., C, C++, Java), and often compile it under many different systems. It's possible that there are bugs in those abstractions, for instance.

As it is right now you change one variable and the program blows up.

I'm not quite sure exactly what you mean here. If you change one variable and you're now altering something different than before, then yes, I'd expect the program to crash. The kind of error checking/exception handling I think you're talking about is very difficult to implement and prohibitive on the levels you're discussing.

I agree that software could and should be better, but at the same time, what you're talking about would make code prohibitively complex, and would certainly introduce more errors. Of course, that's where things like Aspect Oriented Programming come in. And even with that innovation, it's a knotty problem. I think it's infinitely better for a program to die gracefully (and safely) than for it to try to take into account every single possibility.

-\/


[ Parent ]
SE vs ME? (3.33 / 3) (#7)
by jabber on Mon Nov 20, 2000 at 02:23:39 PM EST

Now, now, you're not being fair to Software Engineering. We really are trying - it's still a relatively young science, and we're already much further along than mechanical and civil engineers were at the same age.

Hell, after hundreds of years they're still trying to get a lighter, tougher steel and a better concrete figured out; and if someone comes along and changes one variable... let's call it, oh, "G", things crash...

SE is moving ahead quite well. We have standard, traceable, repeatable development processes available. We have CASE tools. We have estimation techniques, we have increasingly complex reusable designs and methodologies. It isn't perfect, and it needs to improve, but it isn't as bad as you imply. If your experience says that changing a variable will crash a program, you ought to consider changing your development methods. You ought to range-check and test, not just hack all night and cross your fingers.

If I change a variable in my code, it will simply return different results. And if I put a diving pool on the top floor of a skyscraper, I'm asking for trouble. Code is designed and developed with certain requirements in mind. If you stress a bridge beyond its intended design, or you fail to test it, you end up with its silhouette on a website.

The State of the Art in SE is maturing nicely, but it is up to developers to apply what places like the SEI at CMU have learned.

[TINK5C] |"Is K5 my kapusta intellectual teddy bear?"| "Yes"
[ Parent ]

"Real engineering" (2.50 / 2) (#8)
by trhurler on Mon Nov 20, 2000 at 02:24:58 PM EST

is just what we do with more process and more experience. Guess what? Move one support beam on that bridge, and the result may well collapse instead of bearing the load safely. That's the way constructs are; they can be sturdy, but their designs are fragile and easily ruined. Programs are no exception. The thing is, though, economics will probably prevent programming of anything but truly vital systems from ever becoming a real engineering discipline, because the people who are good at programming are generally the same people who hate strong process requirements, and vice versa. Sure, you can have a reliable process, but if in the end you either have to pay people a million dollars a year or put up with programmers who can't put out ten good lines of code in an entire day, are you GOING to do so? Probably not.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
infinite computational power != infinite power (3.20 / 5) (#6)
by Anonymous 242 on Mon Nov 20, 2000 at 02:14:45 PM EST

"We make the assumption every day that if we just had a little more computing power we'd be able to solve this or that problem. If we went all the way would infinite computing power translate into infinite power in the real world? I think it's a fundamental geek assumption that it does. "

I don't necessarily speak for all geeks (and I probably don't), but I think it is much more geek to say that all problems are potentially solvable given enough computational power. And even in that statement, I'm not certain that I'm not abusing the word potentially. It could very well be that some problems are unsolvable no matter how much computational power is leveraged in attempting to solve them, but we don't know until we try and succeed. Failure doesn't tell us that such a problem is unsolvable, only that the current attempt to solve it didn't work.

In either case, if computational power translates to temporal power, then quantum physicists rule the world with an iron hand, right? The only quantum physicist I know of who ran for president received even fewer votes than Harry Browne.

The ultimate advertising campaign (2.00 / 1) (#10)
by SIGFPE on Mon Nov 20, 2000 at 02:45:22 PM EST

The only quantum physicist I know of who ran for president received even fewer votes than Harry Browne.
I guess you mean the leader of the Natural Law Party. But suppose Hagelin were able to run simulations of voter responses to advertising on the proposed computer. Might he not be able to run the ultimate campaign? But then of course this requires the simulation of voters - probably as hard as simulating a fully general human.
SIGFPE
[ Parent ]
you already have the power (2.00 / 4) (#9)
by maketo on Mon Nov 20, 2000 at 02:38:01 PM EST

(distributed computing). You only need the underlying algorithms your brain uses to solve problems, recognize pictures, produce/understand speech, represent knowledge....simple stuff ;)

Just like a fast car does not make a good driver - a big computer does not make a proper brain :)

agents, bugs, nanites....see the connection?
not (2.00 / 1) (#19)
by Potatoswatter on Mon Nov 20, 2000 at 04:01:16 PM EST

The qualifications he gave are:
  1. It can be programmed in Bloop, and
  2. It can finish any problem in finite time.
Distributed computing is only good for certain types of problems (generally, the ones which can be broken into lots of discrete subproblems, each described by a small amount of unique information). I didn't even check the Bloop link, but I suspect it's not specialized to distributed processing. Certainly, no distributed network meets the second requirement, which is the less realistic rule, the "infinite" part.
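(To illustrate the kind of problem that does distribute well, here's a minimal Python sketch - the chunk bounds are the "small amount of unique information" each worker needs, and no chunk depends on any other's result:)

    # An embarrassingly parallel sum: each chunk is independent, so it
    # can be farmed out to separate processes (or separate machines).
    from multiprocessing import Pool

    def chunk_sum(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
        with Pool(4) as pool:
            total = sum(pool.map(chunk_sum, chunks))
        print(total)  # equals sum(i*i for i in range(1_000_000))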

Second, the brain is not a digital computer by a long shot. It's a big, wet mass of trillions of semi-unregulated chemical reactions per second. The problem of finding the algorithms that the brain uses is essentially the same problem as inventing them from scratch. Of course, research can help us along by telling us the specifications of the "algorithm" that the mind uses. But it's not scientifically provable that everybody uses the same algorithm to decode the "bitmap" from their eyes into an intermediate "vector representation" and on to cognition. I'd actually argue that we should assume people don't. Instead we should be looking for the algorithm (dropping the quotes) of self-adaptation that each neuron has, which molds our minds to the problems they encounter, and for the way evolution has found inputs to this program that produce reliable results from an unreliable machine.

And once you have the answer to this problem, you do have a "proper brain", and not just a big computer.

myQuotient = myDividend/*myDivisorPtr; For multiple languages in the same function, see Upper/Mute in my diary! */;
[ Parent ]

Hmm.. Shakespeare? (1.80 / 5) (#11)
by Zane_NBK on Mon Nov 20, 2000 at 02:48:44 PM EST

With infinite computing power I think it'd be time to simulate that infinite number of monkeys on an infinite number of typewriters. Maybe I could be the next Shakespeare. :)

-Zane


hhmm... what about Shakespeare? (3.00 / 3) (#25)
by douper on Mon Nov 20, 2000 at 06:39:58 PM EST

But what if I simulated Shakespeare himself? Who would create the works faster?

An infinite number of monkeys on an infinite number of typewriters, who would eventually create all the works of Shakespeare, being simulated with infinite computing power,

or:

Shakespeare, who did create all the works of Shakespeare in one lifetime, being simulated with infinite computing power?



[ Parent ]

No method is faster. (2.00 / 1) (#34)
by Zane_NBK on Tue Nov 21, 2000 at 12:11:06 PM EST

Since the computing power is infinite (or rather, all problems are solved in under a second), all methods are equally fast.

-Zane


[ Parent ]
Coding for your monkeys? (none / 0) (#40)
by shook on Wed Nov 22, 2000 at 11:41:32 AM EST

Ah, here's the kicker. So you had a computer that could simulate typing monkeys. With infinite power, they would produce a copy of the works of Shakespeare. But you already had a text file of Shakespeare's work to compare it with. So what does your monkey-copy get you? Other than a really fast, brute-force random character generator, nothing.

Now, if you were running that program, and it was able to produce the works of Shakespeare from random characters, I would be willing to bet there would be some extremely good never-before-seen poems, novels, and "How to Get Chicks" manuals also being spit to your stdout. You could easily code a program to find Hamlet or the Bible. You could even probably write something to find large chunks with grammatically correct sentences. But how do you find the true gems, among the garbage? (Garbage including the 300 million scripts of Hollywood blockbusters your grammar checker would find).
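(A minimal sketch of the point, in Python: the monkeys are the easy part, but the only recognizer we know how to write already contains the answer.)

    # Random typing finds any fixed target eventually - but only because
    # we compare against a target we already possess. Recognizing an
    # unknown gem in the stream is the part nobody knows how to code.
    import random
    import string

    ALPHABET = string.ascii_lowercase + " "
    TARGET = "owl"   # 3 characters: expect about 27**3 = 19,683 tries

    def monkey_line(n):
        return "".join(random.choice(ALPHABET) for _ in range(n))

    tries = 1
    while monkey_line(len(TARGET)) != TARGET:
        tries += 1
    print(tries)     # for all of Hamlet, the wait is around 27**180000 tries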

[ Parent ]

Humor alert (1.00 / 1) (#43)
by spaceghoti on Wed Nov 22, 2000 at 02:43:22 PM EST

Why does this sound suspiciously like the Internet?



"Humor. It is a difficult concept. It is not logical." -Saavik, ST: Wrath of Khan

[ Parent ]
That's what capitalism is all about... (1.00 / 1) (#44)
by Zane_NBK on Wed Nov 22, 2000 at 05:45:36 PM EST

I just publish every possible subset of output and let the market decide which are gems. Can't be much harder sorting through that than finding a random good book on Amazon as it is. :P

-Zane


[ Parent ]
Infinite power would help (3.85 / 7) (#12)
by zakalwe on Mon Nov 20, 2000 at 02:51:30 PM EST

I don't think anyone thinks that as soon as we get enough power an AI will magically appear without anyone lifting a finger, but having more power does allow more approaches to be used.

Intelligence is a very hard problem, and the complexity of understanding how our minds work is quite possibly beyond human-level intelligence. But with computing power, we can attempt 'brute force' approaches to AI, and maybe produce something intelligent enough to explain itself to us.

For example, with this supposedly infinite power, we could emulate every atom in a solar system similar to ours, and see if the interactions at this level would produce something intelligent after a few million simulations of billion-year time periods. As for how to recognise the intelligence without checking by hand every time - well, make that their problem. Put some clues in our simulated universe, and wait till something triggers their solution. When this happens, it's very probable we've got an intelligence. (Now we just have to work out how to communicate with it.)

That solution only needs us to know about chemical interactions (and to hope that our theory of chemical interactions is sufficiently accurate that it can create something similar to us, or just sufficiently complex that some form of intelligence can develop), and so is pretty easy (relatively) to implement. Possibly any system complex enough would give us an AI.

Intelligence and the human being (3.00 / 5) (#13)
by error 404 on Mon Nov 20, 2000 at 03:12:03 PM EST

First, it seems a bit arrogant to think that there is something special about the human intelligence level such that a machine implementation would result in a 'singularity'. Human intelligence is somehow a 'black hole' level?

Human intelligence is at the level where a birth head size any bigger would cause infant/mother mortality at too high a rate for the advantages more intelligence brings. Nothing special about that level.

But we won't really emulate what humans are about with computers, no matter how much CPU and RAM we throw at the project. Because it isn't about intelligence. It is about motivation and creativity. Humans, and organisms in general, want things. Things like transcendence and oxygen and sex and beer and sex and beer and sex and beer. Machines don't. Humans will break the rules and be illogical and find a way to get what they want. Machines may emulate the behavior, but if a machine breaks the rules either it is defective or the real rules aren't the ones in the manual. Sure, you can program a computer (given appropriate IO devices) to say "mmmm, beer" and move to a position with a low center of gravity in emulation of contentment. But it isn't the same. It isn't respectable intelligence that differentiates us from machines, but lowly animal desires and pleasure and pain. A machine may emit the same words I would, but it won't feel a bowl of rice fill out its stomach just so.

..................................
Electrical banana is bound to be the very next phase
- Donovan

I'm less sure (3.00 / 1) (#14)
by _cbj on Mon Nov 20, 2000 at 03:26:57 PM EST

It's a common enough feeling, for sure. Joseph Weizenbaum's infamous polemical book, "Computer Power and Human Reason", makes much the same point.

I'm not sure though, because I don't think that hardcoding desires into an AI would be any different from the hardcoded human libido. It's still at an inaccessibly low level, from the intelligence's point of view. So what's the diff?

[ Parent ]
Singularity (none / 0) (#15)
by zakalwe on Mon Nov 20, 2000 at 03:45:00 PM EST

The logic behind the 'Singularity' theory goes that if we can create an artificial intelligence, then we can immediately tell that intelligence "Design me a being more intelligent than you". If we can bootstrap this process by creating a being with intelligence sufficient to do this, then we can keep building more and more intelligent machines, and as their intelligence increases, they can tell us how to build faster machines, and so simulate these intelligences faster ( hence increasing the rate of intelligence growth ), and so on.

This basically results in exponential growth that would rapidly leave us poor humans behind - creating a level of intelligence as far above us as we are above amoebas. Of course, this all relies on getting that one bootstrap.

After achieving this, of course, anything else is going to be pretty much an anticlimax. If anything we humans can do can be done faster and better by just going to the nearest AI and telling it what you want, what's the point of doing anything? "Why should we stay awake all night arguing whether there's a God, if this machine gives us his phone number in the morning?" All rather depressing, and it brings to mind the proverb "May all your desires, but one, come true".

[ Parent ]

Now I get it (singularity) (none / 0) (#18)
by error 404 on Mon Nov 20, 2000 at 03:57:09 PM EST

In some ways, we have reached that already. We use the tools we have now to make better tools.

But it seems to me that the things we lack for that singularity are operational definitions of intelligence and a way to generate new designs without human intervention. More power would be nice, but the real obstacle is direction.

Heck, if we could come up with a good, computable definition of intelligence, I could run a background process on my own machine that launches slightly mutated versions of itself and then judges their intelligence and turns over control as soon as a "smarter" one comes along. It would be slow, but as long as there is a computable definition of "smarter" it could be making progress now.
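(A toy sketch of that loop in Python, with a computable stand-in for "smarter" - matching a fixed answer key - since a real computable judge is exactly the missing piece:)

    # Mutate-and-judge hill climbing: keep the current candidate, try a
    # random mutation, and turn over control whenever the judge scores
    # the mutant strictly higher.
    import random

    ANSWER = [3, 1, 4, 1, 5, 9, 2, 6]   # stand-in "task"

    def smartness(candidate):
        # Hypothetical judge: count positions matching the answer key.
        return sum(a == b for a, b in zip(candidate, ANSWER))

    def mutate(candidate):
        child = list(candidate)
        child[random.randrange(len(child))] = random.randrange(10)
        return child

    current = [0] * len(ANSWER)
    while smartness(current) < len(ANSWER):
        child = mutate(current)
        if smartness(child) > smartness(current):   # a "smarter" one came along
            current = child
    print(current)   # eventually equals ANSWER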


..................................
Electrical banana is bound to be the very next phase
- Donovan

[ Parent ]
Autonomous tools (none / 0) (#36)
by swr on Tue Nov 21, 2000 at 03:59:05 PM EST

In some ways, we have reached that already. We use the tools we have now to make better tools.

We've been doing that for many thousands of years. But to this day it's still the same old humans using the tools and making better tools.

When we can make a better tool, and the better tool can make an even better tool all by itself, that will result in a singularity.

Alternatively, we could artificially enhance the human brain (the brain is a tool). You might say that the previous generation teaches the next generation and the next generation learns on top of that ("standing on the shoulders of giants", etc) but the human brain itself hasn't changed that much since the dawn of civilization. Presumably we are already at least somewhat limited by our brains; otherwise we wouldn't need paper for storage, computers for processing, etc.



[ Parent ]
"Precious" brain space leads to larger h (none / 0) (#55)
by pin0cchio on Sun Nov 26, 2000 at 12:35:19 AM EST

Human intelligence is at the level where a birth head size any bigger would cause infant/mother mortality at too high a rate for the advantages more intelligence brings.

So how does this explain humanity evolving into Precious Moments (called Eloi here to avoid TM disputes) by the year 802,701 (as predicted by H.G. Wells)? Female reproductive parts can and will evolve as necessary to create children that trigger the maternity reflex with excessive cuteness, especially in the coming age when (teen-age) mothers frankly don't give a d*mn about their babies.


lj65
[ Parent ]
Medical technology (none / 0) (#68)
by error 404 on Thu Nov 30, 2000 at 10:28:18 AM EST

changes that balance point. But evolution is slow, so I don't expect noticeable changes for a long time. Maybe H.G. had it right - that's a long time he's talking about.

I kind of suspect that a reprogrammed cuteness response is more likely than further adaptation of the reproductive system, which is already pretty extreme. But even more likely is the addition of soft tissue to the outside of the head, where it can inflate rapidly after birth for cuteness without the problems associated with expanded cranial size. Cats and bears, for example, have big, cute heads that are mostly soft tissue around amazingly small crania. Hmm, considering the other body feature that changes size dramatically in seconds, there might already be a word for such a person...


..................................
Electrical banana is bound to be the very next phase
- Donovan

[ Parent ]
Define intelligence (3.83 / 6) (#16)
by jesterzog on Mon Nov 20, 2000 at 03:50:04 PM EST

In 1958, a guy called Herbert Simon wrote an article about AI. He predicted that within 10 years, computers would:

  • Be world class chess champions
  • Compose great music
  • Routinely translate between languages

We're just starting to get the chess one now, but really it was "solved" ages ago and it's just a matter of throwing the same algorithms at more and more powerful computers. I don't think computers have ever composed any really great music, and they're maybe about 2/5 of the way to translating between languages. (Meaning we can mostly code understanding of morphological and syntactic processing, but semantics, discourse processing and pragmatic analysis are a real problem.)

I think to conquer AI, we'd really need to define what AI actually is, because there are lots of different definitions. Some people reckon it's "solving tasks that would require intelligence for people to solve". For other people it's only about "teaching computers to do things that people currently do better".

The second one's more practical because you can work towards a goal. The first one's very fuzzy and borders on impossible: how can you tell when something is as intelligent as a person? (Which person?) There's a test called the Turing test that simply requires someone to be able to talk to a computer and not be able to tell if it's a computer or not. (Remember Eliza? :) Obviously this is a bit of a limited test though; it's only falling back on another goal.

I like Isaac Asimov's ideas about designing computers with enough AI that they can design more advanced versions of themselves. (Then they start running elections and humanity relies on them for everything and we end up under ignorant totalitarian rule by giant computers. :) I don't know whether it's theoretically possible for computers to design better computers without any outside help, at least as a continuous chain of successors always able to keep designing. It comes back down to how you define "better".


jesterzog Fight the light


Yeah, but... (none / 0) (#28)
by ghjm on Tue Nov 21, 2000 at 12:20:51 AM EST

Here's the question, though. The culture of the 1950s was nearly universal in its acceptance and reverence of the progress ideal. If the freedom movements of the 1960s never happened, i.e. if all of society's output had remained shackled to "progress," who can say what might have happened? We might have genuinely accomplished world-class chess computers and usable machine translation by 1968 - at the cost, of course, of not gaining the important freedoms that were won in "our" 1960s. Really, in the 60s and 70s, we turned away from "progress" in favor of getting the house in order - it wasn't until the personal computer revolution of the 1980s that "progress" seriously started up again.

-Graham

[ Parent ]
Turing and Intelligence (4.50 / 2) (#42)
by spaceghoti on Wed Nov 22, 2000 at 02:33:36 PM EST

Quoting from this website with regard to the Turing test:

The interrogator is connected to one person and one machine via a terminal, therefore can't see her counterparts. Her task is to find out which of the two candidates is the machine, and which is human only by asking them questions. If the interrogator cannot make a decision within a certain time (Turing proposed five minutes, but the exact amount of time is generally considered irrelevant), the machine is intelligent.

The website goes on to discuss the inherent flaws in the Turing test, most importantly that "...there is no definition for (human) intelligence..." All this has already been discussed eloquently here, but no one has really discussed (that I've seen, thus far) what they think constitutes intelligence.

In past reading, I've come across the concept of heuristics, defined here as "An algorithm which usually, but not always, works or which gives nearly the right answer." Not quite the wealth of information I was looking for on the web, but we'll work with it. Another way to describe heuristics as I understand it is as a non-linear method for arriving at a conclusion. While such an algorithm won't necessarily come to a logical conclusion, it has the strength of being capable of reaching conclusions impossible to achieve with linear algorithms due to insufficient data. In short, the human brain (the only known source of "true" intelligence we can point to) works on a heuristic basis.

The programming for Eliza mimics human behavior, but Eliza never came to its own conclusions. It merely spouted back random, pre-programmed responses in such a way that it imitated a human well enough that the test subjects were unable to determine who was mechanical and who was chemical.
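(For a sense of how little machinery that takes, here is an Eliza-flavoured sketch in Python, not the original program: keyword patterns mapped to canned reflections, with no model of meaning anywhere.)

    import random
    import re

    # Each rule is (pattern, response template); the first match wins.
    RULES = [
        (r"\bI need (.+)", "Why do you need {0}?"),
        (r"\bI am (.+)", "How long have you been {0}?"),
        (r"\bmy (\w+)", "Tell me more about your {0}."),
    ]
    DEFAULTS = ["Please go on.", "I see.", "How does that make you feel?"]

    def respond(line):
        for pattern, template in RULES:
            match = re.search(pattern, line, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return random.choice(DEFAULTS)

    print(respond("I am worried about my thesis"))
    # -> "How long have you been worried about my thesis?"
    # (note the unswapped pronoun - a classic Eliza-style giveaway)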

Ultimately, we don't really understand the chemical processes in our brains well enough to find a way to mimic them in solid state. At present, our solid-state processors are capable of two modes: on or off, yes or no. That doesn't really allow for the range of possibilities that the chemicals in our brains allow for, which can possibly explain why our brains aren't locked into "yes/no" programming (though some individuals I can think of make me wonder). With IBM's breakthrough quantum computer, we're looking at computers that could (combined with the solid-state electronics we have now) not only mimic human behavior, but legitimately come to valid or semi-valid conclusions based on limited information.

Ultimately, the best test of intelligence I ever heard was a system (biological or mechanical) that was capable of conceiving of "self." Again, that opens a whole new can of worms in attempting to identify and verify true conception of "self" and what "self" is to begin with (thousands of years of philosophy aside, we're looking at a "Short Circuit" scenario). A system that is able to grasp the concept of self as opposed to others and the inter-relation between that disparity is as close to Intelligence as to make no difference.

Getting back on topic, would the "infinite computer" be capable of heuristics merely by dint of pure processing power/time? I think it's highly unlikely. A photon traveling at the speed of light is still traveling in a straight line. Once that path is modified due to gravitational forces or because it bounces off an object (such as from your computer screen to your retina), it's still moving in a straight line, just on a new course. Therefore such a computer operating on linear processes will still have all of the strengths and weaknesses of linear thinking. It will be able to perform pre-programmed tasks as quickly as you might wish, but it will never make the leap into heuristic programming, which I believe to be a far better test of intelligence.



"Humor. It is a difficult concept. It is not logical." -Saavik, ST: Wrath of Khan

[ Parent ]
I disagree (4.00 / 1) (#52)
by SIGFPE on Fri Nov 24, 2000 at 01:41:55 PM EST

Ultimately, the best test of intelligence I ever heard was a system (biological or mechanical) that was capable of conceiving of "self."
I couldn't disagree more! I think that highly valuing introspection is a bit of human vanity and I hope that intelligent machines are smart enough to take more interest in solving real problems than sitting around introspecting all day! One reason why introspection is so highly valued in our society is that writing in the field is dominated by philosophers who like to sit around all day not doing work. If I had the time right now I'd express myself less facetiously, but I am quite serious when I say this!
SIGFPE
[ Parent ]
Problem solving versus philosophy (4.00 / 1) (#58)
by spaceghoti on Sun Nov 26, 2000 at 02:23:28 PM EST

I think that highly valuing introspection is a bit of human vanity and I hope that intelligent machines are smart enough to take more interest in solving real problems than sitting around introspecting all day!

While I respect your stance and believe I understand where you're coming from, I think you're missing a fundamental truth to the argument. What constitutes intelligence? If intelligence is only about problem solving, then congratulations! We've had Artificial Intelligence since the 1950s when UNIVAC first came online! However, Turing and the rest of the community researching AI are talking about an artificial system capable of operating on the same level as humans (or as close as can be managed) so as to be capable of generating a spontaneous conversation with original content. In other words, they're looking for a machine to be creative and non-linear, not just number-crunching.

While philosophy may seem to be a waste of time, consider this: how do you know what answers to provide if you don't know how to ask the question? That's one of the primary functions of philosophy: to help define the world as we know it and to clarify the questions that we as humans normally ask about our world. It isn't always enough to be able to come up with answers to questions. You first have to know how to ask the question. When (I'm being optimistic here, but I'm a huge Star Trek fan) computers develop the capacity to define their own questions, I'll consider the era of Artificial Intelligence to be upon us.



"Humor. It is a difficult concept. It is not logical." -Saavik, ST: Wrath of Khan

[ Parent ]
Maybe we agree then (4.00 / 1) (#59)
by SIGFPE on Sun Nov 26, 2000 at 04:25:56 PM EST

If intelligence is only about problem solving, then congratulations! We've had Artificial Intelligence since the 1950s when UNIVAC first came online!
Not at all. Here are some real problems: design a more efficient internal combustion engine, prove Fermat's last theorem, find the bug in my code, write a great piece of music, make my hair look nice. These are all things humans can do but machines can't. When machines can do these things and more I'll be calling them intelligent. I guess I agree with you. But I don't see where a concept of 'self' comes into this.

Humans evolved as social animals. We have finely tuned skills for interacting with other humans - modelling their reactions, predicting their behaviour, manipulating them and so on. Once we have social skills it seems only a small step to generalise the class of 'other humans' to include oneself and hence be self aware. This is all part of a package of solutions to *social* problems - problems mostly made, incidentally, by competition with other social problem solvers doing something similar (and probably also trying hard to figure out how to make themselves unpredictable). (He he...I was just watching competitive Poker on Fox Sports...)

I see no reason why any of this should have much to contribute to many of the sorts of problem solving that I think marks something as intelligent. Sure, if something is self aware and can make interesting conversation about its own state of mind I might call that intelligent too. But I see no necessary link between these different kinds of problem solving.

(Aside: I take a very plain straightforward approach to defining 'intelligence'. If someone comes round to my house and fixes a particularly difficult plumbing problem, say, using an elegant solution, I might say they are smart. I don't care how their brains work. I don't care if they have 4 limbs and are descended from apelike ancestors or are made of silicon. I will use the term in an unprejudiced manner based on what they actually succeed in doing.)

There has been much discussion about a 'singularity' - when humans are able to make more intelligent machines we might see runaway development. The key thing is that we have a loop with positive feedback. I think that many people seem to believe that self-awareness also causes a loop with +ve feedback with runaway effects. All I see is a loop. So though I think Godel, Escher, Bach is one of the greatest books I have read, I think the latter part of it is a bit dated now with its endless harping on about strange loops in AI.
SIGFPE
[ Parent ]
I, Robot (4.00 / 1) (#60)
by spaceghoti on Sun Nov 26, 2000 at 06:49:53 PM EST

Perhaps we do agree, in some aspects. However, we're approaching the same issue from separate sides.

Here are some real problems: design a more efficient internal combustion engine, prove Fermat's last theorem, find the bug in my code, write a great piece of music, make my hair look nice.

Rather than spend a lot of time going through these point by point, I'll attempt to summarize my argument. These are all specific tasks that can be programmed under existing technology to produce results for all of these activities. Some of them can be programmed to mimic a human quite closely. This is a results-oriented approach to programming, but in my thinking it has an inherent flaw: just because the computer solved the problem for you doesn't mean the computer did so through its own understanding. A monkey is capable of being taught a complicated series of buttons to push to produce desired results. Does that make the monkey intelligent? Look at a pigeon. Pigeons can be taught to push buttons as well. I doubt anyone here would make a case that a pigeon is intelligent by any definition you care to put forward. And yet, in terms of problem-solving the pigeon is a learning system capable of adapting to changing circumstances, and is thus more intelligent than the 8-processor Sun Enterprise box sitting upstairs in my office.

I choose self-awareness as my definition of intelligence because with a system that is self-aware, I don't need to question whether or not the results the system produced were due to mindless repetition of pre-programmed results. We already have systems that do that now. We're looking for something that makes the leap into being capable of programming itself for a necessary task to better understand the requirements for that task. If you can find a way to define such a state that doesn't involve using the phrase "aware" then I will gladly concede defeat.



"Humor. It is a difficult concept. It is not logical." -Saavik, ST: Wrath of Khan

[ Parent ]
A quick question before I reply properly... (4.00 / 1) (#61)
by SIGFPE on Mon Nov 27, 2000 at 12:49:29 PM EST

I choose self-awareness as my definition of intelligence
How do you tell if something is self aware?
SIGFPE
[ Parent ]
Good question! (4.00 / 1) (#62)
by spaceghoti on Mon Nov 27, 2000 at 01:51:36 PM EST

Discerning self-awareness is the key. It's a philosophical issue that's never really been properly answered. People are always discussing and exploring the implications of such awareness. My personal thought is that it's when you can refer to the idiomatic "I" and understand the distinction between "you" and "me." To understand that the functions and processes of "I" are separate and distinct from "you", and that there is intellectual (I hesitate to say emotional, because of the topic under discussion) weight to the "I."

Put another way, self-awareness is when you are literally "aware" of your own thoughts and actions as belonging to you that are a result of your own internal processes and choices, rather than merely pre-programmed input. It's when you rise above the programming to assert individuality. Of course, that sort of a strict definition means that some humans I can think of wouldn't pass the test, but nobody said it was an easy discussion.



"Humor. It is a difficult concept. It is not logical." -Saavik, ST: Wrath of Khan

[ Parent ]
My view is that... (none / 0) (#63)
by SIGFPE on Mon Nov 27, 2000 at 02:34:07 PM EST

...humans have successfully been using terms like "intelligent", "smart" and "ingenious" for several thousand years at least without actually worrying about what goes on inside someone's head. We learn from a young age how to use these words, and many people get along fine using them successfully without having to introduce ideas like 'self-awareness'. It is this 'naive' idea of intelligence that I use. If someone solves a novel problem for you, you don't go wondering "Hmmm...maybe they weren't smart after all...maybe they solved this problem by rote". I use the same standard when judging machines. I simply don't believe that any reasonable definition of 'intelligence' should make any reference to the internal state of someone's head - after all, we've managed fine for several thousand years with such a definition. It seems to me that people who insist on a definition in terms of internal state have swallowed some propaganda from the anti-AI camp. I think a new, more rigorous definition of intelligence has appeared in recent years because many people have seen a need to redefine the term so as to exclude machines. The anti-AI camp have succeeded in shifting the goalposts.

Hmmm...if I had some time I might think about trying to argue that the whole obsession with self-awareness is a result of modern (and maybe just pre-modern) fetishes in art - ranging from, for example, Joyce's obsession with stream-of-consciousness writing to the way any modern novel is now expected to develop characters internally. These traits were very rare in older literature - characters were judged more by their outward behaviour. I think we have allowed this whole 'self-awareness' thing to distract us from the real issues at hand.

This is not to say that developing a self-aware machine isn't a good goal. I just don't think it is crucial.

So back to your previous post: I don't think we have 'intelligence' in machines in any sense of the word except in some very limited domains. I think that when you say
Some of them can be programmed to mimic a human quite closely
it is simply not true. We do not have any software that can generate original compositions that are any good (when I say original I mean not copied from someone else), and we certainly don't have automatic debuggers. Machines (and pigeons) can only perform simple tasks. I guess I'm saying that I think intelligence is a matter of degree.
SIGFPE
[ Parent ]
Intelligence and AI (5.00 / 1) (#64)
by spaceghoti on Mon Nov 27, 2000 at 05:53:51 PM EST

This is, in my opinion, the whole crux of the matter. What is intelligence? What is awareness? When I say that a computer can be programmed to mimic human behavior, I mean that in a very literal sense. Eliza helped prove that a computer can fool a human with random conversation for some time. I've also seen game programmers enter an exhaustive list of keywords that players can trigger for the program to respond as if they were intelligent systems. Does it make them intelligent if they simply look intelligent?

My take on that is no. It isn't. Intelligence can be used to describe a lot of things. Hive intelligence. Animal intelligence. We're looking for machines that can be intelligent so that we can give them a simple sentence and have them understand what we mean. You can train a dog to respond to verbal commands just like you can program a computer to perform specific tasks once you enter the appropriate syntax at the command prompt. But I don't look at a dog as intelligent because the dog is incapable of doing more than reacting to conditioned responses when we give commands. It won't learn new commands unless given specific impetus to do so. It is more intelligent than a computer because it is capable of making independent decisions, but its communication level is not what we want from another human or a "smart" computer.

No, we don't have computer programs that can generate original music that we can appreciate. I never claimed we did. But I do say we have computer programs that can mimic human responses. And Microsoft claims to be able to debug errors in programs, which I know because Microsoft VB offers to debug Illegal Operations (aka GPFs) when they come up. Not that I've ever seen it succeed to any degree. But I don't look at human problem solving with regard to whether or not it was done by rote. It might or might not be the case with humans, and I know it. With a machine, I know the machine didn't solve a problem because it thought about it, I know the machine solved the problem because of preset programming instructions. The machine is literally incapable of expanding beyond zero-sum operations, of being able to reprogram itself in response to the task given it. As a friend of mine put it, computers are incapable of reprogramming their own rules for operation.

Whether or not we place too much emphasis on "awareness," we still need a viable definition for what constitutes intelligence so that it can be used as a crucible for machine intelligence. I haven't yet heard one that satisfies me better than the one I've put forward.



"Humor. It is a difficult concept. It is not logical." -Saavik, ST: Wrath of Khan

[ Parent ]
Kuro5hin is a good substitute for USENET! (none / 0) (#66)
by SIGFPE on Tue Nov 28, 2000 at 01:01:21 PM EST

This is, in my opinion, the whole crux of the matter. What is intelligence? What is awareness? When I say that a computer can be programmed to mimic human behavior, I mean that in a very literal sense.
You seem to be arguing like this: Computers are already intelligent in the way you define it (in terms of results), but these intelligent systems aren't very intelligent at all and therefore your definition isn't very good. But I don't think Eliza is a very good mimic (it just demonstrates what I knew already - it doesn't take intelligence to make small talk!) and I don't think any of your other examples are.
Does it make them intelligent if they simply look intelligent?
In the final analysis I'd say yes. If something looks intelligent and can convince you - what's the problem? If it looks like it can do my homework or design a spaceship and the homework gets a good score and the spaceship flies then who cares whether it is 'really intelligent'?
With a machine, I know the machine didn't solve a problem because it thought about it, I know the machine solved the problem because of preset programming instructions.
I have to concur. I don't think that this is an inherent problem with computers - just the type of software we currently know how to write. I don't know if we'll ever know how to write self-writing software - but I hope we do!
Whether or not we place too much emphasis on "awareness," we still need a viable definition for what constitutes intelligence so that it can be used as a crucible for machine intelligence. I haven't yet heard one that satisfies me better than the one I've put forward.
Why do we need to define intelligence? Why don't we just try to make machines to solve hard problems and leave it to philosophers to decide whether or not they are intelligent? I think this also better fits with the way intelligence has appeared in nature. There isn't really such a thing as general intelligence. Humans and other animals are the result of a lot of specific responses to evolutionary pressures. Tracking prey, avoiding falling over, language and so on all probably have their own specific subsystems within the brain. Combine enough of these together and you may have something as smart as a human. I think that's how people will eventually make machines that are generally considered to be intelligent.

I have a hunch that at the back of your mind you are worried about whether to allow computers as moral agents. Self-awareness seems like it might be an appropriate property to consider when trying to decide whether we should extend things like legal rights to machines. I hope your definition of intelligence isn't being coloured by a need to decide what should and shouldn't receive moral consideration. I think that my own thinking about whether a machine was intelligent used to be biased in this way (though I didn't realise it at the time) - but I now think these issues need to be treated separately.
SIGFPE
[ Parent ]
alt.kuro5hin.debate (none / 0) (#67)
by spaceghoti on Tue Nov 28, 2000 at 03:59:05 PM EST

Certainly, I don't mean to say Eliza was a good mimic, but it was good enough to satisfy the basic requirements of the Turing Test, which is to make it difficult for a person to tell the difference between computer and human intellect on the other side of the screen for at least five minutes. Looking at intelligence as results-oriented, yes: Eliza is an intelligent machine.

Obviously, at this point we can only agree to disagree. We're approaching the issue with valid arguments that are philosophically opposed. Ultimately, I think the problem here lies in what we expect from an "intelligent" machine rather than what truly qualifies as intelligence.

If something looks intelligent and can convince you - what's the problem?

My problem stems from what I expect from intelligence. Like the dog that can respond to preset commands, I don't accept that as a standard of intelligence, because I expect intelligence to be flexible enough to incorporate new commands - particularly if it can attempt to adapt on the fly rather than rely on old pathways. The phrase "you can't teach an old dog new tricks" comes immediately to mind. That isn't what I want out of artificial intelligence. A system that can convince me that it is capable of adapting to constantly changing parameters isn't likely to be a system that was merely pre-programmed to respond to all possible variations. I don't believe such a program exists. Self-programming is required.

Humans were created out of animal instincts that guided us through such things as fire, flood, famine, war, etc. These are instincts you will find in all animals to varying degrees. What defines us as human, in my opinion, is our ability to overcome these instincts, to rise above what genetics have "programmed" us to do and to alter our behavior according to what we decide. I believe that genetics give us the foundation for behavior and personality, but that we have the capacity to reprogram ourselves in ways that other animals do not. And thus, to satisfy my requirements for artificial intelligence, I require a machine to acquire that capacity as well. Would a machine that has infinite power and processing potential rise to that occasion? I don't believe so, because of the nature of the system. I believe it requires a change in the way we approach computers and technology to overcome that hurdle.

It's entirely possible that my standards for this issue are too high. That's fine. I'm comfortable with setting my sights too high on this issue because I know that first of all, technology is improving at such a rapid pace that what is too high now may be feasible in time. And secondly, the best way to achieve a goal is to aim past your mark. If we develop systems that are capable of performing tasks we want, that really is good enough for me. I may not credit the system with intelligence, but I will still appreciate the system for what it is capable of.

In all honesty, I'm not that worried about the ethics of things like artificial intelligence or human cloning. In my mind, it's a case of seeing if it will work first before working out the details. A friend quoted the phrase "just because we can do a thing doesn't mean we should do it." I disagree with that philosophy based on certain criteria: mostly whether or not the research is intended to hurt someone. Researching ways to torture people isn't something I would approve of. Researching ways to improve the human genome or to awaken artificial intelligence (sentient machines) doesn't cross that line by my criteria. Certainly there are enough ways to abuse these things, as that potential lies in all research. If we succeed in creating sentient machines, then we'll need to grapple with the consequences and ethics of what could easily become a mechanical "slave race." But the issue isn't really on trial here; it's how we handle the issue that should always be judged.



"Humor. It is a difficult concept. It is not logical." -Saavik, ST: Wrath of Khan

[ Parent ]
Hmmm ... (none / 0) (#53)
by StrontiumDog on Fri Nov 24, 2000 at 06:04:32 PM EST

I don't think computers have ever composed any really great music

I have the latest Armand Van Helden CD (Killing Puritans); it's great music, and I have my definite suspicions that Armand Van Helden is actually an Amiga 2000.

[ Parent ]

Easy... (2.66 / 6) (#17)
by retinaburn on Mon Nov 20, 2000 at 03:50:13 PM EST

I would write a program that would replace every image of Al "Stud" Gore and George "Dub-ya" Bush with Jean Chretien. Then replace every image of Hitler and Mussolini with Mickey Mouse.

But after that someone else could use the computer.


I think that we are a young species that often fucks with things we don't know how to unfuck. -- Tycho


What no QIIIA Benchmarks? (2.80 / 5) (#20)
by Mantrid on Mon Nov 20, 2000 at 04:01:47 PM EST

Infinite FPS in QIIIA demo number whatever? I thought all computing speed measurements involved Quake 3 in some way... :P

One Half a Manifesto (3.00 / 4) (#21)
by sera on Mon Nov 20, 2000 at 04:33:08 PM EST

Jaron Lanier made a relevant comment in his recent "One Half a Manifesto":
If anything, there's a reverse Moore's Law observable in software: As processors become faster and memory becomes cheaper, software becomes correspondingly slower and more bloated, using up all available resources.
(In general, the piece is about being skeptical of the claims of technological futurism, so it's pretty relevant to the "infinite computational power" question.)

firmament.to: Every text is an index.

Another critic of unlimited computing power (3.00 / 1) (#24)
by kallisti on Mon Nov 20, 2000 at 05:52:14 PM EST

Is the guy who wrote this. Among other claims, he states that the cost of machines is due to their complexity. Having more powerful machines or nano-machines has no effect on complexity. Somewhere the design still needs to be done.

In order to solve a given problem, that problem has to be defined. This definition for any real world problem is at least as hard as the problem itself. Robert Sheckley wrote a story where the Ultimate Computer was found; it could answer any question. The people who found it started asking questions, but couldn't make any sense of the answers. Douglas Adams has similar stories. I think this is where relying on "unlimited" processing will fail as well.

[ Parent ]

I agree (none / 0) (#45)
by SIGFPE on Wed Nov 22, 2000 at 08:39:46 PM EST

This definition for any real world problem is at least as hard as the problem itself.
This is exactly why I posed the question in the first place. I work in graphics. It seems to me that in 30 years' time we'll still be working hard to generate good photorealistic imagery. The problems we have right now are not problems of CPU time (well, many are, but having a million times larger render farm would still leave many problems unsolved) - we simply don't have the input data we need to specify all the detail in all of the scenes we wish to render. So the idea that we'll have simulated humans in 30 years' time seems to me so ridiculous it's laughable. I'm not sure we will be able to render humans 100% realistically at that point (though we'll be pretty close soon)!!!

But I still find the idea of a singularity convincing...just not in 30 years time.
SIGFPE
[ Parent ]
Depends on the OS (1.40 / 5) (#22)
by sl4ck0ff on Mon Nov 20, 2000 at 05:21:11 PM EST

Are we talking about fantasy *infinite* *computer* *power*, or basically a mostly problem-free computer that never really "gets tired" and can produce results quickly and efficiently? It depends first off on the Operating System, not just the hardware. I also think that we have that now. We always instantly adapt to whatever amount of speed/etc. we need. We need it. We invent it. If you're talking about something that surpasses your satisfaction and what you're "used to", then that can only exist when one human has an excessive amount of passion and dedication.
/me has returned to slacking
Some thoughts on Mind (3.75 / 4) (#23)
by FlinkDelDinky on Mon Nov 20, 2000 at 05:51:02 PM EST

This question of making machines smart - and let's face it, we don't want intelligent machines, we want smart ones - has been thrown about for some time.

On the one hand are the people who say the universe is a mechanical (and if not mechanical, then highly ordered and logical) place, and since it generated human intelligence it's merely a question of discovering that process. Once the process the universe used is understood, it's merely a question of applying that method to different materials, perhaps semiconductors.

Many don't like the above idea, and to be honest I'm one of them, but I think the idea is reasonable and well founded. I like the idea that we're somehow special I guess.

Then on the other hand we've got people who say the human (or perhaps more correctly the biological) mind is not purely mechanical - that it merely leverages the brain's mechanical computational abilities to help arrive at decisions, but still uses as yet unknown non-mechanical inputs to make choices. Generally these arguers use various aspects of quantum phenomena or philosophy to support their beliefs.

Although I'm sympathetic to the latter position I find the former more logical. In the book 'Wet Mind' the writers offered the idea that the mind could be an electromagnetic force generated by the brain that floats above it, both acting on and reacting to the brain's natural functioning (or something to that effect).

All this brain/mind talk makes me recall a show on Discovery or maybe Animal Planet about 'psychic' animals that know when their owners are coming home. Mostly they explained it through the animals' superior hearing. But there was this one little doggie that they observed walking to the front door expectantly way before its physical senses would allow it to know that its owner was coming home (it was a test they ran on the dog, and the dog 'passed'). Anybody else see that?

What was the ancient argument about the runner never finishing the race - Zeno's paradox? Take the distance to the finish line and repeatedly cut it in half forever, and the runner will always have a little farther to run before finishing. This problem was solved by introducing time into the equation; in other words, the question was being asked incorrectly.
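The resolution, for the record, is just a convergent geometric series: at constant speed v over distance d, the successive half-distances take less and less time, and the total is finite. In LaTeX:

    \sum_{n=1}^{\infty} \frac{d}{2^n v}
        = \frac{d}{v}\left(\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots\right)
        = \frac{d}{v}

So the infinitely many sub-runs fit inside the ordinary finishing time d/v.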

That's kind of where I think we are in terms of processing power and smart machines. After all what's to stop my Cyrix 266 from being a slow smart machine (with huge hard drives)? We're just not asking the questions correctly, which kind of makes sense since I don't know of any logically rigorous definitions of consciousness or intelligence.

I know exactly what I want to do... (2.60 / 5) (#26)
by 11223 on Mon Nov 20, 2000 at 07:15:01 PM EST

Turn this computer into a universe-simulator. That's right, run a universe in the computer, and see if it develops life. If not, give it up, try again, until eventually we get life, and then repeat until we get intelligent life.

--
The dead hand of Asimov's mass psychology wins every time.

How will you detect those universes with life... (4.00 / 1) (#27)
by SIGFPE on Mon Nov 20, 2000 at 08:20:52 PM EST

...as you may have to run many simulations on rather large universes to get any results?
SIGFPE
[ Parent ]
Easy (none / 0) (#54)
by zakalwe on Sat Nov 25, 2000 at 05:52:15 PM EST

When you're searching for intelligence, all you have to do is make a few tweaks to your universe. Scatter a few 'puzzles' of a nature that you feel would require intelligence to solve (put a few mysterious black monoliths on various moons), and rig it so that the solution to a puzzle takes the universe out of fast simulation mode, alerts a human investigator, and opens up some kind of communication channel.

There may be a few false positives where the puzzle is triggered by random chance, and doubtless there will be many civilizations you miss, but eventually, provided your universe is capable of producing them, you'll get an intelligent being or beings which solve the puzzle.
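In pseudo-Python the scheme might look something like this - purely a sketch, since every function here (advance_physics, puzzle_solved, alert_investigator) is invented for illustration:

    # Run many candidate universes at full speed; drop out of fast-forward
    # the moment any embedded puzzle is solved.

    def run_universe(seed, max_steps):
        """Bloop-style bounded loop: guaranteed to halt after max_steps."""
        state = seed
        for step in range(max_steps):
            state = advance_physics(state)      # the (enormous) simulation step
            if puzzle_solved(state):            # e.g. "someone touched a monolith"
                alert_investigator(step, state) # leave fast mode, open a channel
                return state
        return None                             # no intelligence found this run

    def search_for_intelligence(seeds, max_steps):
        # Beware false positives (puzzles tripped by chance) and false
        # negatives (civilizations that never find a puzzle), as noted above.
        return [u for u in (run_universe(s, max_steps) for s in seeds) if u]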

[ Parent ]

Can't do that... (4.00 / 1) (#30)
by spiralx on Tue Nov 21, 2000 at 06:14:57 AM EST

I forget exactly where the proof comes from, but it has been shown that the smallest possible system that can emulate the Universe perfectly is the Universe itself, thus no computer within the Universe can run such a simulation at even a 1:1 time ratio, let alone faster. So if you wanted to simulate a Universe then you'd either have to emulate a less complex Universe, which would perhaps limit the potential for life (which seems to rely on a certain amount of complexity being present), or run it very slowly (which would be fairly pointless).

You're doomed, I'm doomed, we're all doomed for ice cream. - Bob Aboey
[ Parent ]

Hrmn... (none / 0) (#32)
by 11223 on Tue Nov 21, 2000 at 10:22:50 AM EST

I'd go the slow route. Secondly, if you had infinite computing power, you could run the simulation, which is an argument against the existence of infinite computing power. Nevertheless, if you somehow turned the entire universe into a quantum computer, then a universe would be the ideal thing to run on it.

Side note about time: can you get Discover magazine in the UK? There's an interesting article in the December issue about how time may not exist at all.

--
The dead hand of Asimov's mass psychology wins every time.
[ Parent ]

And on top of that: (none / 0) (#35)
by Hillgiant on Tue Nov 21, 2000 at 01:42:01 PM EST

How would you interpret the answers?

-----
"It is impossible to say what I mean." -johnny
[ Parent ]

I disagree (none / 0) (#50)
by Holloway on Fri Nov 24, 2000 at 10:26:13 AM EST

...there's an awful lot of redundancy in the universe, especially space, with a hydrogen atom every couple of metres. You could definitely store several universes over. Also, I'm willing to bet that natural chemical reactions that result in the same compound (or whatever) could be simulated much faster than IRL.


== Human's wear pants, if they don't wear pants they stand out in a crowd. But if a monkey didn't wear pants it would be anonymous

[ Parent ]
Infinite computing power and AI (3.00 / 1) (#29)
by adde on Tue Nov 21, 2000 at 12:34:11 AM EST

No, I don't think just having infinite computing power would automatically give us good AI. The question we have to consider is "what is intelligence?". Since I am currently working on an AI program for a class I'm taking, I've been wondering about that quite a bit. The program I work on is a learning game player, and with infinite computing power I could easily make it impossible for a human to beat. But would that be intelligent? Not really: the learning algorithms are pretty simple, and all it does is play all possible games and then do extensive search on them.
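To make the "play all possible games" point concrete, here's a minimal exhaustive negamax sketch in Python; the Game interface (is_over, score, legal_moves, apply) is invented for illustration, not any particular library:

    # Exhaustive negamax: with unbounded compute, perfect play at any finite
    # two-player game needs no cleverness at all.

    def solve(game):
        """Return (score, move) for the side to move: +1 win, 0 draw, -1 loss.
        game.score() is assumed to report the terminal result for the side
        whose turn it would be."""
        if game.is_over():
            return game.score(), None          # terminal: nothing to play
        best_score, best_move = -2, None       # below any legal score
        for move in game.legal_moves():
            reply_score, _ = solve(game.apply(move))
            if -reply_score > best_score:      # opponent's best, negated
                best_score, best_move = -reply_score, move
        return best_score, best_move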

Note however that infinite computing power is pretty useless without infinite memory, typical AI programs run out of memory before the programmer runs out of patience.

There seems to be some sort of consensus in the academic AI community that we won't have good AI for at least a hundred years, but having a computer with infinite computing power would probably speed it up. ;)

If you want to read about cool, but not necessarily possible, AIs, read Dan Simmons' Hyperion series - it's very good.


//Andreas

I gotta say it (4.33 / 9) (#31)
by JonesBoy on Tue Nov 21, 2000 at 09:41:00 AM EST

I would become the ULTIMATE TRAVELING SALESMAN!!!!

Nobody would ever match my efficiency as I speed around neighborhoods using the optimal path every time!
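With time no object, brute force becomes respectable: just check every one of the (n-1)! possible routes. A toy Python sketch - the four-city distance matrix is made up for illustration:

    # Brute-force travelling salesman: enumerate every tour, keep the best.
    from itertools import permutations

    def shortest_tour(dist):
        """dist[i][j] = distance from city i to city j; returns (length, tour)."""
        n = len(dist)
        best = (float("inf"), None)
        for order in permutations(range(1, n)):   # fix city 0 as the start
            tour = (0,) + order + (0,)
            length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            best = min(best, (length, tour))
        return best

    # Example: four stops on the salesman's round.
    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 3],
            [10, 4, 3, 0]]
    print(shortest_tour(dist))                    # (18, (0, 1, 3, 2, 0))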

Sorry.

Anyway, why would you want to program a computer to emulate a human? What's the point? A few minutes of pleasure, nine months of waiting, and a little excruciating pain, and you can have a little computer to program that will attempt to emulate you for years. After that, you get to learn from it as it becomes independent and takes over your world, making you an outdated computer engineer with old knowledge being forced into retirement. And you might even be proud of your creation to boot!


Speeding never killed anyone. Stopping did.
I'm going to give this a serious reply! (3.50 / 2) (#33)
by SIGFPE on Tue Nov 21, 2000 at 11:27:57 AM EST

Anyway, why would you want to program a computer to emulate a human?
It's related to the issue of the singularity discussed elsewhere. Suppose humans are capable of creating something just ever so slightly smarter than humans. Then from that point we can delegate all programming tasks to the creations, because they'll be better at it. They in turn will be able to create something smarter still. Repeat ad infinitum. So, given any computational task more difficult than making human intelligence, the easiest way to achieve it is to make something smarter than a human and then delegate. So emulating humans isn't interesting in its own right - but emulating humanlike intelligence is possibly the most efficient first step in any more difficult task. I think this point is frequently missed by those who think this is all misdirected womb envy!
SIGFPE
[ Parent ]
But why try to emulate the way a human thinks? (4.00 / 1) (#38)
by Chakotay on Wed Nov 22, 2000 at 09:36:36 AM EST

It's been proven over and over that emulating something that exists in nature directly generally isn't the best solution. I've never seen a Boeing 777 flap its wings like a bird, for example, nor have I seen a nuclear submarine wiggle its tail around like a shark. Quite likely, machine intelligence will be achieved in a much different way than human intelligence works. Currently you need a supercomputer just to emulate the behaviour of a single human neuron - let alone the billions that make up our full brain. In a different way, however, it will definitely be possible to create human-like intelligence in a human-unlike way, simply by creating a computer that is able to learn, to adapt. Possibly Transmeta's code morphing technology or something similar could also be used for that...
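To give a feel for the gap: a cartoon leaky integrate-and-fire neuron is a few lines of Python (the constants below are illustrative, not physiological measurements), while the biophysically faithful models alluded to above cost vastly more per neuron:

    # A toy leaky integrate-and-fire neuron, versus the supercomputer-scale
    # models needed for real biological fidelity.

    def simulate_lif(current, dt=0.1, tau=10.0,
                     v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
        """Integrate an input-current trace (ms, mV); return spike times."""
        v, spikes = v_rest, []
        for step, i_in in enumerate(current):
            v += dt * ((v_rest - v) / tau + i_in)   # leak + drive (Euler)
            if v >= v_thresh:                       # threshold crossed
                spikes.append(step * dt)
                v = v_reset                         # reset after the spike
        return spikes

    print(simulate_lif([2.0] * 1000)[:5])           # regular firing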

--
Linux like wigwam. No windows, no gates, Apache inside.

[ Parent ]
Must reply to this too (4.00 / 1) (#41)
by SIGFPE on Wed Nov 22, 2000 at 01:29:27 PM EST

Two reasons for emulating humans:

(1) Actually we might not really want to emulate humans. Being able to emulate a human (or at least some(thing/one) slightly smarter) might be the simplest way to prove the existence of a singularity - although the easiest way to reach the singularity, given that it is possible, might be different. So it's useful to know if we can emulate humans even if we don't actually want to do it.

(2) Suppose I wanted to write a computer program to compose amazing music. Right now nobody has the faintest clue what makes music good. In fact the simplest algorithm I could describe to make good music right now would be to use a human emulator and run a few googolplex tunes by it to see which it likes best. This is a general principle - to solve problems for real living humans the easiest solution may be to test proposals out on emulated humans. So for humans, at least, being able to emulate humans would actually be a useful bit of technology.
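Scheme (2) is plain generate-and-test with an emulated human as the scoring oracle. A Python sketch, where emulate_human_rating is of course hypothetical - implementing it is the entire problem:

    # Generate-and-test composition: the emulated human is the fitness oracle.
    import random

    NOTES = range(60, 72)                      # one MIDI octave, C4..B4

    def random_tune(length=16):
        return [random.choice(NOTES) for _ in range(length)]

    def emulate_human_rating(tune):
        # Stand-in for the emulated listener - building the real thing is
        # the entire problem. This toy just prefers small melodic steps.
        return -sum(abs(a - b) for a, b in zip(tune, tune[1:]))

    def best_tune(candidates=10**6):
        # On the article's infinite-but-terminating computer this bound
        # could be a googolplex; the loop would still be Bloop-legal.
        return max((random_tune() for _ in range(candidates)),
                   key=emulate_human_rating)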
SIGFPE
[ Parent ]
Nature knows best (none / 0) (#47)
by jreilly on Thu Nov 23, 2000 at 11:36:20 PM EST

Actually, your comment about submarines wiggling is not as far-fetched as you think. At MIT a group built a robotic tuna to study how they swim, because they are so much more efficient at the business of moving through water. Don't forget, Nature has had millions of years of evolution to determine the best ways to do certain things.

Oooh, shiny...
[ Parent ]
Technical point (2.50 / 2) (#37)
by jacob on Tue Nov 21, 2000 at 10:26:59 PM EST

If you had a computer that could solve all solvable problems in under n seconds, as you propose, then you could solve the halting problem: just time the application, and if it executes for n + epsilon seconds, you're guaranteed that it will never terminate. Now you've made the halting problem decidable - and, incidentally, it can thus be computed in less than n seconds by definition. Drats!
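The objection as code - a timeout "decider" using Python's multiprocessing module, which of course only decides halting under the (contradictory) premise that every terminating program finishes within n seconds:

    # Valid only if all terminating programs finish within n_seconds.
    from multiprocessing import Process

    def halts(program, n_seconds):
        """program is any zero-argument function, run in its own process."""
        p = Process(target=program)
        p.start()
        p.join(n_seconds)                      # wait at most n seconds
        if p.is_alive():                       # still going past the bound:
            p.terminate()                      # by the premise, never halts
            return False
        return True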

Your anal computer scientist,

-jacob



--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

No you couldn't (none / 0) (#39)
by SIGFPE on Wed Nov 22, 2000 at 11:26:13 AM EST

If you had a computer that could solve all solvable problems in under n seconds, as you propose, then you could solve the halting problem
That's why I chose Bloop as the language. The halting problem for Bloop is trivial - all programs terminate. In addition you cannot express the usual halting problem in Bloop. Nonetheless all useful programs that have ever been written could be expressed in Bloop (with a suitable I/O library :-). Check out the link in my original article, or better still Gödel, Escher, Bach by Douglas Hofstadter.
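The article's Fermat example shows the flavour of a Bloop-legal program: every loop bound is fixed before the loop starts, so termination is guaranteed no matter how large the bound. A Python rendering of that bounded search (illustrative only):

    def fermat_counterexample_up_to(limit):
        """Search x, y, z, n <= limit for x**n + y**n == z**n with n >= 3.
        Bloop-style: all four bounds are known up front, so this always
        halts - it just can't test *all* the integers, exactly as the
        article says."""
        for n in range(3, limit + 1):
            for x in range(1, limit + 1):
                for y in range(1, limit + 1):
                    for z in range(1, limit + 1):
                        if x**n + y**n == z**n:
                            return (x, y, z, n)
        return None                             # none below the bound

    print(fermat_counterexample_up_to(20))      # prints None (Wiles agrees)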
SIGFPE
[ Parent ]
In that case is Bloop an excessive restriction? (none / 0) (#48)
by DoubleEdd on Fri Nov 24, 2000 at 05:05:41 AM EST

OK - so we've restricted ourselves to Bloop to avoid the halting problem. However, is this a restriction that would prevent the coding of a human-like AI? It doesn't matter if all useful programs to date can be coded in Bloop. We're talking about entirely new classes of code.

Note that I personally think that the restriction to Bloop is probably valid*, although not provably so until we have an AI algorithm in whatever form. If I haven't solved a problem after a certain amount of time I will give up - so the halting problem is trivial in the case of my brain.

*The getout clause I'm going to put in here is that Bloop is a valid restriction IF Floop is sufficient to code an AI. It isn't obvious that intelligences are simulatable by Turing Machines.

[ Parent ]

Any Floop program that terminates in a finite... (none / 0) (#51)
by SIGFPE on Fri Nov 24, 2000 at 01:35:47 PM EST

...known time can be rewritten in Bloop. In other words any Floop program that has terminated could have been written in Bloop. If a program doesn't terminate it's not that useful :-) Floop and Bloop are pretty well indistinguishable on practical computers and it's the practicality that I wanted to capture in my restricted definition of infinite power.
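A concrete illustration of the rewrite, using Collatz iteration (handy because nobody knows a termination bound in general, yet any run that *has* terminated could have been given one):

    def collatz_steps_floop(n):
        """Floop-style: an unbounded while-loop; no bound known up front."""
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    def collatz_steps_bloop(n, bound):
        """Bloop-style: the same computation once a running-time bound is
        known. The for-loop's bound is fixed in advance, so this provably
        halts."""
        steps = 0
        for _ in range(bound):
            if n == 1:
                return steps
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return None                             # bound too small - but we halted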
SIGFPE
[ Parent ]
What about Windows? :-) (2.50 / 2) (#46)
by code0 on Thu Nov 23, 2000 at 09:16:42 PM EST

Nah. It would use all that processing power....

It wouldn't help... (4.66 / 3) (#49)
by brunns on Fri Nov 24, 2000 at 07:58:44 AM EST

It would only tell us that the answer is 42.

1950s is to 2000 as 2000 is to... (3.00 / 1) (#56)
by chaoskitty on Sun Nov 26, 2000 at 02:02:13 AM EST

I think it would be more fitting to put it this way: in the 1950s, lots of people imagined that computers could figure out tons of things that were previously too time-consuming to do by human power.

So, if you were someone who was just given an opportunity to have a program run on one of these new computers, what would you run? What problem would you solve?

I think about this often: I have more computing power at my disposal than all of the computers in the world combined had in 1960, yet what am I doing with them? Calculating OGR nodes while they are idle?

There must be a more noble purpose...

I can't believe it hasn't been said... (3.00 / 1) (#57)
by Stitch on Sun Nov 26, 2000 at 02:44:12 AM EST

We are assuming *infinite* power here - unimaginable things can be done! Limitless possibilities! In keeping with the fantastic theme of this computer, I would set it on the near-impossible task of counting all the votes in Florida.

No, on second thought even infinite computing power would be useless...

What I would do... (none / 0) (#65)
by The Madpostal Worker on Mon Nov 27, 2000 at 08:26:33 PM EST

make some cool simulations. Right now we are playing with gas modeling in school, and 10^4 or 10^5 particles is a good practical limit. However, a real macroscopic sample contains ~10^26. It would be really cool to be able to simulate a large number of particles with a large degree of accuracy, something we cannot do right now.
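For flavour, here's a toy non-interacting gas-in-a-box integrator in Python (elastic wall bounces only, no particle collisions, and all constants are arbitrary); 10^4 particles is comfortable today, and 10^26 is exactly where the infinite computer would earn its keep:

    import random

    def step(positions, velocities, dt=1e-3, box=1.0):
        """Advance each particle one time step, bouncing elastically off walls."""
        for pos, vel in zip(positions, velocities):
            for k in range(3):                  # x, y, z components
                pos[k] += vel[k] * dt
                if not 0.0 <= pos[k] <= box:    # left the box: reflect
                    vel[k] = -vel[k]
                    pos[k] = min(max(pos[k], 0.0), box)

    n = 10**4                                   # today's practical ceiling
    positions = [[random.random() for _ in range(3)] for _ in range(n)]
    velocities = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(n)]
    for _ in range(100):                        # a short run; scale at will
        step(positions, velocities)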
<-- #include "~/.sig" -->