
Must an A.I. exist in our reality?

By 11223 in Technology
Thu Dec 21, 2000 at 01:55:34 PM EST
Tags: Software

Many A.I. projects focus on giving computers skills and knowledge related to the world of human and animal events. These projects, such as the OpenMind commonsense project, have achieved varying degrees of success, though never total success. Even robots that play soccer (or football, depending on where you live) measure their world in terms of geometrical and spatial concepts. But what about making an A.I. relate to the world it could know best - the filesystem on its own disk?


The most successful implementations of A.I. concepts so far have arguably been those that power search engines. These search engines navigate the web not knowing the meaning of what they read, but instead "experiencing" each page through its location on the web, how many links it contains, and other position-affecting information. They see the world through the eyes of an HREF, so to speak. They're successful at it because the HREF and the web are inherently computer-related concepts. So why do we keep trying to make A.I.'s interact with a human world?
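To make that concrete, here is a minimal sketch (not any real engine's code; the page IDs and link list are invented for illustration) of what "experiencing" a page purely through its position in the link graph might look like in C:

    /* Minimal sketch: score pages purely by how many other pages link to
     * them, i.e. "experience" a page only through its place in the link
     * graph.  Page IDs and links are invented for illustration. */
    #include <stdio.h>

    #define NPAGES 4

    int main(void) {
        /* each pair means "page links[i][0] links to page links[i][1]" */
        int links[][2] = { {0,1}, {2,1}, {3,1}, {1,2}, {3,2} };
        int nlinks = sizeof links / sizeof links[0];
        int score[NPAGES] = {0};

        for (int i = 0; i < nlinks; i++)
            score[links[i][1]]++;               /* count inbound HREFs */

        for (int p = 0; p < NPAGES; p++)
            printf("page %d: %d inbound links\n", p, score[p]);
        return 0;
    }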

The OpenMind project intends to teach an A.I. basic human-related concepts, such as "beef is the meat of a cow", "your mother is older than you", "the sky is always above you", and other concepts that relate to interacting with the physical, human world. This is difficult for a computer to grasp, because it is difficult for a computer to interact with the world through its limiting inputs and inflexible tactile devices.

Why not put an A.I. where it would be most comfortable? Why not have an artificial intelligence see through the eyes of its current working directory, and have it interact with the filesystem? Instead of teaching alien concepts about cows and human mothers, why not let it explore the computer that it resides upon?

While such a computer would still need a grasp of the English language, it would use it to understand computer-related concepts in a human language. The type of common sense it would need to understand is basic knowledge about the operation of a computer and the operating system. It could practice this knowledge by writing programs: perhaps first in shell script, then moving on to advanced systems programming in C.
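As a rough illustration of the kind of "sensing" such an agent might start with - assuming nothing about how it would actually be built - here is a tiny C sketch that does nothing more than look at where it is and what it can see in its current working directory:

    /* A toy "percept loop" for an agent whose world is the filesystem:
     * report where it is and what it can see.  Illustrative only; there
     * is no learning or language here. */
    #include <stdio.h>
    #include <dirent.h>
    #include <unistd.h>
    #include <limits.h>

    int main(void) {
        char cwd[PATH_MAX];
        if (getcwd(cwd, sizeof cwd) != NULL)
            printf("I am at: %s\n", cwd);           /* its "place" */

        DIR *d = opendir(".");
        if (d == NULL)
            return 1;

        struct dirent *entry;
        while ((entry = readdir(d)) != NULL)
            printf("I can see: %s\n", entry->d_name);   /* its "percepts" */

        closedir(d);
        return 0;
    }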

How would we test its intelligence when it no longer resides in a human world? Perhaps you could have the computer diagnose and fix a problem. A Turing-tester would sit down at a terminal and ask, through the talk program, why a certain computer never reaches runlevel 5 when the ethernet is unplugged. The person or computer on the other end would then diagnose and fix the problem, and report back to the Turing-tester. The Turing-tester would then evaluate the solution and decide whether a human or a computer fixed the problem.

Would giving computers a world that is native to them make it easier to create a true A.I.? How would we program such an A.I.'s sensory inputs, and give it a sense of time and place? What other challenges are there in designing such a system that would need to be solved?

Poll
Would you give an A.I. root on your system?
o Yes, but not on my main box. It could mount my important stuff as a read-only NFS share. 18%
o Yes, but I wouldn't even give it access to my important stuff. 7%
o No, I'm afraid of what it would do if it had a temper-tantrum. 15%
o No, I'm afraid it might decide not to let me log in. 18%
o I wouldn't want one at all if it acted like Erwin! 8%
o Rusty is an A.I. 31%

Votes: 97


Must an A.I. exist in our reality? | 45 comments (39 topical, 6 editorial, 0 hidden)
What kind of AI are you talking about? (4.20 / 5) (#1)
by streetlawyer on Thu Dec 21, 2000 at 11:31:04 AM EST

I've no problem with most of your piece, until you seem to move from "artificial intelligence" in general, to something that could pass a Turing test. I have a huge problem with Turing tests as tests of anything interesting in the first place, but it seems clear that what you're talking about is something stronger than the "AI" which drives a Quake monster. It looks like you're wanting something that would be reflexively conscious.

I don't think that this is coherent. What would these consciousnesses be conscious of? Computer programs conscious of things on the computer. In the first place, they'd lack a self/other distinction (other than a purely arbitrary one). In the second place, all entities on a computer hard disk have purely functional roles; their significance is only formal and contingent. This strikes me as an important difference from the way in which we refer to, relate to and are conscious of objects in the real world; we relate to them as themselves, not their role (note to discussants: I'm sorry, but I'm just asserting this last point. There's a vast amount of literature on it, and disagreement is a perfectly respectable position, but I just don't have time to debate the matter).

Wittgenstein said, in a more than usually Delphic mood, "If a lion could speak, we would not be able to understand it". I think something similar would apply to your putative AI.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever

Note! (4.00 / 1) (#2)
by 11223 on Thu Dec 21, 2000 at 11:37:52 AM EST

I never used the word conscious. I am looking for an A.I., much like the people running the OpenMind commonsense project (which looks quite promising, btw) are looking for an intelligence. The definitive test of intelligence is whether it can masquerade as a human, and I simply gave it a means to do so by proposing a new test that applies to this entity.

The entity I created in my brain isn't aware of any physical world. It's capable of communication and rational thought, but its experience is one of files and block devices, not of objects and matter and energy like ours. I think that this reality would make it much easier to write an A.I. than trying to get it to understand our world, which takes humans themselves many, many years to understand!

--
The dead hand of Asimov's mass psychology wins every time.
[ Parent ]

ahhh, I see (5.00 / 2) (#7)
by streetlawyer on Thu Dec 21, 2000 at 11:53:44 AM EST

The definitive test of intelligence is whether it can masquerade as a human

Things have kind of moved on since the Turing test was first proposed. These days, it's kind of deprecated in favour of a consciousness criterion because:

  1. It's a functional definition. A functional definition is borderline OK for terms like "gene", as a placeholder for future work. It's not very satisfactory for an important metaphysical property, with no future proposal for work on the substrate of (functionally defined) intelligence. Consciousness isn't a functional property, so this objection doesn't arise.
  2. It has serious problems as a functional definition. A Turing test could be passed by a lookup table, a canonically non-intelligent thing (a sketch of what such a table might look like follows this list). I've had a couple of to-and-fros on this site about whether a lookup table is intelligent; so unless there's a fantastic new argument, I'm just going to state that the ordinary language meaning of the word excludes a lookup table from being called intelligent.
  3. If something isn't conscious, there are severe problems in interpreting its words as meaningful. If something isn't conscious, it's hard to see that there is any necessary connection between its output and the world -- any "word/world link". If something isn't referring, then it isn't meaning. Therefore, it's hard to regard the system itself as intelligent unless you also regard it as conscious.
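For the curious, here is a minimal sketch of the sort of lookup table meant in point 2. The question/answer pairs are invented; the point is only that nothing in the mechanism resembles thought:

    /* Sketch of a canned-reply lookup table.  The entries are invented;
     * the mechanism is plainly not thinking. */
    #include <stdio.h>
    #include <string.h>

    struct entry { const char *question; const char *answer; };

    static const struct entry table[] = {
        { "How are you?",        "Fine, thanks.  And you?" },
        { "What is 2+2?",        "4."                      },
        { "Do you like poetry?", "Only when it rhymes."    },
    };

    static const char *reply(const char *question) {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (strcmp(question, table[i].question) == 0)
                return table[i].answer;
        return "Interesting.  Tell me more.";   /* default deflection */
    }

    int main(void) {
        printf("%s\n", reply("What is 2+2?"));
        return 0;
    }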


--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
OK, I know of those objections (5.00 / 1) (#10)
by 11223 on Thu Dec 21, 2000 at 12:01:00 PM EST

But passing such a Turing test would indeed put the icing on the cake, so to speak. Passing it repeatedly would make it even better. But, yeah, there are a couple of distinctions to be made, and a couple of walls to be knocked down. Here are a few I'll try my hand at:

Self/other distinction: Process space and user distinctions come into play here. I'm looking at this from a user perspective, and while the A.I. has full access to the Linux system calls (look in asm/unistd.h), it exists in a separate process space from the rest of the computer.
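(For illustration only, and assuming nothing about how the agent itself would be written: "full access to the Linux system calls" boils down to an interface like this, where a call is made by number just as listed in asm/unistd.h.)

    /* Invoking a Linux system call by number, the raw interface a process
     * ultimately lives on.  Illustrative only. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>    /* syscall numbers, cf. asm/unistd.h */

    int main(void) {
        long pid = syscall(SYS_getpid);     /* same effect as getpid() */
        printf("my process id (my 'self'?): %ld\n", pid);
        return 0;
    }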

Secondly, the A.I. does indeed exist in a purely functional world, but the functions of those components are now available to it, and not just to us. While some look at computers as tools, I do spend a fair amount of time just putzing around, compiling programs, changing things here and there, etc. This is the type of activity (play?) that would definitely involve this computer. Later on, the computer becomes a mathematically functional tool to it, as it learns to solve mathematical problems.

This is just an idea right now, but hopefully it's good enough so that some day we might just have an A.I. system shipped with the Linux 4.0 kernel!

--
The dead hand of Asimov's mass psychology wins every time.
[ Parent ]

The Epistemic Test (5.00 / 1) (#15)
by slaytanic killer on Thu Dec 21, 2000 at 01:05:42 PM EST

The main problem with these epistemological approaches is that they're too discrete. At some point, you pass a line, and voilà, intelligence.

However, while intelligence remains a shadowy idea, it is pretty clear what one means by something being more intelligent. This implies a continuity with intelligence. No one knows what is intelligent (we recently had a chemist who said that many people here aren't intelligent), but we have a sense of gradations in intelligence.

The point of the Turing Test is that only something with a human's intelligence can really judge intelligence. Epistemological rules can't -- they are just a bunch of rules, like the computer I'm typing on! Any finite set of rules that can judge "consciousness" would be an algorithm -- that means that there exists some finite algorithm that can judge consciousness. If that actually is true -- then you've just proved that something like a Turing Test can be administered by a computer, not just a human. Call it the Epistemic Test.

Now, if you reject this view, then I imagine you are more interested in the "ordinary language definition" of intelligence. And since only a human can apply this (the assumption is that no computer can provide the "ordinary language definition"), then you could amend the Turing Test to rate something called Black-Box Conscious. Something is Black-Box Conscious if there is insufficient information to fully analyze the consciousness of something through direct inspection, but it appears conscious with this information excluded. That is good enough for me, since I rarely have enough information to tell if someone's a convincing robot or not.

I agree, at some point maybe we should have a definition of consciousness. But many people have died for that very same question...

[ Parent ]
Well, there's always the meta-Turing test (none / 0) (#16)
by 11223 on Thu Dec 21, 2000 at 01:08:04 PM EST

What about the meta-Turing test? Shouldn't that settle the question?

Here's another definition of consciousness: An entity is conscious if it attempts to devise definitions of consciousness to differentiate itself from its surroundings.

--
The dead hand of Asimov's mass psychology wins every time.
[ Parent ]

Your points... (4.00 / 1) (#38)
by Khedak on Sat Dec 23, 2000 at 04:47:40 PM EST

1. It's a functional definition. A functional definition is borderline OK for terms like "gene", as a placeholder for future work. It's not very satisfactory for an important metaphysical property, with no future proposal for work on the substrate of (functionally defined) intelligence. Consciousness isn't a functional property, so this objection doesn't arise.

Your assertion seems true (though it's unfair to ask you to prove it, since as of yet there is no universal consensus on what consciousness is or is not), but as long as we're being investigative, it seems like looking at it from a functional point of view could be interesting. After all, no machine has yet been able to pass an unrestricted Turing test, so until one does, it's still an interesting avenue of research. If a machine does pass an unrestricted Turing test one day, the means by which it accomplished it should lend insight to this topic. Which brings us to your second point.

2. It has serious problems as a functional definition. A Turing test could be passed by a lookup table, a canonically non-intelligent thing. I've had a couple of to-and-fros on this site about whether a lookup table is intelligent; so unless there's a fantastic new argument, I'm just going to state that the ordinary language meaning of the word excludes a lookup table from being called intelligent.

For one, Searle asserted that a lookup table could pass the Turing test, but this hasn't been proven, and in fact some have countered his arguments. So the assertion that "a lookup table can pass a Turing test" isn't exactly something that everyone agrees upon, and since it hasn't been done, it hasn't been demonstrated either way. Even so, as you say, whether a lookup table is intelligent or not is disputable, so this is hardly a conclusive argument for the irrelevance of Turing tests.

3. If something isn't conscious, there are severe problems in interpreting its words as meaningful. If something isn't conscious, it's hard to see that there is any necessary connection between its output and the world -- any "word/world link". If something isn't referring, then it isn't meaning. Therefore, it's hard to regard the system itself as intelligent unless you also regard it as conscious.

You saw The Matrix, right? A flight of fancy to be sure, but for argument's sake, if it existed, none of the people in the Matrix would possess consciousness, because none of their words connect to anything in the "real world", only to things in the "dream world" of the Matrix. I would tend to think that consciousness/sentience deals with the way in which a being interacts with its inputs/outputs, and not with what those inputs and outputs refer to. This is a philosophical issue, and you'll get different arguments from different people (Plato vs. Rand vs. Kant vs. Hofstadter, etc.), but I don't think we can necessarily say that reference to the real world is the logical end of consciousness. In fact, a powerful argument can be made to the contrary, if you consider the fact that referencing exists because of a separation from the real world. Saying a baseball represents the earth and a marble the moon is something that humans can do. The meanings and the associations do not exist in the real world, only in our minds. Referencing includes abstraction, so why do you say that if something is too abstract, without reference to the real world, it ceases to permit meaningful consciousness? Indeed, it seems abstraction is a necessary component of consciousness and of intelligence.

[ Parent ]
thanks, good points (none / 0) (#43)
by streetlawyer on Tue Jan 02, 2001 at 08:01:49 AM EST

Though I think there is more settled ground than you do. It seems pretty clear to me that an infinitely large lookup table could pass an unrestricted Turing test, and equally clear that the difference between an infinite and finite lookup table is not the reason that we say that lookup tables are not intelligent.

Your points regarding "The Matrix" are interesting. Hilary Putnam has written on this subject, and arrives at the opposite conclusion; that it is literally logically incoherent to seriously entertain the possibility that we are all brains in vats, because if we were brains in vats, we would not be able to construct sentences which referred to non-vat entities. I think that representation of things has all the properties that you attribute to reference, but that actually referring to objects in the real world comes about precisely through a unique, causal connection with those objects. You might find this article interesting, though I confess it made my head spin a bit.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

And about your lion.... (4.00 / 1) (#3)
by 11223 on Thu Dec 21, 2000 at 11:42:14 AM EST

If your lion is capable of doubting its own existence, then would we not be capable of understanding that basic point? And would not the A.I. be capable of doubting that it exists? Sure, on a lower level these two beings have nothing in common with us. But once they hit that "common ground" of rationalism, they can communicate quite freely with us, because each of them is built from the same universe-material as we are.

--
The dead hand of Asimov's mass psychology wins every time.
[ Parent ]

Wittgenstein's lion (4.00 / 1) (#24)
by meeth on Thu Dec 21, 2000 at 02:22:36 PM EST

If your lion is capable of doubting its own existence, then would we not be capable of understanding that basic point? And would not the A.I. be capable of doubting that it exists?

I think Wittgenstein's point is that lions have forms of life completely foreign to ours. A lion might not be at all interested in or capable of understanding doubting its own existence (a lucky creature to have escaped that Cartesian silliness).

But once they hit that "common ground" of rationalism, they can communicate quite freely with us, because each of them is built from the same universe-material as we.

The lion's "rationalism" would be completely different from our rationalism. I think (based on my poorly remembered reading of PI) that to Wittgenstein, this "rationalism" would simply be another language game. Language games can only be played from within a particular conceptual and linguistic community which shares certain preconceptions and views of what is important. Since neither lions nor your AI would share the forms of life necessary to understand "rationalism", we could not communicate with them.

Two caveats: 1) I'm not sure I agree with Wittgenstein. I'm just noting that (based on my recollection of what he was saying) you don't really address his point. 2) My recollection of what he was saying may be faulty. Nonetheless, 11223, given your penchant for Cartesian philosophy, you might want to read PI if you haven't already. It seems to me that Wittgenstein pretty much thoroughly demolished philosophy in the manner of Descartes.

[ Parent ]

Aah, but now you've touched on motivation (4.00 / 1) (#25)
by 11223 on Thu Dec 21, 2000 at 02:31:25 PM EST

The thing is that the motivation towards metacognition is inherent in any intelligent entity. Questioning one's own existence is the natural extension. So, naturally an intelligent lion would wish to study its own brain, which does lead to some basic tenets of rationalism.

Secondly, I'm not sure the fundamental rules of quantum physics/computation (take your pick) aren't universal between any two beings. I'd like to think that any two beings who study quantum physics would come up with the same results, and state them in roughly equivalent ways (though they may not appear equivalent!)

--
The dead hand of Asimov's mass psychology wins every time.
[ Parent ]

Why? (4.00 / 1) (#26)
by meeth on Thu Dec 21, 2000 at 02:53:12 PM EST

The thing is that the motivation towards metacognition is inherent in any intelligent entity. Questioning one's own existence is the natural extension. So, naturally an intelligent lion would wish to study its own brain, which does lead to some basic tenets of rationalism.

Assertion unsubstantiated by argument, unless you mean to define intelligent this way. In that case, we have different definitions, and yours isn't that useful, as far as I can tell.

As far as quantum physics goes, Quine-Duhem indeterminacy would suggest that there are an infinite number of possible interpretations associated with the facts of that science and there is no guarantee that the interpretations reached would be the same. (Here again, I speak of arguments I don't recall very well). Nonetheless, I think the Wittgensteinian objection is more basic. Radically different forms of life -> radically different language games -> no understanding. There is no reason to suppose that a lion that could speak or an AI would have the same concerns with quantum physics (or even the notion of 'science') that we do.

[ Parent ]

OK, so I give up on this argument. (4.00 / 1) (#27)
by 11223 on Thu Dec 21, 2000 at 02:56:55 PM EST

But can't we at least make the computer A.I., so we can study how it thinks and acts? If we can't understand it, it would actually be even more worth doing, because we would have created something we don't understand.

--
The dead hand of Asimov's mass psychology wins every time.
[ Parent ]

We already do... (4.00 / 1) (#28)
by teeheehee on Thu Dec 21, 2000 at 03:21:32 PM EST

we would have created something we don't understand

I couldn't agree with you more on this sentiment.

If everyone who funded projects had the mentality that "if we don't understand it, that's even better", then it would be too late, we would have already done it, and we'd be flying around in cars with wings.

(Discordia) :: Hail Eris!
Everything you've just read was poetry and art - no infringement!

[ Parent ]
are you sure? (4.00 / 1) (#36)
by streetlawyer on Fri Dec 22, 2000 at 04:48:36 AM EST

If everyone who funded projects had the mentality that "if we don't understand it, that's even better", then it would be too late, we would have already done it, and we'd be flying around in cars with wings.

Far more likely that we would have been poisoned or blown up in some sort of way.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Re: Wittgenstein's lion (none / 0) (#44)
by bertok on Sat Jan 06, 2001 at 09:22:29 PM EST

But lions are clearly not that different from us. Both lions and humans are warm blooded mammals that hunt (or used to hunt before civilization) for food. We share similar social structures (packs or tribes, two sexes, ...), environments, senses, neural systems, etc... This gives us a common ground we can use to communicate both simple and abstract concepts like cooperation, sexual attraction, colors, emotions, etc...

However, finding common ground between humans and software AI is difficult at best. It would be largely limited to the world of mathematics, as most of the rest of our existence is based on biology and physics, which would be alien concepts to an AI living in a digital universe.

I wonder how much common ground we would have with aliens from other planets? Would different evolutionary paths result in wildly different forms of thought, and systems of abstraction?


--
"If you would be a real seeker after truth, it is necessary that at least
once in your life you doubt, as far as possible, all things."

[ Parent ]
Counterpoints (4.00 / 2) (#4)
by Remy on Thu Dec 21, 2000 at 11:43:06 AM EST

Very interesting article (+1 Front Page for me), but I have a couple of points to hash out here.

I see a few major problems here. First, the "Turing test" you describe (which you probably should've called something else, but see next paragraph for that) is roughly parallel to the halting problem. You can't ask "Will you ever reach this segment of code?" or "Why have you not hit this section of code?", or more generally, "Will this program loop infinitely on input x?". Of course, a different question could be asked.
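(To spell out the parallel: the classic halting-problem argument is that no halts() oracle can exist, because a program could consult the oracle about itself and then do the opposite. A toy C sketch, with a purely hypothetical oracle stubbed in, looks like this.)

    /* Toy version of the halting-problem contradiction.  halts() is a
     * purely hypothetical oracle; here it just answers "yes" to
     * everything, but any fixed rule runs into the same problem. */
    #include <stdio.h>

    static int halts(int program_id, int input) {   /* assumed oracle */
        (void)program_id; (void)input;
        return 1;
    }

    /* "Program 0": asks the oracle about itself, then does the opposite. */
    static void paradox(void) {
        if (halts(0, 0)) {
            /* a real paradox() would now loop forever:  for (;;) ;  */
            printf("oracle says program 0 halts, so it loops: contradiction\n");
        } else {
            printf("oracle says program 0 loops, so it halts: contradiction\n");
        }
    }

    int main(void) {
        paradox();
        return 0;
    }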

Going back to the Turing Test naming issue, you bring up another problem - you're now defining "true A.I." as "a machine that can fix a problem", or "a machine that can interact with itself". The Turing test's definition is simply a program that can fool the judges into thinking they're conversing with a human. Changing the definitions doesn't mean it's suddenly a true A.I.

Also, limiting an AI to a very confined system changes the usability of an AI. While self-repairing and self-operating computers would be nice (and, admittedly, a little frightening - thinking T2 here), I can't see that an internally-modeled AI would be useful for something like playing backgammon (which is reportedly one of the toughest AIs to code).

I agree with you that trying to get an AI to understand the ENTIRE world as *we* know it is silly. I can't think of a purpose such an AI would have, other than something purely recreational. This is why most AIs are focused on something in particular - how to play a game, or a small internal system. Most Turing Test entries I've read about are trained in one subject area to converse about.

I highly recommend you do some reading about SHRDLU, which is a closed-system AI that manipulates virtual objects in space. It has extremely good natural language processing - for more information, I would recommend this page. SHRDLU is along the lines of what you're thinking, from what I can tell.

So again, good article. :)


-- "The need to be observed and understood was once satisfied by God. Now we can implement the same functionality with data-mining algorithms." - Morpheus, Deus Ex
I know very well the similarity (3.00 / 1) (#6)
by 11223 on Thu Dec 21, 2000 at 11:47:33 AM EST

Because I did know of SHRDLU before I wrote the article. But even then, SHRDLU's system is very geometric and reality-centric. I'm just trying to get something that can solve problems by interacting with the bash prompt and vi like I can. (Real A.I.'s don't use emacs!)

The point of the Turing-test here is indeed to fool the reviewer into thinking it's a human, but in the specific area of computer problem-solving. The reviewer says "here, fix my init scripts" and the person on the other end does so and the reviewer judges whether the fix was made by a human or a computer. So, by relating it back to this Turing-test I was indeed trying to make sure that my definition of this A.I. did satisfy previous definitions of a "true A.I.".

--
The dead hand of Asimov's mass psychology wins every time.
[ Parent ]

Fooling Judges (4.00 / 1) (#29)
by Remy on Thu Dec 21, 2000 at 03:23:18 PM EST

I'm curious how exactly a judge would be fooled. I mean, let's say that the question is, as you said above, "My script is broken, fix it." The contestant attempts to fix the script, and then, I would assume, the judge looks at it.

The problem then becomes this - how does one judge between what a human fix is and what an AI fix is? Coding style? Use of comments? Variable names? Elegance, efficiency? Whether or not they use inline ASM?

As I'm sure you know, coding style varies wildly from person to person. Hell, even within one person - some days I write very neat, commented code, and other days I just hack it out. Is one style of coding more human than another? Would there truly be a way to differentiate?

With the traditional Turing test, there's a degree of interactivity - it's not a simple "Complete this problem". The test you propose seems like a modified version of Searle's Chinese Room experiment.

And hey, there's a version of ELIZA in Emacs, so I find it hard to believe...oh never mind. :)
-- "The need to be observed and understood was once satisfied by God. Now we can implement the same functionality with data-mining algorithms." - Morpheus, Deus Ex
[ Parent ]

re: Fooling Judges (1.00 / 1) (#30)
by Pelagius on Thu Dec 21, 2000 at 03:36:33 PM EST

Conversation; smalltalk. Does it seem like Eliza, or is it indistinguishable from a person?

[ Parent ]
Aah, the question now becomes... (none / 0) (#31)
by 11223 on Thu Dec 21, 2000 at 03:37:29 PM EST

Can you recognize, in an area where responses could be so varied, those that were clearly solved in the best / most elegant fashion by an intellect? I know coding style is varied, but there are different degrees of hackery, and the most human is the least hacked.

--
The dead hand of Asimov's mass psychology wins every time.
[ Parent ]

Re: Aah, the question now becomes... (none / 0) (#32)
by Remy on Thu Dec 21, 2000 at 04:39:56 PM EST

I honestly don't feel there's a way to differentiate between what type of intellect wrote a certain snippet of code.

Again, to go back to the Chinese Room example and tweak it - imagine you gave a task to three people to write an algorithm that manipulates some data, and then outputs "foo". One of the people you give it to is a programmer who writes neat, tidy code; the second is a programmer who writes hacked-out, ugly code; the third has a large textbook where they can look up all the functions they would need to write the program. YOU DO NOT KNOW THIS.

You would then receive back three similar programs. They will differ in trivial ways, obviously - variable names, comments, formatting, and the like. So, without the knowledge about the characteristics of the programmers, how can you tell which one doesn't know how to program?

I think this is rapidly losing steam in light of other threads. I'll finish my contributions by saying that since there is no standard for what is "human" code versus inhuman code (since I don't know too many programming AIs), this test would be hard, if not impossible, to conduct with any degree of accuracy.

Incidentally, I find the consciousness/awareness-of-self intelligence test much more reasonable than the Turing Test.
-- "The need to be observed and understood was once satisfied by God. Now we can implement the same functionality with data-mining algorithms." - Morpheus, Deus Ex
[ Parent ]
This is a really interesting idea (3.50 / 2) (#8)
by dennis on Thu Dec 21, 2000 at 11:55:22 AM EST

I tend to think the OpenMind project is doomed. We won't get a computer with real-world common sense without that computer living in the real world--i.e. an autonomous robot. Without that it's all empty strings, not words with meaning, no matter how many connections you make.

But this idea is cool. An AI could live in the network, learn from experience, etc. The Turing Test based on the computer's experience is a neat idea (though I don't agree that the T.T. really proves consciousness--I'm a Penrose guy--but it certainly proves some kind of smarts).

Time and place--network address, filesystem, processing cycles. Sensory inputs, all the data it has access to. You still have to solve some pretty major fundamental problems, but you're in a much simpler environment and don't have to mess with robotics hardware.
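A minimal sketch of what those "sensory inputs" might reduce to in practice - hostname, working directory, wall-clock time, CPU time - purely as an illustration, not a design:

    /* Gathering a few machine-native "senses": place on the network, place
     * in the filesystem, wall-clock time, and CPU time spent so far.
     * Illustrative only. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <limits.h>

    int main(void) {
        char host[256], cwd[PATH_MAX];
        time_t now = time(NULL);

        if (gethostname(host, sizeof host) == 0)
            printf("place (network): %s\n", host);
        if (getcwd(cwd, sizeof cwd) != NULL)
            printf("place (filesystem): %s\n", cwd);
        printf("time: %s", ctime(&now));        /* ctime() ends with '\n' */
        printf("effort so far: %.3f seconds of CPU\n",
               (double)clock() / CLOCKS_PER_SEC);
        return 0;
    }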

As I said above (or below?) (3.00 / 1) (#12)
by 11223 on Thu Dec 21, 2000 at 12:12:21 PM EST

TT isn't much more than icing on the cake. But at some point, with enough repetitions, you have to ask the question, "what's the difference between something that can effectively simulate a human's problem solving ability through a finite but indefinitely large number of TT trials and an intelligent entity?" The answer, to me, seems to be that there isn't one. If we can give it X problems, and it can solve every one of them, I'd have to say that it's intelligent.

--
The dead hand of Asimov's mass psychology wins every time.
[ Parent ]

AIs in different conceptual spaces (3.50 / 2) (#9)
by Pac on Thu Dec 21, 2000 at 11:55:23 AM EST

I hope you are aware that what you are proposing has already been done. Somewhere along the line, when it became clear that an open-ended AI would not be as trivial to build as some of the pioneers liked to believe in the early 60's, many people turned their attention to more closed conceptual spaces.

The net result was a group of expert systems ranging from geological analysis to disease diagnosis. More famous efforts led to chess and checkers playing systems capable of beating the best living human player. You also point to smart web crawlers. (The advent of neural networks changed the field, making it possible to build far more sophisticated learning systems. Neural nets seem to show real promise for open-ended AI systems, systems capable of grasping larger chunks of reality.)

So, I am really not sure that some sort of network-administration AI does not already exist somewhere (even inside some commercial product). And I am pretty sure that the possibility of building such a system is well within current technology.


Evolution doesn't take prisoners


Oh, no, not at all. (none / 0) (#11)
by 11223 on Thu Dec 21, 2000 at 12:03:29 PM EST

In fact, I did give examples of task-specific A.I. above, but those were examples of perspective, not of open-endedness. This is an effort to make a full-fledged open-ended A.I., but one that's not anywhere remotely human except on the level of problem solving (hence, the test that I proposed).

--
The dead hand of Asimov's mass psychology wins every time.
[ Parent ]

Turing test (2.50 / 2) (#13)
by Dries on Thu Dec 21, 2000 at 12:21:16 PM EST

First, you refer to "Turing test" or "Turing tester" in a confusing way. Turing held that computers would in time be programmed to acquire abilities rivalling human intelligence. And as part of his argument Turing put forward the idea of an 'imitation game', in which a human being and a computer would be interrogated under conditions where the interrogator would not know which was which, the communication being entirely by textual messages. Turing argued that if the interrogator could not distinguish them by questioning, then it would be unreasonable not to call the computer intelligent.

Quote from Alan Turing's paper:

I propose to consider the question "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think."

Turing Test is meant to determine if a computer program has intelligence. Quoting Turing, the original imitation game can be described as follows:

The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B.

That's something different than what you refer to, is it not?

Secondly, AI is only applicable to those "things" that are computable, whatever it might be. If it would be computable that the sky is always above us, then an intelligent system should be able to figure this out. However, don't expect the impossible: a machine cannot define axioms.

Third, and correct me if I'm wrong, but I have never heard of the concept "an AI". I know "AI" but I didn't know you could refer to it as "an AI". Then again, English is only my third language, so I'm likely to be wrong on this one.

-- Dries
drop.org
-- Dries

Ummm.... (none / 0) (#14)
by 11223 on Thu Dec 21, 2000 at 12:24:53 PM EST

First of all, what I proposed in the article was the imitation game, just like Turing himself described, but with a specific constraint put on it. That's pretty common these days; I just described a constraint for this system.

Secondly, I've been using A.I. in the very literal sense of the word - an Artificial Intelligence. Saying "an A.I." is quite correct if you expand the acronym.

Thirdly, axioms like that are exactly what I wish to avoid by putting the computer in a "natural" system; e.g. one that it understands because it's how its programming operates.

--
The dead hand of Asimov's mass psychology wins every time.
[ Parent ]

The Turing Test (none / 0) (#45)
by jynx on Sun Jan 07, 2001 at 01:56:33 PM EST

I have always thought that the Turing Test is a pretty poor intelligence test.

For example, we believe that dolphins are intelligent, but they would do no better on the Turing Test than a loaf of bread.

Does anybody know of any non-language oriented ways to test intelligence?

--

[ Parent ]

Losing the focus of A.I. (4.00 / 2) (#21)
by teeheehee on Thu Dec 21, 2000 at 01:51:36 PM EST

An interesting read, and a cool idea, but I have to disagree to an extent...

It would be a good building block, I think, to have this carried out, but I wouldn't consider it an end goal in the pursuit of any form of A.I. For me, the end goal would be to understand (not just define, but UNDERSTAND) intelligence and consciousness. I'd like to know how I'm conscious, why, whether plants and animals are self-aware, etc.

What this appears like to me is a sophisticated program with programming capabilities - submersed in a separated universe which would hinder it from growing to understand our universe. That hindrance bothers me a little. It reminds me of an article about what you would do if you were (a) God - create a universe within our own which would have intelligent life - possibly more intelligent than us - such that they must create a universe of their own to answer the question of 'Why are we here'... This machine, if given this particular breed of A.I., would be a sysadmin's dream (or nightmare), but it would leave us asking whether that constitutes anything of merit other than being another software solution.

What happens when it discovers the rm -rf command? What if it were to accidentally delete part of its own structure, or finds a way to augment itself so that instead of performing cursory tasks it now becomes an uber-virus of sorts? HAL? Open the pod bay doors!

In theory I could see this as useful (and detrimental, obviously) in several ways, and I am reminded (can't find the link, though) of a semi-sentient program which protects a network (written in LISP) - something that recognizes all sorts of attacks on a system and reports this information to other sections of the network to prepare them for similar dealings... then it would take the necessary measures to ensure that the network would remain stable and unbreached. I think they even used some Matrix-type terms like "Agent". A much scaled-down version of what you're proposing, or at least that's how I see it in my mind.

However, this I could only see as a step. Instead of giving up on all other forms of A.I. research, I would propose this to be built so as to be a modular component, a workable product which could be joined with other A.I.-based projects to form an uber-A.I. Constructicons, form Devastator!

To relate this idea to a more our-world situation, I like to consider the computer as a living entity, much like ourselves (although metal, plastic, and quite noticeably slower at moving around). Our bodies are infested with cells, viruses, bacteria, ... each a living force (sentient? who knows). Some cells group together to form tissues, organs, systems, and the brain is the organ that acts as the fileserver. This A.I. you propose would be kind of like our subconscious layer, in charge of keeping all the systems in check (digestion, endocrine, circulatory - things necessary to stay alive). To this point we are aware that this all goes on in the background of our noticeable consciousness, and perhaps I should ask if our subconscious is considered intelligent, or if it would pass a much-restricted Turing Test of sorts. Perhaps it could, if only we knew the proper questions to ask....

These last thoughts lead me to your question about creating a true A.I. with more ease if we confine it to its own living space. I believe no, it wouldn't. This A.I. would be equivalent to our subconscious, which isn't easier to deal with, but quite a bit more difficult! In my opinion the true A.I. would be capable of dealing with the world as we know it, like animals can, because despite its being made up of silicon and a bunch of other fancy components it's still forced to follow the Laws of Physics, and in my mind a "true" A.I. would need to associate with that instead of its own composition.

There was more I was going to say, but I lost my train of thought - perhaps someone else can pick up where I left off ...?

(Discordia) :: Hail Eris!
Everything you've just read was poetry and art - no infringement!

rm -rf (none / 0) (#35)
by retinaburn on Thu Dec 21, 2000 at 06:51:56 PM EST

What happens when it discovers the rm -rf command? What if it were to accidentally delete part of its own structure, or finds a way to augment itself so that instead of performing cursory tasks it now becomes an uber-virus of sorts? HAL? Open the pod bay doors!

Hmmm can you say Nuclear Bombs :)

rm -rf ??? Maybe I'll give it a try <disconnect>


I think that we are a young species that often fucks with things we don't know how to unfuck. -- Tycho


[ Parent ]
Think this is on the right track (3.60 / 5) (#39)
by Anonymous 6522 on Sat Dec 23, 2000 at 08:13:50 PM EST

Although I am not an expert on AI, I've gotten the general impression that many of these projects are trying from the start to build something capable of interacting and handling things in a human way. Humans didn't come into being this way, and I think it would be very difficult to build an AI from the top down. It seems to me that we should start with a subconscious to manage the tasks it would need to take care of itself on an internal level, before creating what is necessary for it to interact with the world outside its box.

as for the "rm -rf" command, my answer is to simply remove from the AI's use. We can't compleatly delete everything in our brains by thinking one particular thought, so an AI shouldn't be able to completly destroy itself with one command.

[ Parent ]
My AI (none / 0) (#23)
by Farq Q. Fenderson on Thu Dec 21, 2000 at 01:51:45 PM EST

I'm "growing" mine in a more computeresque environment myself. Eventually I intend to throw it in a MUD so that it can learn about human interaction.

I came to this conclusion when I realized that I had to start with instincts, then I realized that you don't need any specific environment to start training the mind...

farq will not be coming back
A book to read... (3.00 / 1) (#33)
by cr0sh on Thu Dec 21, 2000 at 05:00:50 PM EST

A good book to read on the subject:

Machine Intelligence by David L. Heisserman (I think that's spelled right)

It was published by TAB Books, and is now out of print, but you may be able to find it used. It is a little dated, but the concepts and such could still stand to be explored further. It basically treated AI as something done within the context of a simple computer-generated system - in a way it was a form of artificial life. One interesting thing was that this book was the culmination of two other previous books (I can't remember the title of one, but it dealt with a robot he built called Buster - the second book was entitled "Build Your Own Self-Programming Robot", and dealt with a robot he built called Rodney), using the same AI (or as he put it, MI) principles, but housed in a real robot body. Today, one could build the robots he details in a much quicker and cheaper fashion, but the concepts would remain the same (I say quicker and cheaper because in one of the books he details how to build a complete 8-bit computer for the robot, simply because at the time it was cheaper to do it that way).

How The Mind Works by Steven Pinker. (3.00 / 1) (#37)
by Farq Q. Fenderson on Fri Dec 22, 2000 at 11:31:48 AM EST

A good book that helped me a lot was "How The Mind Works." It's not about AI, it's about the human mind, but Pinker is really good at showing how things *could* work in the absence of a proof of how they do.

Basically, what I got from the book was (most of) a model that showed basically how a human mind could work. He gave a lot of evidence to show that his theories (well, mostly other people's theories, as he admits) have some weight.

What's important to me is that I got from him a model that I could adapt somewhat to design an AI. What happened was I took one notion ("use instincts, Steve") and went from there. All my work is coming closer and closer to looking like a human mind -- when I'm simply trying to model a mind that would work, not necessarily one that is humanesque (but there are some amazing features in the human mind that I'll never take for granted ever again -- like how efficient memory is.)

farq will not be coming back
[ Parent ]
Poll (4.00 / 4) (#34)
by _cbj on Thu Dec 21, 2000 at 05:09:16 PM EST

Whaddya mean let it have root? A good AI will take root.

It won't stop at your computer, either. (none / 0) (#40)
by roystgnr on Sat Dec 23, 2000 at 10:22:53 PM EST

Anyone want to estimate what percentage of systems on the internet are currently remotely exploitable? Even if you restricted yourself to published exploits, I wouldn't be surprised if it was a majority. And I'm sure the brute-force computational power that your AI would get from the first few million computers would let it find and prepare exploits for all the undiscovered buffer overflows out there too. I don't know whether we'll develop the first real AI in a decade or a millennium, but I suspect the first AI to go Rampant will follow less than a year later.

[ Parent ]
Or...er... deliberately... (none / 0) (#41)
by _cbj on Sun Dec 24, 2000 at 07:59:34 PM EST

I was thinking about this a while ago, and concluded that an intelligent worm would be a far simpler task than an intelligent [almost anything else], if that's what you were aiming for. Small domain of inputs with limited possible interpretation.

First you might hardcode a toolbox of rudimentary techniques. A checklist of mostly invariant secure/insecure questions (concerning configuration errors, I'm told) that any script kid would ask, any one of which would yield instant, cheap unprivileged or root access. That only gets you so far, and is no better (and some worse) than the Morris Worm.

Automated searching for buffer overflows in object code sounds amenable to a host of mutating pattern matchers that have long since passed into the hands of engineers. This is the 'clever' bit. The processing will grind a single PC into rigidity for a few hours, but ought not require more than that 1 box. I got as far as genning up on file formats and linkage issues and trying to map the problem to a sensible fitness curve for Genetic Programming (and I've totally forgotten whether I succeeded there. So probably not) before being fickle and losing interest.

After that, the remaining tricky bits are how to distribute the learned exploits for most useful retrieval; how much of the entire bitch to allow to evolve, and at what rate; and whether to plea bargain.

[ Parent ]

Careful what you say... (5.00 / 1) (#42)
by roystgnr on Thu Dec 28, 2000 at 12:25:13 AM EST

I was thinking about this a while ago, and concluded that an intelligent worm would be a far simpler task than an intelligent [almost anything else], if that's what you were aiming for. Small domain of inputs with limited possible interpretation.

I was thinking about this too, last year, and came up with more conclusions. Let's call yours conclusion 1.

1. A modular, evolving worm capable of infecting the majority of the world's computers would be a relatively easy beast to program. A plugin system, to add new payloads and exploits on the fly. A few wrapper DLLs to give it a "man-in-the-middle" capability on kernel/system library services: antivirus programs are fundamentally defeatable, once you control the OS they run on. A good expansion algorithm, so that the majority of system infections remain silent and hidden, while a few take the risk of being caught by sending waves of trojan emails, poking at unupdated Windows SMB stacks and IIS exploits, whatever it takes. A networked communications system, so that the virus author could send a command to every infected computer or a select subset of computers. Throw a few remailers at various places in the loop, and commands would be essentially untraceable. Of course he'd have infected his own computer for plausible deniability.

Ok, not easy, but certainly less difficult than most of the big open source projects on which I see people making incredible advances in their spare time.

2. Given that this thing wouldn't be hard to program, and that it could be used to hold billions of dollars electronically hostage, it will be programmed someday.

3. When that day comes, everyone who's so much as said the word "virus" online is going to be hunted like an animal by the FBI. I shouldn't even be posting this right now. The Morris Worm didn't come about at a time when the world's economy was using the internet as its nervous system, and wasn't programmed maliciously. The Melissa-type emails haven't had very harmful payloads yet, and you still see international manhunts for the programmers. If an American is foolish enough to try something like this while net security is still vulnerable to it, you can bet that stuff like the Bill of Rights won't stand in the way of protecting us from the hackers.

[ Parent ]

