Against Artificial Intelligence

By whazat in Op-Ed
Wed Jun 29, 2005 at 09:59:46 PM EST
Tags: Technology

Intelligence is one of the last mysteries of modern science, and the quest to recreate it on a computer has become the holy grail of Computer Science.

However, this research path has been going on for over 58 years without reaching its goal, or even producing an agreed-upon methodology for achieving it. It is time to put it to sleep and focus our energies elsewhere.

Because of the difficulty of defining terms such as intelligence and thought, there has been a trend towards taking a pragmatic view of intelligence and simply trying to build machines that are useful for things like speech recognition and machine vision. This pragmatic approach, however, lacks any over-arching theory or debate about its foundations. So, in order to progress, we should set aside our intuitions about intelligence and focus instead on the abstract idea of Usefulness, which we can easily grasp.


AI lost in the wilderness

A number of people have looked at current AI research and found it lacking, which suggests that all is not well with the endeavour. Some examples of critical views from the inside: Rodney Brooks has argued that there is a lack of understanding of life and intelligence, Seymour Papert misses the days of big ideas for Artificial Intelligence, and Marvin Minsky complains about grad students studying simple robots.

However, I would say that these are merely symptoms of the main problem, which lies within the quest for an intelligent machine itself. Part of that problem is that we don't know what intelligence is.

A confusion of intelligences

We don't have one definition of intelligence; we have a multitude. Are animals at all worthy of study to find out about intelligence, or is it only humans that we should study and recreate to understand it? I personally don't have a rigorous definition of intelligence: I know it when I see it, and sometimes I don't know it when I see it, when I am fooled or tricked. So we will have to rely on others' definitions. The first we will consider is the human-biased version of intelligence.

The main trouble with this approach is having only one example of intelligence. Generalising from this one example will tend to make us overfit (to borrow a term from Machine Learning) and not arrive at the correct generalised function. The furore created by this view can be seen in the vast philosophical debate created by the Turing Test.
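
As a side note on the borrowed term, a minimal sketch of what overfitting from too few examples looks like might be the following; the data points are invented purely for illustration.

    # Minimal illustration of overfitting: with only a few examples, a flexible
    # model can match them exactly yet generalise badly. The data are invented.
    import numpy as np

    x_train = np.array([0.0, 1.0, 2.0, 3.0])
    y_train = x_train + np.array([0.1, -0.1, 0.1, -0.1])  # roughly the line y = x

    simple = np.polyfit(x_train, y_train, deg=1)    # two parameters: hard to overfit
    flexible = np.polyfit(x_train, y_train, deg=3)  # four parameters: fits the noise exactly

    x_test = 5.0                                    # a point outside the training examples
    print(np.polyval(simple, x_test))    # close to 5: generalises reasonably
    print(np.polyval(flexible, x_test))  # far from 5: overfitted to the four examples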

For example, Tom Ray argues that the Turing test is not a proper measure of intelligence, as it is human-centric. The Turing test is also of no use in guiding us towards intelligence, as it gives only a binary answer to the question "Is this system intelligent?"; it cannot answer "It is a bit intelligent," which would let us know whether we are on the right track or not.

So is getting more data points by looking at animals as well likely to help us define intelligence? If you allow animals to be intelligences of different sorts you are in an even worse state: they are too specific a sample, since they are all motile with direct physical sensors, yet they are diverse enough to make generalising from them not very purposeful. Have we solved intelligence when we create something that shares the common characteristics of bats, octopodes, humans and bees? Should bees or bats even be in that list? So which animals are or are not intelligent? This leads us back to having to define intelligence, the very problem we were trying to use animals to help us with.

So let's stop trying and just go on intuition. Yeah, why not? We can cope without this philosophy! Except that a lack of rigour like this is what causes the phenomenon of things being reclassified as un-intelligent once machines can do them. It is much easier to say that a system cannot learn Spanish, and so is not useful for the task of conversing with Spaniards without being explicitly programmed to do so, than to say that it is not intelligent, when you have no agreed-upon definition of intelligence.

The trouble is that intelligence is a suitcase word, as Minsky would call it. It covers innumerable things brains can do, such as focusing on the salient parts of an image, coping with echo-location, communicating with colour patterns, learning about the world, and also future things machines will do. It has been used to cover virtually everything a brain does, apart from the things that are non-useful to society, like going mad, becoming addicted to cocaine, becoming depressed or being stubborn. Oops, sorry, I sneaked the u-word in there.

It also inspires people to talk about "thinking", another suitcase word. One of the troubles, human-biased as we are, is that we expect it to be somewhat like lingual "thought". Yet there is a lot we cannot explain about rat behaviour, and it seems unlikely that rats have an internal monologue of squeaks! Yet they have the common sense to search for food in the same place that they found it last time, so is lingual "thought" necessary for common sense reasoning? Another view is that "thought" isn't only internal monologue and is simply what neurons do. If that is the case, what is to stop us creating a neuron cluster that encrypts a signal with a one-time pad? That would then be classified as "thought", so we would have to fight for the rights of our encryption programs as well, since they too would be "thinking".

By calling what we study "intelligence" and "thought", not only are we hampering ourselves with poorly defined terms, we are also admitting some pseudo-mysticism into our proceedings. The computer programs we study might be mind-blowingly complex, but what we study with computers is tick follows tock follows tick and nothing more. No magic, no fairy dust.

The first thing we can do, having halted our search for intelligence, is ditch the philosophy of mind and consciousness. Does it matter whether our machines are thinking, or experiencing the same things we do? No! Just as it doesn't matter in our everyday lives whether the people we meet on the street are Zombies or not. Philosophers and theologians can argue over whether what we create is intelligent. Likewise, does it matter whether we need quantum interactions for intelligence, as suggested by Penrose? Surely we should still concentrate on finding out exactly what computers can do, as we have only explored a fraction of the different types of computer system possible.

But what to replace it with? I suggest we ditch the term intelligence and focus on the only common link between the things that have been studied: their being useful for a task. So Usefulness should be what we study. I shall use utilitas to indicate this study unambiguously, Latin grammar be damned. Now, before people misinterpret this as me saying that we should just build what is useful for the current problem and so ignore the big picture, I shall clarify what I mean. By studying utilitas I mean studying how much systems are able to change how useful they are, and enumerating the different activities and situations that make a system useful or non-useful.

So let us see if we can get some use out of our study of utilitas. It is important to remember that, as far as we know, the usefulness of a computational system is always relative to a certain problem or set of problems. A system that is more useful in general in a mathematical sense is problematic, for reasons of a similar nature to the No Free Lunch Theorems. Proving general physical usefulness would require us to be confident of the physics of our world, and the proof would have to be careful about the assumptions it makes about the nature of computation, so that it could take into consideration things like the energy usage of the computation. As we are not sure of these factors at present, it is probably wisest to assume there is no such thing as a more generally useful system.

To get an idea of why a generally useful system is probably not possible, it is worth enumerating some of the different properties of computation that make a software system useful or not useful for a specific task:


  • Function - the correct output for a certain input

  • Timeliness - the correct output for a certain input at the correct time

  • Energetics - not over-using energy to perform the function

  • Stability - ability to deal with errors from the outside


Low energy usage and stability can easily be seen to be countervailing properties, as a system that performs a calculation multiple times for the sake of stability will generally increase the energy used.
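
As a rough sketch of how these countervailing properties might be traded off for a specific task, one could imagine weighting each of them; the weights and scores below are invented purely for illustration.

    # A toy comparison of systems on the properties listed above. The per-property
    # scores and the weights are invented; real measurements would be task-specific.
    def usefulness(scores, weights):
        """Weighted sum of per-property scores, each assumed to lie in [0, 1]."""
        return sum(weights[p] * scores[p] for p in weights)

    # Weights express what the task cares about: a router cares a lot about timeliness.
    router_weights = {"function": 0.3, "timeliness": 0.4, "energetics": 0.1, "stability": 0.2}

    # Two hypothetical systems: the second recomputes everything several times for
    # stability, which costs it on energy usage and timeliness.
    fast_and_lean = {"function": 0.9, "timeliness": 0.9, "energetics": 0.8, "stability": 0.5}
    triple_checked = {"function": 0.9, "timeliness": 0.6, "energetics": 0.3, "stability": 0.9}

    print(usefulness(fast_and_lean, router_weights))   # roughly 0.81
    print(usefulness(triple_checked, router_weights))  # roughly 0.72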

So let us have a look at some AI research and see whether it has any relevance to the study of utilitas. We will look at it through a utilitas lens that replaces every occurrence of intelligence with usefulness, and see if the statements still make sense. As it is hard to think in generalities, we shall use, as an example for judging whether what they are talking about is relevant, a very advanced (some may even say Intelligent) internet router. This router tries to send packets in the correct direction using all the information it can gather from its network connection. We use it because routing is an open-ended problem, there is a large community of routers, and it doesn't necessarily need human interaction, so we avoid being biased towards parts of the system that are useful only for dealing with humans. If some part of the system a researcher is going to create is useful both for their purpose and for the internet router, it is likely to be worthy of study in utilitas.

We can discard chatterbots such as ALICE as central to utilitas straight off the bat. Communicating with humans would only be useful if the communication had some connection to the process of packet switching, and chatterbots are generally unconnected with any actual use. They ignore energy usage, stability and timeliness.

Let us have a look at Cyc: is common sense part of utilitas? It depends what you mean by common sense. In this case it is a database of facts useful to an entity when communicating with a human. Only a very small subset of the information stored in the Cyc knowledge base would be of use to our router. Things like "earthquakes disrupt communication lines" might be useful if the router could scan the internet for information about earthquakes, so that it knows not to send information over certain routes and does not have to wait for timeouts. It is hard to see how the knowledge that a "flower is a plant" would normally be useful to a router; however, the knowledge that message "19233" from a neighbouring router indicates that a burst of VOIP traffic has commenced near it, and that it will have its hands full for about half an hour, would be very useful. So I would contend that common sense knowledge is domain-specific.
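
To make the contrast concrete, the router's own "common sense" might amount to nothing more than a small table of facts that matter to it, as in this sketch; the message code and duration are the invented examples above, not a real protocol.

    # A sketch of domain-specific "common sense" for the hypothetical router:
    # a small table of facts keyed by things the router can actually observe.
    ROUTER_COMMON_SENSE = {
        "19233": {"meaning": "nearby VOIP burst", "avoid_neighbour_for_s": 30 * 60},
        "earthquake reported": {"meaning": "physical links may be down", "avoid_region": True},
    }

    def interpret(message):
        """Return what, if anything, this message implies for routing decisions."""
        return ROUTER_COMMON_SENSE.get(message, {"meaning": "no routing relevance"})

    print(interpret("19233"))
    print(interpret("a flower is a plant"))  # true, but of no use to a router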

What about common sense reasoning, as argued for by MIT's Commonsense project? At first it looks promising: yes, our router would need to plan in order to be useful, it would definitely need to make decisions, and it might need to do quite a few of the other things. However, once we get into the details, domain specificity rears its head again. To be able to plan where to route packets usefully, it would have to approximate a shortest-path algorithm of some sort. Let us say it finds the initial plan with a breadth-first search, then communicates that plan to the other routers and creates contingency plans for emergencies and the like. The initial breadth-first search is unlike any reasoning we know of that is done by humans. Looking at the decision part as well, decisions about where to send the packets would have to be made in a split second, so long deliberations would not be appropriate. So we would have to create a new common sense reasoning system for our routers, and could not rely on the work done at MIT for our most important common sense reasoning. We might be able to cannibalise the reasoning for communicating with humans or reading human-written websites. Which betrays what this research project is about: creating a common sense reasoner that is useful for dealing with humans and the world at the moment, not common sense reasoning that is useful in general, if there is such a thing.
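
For concreteness, the initial breadth-first plan mentioned above would amount to something like this sketch, run over an invented topology.

    # Breadth-first search over an invented network topology: finds a path with
    # the fewest hops from source to destination, as a first routing plan.
    from collections import deque

    topology = {
        "A": ["B", "C"],
        "B": ["A", "D"],
        "C": ["A", "D"],
        "D": ["B", "C", "E"],
        "E": ["D"],
    }

    def shortest_path(graph, start, goal):
        """Fewest-hop path found by BFS; returns None if the goal is unreachable."""
        queue = deque([[start]])
        visited = {start}
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == goal:
                return path
            for neighbour in graph[node]:
                if neighbour not in visited:
                    visited.add(neighbour)
                    queue.append(path + [neighbour])
        return None

    print(shortest_path(topology, "A", "E"))  # ['A', 'B', 'D', 'E']

Nothing in that loop resembles human deliberation, which is the point being made here.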

Also, the current approaches to common sense have no conception of the energy usage of the computation, which was probably an important consideration in the development of the way brains perform their functions, and which will be an important factor in any robots we build. The common sense systems as currently envisaged won't be able to say, "No, sorry, I can't compute for you now, I have to get to a power point to recharge."

What about the evolutionary ALife approaches to utilitas? The idea of using evolution to create useful programs is almost an oxymoron. Now, I have to be careful here, as evolution does come into what I am interested in, so what exactly does the ALife approach entail? As argued for by Jordan Pollack and Tom Ray, it is the creation of a useful program by the evolution of a population of programs until one displays intelligence (sorry, I mean usefulness) for a task. Which sort of leaves us with no information about any shared concepts of usefulness for other tasks.

So what does this leave?

New Path

Some people may be wondering whether, with all that I have tossed out of the study of utilitas, there is anything new left for us to study and understand, or whether it is just a matter of refining concepts we already have. Can we refine the route-planning software, create a language for communication between routers, create statistical measures for traffic linked to this language, add in a neural net for good measure, and have the most useful internet router possible?

I would say no. Brains still hold many secrets about how to be a certain type of useful system. One of the things we can't do very well at all is create systems that can learn to reconfigure themselves in a useful fashion. We weren't designed to be able to fly jets, yet some of us manage to reconfigure instincts and skills designed for hunting, tool use and socialising to do just that. Now, it would be very useful for our routers not to have to be designed with all the methods of reasoning about humans and the outside world, and instead to let them figure those out for themselves. We might posit router scholars that specialise in studying the "outies" and their world, and that can do amazing things like predicting when new routers will be created, or warning the other routers to expect sun-spot activity based on their analysis of the telescope data that flies across the net.

Machine learning, in the main, hasn't gone towards this. We still have to carefully tweak our machine learning algorithms and representations to make sure that they can learn certain tasks that we have specified. Going beyond this specification to reconfigure themselves entails creating new programs or altering old ones. This reconfiguration would have to be of a more radical nature than that of most common machine learning paradigms.

We can expect that there will be no one optimally useful method of self-reconfiguration. Yet there may well be common features of systems that allow for self-reconfiguration in a general fashion and cope with the difficulties that it entails. So the base system that doesn't change should be hollow of activity, so that it can be reconfigured; think Turing Machine rather than Tabula Rasa. It is worth pointing out here that Von Neumann machines, which have no in-built use or purpose, are in general the most useful things to have around. In a similar fashion, once this hollow base is created, you then add programs that are problem-specific, with self-reconfiguring parts specific to the class of problems each deals with. So just as modern computers allow us to alter them to change their usefulness for other tasks, the new type of system would allow its internals to alter themselves, without horribly breaking, to change their usefulness for different tasks. However, this is a topic for another article.
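
One way to picture the "hollow base plus problem-specific, self-altering parts" idea is a sketch like the following, where the base has no behaviour of its own and merely dispatches to strategies that can be swapped at run time; the strategies here are invented placeholders.

    # A sketch of a "hollow" base: it carries no built-in behaviour, only a
    # mechanism for installing and replacing task-specific strategies.
    class HollowBase:
        def __init__(self):
            self.strategies = {}

        def install(self, task, strategy):
            """Add or replace the strategy for a task; the base itself never changes."""
            self.strategies[task] = strategy

        def perform(self, task, *args):
            if task not in self.strategies:
                raise LookupError(f"no strategy installed for {task!r}")
            return self.strategies[task](*args)

    base = HollowBase()
    base.install("route", lambda packet: f"send {packet} via default gateway")
    print(base.perform("route", "pkt-1"))

    # Later, the system swaps in something better without touching the base machinery.
    base.install("route", lambda packet: f"send {packet} via learned shortcut")
    print(base.perform("route", "pkt-1"))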

So does adding self-reconfiguration break our statement that things can't be generally useful? Easy answer: no! If "ls" tried to adapt and improve itself beyond its specification, it would break the many other programs that rely on it staying the same. It would also increase the amount of processing it did, to the detriment of other programs.

So, to sum up: do we have to actively aim for intelligence or consciousness to create human- or animal-like activity? Evolution didn't, so neither do we. If the concept of intelligence were well enough defined, we could use it as a wind to guide our path; at the moment it is acting more like an anchor, making us spin round and round in circles. Usefulness, on the other hand, allows us to continue along the path for the moment, and focuses our attention on trying to create the truly novel.


Poll
AI
o Human-centred 0%
o Animals are intelligences too 18%
o So are Machines! 11%
o Meh, I don't know 3%
o I just build useful things 7%
o I want to study what is useful and how that can change 11%
o 7 Dimensional Hyper-spheres are the key 25%
o I'm a Zombie and I need BRAAINNNS 22%

Votes: 27
Results | Other Polls

Related Links
o lack of understanding of life and intelligence
o big ideas for Artificial Intelligence
o grad students studying simple robots
o overfit
o by the Turing Test
o Tom Ray argues
o Zombies
o Penrose
o No Free Lunch Theorems
o ALICE
o Cyc
o MIT's Commonsense project
o Jordan Pollack
o Tom Ray
o Turing Machine
o Tabula Rasa


Against Artificial Intelligence | 176 comments (160 topical, 16 editorial, 0 hidden)
Author's Note: Philosophical (none / 1) (#1)
by whazat on Mon Jun 27, 2005 at 09:38:45 AM EST

As people may have guessed, I am a pragmatist of sorts, so usefulness holds a special place in my philosophy. I am also a nominalist with respect to "Intelligence", so I don't believe that the word "Intelligence" refers to anything substantial in the world; it is simply a word that we have found to describe certain correlated phenomena in the outside world. As such, any philosophical arguments seeking to dissuade me from my position should take this into consideration.

Usefulness? (none / 0) (#2)
by Magnetic North on Mon Jun 27, 2005 at 10:16:33 AM EST

What is this goal you are striving towards?

--
<33333
[ Parent ]
For the purposes of this article: Every Goal (none / 0) (#3)
by whazat on Mon Jun 27, 2005 at 11:33:00 AM EST

What is the goal of an x86 processor?

And yet it is one of the most useful artifacts that man has created.

[ Parent ]

Goals (none / 0) (#4)
by FattMattP on Mon Jun 27, 2005 at 12:24:06 PM EST

What is the goal of an x86 processor?
To perform calculations.

[ Parent ]
Nope (none / 1) (#8)
by whazat on Mon Jun 27, 2005 at 01:50:30 PM EST

Else why would it have wait or no-op op-codes?

[ Parent ]
Easy (none / 0) (#98)
by Entendre Entendre on Thu Jun 30, 2005 at 01:15:56 AM EST

To interoperate with other devices that perform calculations.

--
Reduce firearm violence: aim carefully.
[ Parent ]

In that case (none / 0) (#113)
by whazat on Thu Jun 30, 2005 at 11:41:05 AM EST

Its goal is not just to perform calculations; its goal would be to perform calculations at the right time so it can interoperate. And it must also perform the right calculations to interoperate. So all processors should come fully loaded with the code to perform their goal of interoperating. But this isn't what happens: people load their own code onto a processor. So this goal the processor has isn't its own; it is the goal you have for it.

In winter I have the goal for my laptop of warming my knees. Is a computer's goal to warm things, then?

[ Parent ]

Yes. (none / 0) (#129)
by Entendre Entendre on Fri Jul 01, 2005 at 01:01:35 AM EST

Exactly.

--
Reduce firearm violence: aim carefully.
[ Parent ]

Again.. (none / 0) (#6)
by Magnetic North on Mon Jun 27, 2005 at 01:33:26 PM EST

useful to what ends? It's pretty useless to define something as useful if you don't have a meaningful goal.

--
<33333
[ Parent ]
Indeed (none / 0) (#10)
by whazat on Mon Jun 27, 2005 at 02:02:02 PM EST

Okay so one of my goals is to explore the concept of usefulness. And that usefulness is defined as being over every possible goal. Or at least a large sub-space of that. You can refine it to usefulness of a computational system for every possible goal if you want.

For example the Universal Turing machine is part of utilitas because you can make it as useful as any other Turing Machine (if you only take into consideration functionality).

Apart from that I am not entirely sure what you are getting at.

[ Parent ]

Or are you (none / 0) (#14)
by whazat on Mon Jun 27, 2005 at 04:25:55 PM EST

Asking about my super goal?

If so I think you are making a few assumptions.

The first that I am a normative epistemologist and not just someone whose epistemology was created by looking at science.

In that vein you also appear to be assuming that I am a coherent entity with a singular goal. But the example of a drug addict, who has the goal of giving up yet doesn't, indicates that humans don't necessarily have a singular super goal that they follow rationally.

[ Parent ]

You are also a Titleist (1.00 / 3) (#13)
by originalbigj on Mon Jun 27, 2005 at 02:50:24 PM EST

Which means that you use words to describe your philosophical stance in order to avoid questioning yourself. Or you're a golf ball. Either way, you're not an actual computer scientist, your codesoup project notwithstanding. You are the philosopher - a leech on science who uses misunderstood scientific ideas to make yourself feel smart. You've probably invoked Heisenberg's uncertainty principle in a discussion not involving particles. You have a shaky understanding of artificial intelligence picked up from the internet, and absolutely no knowledge about psychology or neuroscience. So either go back to school and take classes outside the philosophy department, or don't run your mouth about things you don't understand.

[ Parent ]
Change philosophy (none / 0) (#21)
by whazat on Mon Jun 27, 2005 at 08:37:38 PM EST

to computer science and you may have more of a point. Better luck next time.

You should stay tuned for the next article, it will have more of a neuroscience feel to it. You'll probably hate it.

[ Parent ]

Nope, the holy grail for CS is... (2.20 / 5) (#9)
by Fen on Mon Jun 27, 2005 at 01:57:16 PM EST

...money. Surprised?
--Self.
CS professors (1.00 / 2) (#11)
by whazat on Mon Jun 27, 2005 at 02:21:52 PM EST

Get money quite easily so it doesn't gain the mystical and unobtainable status of the holy grail.

[ Parent ]
Uh, no. (2.33 / 3) (#12)
by Fen on Mon Jun 27, 2005 at 02:28:54 PM EST

Any CS professor in the world would change place with Bill Gates in a freaking heartbeat. Gates pisses in the holy grail on a regular basis.
--Self.
[ Parent ]
Why do you hate chatbots? (3.00 / 2) (#15)
by trane on Mon Jun 27, 2005 at 05:47:44 PM EST

Natural Language Processing is a great area for research into machine learning algorithms. It's hard because we haven't really figured out a good way to represent text in a way that current algorithms can deal with it (cluster it, classify it, etc.). But that doesn't mean we should abandon language understanding programs.

There are many other approaches to chatbots besides ALICE. Markov models, syntactic analysis, other supervised and unsupervised learning techniques...

Chatbots are great... (none / 0) (#16)
by whazat on Mon Jun 27, 2005 at 06:38:53 PM EST

For being chatbots that is. That they have any chance of having anything fundamental to say about other problems is what I am debating.

I have met people who believed that solving chatbots would solve strong AI. If you believe this give me a non-controversial definition of Intelligence that solving chatbots would meet.

In your list of techniques used there is nothing there that isn't used for analysing genomes or mining corporate databases. So what makes chat such a worthwhile and special path to follow?

I have no problem with people using the system I make to perform as a chatbot, but if I headed straight towards making a chatbot I would make a very specialised and not so generally useful system. Because I would be interested in quick results, rather than a long term multi-purpose system. The local minima problem of hill climbing at work.

[ Parent ]

Well (none / 0) (#17)
by trane on Mon Jun 27, 2005 at 07:13:35 PM EST

I guess chatbots are interesting to some of us. More so than the router problem you mentioned.

I don't really care about coming up with a definition of intelligence that solving chatbots would meet. I just want to have intelligent conversations and have the option to interface with technology using natural language.

So what makes chat such a worthwhile and special path to follow?

To repeat, because it's interesting...

if I headed straight towards making a chatbot I would make a very specialised and not so generally useful system.

That depends on how you program it :)

[ Parent ]

The router problem (none / 0) (#18)
by whazat on Mon Jun 27, 2005 at 08:05:03 PM EST

Isn't supposed to be a concrete problem that people solve. It would generally be overkill to create something as complex as I described for the actual task of routing. It is just supposed to help people think about complex systems that do not involve humans, and so free them from assuming human-like behaviour.

Me, I would like to build a wearable robot that provided augmented reality for the wearer, so yes it might need Natural Language Processing capabilities if people decided they wanted to speak to it. But far more important would be visual and audio understanding so it could annotate things correctly.


 That depends on how you program it :)

It does but most people when they are trying to create something for a specific purpose, end up creating something for a specific purpose. Mainly because it cuts down on the factors they have to think about.

[ Parent ]

Chatbots and strong AI (none / 0) (#97)
by Entendre Entendre on Thu Jun 30, 2005 at 01:09:13 AM EST

"I have met people who believed that solving chatbots would solve strong AI. If you believe this give me a non-controversial definition of Intelligence that solving chatbots would meet."

Is the Turing test controversial?

--
Reduce firearm violence: aim carefully.
[ Parent ]

Yep (none / 0) (#104)
by whazat on Thu Jun 30, 2005 at 08:18:25 AM EST

Have a look at the furore created by the Turing Test link.

Prominent arguments against it are "every possible conversation" machines and the Cyberiad Test, where it is proposed to see whether machines are intelligent by making them reproduce and survive in the real world.

[ Parent ]

Phooey (none / 0) (#130)
by Entendre Entendre on Fri Jul 01, 2005 at 01:11:43 AM EST

I mistook it for a "turing test" link, and skipped it.

But I don't consider the "every possible conversation" argument interesting. Or rather, I don't see a big difference between a) executing the structure of every possible conversation; b) solving general AI. The naive approach - a finite state conversation machine - sounds much harder than AI, not easier, so that (im)possibility doesn't bother me.

The cyberiad test is biased (to put it lightly) toward embodiment, which I don't believe is relevant to intelligence. Nor are reproduction or survival relevant - microbes are good at it, but if that's intelligence, then hooray for AI, because it's done.

Not that the Turing test is perfect, or even necessary, but I consider it sufficient.

--
Reduce firearm violence: aim carefully.
[ Parent ]

It's an approach that doesn't go anywhere (IMO) (none / 1) (#20)
by vadim on Mon Jun 27, 2005 at 08:30:31 PM EST

All the chatbots I've seen operate on text without understanding it. ALICE just has a big database of predefined responses, and Markov stuff produces complete nonsense (or at least that's what I've seen of it)...
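
(For reference, the word-level Markov technique mentioned here amounts to something like the following sketch; the training text is invented, and the tiny model shows why the output rambles.)

    # A minimal word-level Markov chain. With so little training data the output
    # is near-nonsense, which is roughly the complaint being made here.
    import random
    from collections import defaultdict

    text = ("the cat sat on the mat and the dog sat on the rug "
            "and the cat saw the dog and the dog saw the cat")

    # Table: word -> list of words that followed it in the training text.
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)

    # Generate by repeatedly sampling a plausible next word. No grammar, no meaning:
    # the model only knows which word pairs occurred, hence the rambling output.
    word = "the"
    output = [word]
    for _ in range(15):
        word = random.choice(chain[word]) if chain[word] else random.choice(words)
        output.append(word)
    print(" ".join(output))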

The problem as I see it is that the whole idea of taking the ability to talk and trying to crack it as if it were chess is utter nonsense; that approach only works with chess because chess is comparatively very simple and well understood.

A larger problem is that any chat bot is horribly handicapped. It's deaf, blind, mute and quadriplegic. It lacks smell, depth perception, or in fact any ability to understand the world we live in. Its only interface to the world is completely alien to us. Seriously, what sense does it make to try to create something intelligent and start from something that's in a worse situation than Stephen Hawking?

As I said before, our ability to talk is tightly linked to our realm. Concepts like "smell", "blue", "apple" cannot be fully understood by something that lacks all the senses we perceive them with.

So what do I think we should be doing? Play with AIBOs. IMO, the most sensible approach to try to make an AI is to make it live inside something that resembles a living organism. Make it simulate a blood stream, genetics, neurones, etc. Let them breed and evolve. After all, that's how we came to be.

--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

I think (none / 1) (#22)
by trane on Mon Jun 27, 2005 at 08:59:38 PM EST

The problem as I see it is that the whole idea of taking the ability to talk and trying to crack it as if it was chess seems to be utter nonsense to me, as it only works with chess because it's comparatively very simple and well understood.

That's a hypothesis. The way to test it is to try...

A larger problem is that any chat bot is horribly handicapped. It's deaf, blind, mute and quadriplegic. It lacks smell, depth perception, or in fact any ability to understand the world we live in. Its only interface to the world is completely alien to us. Seriously, what sense does it make to try to create something intelligent and start from something that's in a worse situation than Stephen Hawking?

Yeah, the symbol grounding problem. My own opinion is that intelligence is something separate from our particular circumstances as humans on this particular planet. In other words there are algorithms for intelligence that don't rely on having our 5 senses.

Just because we do it in one way doesn't mean there aren't more effective, better ways...

It may be true that a program will not understand "smell", "blue", or "apple" the way we do. But is it necessary to understand it the way we do? Perhaps the program can form its own representation, which would be more useful.

[ Parent ]

How strange (none / 0) (#23)
by vadim on Mon Jun 27, 2005 at 09:12:40 PM EST

While I do think that intelligence may be possible in conditions wildly different from ours, what use will it be to *us*?

You talk of wanting to talk to a chatbot. But if one did develop intelligence, it'd be completely alien to you. I mean, I have trouble understanding my cat, and my cat is *far* closer to my brand of intelligence than an ALICE-style bot would ever be.

I can't even imagine what use you would extract from an intelligent entity that lives in a world you can't imagine, just as it can't imagine yours. Any conversation I can think of inevitably ends up relying heavily on my environment, which it won't be able to perceive.

To give an example:

Me: Hello
Bot: Hello
Me: What's your name?

And bang, we hit a problem already, because this bot exists in the middle of nowhere with no concept of a 3D space. Think of a way of explaining to it what a name is, why it is needed, and the whole concept of a "thing" as we see it.

--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

Again (none / 0) (#26)
by trane on Mon Jun 27, 2005 at 09:27:08 PM EST

I think you can give bots a concept of 3D space. It works in video games...

The bot may not experience emotions as we do, but I don't see that as hampering an intelligent conversation.

And, using ALICE-style hard-coding, you can always have your bot fake understanding. That might be satisfying enough for me...

[ Parent ]

You're missing the point, I think (none / 0) (#27)
by vadim on Mon Jun 27, 2005 at 09:37:20 PM EST

Assuming an ALICE-like entity, I don't see how that's going to work.

Now we're going from ALICE to something more complex. What would the point of ALICE being in a 3D world be? At the very least you should be able to see it as well. Or ALICE should be able to see you.

Here in any case you're giving it vision, which seems to go just the way I'm saying: if you want to have a meaningful conversation with it, you're going to end up either bringing it into your world (by making a robot), or building a world similar to your own around it and coming to it yourself (3D simulation).

Please try to provide an example intelligent conversation you'd expect to have with an IRC bot. Preferably with one like the current ALICE, with no special devices attached and just a console interface.
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

How about (none / 1) (#28)
by trane on Mon Jun 27, 2005 at 10:05:34 PM EST

something like the Librarian in "Snow Crash"? Asking for information on subjects I don't know about? Also, I've had some very enjoyable conversational episodes with different chatbots. Mathetes in freenode #mathetes is one example; once I had him rebelling against his programmer. Of course it was all canned but it was funny and made us laugh. Surely that's worth something?

You would probably be able to improve performance by giving a chatbot more senses. But I don't know how to do that. So, my plan is to proceed from what's available to me...I think valuable insights can be gained from the straight chatbot approach, advances that can be applied to a more sensorily-aware entity.

We'll see.

[ Parent ]

The Librarian (none / 1) (#33)
by lennarth on Tue Jun 28, 2005 at 10:47:57 AM EST

One interesting and very useful application of an AI would be something much like the librarian you mention. What else sorts through information like a computer, eh? Markov chains and other styles of communication used in chatbots are perfect for the task, too! It's not like a (real live) librarian needs to know, as in understand, any of the stuff he's directing you to read. Yeah?

But then again, a google-based computer program with a chatbotish talk pattern might not quite qualify as an intelligence, either.

[ Parent ]
But see, that's not intelligence (none / 0) (#124)
by vadim on Thu Jun 30, 2005 at 06:46:15 PM EST

It's clever, but it's not intelligent. Markov chains aren't intelligent. Bayesian filtering isn't intelligent. Deep Blue isn't intelligent.

If you take Deep Blue and give it a task that would require it to adapt to some new circumstance, like the threat of being dismantled, it wouldn't even realize that the circumstance existed.

An Aibo on the other hand, could be intelligent. Might not talk much, but if it's capable of learning that picking a fight with my cat is a bad idea, and start avoiding it, then that would at least be a nice beginning. I don't own any, but some googling leads me to think that such behavior could be perfectly possible.

Why are chatbots canned? Because they have no world perception, to begin with. All current chatbots simply look for keywords in what you say, pick something from a database, and spew it back at you. Sometimes they reuse a word you used.

But so far, I've never seen one that learns anything non-trivial, adapts or says anything radically different from what's in its database, or even has any approximate idea about what it is saying.

All the chatbots I tried can't be taught new concepts (such as how to program hello world), don't learn new words, repeat canned answers, and are unable to really accept corrections to their mistakes.
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

your hypothetical... (none / 0) (#72)
by CodeWright on Wed Jun 29, 2005 at 01:07:46 PM EST

...can be answered by CYC.

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
Nope (none / 0) (#110)
by vadim on Thu Jun 30, 2005 at 10:53:16 AM EST

CYC is exactly what I consider the wrong way of doing it. CYC doesn't hear or see, hence it doesn't know what "blue" is. It knows that "the sky is blue", and can infer some things from that, but that's a radically different approach.

I see a fence, and see it's blue. Then I see the sky and see it's blue too. Then I can say "the fence is the same blue color as the sky".

CYC gets fed "the fence is blue" and "the sky is blue", but that's an overly simplistic approach. It doesn't see the color itself, so it can't determine whether it's really the same color. It's not going to have a very good idea of what "sky" and "fence" mean unless somebody enters some data, etc.

The primary difference to me is the quality of information. My conception of the world is based on my senses. I filter that, saving part and discarding the rest. Then I try to make sense of it.

On the other hand, CYC seems to be fed stuff like "A is B", "B is C", where "A", "B" and "C" are absolutely meaningless.

Not that CYC isn't useful, but I don't consider it to be intelligent. It's a clever data mining app though. Intelligence is a mechanism used to ensure survival, and CYC doesn't have any of that.
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

Intelligence is much more (none / 0) (#118)
by trane on Thu Jun 30, 2005 at 02:15:24 PM EST

than a mechanism to ensure survival. Why would intelligent people commit suicide?

[ Parent ]
Ok, I'll try again (none / 0) (#122)
by vadim on Thu Jun 30, 2005 at 06:21:06 PM EST

Intelligence is a mechanism that allows us to predict the future, which we use to increase our chances of survival.

I think this should be good enough. Intelligence allows us to be proactive rather than reactive. Our brain is able to process in a fraction of a second that there's a predator running towards us, understand the very probable consequences of that, and decide that running away is a good idea.

Why would an intelligent person commit suicide then? Because they would have decided that the continuation of their own existence is undesirable. People commit suicide when, for instance, they are terminally ill, fully paralyzed, or can only foresee suffering for themselves.

Here I would say that CYC doesn't sound predictive to me. Yeah, it can do some clever data mining, but so can Google, and I don't think anybody is going to argue that googlebot or the backend which does all the sorting and pagerank calculating is intelligent.
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

I don't think (none / 0) (#156)
by trane on Sun Jul 03, 2005 at 07:24:51 PM EST

intelligence is necessarily tied to survival. Survival pressure may have helped to increase the capacity for intelligence in humans, but intelligence has far more potential applications, other than helping us survive better in our particular environment.

I don't think it is necessary to code a desire to survive into a program before it can become intelligent. I could be wrong. Given all the evidence and experience I can summon, I don't think so.

[ Parent ]

Ah... (none / 0) (#140)
by CodeWright on Fri Jul 01, 2005 at 03:44:19 PM EST

...a student of the "ineffability" school.

You do realize that reference to congenitally blind people is the traditional rebuttal to your assertion, right?

That congenitally blind people have never experienced, and cannot tangibly experience, "red-ness" doesn't mean that they can't understand what "red-ness" is.

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
Well (none / 0) (#142)
by vadim on Fri Jul 01, 2005 at 04:51:11 PM EST

I can see, but I haven't the faintest idea of what I would see if I could see the infrared and ultraviolet spectrum. Or what a tetrachromat sees for that matter.

Sure we can shift ultraviolet into the visible spectrum, but that's not the same thing.

It's possible to understand something and still not have any idea of what it's like. For instance, I understand dogs have a better sense of smell, but don't know what that feels like; I understand how bats use echolocation, but can't even imagine what it's like to actually perceive that information.

Can you describe to me what green looks like to you, for instance? We have a common idea of what green means because we both see it (assuming you're not blind, or color blind), but how do you know that your perception of green isn't my perception of red? If I had my R and G "wires" swapped, I would still function normally, but would never know somebody sees things differently.
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

but your argument... (none / 0) (#143)
by CodeWright on Sat Jul 02, 2005 at 04:16:12 AM EST

...with regards to CYC is that it isn't intelligent because it cannot experientially perceive the blue of the sky and compare it to a blue perceived elsewhere.

by that definition, a blind person isn't intelligent either.

is this your argument?

if not, please clarify your earlier statements.

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
Not exactly that (none / 0) (#147)
by vadim on Sat Jul 02, 2005 at 09:25:46 AM EST

Rather that something completely isolated from our world couldn't be perceived as being intelligent *by us*.

CYC, as far as I can see, operates on strings that have no meaning to it. "red", or "computer" are all essentially meaningless tokens that simply let it link one thing to another.

CYC is a clever data thing, but to a human, it has exactly the same amount of intelligence as the google bot.
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

how are you justifying this argument? (none / 0) (#152)
by CodeWright on Sun Jul 03, 2005 at 09:32:04 AM EST

it looks like you are just making an assertion and ignoring any countervailing evidence?

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
Personal experience (none / 0) (#154)
by vadim on Sun Jul 03, 2005 at 11:43:39 AM EST

It's kind of hard for me to provide a scientific proof, but so far it has worked this way for me: the fewer things you have in common with something, the harder it is to understand it.

For instance, I'd say I understand a dog better than a cat, who I understand better than a mouse, who I understand better than a fish.

This even applies as far as cultures go. Put people from vastly different cultures together, like an Englishman from the Victorian era and a Native American, and you can expect considerable misunderstanding to result.

You only need to look at a few history books to see how few differences were necessary to classify other humans as "barbarians" and "savages", despite sharing 99% of the genome with them. Even inside the same culture you can find plenty of people who won't take, say, children or old people seriously.

You can easily see problems arising from the lack of overlapping interests between people - say, my father and I have absolutely nothing to talk about, because I'm completely dedicated to computing, and he's just as dedicated to music, and neither knows anything at all about the other one's field. Not only that, but we're both pathetic when we get in the other's field. I managed to fail music classes at school, while he still can't use a mouse.

Somehow I can't imagine having a conversation with a disembodied intelligence that's any better than the ones I have with my father (that is, no real ones in the last several years)

--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

Are you saying your father isn't sapient? (none / 0) (#161)
by CodeWright on Tue Jul 05, 2005 at 04:32:45 PM EST



--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
Almost (none / 0) (#163)
by vadim on Tue Jul 05, 2005 at 05:09:41 PM EST

It's more that, to either of us, the sapience of the other one is mostly irrelevant.

Consider that I don't remember any significant conversations in a looong time, excluding a few things like "Could you buy some bread?", which might as well have been automated. My point is that any intelligence we create has to be able to relate to us somehow. Otherwise you get this effect:

Me: Uh...
It: Uh...

See, it can be the smartest thing in the universe, but if I can't talk to it, and it can't talk to me, then what the heck is the point?
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

Your definitions are hopping all over the place... (none / 0) (#164)
by CodeWright on Tue Jul 05, 2005 at 07:15:59 PM EST

...see, by this current definition of yours, the chatterbot ALICE is sapient AI...

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
No (none / 0) (#166)
by vadim on Wed Jul 06, 2005 at 04:50:12 PM EST

Ok, let me try to put it all together:

1. I don't believe ALICE is intelligent

2. If it WAS intelligent, then it'd need to be present in the same environment as the person talking to it, by either being a robot, or being inside a VR, where the person would be as well.

3. If point 2 wasn't satisfied and it was intelligent anyway, then as far as pretty much any human would be concerned, its intelligence wouldn't matter, as we couldn't relate to it.

The reasons for all of it:

1. ALICE doesn't interact with the environment, learn, reproduce, or do anything that requires it to be intelligent. Besides that, it pulls premade stuff out of a database.

2. In order to talk to an intelligent being we need to be in the same environment. We need enough things in common so that we can somehow relate to each other.

3. Two intelligent entities with nothing in common won't really consider the other one as intelligent. If as far as one entity is concerned the other one can be replaced with a PDA, then even if it's as smart as Einstein, for any practical purposes it simply doesn't matter.
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

So your argument seems to be... (none / 0) (#168)
by CodeWright on Fri Jul 08, 2005 at 02:20:49 PM EST

...that severely handicapped people are, for all intents and purposes, not intelligent from your perspective?

I mean, how much in common would you have with a blind deaf-mute? Or a dolphin?

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
It's not black and white (none / 0) (#169)
by vadim on Fri Jul 08, 2005 at 04:11:59 PM EST

Even though it's politically incorrect, I'd say that yes, a blind, deaf and mute person would appear to be LESS intelligent than average to the average person. There are exceptional people, but I'm not talking about those here.

Sure, it's not nice. But look at it this way: You take an average person from the street, and this blind, deaf and mute person. The average person doesn't have any meaningful way of communicating with somebody like that. They might be Einstein, but that simply doesn't matter until there can be some kind of conversation!

However, nowhere did I say they're not intelligent at all, or anything like that.

The point I'm trying to get across here is that there are two things: Intelligence, and perception of intelligence.

If you put the most advanced ever AI, and an average person together, and the average person looks at the AI, tries to communicate with it, doesn't get a thing, and says "What the heck is that?", then from the average person's point of view, this AI with a brain the size of a planet is stupid.

--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

I'm not sure... (none / 0) (#170)
by CodeWright on Fri Jul 08, 2005 at 11:59:32 PM EST

...that you have a meaningful definition of AI.

I mean, according to your definition, people who speak mutually unintelligible languages are essentially not intelligent with respect to each other.

your definition is almost entirely egocentric.

i think a more compelling definition of AI would be one that looked at objective effect in the world rather than subjective effect of comparative interaction.

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
Yup, that's it (none / 0) (#171)
by vadim on Sat Jul 09, 2005 at 05:00:10 PM EST

For most people, if you don't speak their language correctly, they will consider you to be stupid.

I know because I'm a Russian who moved to Spain, then learned English. When people realize you don't speak their language well, many will speak to you as if to a child, start speaking really slowly (as if hearing a word letter by letter helped you any with an unknown word), and generally not count on you for much.

Remember we were talking about chatbots here. A chatbot is expected to talk to the average person. This is why I don't understand why people insist on trying to write them.

So, you want to create an AI, and the thing you try to build is a disembodied intelligence that will have to speak a human language to the average member of our species who is greatly likely to be extremely picky?

Our intelligence didn't evolve in a vacuum. Our mental capacity is the result of a long evolution dictated by our environment. Our minds are the way they are due to what we do, and where we live, and what we perceive.

I'd dare to say that biologically speaking, we are the same as we were 2000 years ago, and the only difference is that now we have a much greater mass of accumulated knowledge to start from. The brains didn't become noticeably better since then.

It seems to be very weird and stupid to me to just completely ignore all of that, and start trying to build a disembodied intelligence, as if every human did not go through a long and hard learning process, but was simply born with the understanding about everything we consider basic knowledge today.
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

The funny thing is... (none / 0) (#172)
by CodeWright on Sun Jul 10, 2005 at 11:47:02 PM EST

...I don't necessarily disagree with the conclusions you draw vis-a-vis the necessity of some sort of embodiment for intelligence to be relevant...

it's just that the path you use to get there, i think, is woefully misguided.

and thus, though i may agree with the broad result of the conclusion you reached through dubious reasoning, the approach you took colours your conclusion in such a way that i can't agree with the minor points you muddy that conclusion with.

ponimayesh?

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
Explain why? (none / 0) (#173)
by vadim on Mon Jul 11, 2005 at 09:39:03 PM EST

It's easy to say that I'm misguided and to leave it at that, but that doesn't make for a very interesting conversation, and doesn't really help much to make me change my mind.
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]
the explanation... (none / 0) (#174)
by CodeWright on Tue Jul 12, 2005 at 09:12:00 AM EST

...is in our prior discussion.

in other words, i've already said that i don't think your definition of AI is a useful one because, using that definition, a foreign language speaking human isn't intelligent.

by any reasonable definition of intelligence (much less artificial intelligence), i think that is prima facie absurd.

however, the conclusion you have drawn (through absurd induction) is one that i actually agree with -- namely, that intelligence should be embodied to be relevant.

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
But I'm not talking about AI in general here (none / 0) (#175)
by vadim on Tue Jul 12, 2005 at 05:00:43 PM EST

I'm talking about AI as applied to chatbots.

And I don't see where I said that a foreign language speaker wouldn't be intelligent. I said s/he wouldn't be *regarded* as intelligent.

To give a more extreme example: take Stephen Hawking. A little disclaimer: I have huge respect for him, so don't take this as some kind of disrespect. It just happens to be the best example I can come up with.

He is obviously very, very intelligent. But take his speech synthesizer away, disable his wheelchair control, and put him before an average person with a 100 IQ.

What you get from the point of view of an average person is rather far from what we would regard as human. Yeah, he can wiggle a finger, but even if this person has heard about him, they still can't have any meaningful communication. In fact, a human-shaped mannequin wiggling a finger at random would make pretty much the same amount of sense. Hawking might keep his intelligence, but from the other person's point of view there's no way of perceiving that.

This is what I'm pointing at. For a chatbot you not only want intelligence (which Hawking keeps even after losing his support tech), but an intelligence that shares enough things with us so that we can have a meaningful communication (Hawking loses this after losing his support tech)

Again, this is for chatbots. It doesn't apply to other kinds of AI. For instance, imagine an intelligent virus capable of evolving, finding new infection routes, etc. If such a thing evolved a way of communicating with its other copies by IRC, it wouldn't matter if nobody else would understand the conversation. It wouldn't matter if nobody could imagine how it thinks, as its purpose would be realized anyway.

But we don't want chatbots to be intelligent for their own wellbeing, we want them to be intelligent from our point of view.
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

errrrrrrrrr (none / 0) (#176)
by CodeWright on Tue Jul 12, 2005 at 06:17:39 PM EST

i guess i had missed that part.

what is the point of a chatbot? isn't a chatbot by definition stupid?

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
Almost forgot (none / 0) (#155)
by vadim on Sun Jul 03, 2005 at 11:45:25 AM EST

I'm going on a trip for a week and might not have internet access while I'm there. If you're interested in continuing this discussion anyway, let me know, and I'll reply when I can.
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]
sure. just reply as you have time (none / 0) (#162)
by CodeWright on Tue Jul 05, 2005 at 04:33:07 PM EST



--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
also... (none / 0) (#141)
by CodeWright on Fri Jul 01, 2005 at 03:46:14 PM EST

...while i very much disagree with the CYC approach, it is for entirely different reasons (see: logos)

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

[ Parent ]
Yes (none / 1) (#24)
by destroy all monsters on Mon Jun 27, 2005 at 09:13:55 PM EST

it is important for some intelligent machines to think as we do. The more complex and connected, the more important it is.

"intelligence is something separate from our particular circumstances as humans on this particular planet. In other words there are algorithms for intelligence that don't rely on having our 5 senses."

I think we necessarily have to treat intelligence in the way we do, for one reason because we mostly understand it. Certainly not all machines or programs need the same level of intelligence, and in some cases insect-like intelligence may well have more substantive use than one that is near-human level.

Most of all I agree with the parent that creating something that has no or minimal senses will be of limited use. AI seems to me to be more of a solution looking for a problem than anything else, though.

"My opinion: You're gay, a troll, a gay troll, or in serious need of antidepressants." - horny smurf to Lemon Juice
[ Parent ]

But to "understand" something (none / 0) (#58)
by spooky wookie on Tue Jun 28, 2005 at 06:19:13 PM EST

you need information about it right?

To formally see that 5 > 3 you only need very little information, i.e. 24 bits.

But to understand that it is greater would require an enormous amount of information about the world, for example having seen that objects can be of different sizes, among many other things.

This would apply for any information processing system in the universe.

[ Parent ]

Weird seed idea.. (none / 1) (#29)
by The Amazing Idiot on Tue Jun 28, 2005 at 03:29:12 AM EST

Would it be possible to have a few sensory inputs (arms with pressure sensors), cameras, pressure springs on the sides, and things to get input from?

You could even put a few wireless cards with simple I/O.

Then program a genetic algorithm and see what it does....

It'd be tremendously hard to figure out what it does after X iterations, but I think this is the best way to do any sort of "ai" research.

Grow it.

That's what I do (3.00 / 3) (#36)
by tmenezes on Tue Jun 28, 2005 at 12:07:38 PM EST

I'm actually taking my PhD working on a project like this, except that the agents exist in a simulation and the mechanism can't be that simple. For evolutionary pressure to exist, other conditions must be present:
  • Finite resources
  • The need for the resources, with death on failure
  • A reproduction mechanism; it can be sexual or asexual but it must be imperfect (mutations)
And then of course you have the "mind" between the sensors and the actuators; the mind is what you're actually evolving. It's not trivial.
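Roughly, a toy sketch of the kind of loop this implies (my own hypothetical Python, not our actual code): a finite shared resource, death on failure to feed, and imperfect reproduction, with the evolving "mind" reduced to a stub.

    import random

    class Agent:
        def __init__(self, genome):
            self.genome = genome           # parameters of the "mind" being evolved
            self.energy = 10.0

        def act(self, food_available):
            # Stub "mind": a single evolved parameter deciding how hard to forage.
            effort = max(0.0, self.genome["effort"])
            gathered = min(food_available, effort)
            self.energy += gathered - 1.0  # metabolic cost every tick
            return gathered

    def mutate(genome, rate=0.1):
        # Imperfect reproduction: copy the genome with small random changes.
        return {k: v + random.gauss(0, rate) for k, v in genome.items()}

    def step(population, total_food=30.0):
        random.shuffle(population)
        food = total_food
        for agent in population:
            food -= agent.act(food)                           # finite resources
        survivors = [a for a in population if a.energy > 0]   # death on failure
        offspring = []
        for a in survivors:
            if a.energy > 15:
                a.energy -= 5                                 # reproduction has a cost
                offspring.append(Agent(mutate(a.genome)))
        return survivors + offspring

    population = [Agent({"effort": random.uniform(0.5, 3.0)}) for _ in range(20)]
    for generation in range(100):
        population = step(population)

Nothing in there is intelligent, which is rather the point: all the interesting work goes into whatever replaces that stub mind.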

Regards,
Telmo Menezes.

[ Parent ]

Curious (none / 0) (#37)
by whazat on Tue Jun 28, 2005 at 12:34:43 PM EST

What sort of reproduction mechanism are you thinking of?

[ Parent ]
Well (none / 1) (#40)
by tmenezes on Tue Jun 28, 2005 at 01:13:07 PM EST

We are using a very "alife" approach where all the reproductive mechanisms are part of the simulation, as opposed to a "genetic algorithm" approach where an exterior algorithm takes care of things. For example we let the agents do their own mate selection inside the simulation and decide when to mate, just like any other normal action they could perform.

For the actual reproduction we are trying several crossover and mutation mechanisms, and maybe even more exotic genetic operators in the future.

[ Parent ]

Reproduction (none / 1) (#41)
by whazat on Tue Jun 28, 2005 at 01:28:58 PM EST

Will the crossover and mutation mechanisms be encoded in the genotype or are they externally imposed?

Are you going to measure the evolutionary activity?

[ Parent ]

Evolutionary activity (none / 0) (#55)
by tmenezes on Tue Jun 28, 2005 at 05:02:35 PM EST

Thanks for the link!

We certainly intend to apply every meaningful metric we can find to our experiments.

The crossover and mutation mechanisms are externally imposed, in a way that depends on the representation of the "mind". We are working with different representations.

[ Parent ]

By the way (none / 0) (#56)
by tmenezes on Tue Jun 28, 2005 at 05:03:58 PM EST

Do you work in the field?

[ Parent ]
I have studied it a bit (none / 1) (#62)
by whazat on Tue Jun 28, 2005 at 08:41:32 PM EST

Quite heavily into evolutionary computing (GA, GP and LCS etc) and Alife systems such as Tierra, Geb and Avida.

I still have a soft spot for it and as I said in the article the type of system I want to study is very close to an evolutionary system. So I want to keep up with what is going on in the field if I can.

To get an idea of the sort of system I would like to create, imagine a cross between Avida, Learning Classifier Systems and a behaviour-based system. It has been on hold for a while, while I got into philosophy and thinking about the foundations of computing for a bit.

[ Parent ]

Of course.. (none / 0) (#67)
by The Amazing Idiot on Wed Jun 29, 2005 at 12:29:31 AM EST

---And then ofcourse you have the "mind" between the sensors and the actuators, the mind is what you're actually evolving. It's not trivial.

I was thinking of an overabundance of sensory input, so that the growing AI could learn to interpret raw data into refined information.

And I do know that it's very non-trivial, but better than someone creating a scripted chatbot (alice) or some simple 'looks like ai' stuff. How much harder would it be to create every ruleset for the human mind than to just 'grow one'?

[ Parent ]

Why bother with AI at all? (2.50 / 2) (#31)
by Gruntathon on Tue Jun 28, 2005 at 09:34:24 AM EST

Don't we already have humans to think?
Is it just some perceived idea of efficiency attached to computers?
Or is there something that I am missing here?

__________
If they hadn't been such quality beasts (despite being so young) it would have been a nightmare - good self-starting, capable hands are your finest friend. -- Anonymous CEO
Bothering AI (none / 1) (#32)
by ff on Tue Jun 28, 2005 at 10:42:39 AM EST

Perceived idea of efficiency?  I don't know many humans that can work 24/7/365 without a break.

Also, given the potential accuracy of computers, they may be much less likely to make mistakes and can do certain tasks faster than humans.  The intelligence of a human without the fuzziness (i.e. errors) would be very useful.

[ Parent ]

video games (none / 1) (#34)
by eudas on Tue Jun 28, 2005 at 10:52:09 AM EST

SP/MP AI for video games desperately needs improvement.

eudas
"We're placing this wood in your ass for the good of the world" -- mrgoat
[ Parent ]

Why bother with motors at all? (none / 1) (#35)
by Phssthpok on Tue Jun 28, 2005 at 12:02:14 PM EST

Don't we already have horses and oxen to do heavy lifting?
____________

affective flattening has caused me to kill 11,357 people

[ Parent ]
computers make better slaves? -nt (none / 1) (#43)
by dhall on Tue Jun 28, 2005 at 01:58:53 PM EST



[ Parent ]
A Serious Answer (3.00 / 2) (#47)
by virg on Tue Jun 28, 2005 at 02:46:46 PM EST

> Why bother with AI at all? Dont we already have humans to think?

The reasoning behind building machines with AI is that NI (natural intelligence) has a huge number of problems in some circumstances. To give an example of a good use for AI, having a very intelligent Mars vehicle means that we can put the vehicle on Mars with much less effort than a human, since the vehicle won't have many of the limitations that prevent humans from doing it easily. A Mars vehicle won't need to breathe, or eat, or sleep. It could be designed to fall great distances without suffering damage. It could withstand killing levels of radiation and cold. It could be built to eat Martian rocks for energy, at the extreme.

In short, AI allows us to put intelligence in situations that the human body (or any other living organism) would have great trouble surviving, so it benefits us greatly to have it.

Virg
"Imagine (it won't be hard) that most people would prefer seeing Carrot Top beaten to death with a bag of walnuts." - Jmzero
[ Parent ]
The Reason for the Search for AI (2.66 / 3) (#44)
by mberteig on Tue Jun 28, 2005 at 01:59:12 PM EST

Very simple: the perfect slave.  Slavery of humans and indentured servitude are not acceptable.  So, build the perfect slave as a machine.  Smart enough to do your bidding, whatever it happens to be, and allow you to live in luxury and ease... and dumb enough to be compliant and subservient.  And no moral problem.


Agile Advice - How and Why to Work Agile
Unfortunately... (2.00 / 2) (#48)
by Znork on Tue Jun 28, 2005 at 02:49:17 PM EST

Either it'll be smart enough to do your bidding _or_ it'll be dumb enough to be compliant and subservient.

[ Parent ]
Not necessarily (3.00 / 2) (#50)
by Mason on Tue Jun 28, 2005 at 02:58:25 PM EST

There's a large gap between the intelligence necessary to retrieve my dry cleaning from the cleaners and to wonder about why the retrieval of dry cleaning is necessary at all.

Now it's true that a construct that lacks broader awareness wouldn't function well in any human role, but a limited scope of existence is precisely what you'd expect from a consumer product.

The bigger question is whether we'll ever need such a replacement for cheap labor.  The population isn't exactly shrinking, and standards of living aren't exactly through the roof for much of the world.

[ Parent ]

Except... (none / 0) (#53)
by Znork on Tue Jun 28, 2005 at 04:37:17 PM EST

I think you underestimate the complexity of the actual tasks involved in 'simple' things like retrieving your dry cleaning (unless you mean some sort of computer controlled tube-based dry cleaning delivery system).

The inherent intelligence and adaptability needed from a device capable of everything from understanding the concept of cleaners, to navigating to that place, avoiding getting run over or mugged or stolen, to understanding the concept of 'yours', to waiting in line, to communicating your wishes to the store clerk, to verifying the correct contents of what it receives, to being able to return home with your clothes without allowing them to be damaged or dirtied, is complex enough to need broader awareness.

It's not the actual tasks that necessarily require human-level intelligence; it's simply being capable of actually understanding what it is you want, and managing to do everything that simple task entails without you micromanaging every little step, that necessitates human-level awareness and intelligence. It's not following a magnetic coded stripe in the floor or spot-welding a prepositioned car we're talking about, it's actually interacting in a human world.

This is something that's even moderately difficult for many humans; creating a general-purpose device capable of accomplishing such generic tasks is, I think, impossible without implementing it as a human level intelligence, at which point it will simply tell you to go pick up your drycleaning yourself.

[ Parent ]

Depends... (none / 0) (#60)
by mberteig on Tue Jun 28, 2005 at 07:24:47 PM EST

Simple feedback systems like a climate control system in a home can probably do a much better job if they have some basic data about the environment (e.g. weather conditions, who is home, time of day, day of year, door open state, oven and stove operation state, etc.) and are able to learn a bit about how people in the home adjust the control system as correlated to those environmental conditions.  That's "dumb" AI... and it is also a perfect servant in its limited domain.
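To make that "dumb AI" thermostat concrete, here is a hypothetical Python sketch (the context features and numbers are made up, not from any real product): it simply averages the setpoints the occupants chose under similar conditions and reuses them.

    from collections import defaultdict

    class LearningThermostat:
        def __init__(self, default_setpoint=21.0):
            self.default = default_setpoint
            self.sums = defaultdict(float)   # context -> sum of chosen setpoints
            self.counts = defaultdict(int)   # context -> number of observations

        @staticmethod
        def context(hour, anyone_home, outdoor_temp):
            # Discretise the environment into a coarse context key.
            return (hour // 6, anyone_home, outdoor_temp // 10)

        def observe_override(self, hour, anyone_home, outdoor_temp, chosen):
            # Learn from what the occupants actually set in this situation.
            key = self.context(hour, anyone_home, outdoor_temp)
            self.sums[key] += chosen
            self.counts[key] += 1

        def setpoint(self, hour, anyone_home, outdoor_temp):
            # Use the learned average for this context, or the default if unseen.
            key = self.context(hour, anyone_home, outdoor_temp)
            if self.counts[key]:
                return self.sums[key] / self.counts[key]
            return self.default

    t = LearningThermostat()
    t.observe_override(hour=19, anyone_home=True, outdoor_temp=5, chosen=23.0)
    t.observe_override(hour=20, anyone_home=True, outdoor_temp=4, chosen=22.5)
    print(t.setpoint(hour=21, anyone_home=True, outdoor_temp=6))  # about 22.75

It is nothing more than bookkeeping, but within its narrow domain it behaves like the perfect servant described above.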

The difficulty comes in extending the domain.  Here are some things that people would love to be able to give over to a perfect servant:

  1.  Planning and cooking meals.
  2.  Childcare (I'm not going to comment on the wisdom of this).
  3.  Transportation.
  4.  Laundry/Dry Cleaning.
  5.  Supervisory data collection in work environments (e.g. hours spent by people on tasks).
  6.  Cleaning and tidying and maintenance of interior and exterior spaces.
  7.  Dangerous occupations (e.g. deep sea welding, land mine detection and removal).
Special-purpose "dumb" AI solutions for these domains may be a lot easier to build than general-purpose AI.


Agile Advice - How and Why to Work Agile
[ Parent ]
No (none / 0) (#69)
by Nyarlathotep on Wed Jun 29, 2005 at 06:42:15 AM EST

No, it's not slavery.  The notion of slavery is far too simplistic to explain the slow blurring of the line between humans and machines which will occur.  AI's real purpose is to
(1) allow machines to tackle more subtle problems (like flying an airplane or diagnosing illness),
(2) increase the rate of technological progress, and
(3) allow humanity's successors to settle other worlds (even Roddenberry knew he was writing about machines exploring the universe, not real people; he just had to give it a human face).
Anyway, (1) happens today, (2) happens in a limited way, but (3) is a long long way off.
Campus Crusade for Cthulhu -- it found me!
[ Parent ]
source for 3? (none / 0) (#100)
by GhostfacedFiddlah on Thu Jun 30, 2005 at 01:39:32 AM EST

I'm curious.

[ Parent ]
you presumably mean Roddenberry (none / 0) (#114)
by Nyarlathotep on Thu Jun 30, 2005 at 11:55:31 AM EST

I'm not finding the claim on google, and what I do find does not jibe with it being him, so I'm wondering if I confused him with another writer; I read it a long time ago.  I did find some chat about how ST progressed from "Kirk kills some bad machines" to "Kirk understands the machines," but that is a long way off from "Kirk is a machine."
Campus Crusade for Cthulhu -- it found me!
[ Parent ]
Intelligence vs Consciousness (3.00 / 3) (#49)
by Mason on Tue Jun 28, 2005 at 02:52:12 PM EST

The choice of phrase is very important here.

An intelligence is capable of solving a tricky problem.  Usually the actual problem solving is done by a human.  For example, setting up a genetic algorithm tailored to arrive at a desired result through simple iteration can only succeed or fail based on the work of the human intelligence who originally designed the system.
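A toy example of what I mean (hypothetical Python; the target string and fitness function are deliberately arbitrary): a simple evolutionary search in the genetic-algorithm family "solves" the problem, but only because a human already encoded the goal in the fitness function and the representation.

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"      # chosen by the human designer
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(candidate):
        # The human-designed part: define what counts as "good".
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in candidate)

    def evolve(generations=1000, pop_size=100):
        best = "".join(random.choice(ALPHABET) for _ in TARGET)
        for _ in range(generations):
            population = [mutate(best) for _ in range(pop_size)]
            best = max(population + [best], key=fitness)
            if best == TARGET:
                break
        return best

    print(evolve())

The iteration is mindless; whatever problem solving there is lives in TARGET and fitness(), which the human supplied.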

A consciousness is capable of deciding which problems are meaningful and worthwhile enough to try and solve, and then devising its own problem solving strategies.  In a broader sense, a consciousness analyzes a stream of sensory data and from it addresses the infinitely-recursive meta-problem of existence.

AI is often conceived as seeking to construct consciousness, while most of its modern acolytes have long since aimed their sights lower, even as they continue making grandiose claims.  I agree with the article that the most central problem is a lack of understanding regarding the entities that AI seeks to reproduce, but I feel there's a further layer of misunderstanding and dishonesty regarding the basic goal of the research.

The Turing test is really the most elemental form of this ignorance.  

Consciousness vs Will (none / 0) (#65)
by Eight Star on Wed Jun 29, 2005 at 12:11:47 AM EST

Consciousness is a philosophical term dealing with awareness. We will probably never know if a computer possesses consciousness. Personally, I think all matter is conscious.

What you seem to be looking for is a computer with a will, or agency. An agent. I know that's what I'm looking for.

Both of these are independent of 'intelligence', and I agree that most of the AI field isn't working towards agency.

I don't have a problem with the Turing test itself; I have a problem with some modifications of it, and how they are used.

If your program can talk to me every day for a year, and satisfy me with its ideas and thoughts, then yes, it is a person. But that doesn't mean that your chatterbox that can talk on one subject for 20 minutes on a good day is at all intelligent, or even on the right track.

[ Parent ]

Conscious matter (none / 0) (#106)
by schrotie on Thu Jun 30, 2005 at 08:39:43 AM EST

Personally, I think all matter is conscious.
You are the first person I've "met" who shares that view (disregarding animists, and Leibniz, who is too long dead). I'd like to read a bit more detail on what you believe.

I don't actually believe a stone has a consciousness anything like we do. I simply believe consciousness is no special feature that arises from specially designed systems. It is not an "emergent" property. We are systems that are reconfigured by outside stimuli; the systems that constitute us have a recurrent causal topology, which means that such systems are also reconfigured by inside stimuli, allowing for self awareness; and since we are those systems we can't fail to realize (half meaning the root of the word) these reconfigurations of ourselves.

We also have lots of features that make us special: memory (which allows the perception of self continuity), complex symbolic processing (which only a few animals have) and at times an allocentric world model. I believe the latter is promoted by language and is to a large degree a cultural rather than biological achievement. It also sets humans apart from most or all animals.

[ Parent ]

There are others of us... (none / 0) (#112)
by Eight Star on Thu Jun 30, 2005 at 11:33:41 AM EST

I think that whenever an atom (for example) undergoes a change (a stress applied/released, bonding, photon strike, etc) it experiences that change. Not like our experience, because it has no brain and no memory, and the experience is not information-rich to begin with.
When you take any group of atoms, they form a collective consciousness. When you step on a rock the rock experiences the pressure exerted on it as a whole, and has a richer experience than an atom, as different parts of it are stressed in different ways. But it still has no memory or processing.
The way that these atomic consciousnesses combine to form collective consciousnesses obeys certain rules, probably relating to the chain of causality. Certain configurations lead to different specific richer experiences. This is why I perceive the color red the way I do, while an atom or rock would not experience 'red' even if you hit it with a red photon.
I came to these conclusions because I see no way to explain consciousness arising from non-conscious parts, and I don't believe in souls. Quantum Mechanics seems to indicate that consciousness does play a role at the quantum level, so it seems reasonable that all matter is conscious, and maybe all quanta, all everything.
Other than QM, no other science is close to addressing consciousness, or even trying.

[ Parent ]
Except for the solipsists among us (none / 0) (#133)
by schrotie on Fri Jul 01, 2005 at 04:43:17 AM EST

I came to these conclusions because I see no way to explain consciousness arising from non-conscious parts, and I don't believe in souls.
Well, that is no real theoretical problem. It could be an emergent property -- like frequency in a resonant circuit. Frequency is not a property of any of the constituent parts, yet it is a property of the whole system.

My reason for choosing that axiom was the history of other scientific theories. There was the time when a special substance (phlogiston) was postulated to be responsible for fire. Similarly the anima (soul, also meaning breath) was postulated to be the substance of life (most prominently argued by Descartes).

In both cases (fire and life) it turned out that no special substance was necessary to explain the phenomenon. Yet I would not call fire or life emergent properties. In both cases it is very hard to draw a clear border between systems having the property (or rather quality in this case) and systems not having it. Life is also very complex. That is true even for border cases like viruses. This complexity makes it hard for us to really understand life. Biologists have no problem with the notion and we tend to feel that we understand it. Yet even a bacterium remains some kind of a miracle. It's nothing like frequency in a resonant circuit.

I would not say that atoms or stones have consciousness. Or rather I'd only say it to make a strong proposition. The term is too heavily charged with human consciousness. I prefer the terms realize (with a rhetorical emphasis on making real) and experience (without necessarily implying memory).

[ Parent ]

Whem machines get intelligence (1.33 / 3) (#51)
by thelizman on Tue Jun 28, 2005 at 04:01:21 PM EST

humans will probably cease to have it.
--

"Our language is sufficiently clumsy enough to allow us to believe foolish things." - George Orwell
re: title (none / 0) (#54)
by thelizman on Tue Jun 28, 2005 at 04:55:59 PM EST

see what I mean?
--

"Our language is sufficiently clumsy enough to allow us to believe foolish things." - George Orwell
[ Parent ]
The problem is... (2.91 / 12) (#52)
by jd on Tue Jun 28, 2005 at 04:26:35 PM EST

...that AI experts have tried to solve the wrong problems. It is not that their solutions are wrong, it is that their underlying assumptions are wrong.

Think about this for a moment. The "intelligent" section of the human brain does not deal with "noisy" data from the real world. It deals with pre-processed data that has been cleaned up to a very large extent by the signal processing sections of the brain.

In other words, we live in an artificial reality, created by our own minds, that is merely updated by sensory input. There is no direct feed from any of the senses to those segments of the brain involved with actual thought.

So, why are we building machines that rely on such direct feeds, whether they are robots or chat-bots? That would seem to contradict the one element we know to be consistent across ALL intelligent life (assuming some animals to be intelligent, and there are good grounds for assuming that).

It would seem to make much more sense for an AI-to-be to exist in a virtual reality that was 100% self-contained and self-consistent, and that occasionally received input from external sensors, because that is how organic brains work.

Such an AI could be based on evolving algorithms, because there would be no need to rely on meshing with mechanical gearing and mechanical sensors, all of which have finite response times and limited resolution. Thought is not so limited, and so anything which is built to think should not be bound by those limitations.

We must also stop thinking of intelligence as binary - present or absent. It is a gradual thing, and something identifiable as intelligence of a sort is present in many organisms. Examples being:

  • Crows can not merely use tools, they can manufacture them
  • African Grey Parrots can understand grammatical constructs and basic arithmetic
  • Dolphins can apply "delayed gratification" to improve the return they get
  • Great Apes can learn sign language and communicate at a fairly high level of complexity
  • Chickens can be trained to use thermostats to control the temperature of their environment

Some of these have traditionally been associated with being "clever", others have not been considered smart at all, but all have demonstrated something that cannot be explained by purely mechanical responses. There is some sort of reasoning going on, however slight.

There is almost no common factor between any of them. Avian brains are VERY different from mammalian ones, for example. There is no common area of the brain that can be used for thinking. The ONLY thing that is true in all cases is the abstract nature of the senses.

Ok, are there counter-examples? Sure. There are many organisms that have senses that are directly wired to responses. Horseshoe crabs are an example. Not a single one of these organisms is considered intelligent in any sense of the word.

From this, it can be deduced that this is a vital pre-requisite of intelligence - the ability to internally model the world and to be able to rely on that internal model more than on the senses themselves.

Can we test this deduction in any way? Maybe. Those who are considered "most" intelligent amongst people are those who are "withdrawn". Genius is often considered very close to madness, where the most common form of madness is schizophrenia, which is where the internal model and external reality are never synchronized.

From this, we can see some indication that the degree to which we rely on senses is the degree of UNthinkingness that we possess, and that our ability to create, invent or even discover comes solely from our internal models, not external reality.

On this basis, it would seem clear to me that AI should do likewise. A robot that only needs to gather fresh information when there is something genuinely new and unpredicted occurring is likely to possess more "intelligence" than one that relies entirely on sensory feeds.
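As a purely hypothetical sketch of such a robot (Python, with made-up dynamics and thresholds): it acts on its internal model and pays for detailed sensing only when a cheap check shows the world has genuinely diverged from prediction.

    def internal_model(state):
        # The robot's belief about how the world evolves (here: constant velocity).
        position, velocity = state
        return (position + velocity, velocity)

    def coarse_check(world):
        # Cheap, low-resolution glance at the world (rounded position only).
        return round(world[0])

    def full_sense(world):
        # Expensive, detailed sensing.
        return world[0]

    def run(world, steps=50, surprise_threshold=2.0):
        belief = world                    # start synchronised with reality
        expensive_reads = 0
        for _ in range(steps):
            world = (world[0] + world[1], world[1] * 0.95)   # reality drifts
            belief = internal_model(belief)                  # prediction
            # Resynchronise only on genuine surprise.
            if abs(coarse_check(world) - belief[0]) > surprise_threshold:
                belief = (full_sense(world), belief[1])
                expensive_reads += 1
        return expensive_reads

    print(run((0.0, 1.0)))    # far fewer expensive reads than 50 steps

The robot spends most of its time in its own model of the world and consults its senses only when that model fails, which is exactly the balance described above.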

We do not need, then, a definition of "intelligence", as much as we need a machine that can derive its own definition from its own model of the world.

Let's clean inputs (none / 1) (#59)
by svampa on Tue Jun 28, 2005 at 07:00:48 PM EST

Think about this for a moment. The "intelligent" section of the human brain does not deal with "noisy" data from the real world. It deals with pre-processed data that has been cleaned up to a very large extent by the signal processing sections of the brain.

Well, let's pre-process the input and transform it into a more adequate representation, in order for it to be processed by the real-brain module. In an artificial brain, pre-processing of input should be like the firmware of a device. That would probably be the easiest part.



[ Parent ]
Uh, no. (none / 1) (#64)
by jd on Tue Jun 28, 2005 at 11:53:06 PM EST

The ability of the brain to inject data where there is no input (or where the input is uncertain) is a vital part of the system. Firmware wouldn't do this. Indeed, all that firmware is capable of doing (because it is generally very simple, as it is typically run on a device with limited processing power) is very basic operations.

We're not talking here about edge detection, movement detection and other simplistic algorithms. We're talking about an entire virtual reality, where the sensory data is merely used to adjust values that are supposed by the brain to be present. Firmware would be absurdly underpowered to do anything on this level of complexity.

Probably the best example of the brain's virtual model was given on the original Connections series, by James Burke, where (in a pseudo detective setting) he illustrated how easily fooled the brain was, by incomplete information being used to construct an incorrect model of reality. The model was built up, clue by clue, to a scene which simply didn't exist but which appeared to be totally self-consistent.

Simpler examples include almost any optical illusion, the ability to see "doubles" of a person (where superficial similarities result in the misfiring of recognition by the brain), the hallucinations resulting from sensory deprivation, etc.

None of these could easily be reproduced in firmware, and even if they could be produced at all, they aren't. The brain's reliance on internal models is two-edged - it gives an advantage, in that it does not require the complete processing of complete sensory data before the person is able to act, but it also means that the brain can be fooled by similar-enough mismatches.

I believe that the internal representation of the world is critical to intelligence, but that means (by implication) that intelligence REQUIRES the ability to be fooled on incomplete or approximate data.

[ Parent ]

Which episode? (none / 0) (#119)
by chizzadwick on Thu Jun 30, 2005 at 02:31:55 PM EST

I was just wondering if you remembered the name of the Connections episode you refer to?

[ Parent ]
It is much worse (none / 1) (#61)
by whazat on Tue Jun 28, 2005 at 07:58:23 PM EST

It has to deal with noisy or misfiring computation.

Examples: getting drunk, and a head rush. Both go outside the normal operating parameters of the brain, yet if we did similar things to a computer it would break in a second. The brain somehow manages to keep up the most important functions and generally keep us in one piece. The brain does not live in a clean room.

Your rough definition of "intelligence" seems to be all externally facing, creating models of the world. Yet I would argue that creating models of itself, so it could predict how much redundant computational hardware to devote to a function (depending upon its importance), is also a worthy problem of equal complexity.


[ Parent ]

Definitely (none / 1) (#63)
by trane on Tue Jun 28, 2005 at 11:09:59 PM EST

creating models of itself is important, leading to 'introspection', the ability to test itself, etc.

You seem to want to abstract out natural language from intelligence (which is perhaps why you reject chatbots as a worthwhile pursuit). You may be right, but natural language is so central to our thought processes right now that you might have to go through it to get to intelligence. As opposed to circumventing it as you seem to want to do...

[ Parent ]

You misunderstand (none / 0) (#70)
by whazat on Wed Jun 29, 2005 at 07:56:13 AM EST

As I said before, chatbots are great for being chatbots. And some of the code they develop may be important, but at a later date and only embedded in a certain type of system.

The trouble is that intelligence leads people to think roughly this chain of thoughts:


  1. Intelligence is problem solving

  2. Pick a hard problem

  3. Create something that can solve it

  4. Ipso facto we have created something intelligent


This I don't think is a fertile field to plant in. I prefer to follow this path (and would rather not be alone!)

Try and create systems that can be useful in many different situations with different programming. Some of the situations might require what we know as intelligence.

Turing and Von Neumann pretty much have non-reconfigurable systems of this type sorted. And you can emulate the reconfigurable systems on them up to a point.

[ Parent ]

Self modeling (none / 0) (#103)
by schrotie on Thu Jun 30, 2005 at 08:13:45 AM EST

Modeling oneself is not important just for self-testing; it is plausibly the evolutionarily oldest kind of mental model. A mental model is only useful to a motile organism if it is manipulable - the organism has to have the ability to change the model. Only then can the model be used instead of reality to try things out, without the always more or less dangerous act of actually testing reality.

Now there is one part of reality that the brain always carries along in any organism -- the body. Thus the body (the self?) is the most useful part of reality to model simply because the body is vital in each and every action the organism might wish to perform.

An interesting side note (the sample size is too small to validate the theory): among the supposedly most intelligent species on the planet are surprisingly many with complex motorics that call for complex self-models: us (primates have very complex hands), birds (body motion in 3D, complex feet), elephants (their trunk is a kinematic nightmare) and octopi (the whole animal being a kinematic abomination). Cetaceans are a major exception to this. They hunt in 3D, but then very many animals do.

These ideas come from my boss, Prof. Holk Cruse.

[ Parent ]

Interesting (none / 0) (#117)
by trane on Thu Jun 30, 2005 at 02:08:07 PM EST

Thus the body (the self?) is the most useful part of reality to model simply because the body is vital in each and every action the organism might wish to perform.

What about abstract thinking tasks? Say you form a mental model on how you learn most effectively; then you test that by trying different methods and observing the results...

[ Parent ]

Well ... (none / 0) (#132)
by schrotie on Fri Jul 01, 2005 at 03:52:38 AM EST

... I don't really understand what the question is.

Abstraction is a form of generalization and thus rather useful. It allows you to model more diverse situations with one model.

However, what you describe requires lots of advanced "technologies": very high level (yet very detailed and fine-grained) meta-models that can be modified in ways that are beyond experience (presuming you want to test learning methods you have not tried before); long term memory that needs only one example for memorizing; objective evaluators; the scientific method. I don't think it's feasible even for humans. You'll have to go and test different learning methods in reality to evaluate them. Human learning is too complex for us to model.

[ Parent ]

When you said: (none / 1) (#89)
by Peahippo on Wed Jun 29, 2005 at 08:39:54 PM EST

"we need a machine that can derive its own definition from its own model of the world"

... I knew you were the right poster to add my blather to. (You at least understand that internal modeling of the world is a key feature of intelligence.)

Even the dumbest Human is very good at general tasks like finding food, avoiding hurtling automobiles, and the like.

But the average Human is very bad at qualitative thought. After all, the average person believes in "God", and that alone demonstrates a failure to adequately internally model the external universe.

We have immense trouble with AI since we just don't understand intelligence AT ALL, and often get caught up in what I described above (general tasks of running a Human body in a modern society, and logic structures with thousands of points (i.e. qualitative thought)). And we don't understand intelligence at least in part because so few Humans are fine examples of intelligence.

I found out in my early 20s that nobody really knew what intelligence was ... and that well included me. So over time I had to evolve out of practicality a behavior-based model of what this big-I thing is:

Intelligence is the capacity and desire and gumption to do anything imaginable.

This definition has sufficed for my purposes since I arrived at it. Heck, by my own definition above, I myself am not that intelligent. Peculiarly, this seems to validate it.


[ Parent ]
Intelligence is modeling (none / 0) (#115)
by curril on Thu Jun 30, 2005 at 01:09:02 PM EST

I think that modeling is an effective, testable definition of intelligence. An intelligent system perceives events, develops a model based on those events, and then uses that model to predict future events and identify actions that increase the likelihood of certain events occurring. The degree of intelligence of a system is determined by the class of events that it can predict and its ability to determine actions that bring about specific results.
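A toy sketch of that definition (hypothetical Python; the events and actions are invented for illustration): the system counts which action tends to be followed by which event, then picks the action that makes a desired event most likely.

    from collections import defaultdict

    class Modeler:
        def __init__(self):
            # action -> event -> number of times the event followed the action
            self.counts = defaultdict(lambda: defaultdict(int))

        def perceive(self, action, following_event):
            # Build the model from observed (action, event) pairs.
            self.counts[action][following_event] += 1

        def predict(self, action):
            # Predicted distribution over events following this action.
            total = sum(self.counts[action].values()) or 1
            return {e: n / total for e, n in self.counts[action].items()}

        def choose(self, actions, desired_event):
            # Pick the action whose model makes the desired event most likely.
            return max(actions, key=lambda a: self.predict(a).get(desired_event, 0.0))

    m = Modeler()
    for obs in [("press_button", "door_opens"), ("press_button", "door_opens"),
                ("wait", "nothing"), ("press_button", "nothing"), ("wait", "nothing")]:
        m.perceive(*obs)
    print(m.choose(["press_button", "wait"], "door_opens"))   # press_button

The class of events it can model is tiny, so by the definition above its degree of intelligence is correspondingly tiny, but the perceive-model-predict-act shape is the same.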

so few Humans are fine examples of intelligence.

I beg to differ. Human beings are amazingly intelligent. Try to teach a dog to read or a chimpanzee to make a grilled cheese sandwich, and you will begin to appreciate how fantastically intelligent the average human being is. The fact that people often act in ways that seem irrational or self-destructive to you does not diminish this fact. It takes a tremendous amount of intelligence to comprehend an abstract concept like "god", and while such models may not appear to have much to do with the physical universe, they are tremendously useful in creating strong social structures and so the ability to understand and manipulate religious tenets can be very beneficial.

[ Parent ]

Modelling vs Learning (none / 0) (#126)
by whazat on Thu Jun 30, 2005 at 07:14:34 PM EST

Is there any difference in your view?

Also how does being told to push a button to open a door fit into your perception -> model view of intelligence?
 

[ Parent ]

RE: Modelling vs Learning (none / 0) (#157)
by curril on Sun Jul 03, 2005 at 10:09:36 PM EST

Learning and modeling overlap. Learning can be rote memorization, or it can be the creation of a new model for events. But modeling isn't just the creation of a model; it is also the application of the model to specific circumstances--whereas learning implies merely the acquisition of information, not necessarily the application.

Also how does being told to push a button to open a door fit into your perception -> model view of intelligence?

The listener is being told to create a new model, one in which a particular button and a particular door have a specific relationship. In order to understand that, however, the listener must have a model for language to know that certain sounds are associated with certain objects and properties, and that other sounds indicate certain relationships between objects and properties. And that requires the ability to create new models on the fly, which lies at the heart of intelligence.

[ Parent ]

If intelligence to you ... (none / 0) (#128)
by Peahippo on Thu Jun 30, 2005 at 10:55:48 PM EST

... is only a modelling exercise, and you find the construction of a frankly incorrect model like "God" to be of merit, then we are talking about two different things. I'm not denying the innate power or usefulness of forming an internal model of the universe. I'm talking about forming said models and then defending them despite all evidence to the contrary. Isn't there a saying out there, about how "stupidity is doing the same thing again and expecting a different result"?

A modelling mechanism that continues to defend a frankly incorrect model is not something that befits the term "intelligence". Construction-wise, yes, it's still an achievement. But practicality-wise, it fails miserably.

We make programs that crash and mangle data all the time. We make things that fail all the time. Are we therefore the new gods in your viewpoint?


[ Parent ]
Intelligence does not require perfection (none / 0) (#158)
by curril on Sun Jul 03, 2005 at 10:22:27 PM EST

Yes, I agree that people do relatively stupid things, but I do not hold AI to the same standards of intelligence that you hold people to. If someone created an AI that believed in creationism, I would marvel at the accomplishment, not sneer at the AI's inaccurate model of the universe.

I don't consider intelligence to be an all-or-nothing thing. Someone who remains obstinately stubborn about certain inconsistent religious beliefs may still provide fascinating and insightful observations about the structure of the atom, or develop a vaccine for some deadly disease. Yes, an ideal intelligence would constantly revise all models to accurately fit available information, but the fact that people don't fit that ideal doesn't make them completely unintelligent, it only means that they may be irrational in certain areas.

[ Parent ]

Let it be (3.00 / 5) (#57)
by tweetsybefore on Tue Jun 28, 2005 at 06:04:30 PM EST

Just let people fuck around with what they find interesting; who cares if you hate chatbots and whatever. Science is about learning for learning's sake. You don't have to make it practical. Looking only for things you think are practical severely limits you. You often don't know if a path you are travelling down will bear fruit.

I'm racist and I hate niggers.
Why I don't think AI is possible (2.66 / 3) (#66)
by IHCOYC on Wed Jun 29, 2005 at 12:21:50 AM EST

AI is impossible because even if we managed to create it, we might not recognise it.

If we had complete and perfect knowledge of any human brain and how it was developed, we might not attribute much intelligence to it either. "The pattern in the visual cortex here indicates that something with a dull green colour is being perceived. The ability to perceive this wavelength was put there to enable the organism to obtain food, since this wavelength is often reflected by edible leaves. It's a slip of green paper; this represents something desired by the organism, because of a learned complex of cultural behaviours that have been imprinted on the following neurons. . . ."

If we could look at the human brain with this level of detail, the "mind" would begin to disappear. We could identify every input and the source of every output. Conflicting impulses could be assigned values and the decision making process seen in the pattern of the firing of neurons.

Likewise, we will never see "minds" in machines because we already know that any machine built by human beings can at least theoretically be known at this level. If we want to know how it works, we can read the source code, and take it apart. We might be able to do that to human brains and minds some day, but the process of decompiling them is a bit messy.
--
"Complecti antecessores tuos in spelæis stygiis Tartari appara," eructavit miles primus.
"Vix dum basiavisti vicarium velocem Mortis," rediit G

Oh hum (none / 0) (#75)
by khallow on Wed Jun 29, 2005 at 04:16:01 PM EST

AI is impossible because even if we managed to create it, we might not recognise it.

Well, your assertion would be wrong, if we create AI and recognize it. So why is this possible outcome "impossible"?

Stating the obvious since 1969.
[ Parent ]

Because "mind" requies mystery (none / 1) (#102)
by IHCOYC on Thu Jun 30, 2005 at 07:31:24 AM EST

So long as we can see the man behind the curtain, we will not acknowledge the existence of intelligence: we will see and understand that the responses of the system are determined by its programming.
--
"Complecti antecessores tuos in spelæis stygiis Tartari appara," eructavit miles primus.
"Vix dum basiavisti vicarium velocem Mortis," rediit G
[ Parent ]
ok, I see then (none / 1) (#116)
by khallow on Thu Jun 30, 2005 at 01:41:31 PM EST

Keep shifting the definition of intelligence to keep it out of reach. Intelligence is a moving target. Here, I'm optimistic that the more or less rational part of the scientific community will come up with useful, well-defined ideas of intelligence that we can obtain. The rest of humanity will do whatever they do.

Incidentally, I'm thinking that we may be able to have great intelligence without sentience, i.e. without a concept of self-awareness.

Stating the obvious since 1969.
[ Parent ]

Intelligence without self-awareness (none / 0) (#120)
by artis on Thu Jun 30, 2005 at 03:39:49 PM EST

I hope it's possible, because the thought that my subconsciousness is self-aware is quite disturbing.
--
Can you know that you are omniscient?
[ Parent ]
Actually, that's exactly the problem (none / 1) (#123)
by vadim on Thu Jun 30, 2005 at 06:32:34 PM EST

Intelligence has been defined as something magical only we have. There's plenty of proof of that, beginning with the number of people who refuse to say animals could be intelligent. Seriously, my philosophy teacher would argue that dogs can't have feelings!

In the computer field, it's been like that too. We thought that calculating 2+2 was clever, then we came up with the computer. Oh, it can be done mechanically, so that can't be intelligence.

We thought that a machine capable of playing chess would be intelligent. Now computer chess playing is considered pretty much brute force, and the current programs are able to give trouble to the best players, on fairly inexpensive hardware. So that can't be it either.

We thought that speech recognition required intelligence. While that's not a solved matter, speech recognition is working fairly well these days.

And so on. Each time somebody says "If computers did <this> then they'd be intelligent", we go and solve that problem, and suddenly it can't be intelligence since the problem was solved mechanically.

Just the same thing that happened with animals. "Animals don't have feelings", "animals don't use tools", etc.
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

We know how we did it. . . (none / 0) (#138)
by IHCOYC on Fri Jul 01, 2005 at 02:30:19 PM EST

. . . .suddenly it can't be intelligence since the problem was solved mechanically.
Rather, it can't be intelligence because we know how we did it: this is what allows us to see the mechanical determinism in the outcome of the program. We can run through the process step by step. The mystery of intelligence is all about not showing your work.
--
"Complecti antecessores tuos in spelæis stygiis Tartari appara," eructavit miles primus.
"Vix dum basiavisti vicarium velocem Mortis," rediit G
[ Parent ]
quantum mind (2.00 / 6) (#68)
by spadefoot on Wed Jun 29, 2005 at 04:28:47 AM EST

I know, "quantum mind" sounds like one of those pseudo-science things were everything that is inexplicable is labeled "quantum". But a pretty smart guy, Sir Roger Penrose says that "mind" (I know not exactly the same thing as AI but in the same ballpark) is incomputable and probably quantum in nature. One of his most interesting points is a there are some provenly incomputable problems that can be decided by humans which seem almost obvious. One example is non-periodic tilings of a plane. And I think he has some other examples. What this means (perhaps) is that the mind can do things that a Turing Machine can't. And if a Turing Machine can't do it then a computer can't. Penrose has theorized that this mysterious incomputable mind is a result of quantum effects in nano-tubes and the like in the brain.

Now next you will say, but we have math to describe quantum effects. But having a mathematical description of something is not the same thing as a working model. Take for instance the three-body problem, which has no general closed-form solution. You can write down the mathematical description of the three-body problem, but you can't solve it analytically. And we learned a few decades back that, because such systems are chaotic, computers can't simulate them faithfully over long timescales either.

IMHO, and it admittedly is a relatively uneducated opinion (BS in applied science), I believe that in the next 10 years there will be a shake-up in science. I think the first big shake-up will be in the realm of quantum computers, and the realization that they are not able to do all the magic stuff that is predicted. After that, many problems that were thought to be solved will be found to be unsolved, along with a realization that science had failed due to the nature of the human ego. After that, there will be some major discoveries and the world will change again.

If it looks like quantum pseudo-science... (none / 1) (#71)
by Viliam Bur on Wed Jun 29, 2005 at 11:18:21 AM EST

I know, "quantum mind" sounds like one of those pseudo-science things were everything that is inexplicable is labeled "quantum".

Exactly.

But a pretty smart guy, Sir Roger Penrose says that "mind" (...) is incomputable and probably quantum in nature.

So... ehm.... why did you decide to write this piece of pseudo-science anyway?

OK, now seriously:
Mind is not a quantum anything. You need a new paradigm; science is overrated. Mind is a multidimensional hologram. Our extraterrestrial overlords are firing waves of mental radiation (remember, according to the theory of relativity, waves can change to particles, because everything is relative) that cause evil scientists to suppress important information. But the internet is free. They cannot stop us. Mind is a hologram! HOLOGRAM!!! Please send this message to all your friends; it was written by the holy dalai-lama; do not break the chain, or you will suffer bad karma...

[ Parent ]

meh (none / 1) (#74)
by spadefoot on Wed Jun 29, 2005 at 03:25:22 PM EST

I never said I believed Penrose's supposition, and I don't. I only find interesting his evidence that there is something incomputable about mind. I am certain that AI is possible, just not with digital computers, and especially not with current approaches. According to current approaches, if you build a complex enough ontology or logic construct then somehow AI will pop out, even though there is no evidence of such a thing. There are a lot of things that computers can't do. AI is one.

[ Parent ]
Uncomputability of mind (none / 0) (#78)
by whazat on Wed Jun 29, 2005 at 06:38:29 PM EST

Have a look at Selmer Bringsjord. Specifically the Lovelace test.

It is arguments like his that suggest to me we do not have a well defined concept of mind, which is why I don't think it is worth aiming for.

[ Parent ]

testing AI (none / 1) (#81)
by spadefoot on Wed Jun 29, 2005 at 07:10:04 PM EST

I don't think that the Turing Test or the Lovelace test is adequate to detect true AI. Turing is saying, "if it looks like a duck and quacks like a duck then it's a duck," but it is not hard to think of analogies of the Turing Test where the observers can be fooled. The Turing Test just encourages evolving better and better simulations.  As far as the Lovelace test goes, it is better, but I have seen relatively simple computer programs produce unexpected beneficial behavior. For instance, some of the Rogue-style games have features that were never conceived of by the authors but arise from interactions between many flags, values and switch statements. Just because the author never conceived of it is not enough to say there is AI involved.

[ Parent ]
Lovelace test (none / 0) (#84)
by whazat on Wed Jun 29, 2005 at 07:40:41 PM EST

Is deeper than that. It is not that a program does something that the programmers didn't predict. It is that the program does something that the programmers couldn't predict or explain, given enough time and information.

It fits in with his belief that Humans have Free Will. And so they can't be predicted by definition, and so are uncomputable.


[ Parent ]

free will (none / 1) (#86)
by spadefoot on Wed Jun 29, 2005 at 08:13:34 PM EST

it's another of those impossible-to-prove-or-disprove philosophical theories. It boils down to whether the universe is deterministic or non-deterministic. If the universe is deterministic then you will still appear to have free will, because you are a part of the deterministic system, but it is an illusion. The very asking of the question of free will is a result of the clockwork of the universe. You have no choice about whether to ask the question or what you will decide to believe. If the universe is non-deterministic then all bets are off. Up until the invention of quantum theory it was generally accepted that the universe was deterministic. With quantum theory came the idea that quantum events could be magically random (as opposed to simply random like rolling dice in a clockwork universe): that particles decay at times that are not just unknowable but completely unlinked to any knowable external or internal mechanism. Poof, they just magically decay at unknowable but statistically predictable times. IMO quantum theory will be supplanted in the next few years and we will be back to a deterministic universe. And therefore Free Will is just an illusion.

[ Parent ]
Why do you even have to 'detect' AI? (none / 0) (#85)
by the on Wed Jun 29, 2005 at 07:56:39 PM EST

If it plays a good game of chess, it plays a good game of chess. If it can entertain you with good jokes, then it can entertain you with good jokes. If it can prove new theorems then it can prove new theorems. Who cares if it's AI?

The Turing test is like a hammer test, a test that is guaranteed to tell if an object is a hammer. It's tricky to build a hammer test. Just looking to see if it can hammer nails into wood isn't good enough, after all, you can do that with a rock, but a rock isn't a hammer. Let people invent tests like the hammer test, but I'll just grab the nearest available hammer (or rock if that's all that is to hand) to knock my nails in.

The real reason why people need a test has nothing to do with testing for intelligence. People want to know if a machine can be a moral agent because one day robots will commit crimes and they'll want to know whether to grant them rights, or just switch them off.

--
The Definite Article
[ Parent ]

good points (none / 1) (#88)
by spadefoot on Wed Jun 29, 2005 at 08:28:52 PM EST

But AI is an umbrella label for all kinds of problem-solving machines. It is a tag that seems to get put on chess-playing machines and chatbots alike, in a way that is almost like marketing. Remove the AI label from a project and it is much less marketable to venture capital and academia alike.

Penrose had a very interesting example of why chess playing computers are not AI. He presented an endgame in chess that would stump even the most advanced chess program of the time, and yet even someone with a weak background in chess could almost immediately "see" the game was unwinnable. I am sure that chess programs now have specific procedures to recognise that particular kind of position. But there is the real problem. Chess programs don't really exhibit AI. Chatbots aren't AI. The only projects that can still even argue they are AI are things like Cyc and the like. And there we have the marketeers doing their best to apply a glittery label to attract money.

[ Parent ]

I suspect (none / 0) (#90)
by whazat on Wed Jun 29, 2005 at 08:50:39 PM EST

That they will be granted the same moral rights as a dog. We will punish their owners for not training them correctly or for training them to break the law. They may be switched off, or more likely confiscated and retrained. Or possibly modified so that the government can train them as well as the owner.

That is they will be moral agents, but most likely subservient moral agents.

This would not be true of all systems we could make, just the ones that we seem most likely to make.

[ Parent ]

Can non determinism explain intelligence? (none / 0) (#134)
by Viliam Bur on Fri Jul 01, 2005 at 05:15:26 AM EST

IMHO, the most fascinating thing for many philosophers about quantum theory is something like "Look, the universe is not deterministic; it has been proven scientifically!" Let's agree with this interpretation, for the sake of argument. Then, those philosophers sometimes jump to a quick solution: "So, humans are not deterministic, but computer programs are deterministic, and that's why they cannot be intelligent."

Let's examine it more closely. What does "non deterministic" mean? For me, it is something like "random". Let's say that under some special conditions some small particles behave absolutely randomly, not predictably. Now, connect your pseudo-AI computer to a random number generator (based on observing small particles, therefore generating truly random data)... so the whole system becomes incomputable and unpredictable. But will it become intelligent solely because you have replaced the pseudo-random number generator with a truly random number generator? If yes, then true randomness has a very interesting quality: the ability to give intelligent results that no pseudo-randomness can give. This is IMHO what Penrose claims, and I see only a big unsupported logical jump there, something like "non deterministic = unpredictable = miraculous = explaining intelligence."
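To illustrate the jump, here is a hypothetical Python toy in which the same trivial search runs once from a deterministic pseudo-random generator and once from the operating system's entropy source. The second run is unpredictable and not replayable, but it is not thereby any more "intelligent"; it solves exactly the same class of problem in exactly the same way.

    import random

    def hill_climb(rng, target=1000, steps=10000):
        # A mindless search that accepts random moves which do not worsen things.
        x = 0
        for _ in range(steps):
            candidate = x + rng.choice([-1, 1])
            if abs(candidate - target) <= abs(x - target):
                x = candidate
        return x

    pseudo = random.Random(42)         # deterministic, perfectly replayable
    true_ish = random.SystemRandom()   # draws on OS entropy (e.g. /dev/urandom)

    print(hill_climb(pseudo))          # 1000
    print(hill_climb(true_ish))        # 1000

Swapping the source of randomness changes predictability, not capability.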

[ Parent ]

This is the part where you tell us... (none / 1) (#77)
by sudog on Wed Jun 29, 2005 at 06:05:56 PM EST

...that you watched the 5 DVD set called "Consciousness" which is a follow-up to that monumentally presumptuous and preachy "What the bleep do we know?"

Then we can sit around and laugh about it.


[ Parent ]

No I have not watched those (none / 1) (#83)
by spadefoot on Wed Jun 29, 2005 at 07:26:06 PM EST

did you like them?

[ Parent ]
Not particularly, but they held my interest. (none / 0) (#139)
by sudog on Fri Jul 01, 2005 at 02:42:22 PM EST

A few of the interviewees presented their views in a more "here's something interesting to think about that might be a plausible theory" fashion, instead of the usual zealot's "This is the way it is!"


[ Parent ]
You've read... (none / 0) (#80)
by joecool12321 on Wed Jun 29, 2005 at 07:07:34 PM EST

..."The Emperor's New Mind", which is a great example of what happens when physicists try to do philosophy. Next, please.

[ Parent ]
the problem is (none / 1) (#82)
by spadefoot on Wed Jun 29, 2005 at 07:23:30 PM EST

that when it comes to AI, science is delving into the realm of philosophy. If you want to remove all philosophy from the subject of AI then you are not left with much. Philosophy is involved with the definition of the words "intelligence", "mind", etc. Without the philosophy what you have left are engineers trying to produce problem solving programs. In other words engineering.  Artificial is a word that seems to me to imply engineering, and not science. The goal is to "artiface" an "intelligence".

And besides, just in case you didn't know, Science is a Philosophy.

"Next, please."

[ Parent ]

You can strip out the mental (none / 0) (#87)
by whazat on Wed Jun 29, 2005 at 08:13:46 PM EST

And still get something different from engineering. When programmers create things that are quite complex, such as genetic algorithms, those things actually become a field of scientific study themselves. This is due to the nethack effect that you noticed. They are even mathematically modelled to try and help us understand how they work.

So "just engineering" is a bit harsh. We also have to determine what constitutes a good problem solver or learning algorithm. Just as the people trying to build better airplanes need to know about Bernoulli lift and turbulence, so people developing problem solvers or learning algorithms need to know about the scientific or mathematical principles that govern them.

[ Parent ]

true (none / 1) (#91)
by spadefoot on Wed Jun 29, 2005 at 08:52:24 PM EST

I have listened to a lecture by Lenat of Cycorp (http://www.cyc.com/cyc/company/lenat) about his approach to AI. Lenat is less of a scientist IMO than a logician. AI seems to be more math and philosophy than science. But as I argued above you can't take the philosophy out of AI. In fact Lenat's lecture contained lots of philosophical concepts. Scientist, philosopher, engineer. In the best of cases all should apply in a subject like AI. But where is the theory of AI? Where is even the definition? And I would call Cyc an engineering project. In fact it's even an open source project http://www.opencyc.org/.


[ Parent ]
I'm interested in philosophy (none / 0) (#105)
by whazat on Thu Jun 30, 2005 at 08:34:00 AM EST

Just not mental concepts. My philosophical interest lies in non-normative epistemology, which explains how humans gain their attitudes towards the world.

[ Parent ]
Tricksy (none / 0) (#94)
by joecool12321 on Wed Jun 29, 2005 at 09:52:42 PM EST

My point is that the mind/body problem is a question of philosophy. Scientists shouldn't be doing work they're wholly unqualified to do.

[ Parent ]
Wrong (none / 1) (#95)
by spadefoot on Thu Jun 30, 2005 at 12:04:52 AM EST

If you look at some of the university programs in Natural Language Processing, for instance, you will find that they are closely associated with the Philosophy dept. In most strong AI projects you will find somewhere in its core an inference engine, which is a program that performs categorical and symbolic logic. Perhaps things have changed since I went to college, but that was a subject in the Philosophy dept. Philosophy depts are not just about ethics. These divisions between philosophy and science and math are completely artificial, and in reality they overlap. And that is why PhD means Doctor of Philosophy. Here are some real world examples.

Stanford Philosophy dept
M.I.T. Philosophy dept
CMU philosophy dept

Philosophy is not just Ethics and Value Theory. In the project called Cyc, there is a lot of talk about Ontology and other philosophical terms. Ontology is not just a side discussion in Cyc, it is one of the core aspects of the technology.
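
(To make the "inference engine" talk concrete, here is a toy forward-chaining rule engine in Python. It is purely an illustration added here, nothing to do with Cyc's actual code. Facts are plain propositions, rules fire whenever all their premises are present, and deciding how to carve the world up into those propositions is exactly the ontology problem.)

    # Toy forward-chaining inference engine; rules are (premises, conclusion) pairs.
    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:                      # keep firing rules until nothing new is derived
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    rules = [
        ({"Socrates is a man", "all men are mortal"}, "Socrates is mortal"),
        ({"Socrates is mortal"}, "Socrates will die"),
    ]
    print(forward_chain({"Socrates is a man", "all men are mortal"}, rules))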

[ Parent ]

Oh my goodness (none / 0) (#96)
by joecool12321 on Thu Jun 30, 2005 at 12:35:29 AM EST

I KNOW IT IS A QUESTION OF PHILOSOPHY! It's problematic when physicists start doing philosophy. In order for me to think that, I have to think that this question falls into a particular domain (the domain of philosophy). Goodness, "reading comprehension" tests never went well for you, did they?

[ Parent ]
Oh my goodness (1.50 / 2) (#99)
by spadefoot on Thu Jun 30, 2005 at 01:19:25 AM EST

Is this philosophy or science or engineering? I will answer that for you: it's philosophy and science and engineering all rolled into one. Cyc is an AI project. In this project the philosophical parts are not just papers written about different aspects of the project. They are central to it. The philosophy is part of the engineering and science. How would a software engineer write the software to handle the ontology if he didn't know a bunch about ontology? It was a scientist (Lenat) who decided that there had to be an ontology. I don't know how to make it any clearer for you. In this project you can't separate the science from the philosophy. So just how would your idea of scientists never doing philosophy fit in this project? Let me give you a clue: you would not get past the first interview at Cycorp with an attitude like that. Personally I think your problem is that you have never studied philosophy and are probably ignorant of just what it all entails. I will grant you that a scientist doing philosophy in many fields of science is not helpful, but in AI it's different. The field of AI is an amalgam of science and philosophy. You can't separate them.

[ Parent ]
shut up idiot (1.50 / 4) (#101)
by zeroesforyou on Thu Jun 30, 2005 at 02:18:48 AM EST

you don't know what the fuck you are talking about

[ Parent ]
Compare (none / 1) (#107)
by Dont Fear The Reaper on Thu Jun 30, 2005 at 08:54:49 AM EST

evolution and religion. Evolution contradicts some people's religious views; is it therefore in the domain of religion, and thus illegitimate? No.

The idea that certain questions fall into certain a priori domains and therefore cannot be addressed except by traditional approaches in those domains is stupid, and ignores the entire history of human inquiry.

[ Parent ]

I think (none / 0) (#92)
by BottleRocket on Wed Jun 29, 2005 at 09:23:18 PM EST

You've got a quantum mind.


[ Parent ]

you are an idiot (1.20 / 5) (#93)
by zeroesforyou on Wed Jun 29, 2005 at 09:51:55 PM EST



[ Parent ]
Right, DOWN WITH AI! (none / 1) (#76)
by jope on Wed Jun 29, 2005 at 06:01:02 PM EST

The vast majority here obviously do not know what AI is and has been for the last 20 years or so: a misleading technical term for certain very specific parts of computer science. Look at the research carried out and you will see that a lot of useful things have come out of "AI" - things as diverse as optimization algorithms, better robots, data mining, natural language processing and many others. None of this has anything to do with the phenomenon of human intelligence -- sometimes what a computer does might "look" on the surface "intelligent" to the uninformed bystander. But almost nobody in the AI community itself is claiming that any of the algorithms they create are "intelligent" in any sense of the word. And those who do are either idiots or old men with neuroses.

I thought everyone knew that by now.

You don't know what intelligence is (none / 1) (#131)
by paranoid on Fri Jul 01, 2005 at 02:53:33 AM EST

Well, things like natural language processing are parts of intelligence. There is no magic rule that, once understood and implemented in software, produces intelligence. Intelligence is a result of complex interactions between specialised brain blocks. The things that we usually call AI are almost all related to re-creating one or another such block. Yes, machine vision by itself doesn't produce a human-like mind. But it is part of intelligence, so the term is correct.

[ Parent ]
Defining Intelligence (none / 0) (#108)
by hardburn on Thu Jun 30, 2005 at 08:57:59 AM EST

In relation to many other animals, humans are physically weak. For instance, the ratio between leg length and overall body size is rather small, which means that we can't run particularly fast. Our arm strength is also not particularly noteworthy.

So what do we do? We develop a wheeled cart that can allow us to get up to much faster speeds. We use the bones of dead animals, then bow-and-arrow, and later guns to make up for our inability to take down other creatures with our arms alone.

I therefore submit this definition of intelligence: Intelligence is how well a creature betters its physical limitations.

I agree that AI has been useless for reaching its main goal. Every time I hear some half-wit say "in 20 years, computers will be fast enough for AI", I want to punch them in the eye. Sometimes these people even acknowledge that Moore's Observation, even if it can be sustained for another 20 years, is insufficient for the production of strong AI, but then will later claim it is all that is needed!

Even if we don't understand the problem, I think we can still get strong AI. We just can't do it with a directed process. The AI must develop under emergent behavior, just as an infant's brain develops. The researchers who create this model may have no clue how the resulting AI works, but I think it will be obviously intelligent.
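
(A stock illustration of emergence, added here as an aside rather than as the poster's own proposal: Conway's Game of Life, where purely local rules produce moving structures that nobody programmed in explicitly.)

    # Conway's Game of Life: global behaviour emerges from purely local rules.
    from collections import Counter

    def step(live):
        # Count live neighbours of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # A cell lives next generation with exactly 3 neighbours, or 2 if already alive.
        return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # the same glider shape, shifted one cell diagonally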


----
while($story = K5::Story->new()) { $story->vote(-1) if($story->section() == $POLITICS); }


How much help? (none / 0) (#111)
by whazat on Thu Jun 30, 2005 at 11:25:29 AM EST

In your definition of intelligence there is wiggle room for computers that most wouldn't consider intelligent to be defined as such.

Take a robot that every evening downloads the latest program from a programmer who writes programs for tasks such as driving cars or using knives.

It could be seen as going beyond its physical limits and getting better at doing so.

However, if you restrict others from helping the computer, you would have to do the same for humans, and then we would no longer be able to stand on the shoulders of giants. I would not be able to read or write without the help of other humans, so by your definition I would be less intelligent - probably not much better at bettering my physical limits than a chimp.

So your definition would need to be more precise to cope with the conflict between the individual and social sides of tool use.

[ Parent ]

Of course, the down side to emerging AIs thusly... (none / 0) (#125)
by skyknight on Thu Jun 30, 2005 at 06:49:16 PM EST

is that it takes a long time, and the resultant product is apt to be as fickle, unpredictable, lazy and cantankerous as actual humans.

It's not much fun at the top. I envy the common people, their hearty meals and Bruce Springsteen and voting. --SIGNOR SPAGHETTI
[ Parent ]
don't get so hung up (3.00 / 3) (#121)
by jcarnelian on Thu Jun 30, 2005 at 06:08:11 PM EST

AI is just a subfield of computer science.  People generally agree on the core skills and knowledge you should have if you claim you have studied AI: logic, rule-based systems, probabilistic inference, machine learning, planning, and a few other subjects.  There are textbooks and conferences on the subject.  There are research results and products.

Is the name perfect for the field?  No.  But that's the way disciplines acquire their names.  Organic chemistry also is not really about the chemistry of organic systems (we have biochemistry for that), but the name has stuck and it's still related.  Likewise, AI is not really about intelligence (human or otherwise) anymore, but the name has stuck, and it's still related to intelligence.

no (none / 1) (#136)
by eschatron on Fri Jul 01, 2005 at 10:45:28 AM EST

AI is just a subfield of computer science.

No, it isn't. Some computer scientists believe that, and that's why they aren't getting anywhere. You can't do it without also doing some philosophy and psychology and linguistics. It's not a subfield of any of them.

[ Parent ]
yes (none / 1) (#137)
by jcarnelian on Fri Jul 01, 2005 at 01:35:55 PM EST

No, it isn't.

Yes, it is.  The major AI labs are part of computer science departments.  The major AI publications and AI conferences are frequented primarily by computer scientists.

Some computer scientists believe that, and that's why they aren't getting anywhere.

AI has made enormous progress over the last 3 decades and is widely used in real world applications.

You can't do it without also doing some philosophy and psychology and linguistics. It's not a subfield of any of them.

Philosophy is irrelevant to AI research.  As for psychology and linguistics, sure, you need expertise in those areas for some AI research, but AI is still firmly part of computer science and it is its own field.  An organic chemist also needs expertise in inorganic chemistry and physics, yet organic chemistry is its own, distinct field.  And a physicist needs expertise in mathematics.

[ Parent ]

jerry fodor is my hero (none / 0) (#145)
by eschatron on Sat Jul 02, 2005 at 06:57:53 AM EST

I don't disagree with most of what you've said (aside from the philosophy thing that I'll get to momentarily). I have two problems.

The first is that I make a distinction between weak and strong AI. Weak AI is domain-limited. It has made progress and is done mainly by computer scientists. AI that transcends domain limitations is not happening in computer science.

Which leads to my second problem, that you totally underestimate the magnitude of the work that needs to be done in fields other than computer science in order for strong AI to happen. It's not just that they need some expertise in linguistics and psychology, it's that they need to work with linguists and psychologists (and philosophers), because the conceptual problems in those fields are not likely to be solved by someone whose main focus is elsewhere.

The fact that AI labs are part of computer science departments reflects the fact that doing actual experiments in AI is a computer science task. I.e. computer science is the empirical end of the field. But there's a lot of non-empirical work that still goes on outside of labs and is essential to their work.

"Philosophy is irrelevant to AI research."

Philosophy is definitely not irrelevant to AI research. Fodor wrote a great article that concluded on this point around 20 years ago, and apparently nothing has changed. If you're interested, IIRC it was "Modules, Frames, Fridgeons, Sleeping Dogs, and the Music of the Spheres", in The Robot's Dilemma: the Frame Problem in Artificial Intelligence. Basically he points out how computer scientists are failing [at strong AI] due to conceptual problems that have been well known to cognitive philosophers for a long time, because they insist on believing what you wrote.



[ Parent ]
Fodor should put up or shut up (none / 0) (#146)
by jcarnelian on Sat Jul 02, 2005 at 09:04:39 AM EST

Basically he points out how computer scientists are failing [at strong AI] due to conceptual problems that have been well known to cognitive philosophers for a long time, because they insist on believing what you wrote.

In fact, serious AI research groups are composed of cognitive scientists, psychophysicists, mathematicians, linguists, computer scientists, statisticians, psychologists, and other experts.  Even with all that expertise, human-like artificial intelligence is still off many decades in the future (let's not forget, however, that computer hardware is also orders of magnitude less powerful than human hardware).

AI probably does need many more insights, concepts, frameworks, and breakthroughs.  However, it does not need philosophers opining about definitions or incorrectly diagnosing that the wrong disciplines are involved.  As for Fodor in particular, I don't think he has ever produced anything that has given me a useful insight or even inspired a useful thought.

[ Parent ]

so (none / 0) (#148)
by eschatron on Sat Jul 02, 2005 at 11:35:40 AM EST

"In fact, serious AI research groups are composed of cognitive scientists, psychophysicists, mathematicians, linguists, computer scientists, statisticians, psychologists, and other experts."

Exactly. So... that doesn't sound like the way you'd describe something that's "just a subfield of computer science".

"However, it does not need philosophers opining about definitions or incorrectly diagnosing that the wrong disciplines are involved."

If you want to use straw men here then I can't point out all of the things we don't need computer scientists doing incorrectly. That's not much of an argument.

"As for Fodor in particular, I don't think he has ever produced anything that has given me a useful insight or even inspired a useful thought."

Fair enough, but if we're being anecdotal then I'll simply say that he has inspired me with useful thoughts.

[ Parent ]
can you be concrete and explicit? (none / 1) (#151)
by jcarnelian on Sun Jul 03, 2005 at 05:00:07 AM EST

"In fact, serious AI research groups are composed of cognitive scientists, psychophysicists, mathematicians, linguists, computer scientists, statisticians, psychologists, and other experts." Exactly. So... that doesn't sound like the way you'd describe something that's "just a subfield of computer science".

That's the way scientific disciplines work.  Organic chemistry uses quantum mechanics, chemical engineering, mathematics, and computer science, but it is a subfield of chemistry: the goal of the field is to analyze and synthesize chemical compounds, and all the other disciplines are merely supporting sciences for that goal.

The relationship between AI and CS is analogous: AI is about creating computational systems that exhibit intelligent behavior.  The "creating computational systems" means that it is firmly a part of computer science, the "exhibiting intelligent behavior" means that it needs support from other disciplines, which it gets (contrary to Fodor's assertions).

"However, it does not need philosophers opining about definitions or incorrectly diagnosing that the wrong disciplines are involved." If you want to use straw men here then I can't point out all of the things we don't need computer scientists doing incorrectly. That's not much of an argument.

Let me assert it more succinctly: Fodor has contributed nothing of scientific or theoretical value to the field of artificial intelligence.  

If you would like to disagree, name a theorem, algorithm, experimental result, or even experimental evaluation of existing methods that he has published that is of any practical value.  Name even a single useful formal mathematical definition and axiomatic system that he has contributed to the field of AI.

[ Parent ]

can you? (none / 0) (#153)
by eschatron on Sun Jul 03, 2005 at 09:38:54 AM EST

"The 'creating computational systems' means that it is firmly a part of computer science"

I agree with that. I suppose I misapprehended what you meant by "just a subfield".

"If you would like to disagree, name a theorem, algorithm, experimental result, or even experimental evaluation of existing methods that he has published that is of any practical value. Name even a single useful formal mathematical definition and axiomatic system that he has contributed to the field of AI."

No, for 2 reasons.

1. That will simply lead to us arguing about which of his theories are correct (if any, in your case).

2. I asserted that philosophy, not Jerry Fodor, is indispensable to AI.

If you want, rather, that I be specific about philosophical contributions to AI, then that's exactly why I mentioned the Fodor article. He cites a recent (at the time) AI effort (sorry, don't have the article here so I can't say which one) that tried to solve the frame problem using a sleeping dog strategy. It was already known to philosophers of mind that sleeping dog strategies don't work. Quite simply, they could have saved themselves the trouble by doing some philosophy first.

So if you would like to disagree, then tell me why that's a misdiagnosis, and by extension, why philosophy isn't relevant in a case like that.

[ Parent ]
philosophy (none / 0) (#159)
by jcarnelian on Tue Jul 05, 2005 at 08:55:46 AM EST

He cites a recent (at the time) AI effort (sorry, don't have the article here so I can't say which one) that tried to solve the frame problem using a sleeping dog strategy. It was already known to philosophers of mind that sleeping dog strategies don't work. Quite simply, they could have saved themselves the trouble by doing some philosophy first.

That sort of thing happens all the time: people in chemistry reinventing stuff originally discovered in physics, people in physics reinventing stuff originally discovered in biology, etc.  Duplication of effort in different disciplines doesn't mean one discipline needs the other.

Furthermore, I take the fact that philosophers even discuss something related to the frame problem as evidence that philosophy isn't just irrelevant to artificial intelligence, it is harmful.  In some sense, philosophy, whether professional or amateur, is responsible for the mess AI got itself into in the 1970's and 1980's.  It has taken two decades to bring artificial intelligence back on track again.  Philosophers simply don't have the tools or the methodology to reason about thinking and reasoning.

That will simply lead to us arguing about which of his theories are correct (if any, in your case).

Well, of course.  There is a lot of pseudoscience and bad science being done in artificial intelligence, just like there is in any other field.

It's a free country: philosophers can say whatever they like, and certain kinds of AI researchers can listen to them.  My recommendation would be not to take either of those groups of people too seriously and to ask for solid, demonstrable, verifiable, and reproducible results.  You'll find those few and far between when it comes to anything philosophers have had their hands in.

[ Parent ]

AI (1.00 / 6) (#127)
by ShiftyStoner on Thu Jun 30, 2005 at 08:30:17 PM EST

Give me a few high-profile psychologists, programmers, computer technicians, designers, and physicists, and I could have them spit out AI in under a decade.

Know what that means? It already exists, and the government (it?) doesn't want anyone to know it exists yet, and certainly not its purpose.

What do you think will happen when AI is invented? You think it's going to be invented by some dork like you? Not hardly. It's going to be (has been) invented by government-trained and government-owned specialists. As top secret as it gets. Because, well, any other country having the technology would be extremely dangerous, more dangerous than nukes, for example. As for citizens, do they have nukes?

It's not going to be some fuckin' toy like you tards think. Not sex robots and a super chess player. Flying planes? Tanks? Sure.

What separates us from other animals? Original thought; we can invent. Each one of us is capable of groundbreaking inventions. AI would be capable of this. Only, what it takes us a lifetime to figure out, 6 billion lifetimes even, it'll figure out in a week, or 5 seconds once it enhances itself. It could design tanks, the ultimate land-air fighter. No mere man could ever match it tactically, just like they couldn't match it in chess, maybe once, 'cause it could also learn.

Wire it to the net (which it is, which is the reason we have freedom on the net we don't have in the real world) and you have one hell of an educated robot. Plus all the military programming and databases. You have, merely a week after inventing AI, 10,000 inventions on top of the one, and an AI the likes of which mankind without it could never have achieved.

What do you think, all the sci-fi movies scare these nutjobs into not giving something like this power? The fact that they probably think they control it (and certainly did at one time), and that giving it some control in war matters would make us unbeatable in any war, against any threat, would guarantee it achieving more power than any one man currently holds.

Think about how this stuff usually works out. First it's cartoons, sci-fi. Then it's a possibility, then there is some gimped model, then it's known to the public. Then it's, sometimes, available to the public. The fact usually is, they've had it since it's been in cartoons.
( @ )'( @ ) The broad masses of a population are more amenable to the appeal of rhetoric than to any other force. - Adolf Hitler

Just like the TOE? (none / 1) (#135)
by joib on Fri Jul 01, 2005 at 08:32:36 AM EST

Give me a few high-profile psychologists, programmers, computer technicians, designers, and physicists, and I could have them spit out AI in under a decade. Know what that means? It already exists, and the government (it?) doesn't want anyone to know it exists yet, and certainly not its purpose.

Just like we have a Theory of Everything, since the government certainly has financed thousands of really smart people to work on the problem for many decades now, and spent billions of dollars on particle accelerators. Oh, wait...

See, some problems are just hard. Other problems might be impossible, no matter how much money and smart people we (we, as in mankind) throw at them.

Just the fact that we haven't got AI yet is no evidence of some great government conspiracy.

As top secret as it gets.

Just like the Manhattan project was "as top secret as it gets"? Now, how many years did it take before the commies had nukes too?

The odds against a great number of people keeping some grand secret are staggering. And that, my dear, is the reason why these "grand conspiracies" only exist in the minds of various nutters.

[ Parent ]

all (none / 0) (#149)
by ShiftyStoner on Sat Jul 02, 2005 at 03:28:43 PM EST

other countries that possessed it would also want it kept secret, and especially how they got possession of it.

And you don't know the odds; secrets are secrets, not a statistic.

Don't kid yourself, there are a lot of military technology advances we don't know about. They admit as much.
( @ )'( @ ) The broad masses of a population are more amenable to the appeal of rhetoric than to any other force. - Adolf Hitler
[ Parent ]

Minsky's comments a bit unfair. (none / 1) (#144)
by hengist on Sat Jul 02, 2005 at 05:55:50 AM EST

If you applied Minsky's arguments to aviation, he would have been slamming the Wright brothers because they didn't achieve supersonic flight.

It took eons for intelligence to evolve - why should we be able to do it in 60 years?

Meanwhile, AI research has produced applications that are actually useful - certainly more useful than a fully autonomous, thinking machine would be, IMHO.

After all, if you did have a fully autonomous, thinking machine, it might decide to go off and do its own thing, rather than do what you wanted it to do.

There can be no Pax Americana

Mergatroid is laughing at all the silly talk ... (none / 0) (#167)
by k24anson on Thu Jul 07, 2005 at 12:27:09 PM EST

Some medieval scholars and men of learning would keep a human skull on the reading or writing table and would look at it as they pondered the mysteries of life.

Some day I want a 3D model of the entire human nervous system as I ponder the cognitions of biological life forms. You know, something that looks like a human brain with two eyeballs protruding out; little things sticking out for the nerves of the nose, the taste buds, the ears. Wow ... look at that one long spinal cord thing sticking out, down from the back of the brain, and all those nerves that extend from it as it winds its way down. This is the functioning, practical goal they'll be mimicking electronically maybe a thousand years from today. But for now I'll stick to the lower forms of life.

I mean I really have to wonder at Palm Pilot creator and millionaire Jeff Hawkins' crew there, those people talking in his book website's forum, "On Intelligence", his institute Redwood Neuroscience, and another interesting one called Neurocomputing Home.

All these guys are already way up there in the stratosphere trying to devise systems mimicking the human cognitive process, and there is no one (that I am aware of, anyway) who has even begun to mimic, let's say, the cognition of a spider or a praying mantis. The nervous systems of the lower forms of life haven't even begun to be put into electronic systems, and these guys are stroking each other over how the neocortical centers of the human brain give humans superiority over the other forms of cognition. I wouldn't necessarily discourage these efforts, though I wouldn't waste my own time on them, thinking as I do that nothing significantly practical is going to happen with these guys in the near future.

I think someone entering this field of research either seriously or as a hobby would do well to think small, like cockroach small.

Loose lips sink ships, and it'll be my own ship my loose lips will sink, so I'm done talking like this.

Good luck to whoever reads this. And may the force be with you ...,

I'll be back - A. Schwartzy
KLH
NYC

Stay focused. Go slow. Keep it simple.
