
AI Breakthrough or the Mismeasure of Machine?

By Baldrson in Science
Fri May 27, 2005 at 02:22:40 PM EST
Tags: Software (all tags)

If a computer program took the SAT verbal analogy test and scored as well as the average college-bound human, it would raise some serious questions about the nature and measurement of intelligence.

Guess what?


Introduction

Artificial intelligence with human-level performance on SAT verbal analogy questions has been achieved (warning: PDF) using corpus-based machine learning of relational similarity. The milestone belongs to Peter D. Turney's Interactive Information Group at the Institute for Information Technology of the National Research Council Canada.

The timing of this achievement is highly ironic, since this is the first year that the College Board has given the SAT without the verbal analogy questions.

For the last hundred years many researchers have claimed that analogy tests are among the best predictors of future performance, via their strong correspondence with the g factor, or general intelligence, while others have claimed this is a mismeasure of man with severe political ramifications and questionable motivations.

Is this a true breakthrough in AI or is it just the mismeasure of machine?

The Achievement

Dr. Turney's group developed a technique called Latent Relational Analysis (LRA) and used it to extract relational similarity from about a terabyte of natural language text. After reading a wide variety of documents, LRA scored 56% on the 374 verbal analogy questions given in the 2002 SAT. The average college-bound student scores 57%. The two scores are statistically indistinguishable.
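Turney's paper describes a full pipeline (pattern extraction, synonym expansion, and singular value decomposition over the corpus); the following is only a minimal sketch of the core idea -- represent each word pair as a vector of pattern counts and pick the answer choice closest to the stem pair -- with all counts invented for illustration:

    # Minimal sketch of the idea behind Latent Relational Analysis: represent
    # each word pair by a vector counting the corpus patterns that join the
    # two words, then answer an SAT analogy by picking the choice whose vector
    # is closest (by cosine) to the stem pair's. All counts are invented; the
    # real system adds synonym expansion, pattern selection, and SVD smoothing.
    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    # Hypothetical counts for joining patterns like "X carves Y", "X works in Y".
    stem = [42, 7, 31]                        # mason:stone
    choices = {
        "carpenter:wood": [39, 9, 28],
        "teacher:chalk":  [5, 0, 1],
        "soldier:gun":    [12, 1, 2],
    }
    print(max(choices, key=lambda c: cosine(stem, choices[c])))  # carpenter:wood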

Everyone is familiar with attribute similarity -- when we say two objects are similar we usually mean they share many attributes such as employer, color, shape, cost, age, etc. An example of a statement about attribute similarity is "Mary has the same employer as Sally." Relational similarity -- when we say two pairs of objects have similar intra-pair relationships -- is only a little less familiar. An example of a statement about relational similarity is "John's relationship to Mary is Thor's relationship to Mjolnir." (Perhaps John was the unnamed 'employer' in the attributional statement.)

We can see two things from this example (sketched in code after the list):

  1. Relational similarity underlies analogy.
  2. Relational similarity underlies metaphor.
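A toy sketch of the distinction in code (the objects and relation labels are my own invention, not anything from Turney's paper):

    # Attribute similarity compares two objects' features directly; relational
    # similarity compares the relation inside one pair with the relation
    # inside another pair.
    mary = {"employer": "John", "hair": "brown"}
    sally = {"employer": "John", "hair": "red"}

    # Attribute similarity: "Mary has the same employer as Sally."
    print({k for k in mary if mary.get(k) == sally.get(k)})  # {'employer'}

    # Relational similarity: John:Mary :: Thor:Mjolnir under a shared relation
    # label (hypothetical -- 'employs', echoing the article's joke).
    relations = {("John", "Mary"): "employs", ("Thor", "Mjolnir"): "employs"}
    print(relations["John", "Mary"] == relations["Thor", "Mjolnir"])  # True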
The study of relational similarity usually cites Dedre Gentner's Structure Mapping Theory, summarized as follows:
The basic idea of Gentner's structure-mapping theory is that an analogy is a mapping of knowledge from one domain (the base) into another (the target) which conveys that a system of relations which holds among the base objects also holds among the target objects. Thus an analogy is a way of noticing relational commonalties independently of the objects in which those relations are embedded.

But a mathematical theory of relational similarity was to have been the crowning achievement of the 1913 publication of the final volume of Principia Mathematica -- something Bertrand Russell called "relation arithmetic".

Russell was adamant that without relation arithmetic people are prone to misunderstand the concept of structure and thereby fail in the empirical sciences:

I think relation-arithmetic important, not only as an interesting generalization, but because it supplies a symbolic technique required for dealing with structure. It has seemed to me that those who are not familiar with mathematical logic find great difficulty in understanding what is meant by 'structure', and, owing to this difficulty, are apt to go astray in attempting to understand the empirical world. For this reason, if for no other, I am sorry that the theory of relation-arithmetic has been largely unnoticed. -- Bertrand Russell, My Philosophical Development
Unfortunately, Russell and Whitehead's formulation of relation arithmetic had a defect.

I've had a career-long interest in subsuming information systems in a relational paradigm. When contracted to work on Hewlett-Packard's E-Speak project, I was able to hire (only after threatening to resign when told I had to hire only H-1Bs from India for this work -- but that's another story) a science philosopher named Tom Etter, whose work I had heard of from Paul Allen's Interval Research. I set Tom to the task of reformulating relation arithmetic for use in HP's E-Speak project. As a result of this work, lasting a few months before the E-Speak project ran into trouble, he produced a paper titled "Relation Arithmetic Revived" wherein he describes the new formulation:

Here is relation-arithmetic in a nutshell:

Relations A and B are called similar if A can be turned into B by a 1-1 replacement of the things to which A applies by the things to which B applies. Similarity in this sense is a generalization of the algebraic concept of isomorphism. If, for instance, we think of a group (as defined in group theory) as a three-term relation x = yz, then isomorphic groups are similar as relations. The relation-number of a relation is defined as that which it has in common with similar relations. Relation-arithmetic was to be the study of various operators on relation-numbers.

For reasons that will become clear below, we'll substitute the word shape for Russell's term relation-number. Thus, in our current language, the shape of a relation is what is invariant under similarity. Note that these three words have analogous meanings in geometry.
...
If we substitute congruence for similarity in the [Russell's - JAB] definition of relation-number, then operators like product and join can in fact be defined in an invariant way, and Russell's conception of relation-arithmetic makes sense. Since Russell's definition of these words is not in general usage, this substitution should not produce confusion, so let us hereby make it:

A relation-number is defined as an equivalence class of partial relations under congruence.

In other words, relational congruence provides relations in context that can be composed to yield new relations -- and relational similarity provides relational shapes whose importance is more abstract. Russell and Whitehead failed because they were trying to come up with a way of composing shapes out of context. (The context-dependent relation numbers of Etter's relation arithmetic are a more general form of "attribute similarity" described above.)
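For small finite relations, Russell's notion of similarity can be checked mechanically: two relations are similar if some 1-1 replacement of one field by the other carries the first relation onto the second. A brute-force sketch of that check (my illustration, not Etter's formalism):

    # Brute-force check of Russell-style similarity for finite binary
    # relations: A and B are similar iff some 1-1 replacement (bijection) of
    # A's field onto B's field carries every pair of A to a pair of B.
    from itertools import permutations

    def field(rel):
        return sorted({x for pair in rel for x in pair})

    def similar(a, b):
        fa, fb = field(a), field(b)
        if len(fa) != len(fb):
            return False
        for perm in permutations(fb):
            m = dict(zip(fa, perm))
            if {(m[x], m[y]) for (x, y) in a} == set(b):
                return True
        return False

    # x < y on {1,2,3} is similar to "ancestor-of" on a three-person chain:
    less_than = {(1, 2), (1, 3), (2, 3)}
    chain = {("gma", "mom"), ("gma", "kid"), ("mom", "kid")}
    print(similar(less_than, chain))  # True: same relation-number ("shape")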

Given this understanding of Russell and Whitehead's work, Turney's group has, at the very least, made a major advance toward bringing practical natural language processing into greater consilience with a wide range of science and philosophy, and conversely, brought those ranges of science and philosophy closer to practice.

Controversy In 'g'

For the last century a controversy has raged over the significance of something cognitive psychologists call the "g factor" or "general intelligence". Indeed, Charles Spearman invented factor analysis to test for the existence of a hypothesized general factor underlying all of what we think of as intelligent behavior: he used a variety of tests for intelligence, looked for correlations between them, and devised factor analysis to find the common factors underlying those correlations. Spearman was strongly influenced by Charles Darwin's cousin, Francis Galton. Galton was one of the earliest proponents of eugenics, and invented the statistical definition of correlation to study the degree of heritability of various phenotypes, including intelligence. Eugenics is a highly controversial field, so we should be unsurprised that the g factor, originating as it did in such a controversial area of research, has resulted in a long-standing dispute.
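As a toy illustration of the idea behind Spearman's approach (a modern sketch, not his actual 1904 procedure): when several test scores all correlate positively, the leading eigenvector of their correlation matrix can be read as the loadings of a single common factor.

    # Toy sketch of the idea behind Spearman's g: if several test scores all
    # correlate positively, the leading eigenvector of their correlation
    # matrix captures the single common factor. Correlations are invented.
    import numpy as np

    # Hypothetical correlation matrix for (analogies, vocabulary, arithmetic).
    R = np.array([[1.00, 0.72, 0.55],
                  [0.72, 1.00, 0.48],
                  [0.55, 0.48, 1.00]])

    eigvals, eigvecs = np.linalg.eigh(R)
    g_loadings = eigvecs[:, -1]          # eigenvector of the largest eigenvalue
    variance_explained = eigvals[-1] / eigvals.sum()

    print(g_loadings)          # all one sign: every test "loads" on one factor
    print(variance_explained)  # share of variance the single factor explains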

What is not in dispute is that analogy tests correlate most strongly with Spearman's g. What is in dispute is whether verbal analogy tests are culturally neutral enough to be a fair measure of g independent of education. In other words, no one disputes that a high score on verbal analogy tests is evidence of high g -- they merely dispute whether low scores on verbal analogy tests imply low g.

Most objections to the use of analogy tests to measure general aptitude claim they are reducible to little more than "rote memory" tasks. Quoting the Victoria, BC health site on autistic savants:

In all cases of savant syndrome, the skill is specific, limited and most often reliant on memory.

This sounds a lot like the objections raised by the opponents of the use of verbal analogies tests. Finding an autistic savant whose specialized skill was to do exceedingly well on verbal analogies would go a long way toward validating this view of verbal analogies and hence the view that Turney's accomplishment is not the AI breakthrough it might appear to be.

On the other hand, we must remember that a sufficiently compressed "rote memory" might be indistinguishable from intelligence. A genuine AI program, assuming it could exist, can itself be seen merely as a compressed representation of all the behavior patterns we consider "intelligent", and the Kolmogorov complexity of those behaviors might not be as great as we imagined. Taxonomies of intellectual capacity which place analogy and metaphor alongside critical thinking are quite possibly compatible with a sufficiently compressed description of a very large "rote memory".
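Kolmogorov complexity is uncomputable, but an ordinary compressor gives a crude upper bound, which is enough to make the point: a vast but patterned "rote memory" can have a very short description. A sketch:

    # Kolmogorov complexity is uncomputable, but a compressor gives a crude
    # upper bound on description length: a highly patterned "rote memory"
    # compresses far below its raw size, while random data barely shrinks.
    import os, zlib

    rote = ("mason:stone::carpenter:wood\n" * 10000).encode()
    noise = os.urandom(len(rote))  # incompressible by comparison

    print(len(rote), len(zlib.compress(rote)))    # 280000 -> tiny description
    print(len(noise), len(zlib.compress(noise)))  # 280000 -> about 280000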

The most widely read attack on g theory to date has been Stephen Jay Gould's The Mismeasure of Man. Gould summarizes the objections to g theory:

"[...] the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups--races, classes, or sexes--are innately inferior and deserve their status" (pp. 24-25).
Recently this ongoing controversy has boiled over into college admissions, with the College Board removing verbal analogies from the SATs as of 2005. Ironically, there is now an argument raging over whether this change biases the SATs against whites and for blacks and Hispanics, or whether it biases the SATs against blacks and Hispanics and for whites. We can certainly expect this debate to continue without resolution, since it seems rooted as much or more in ethnic politics as in science.

And of course none of this has stemmed the century-long pattern of ongoing research indicating that analogy tests are highly predictive of future performance, nor the disputations of the validity of such research.

It is precisely the sustained acrimony of this debate that renders Turney's accomplishment so refreshing -- for regardless of your viewpoint, machines are not a voting bloc. Either this work shows itself to be a turning point in the progress of artificial intelligence, or it will merely lead to mundane benefits such as better search engine results. This is just the start of what will undoubtedly be a long series of measurements of artificial intelligence quality.

The question before us now is whether Latent Relational Analysis' human-level performance on verbal analogies truly represents an artificial intelligence breakthrough or whether it merely represents the mismeasure of machine.


Related Links
o SAT verbal analogy questions
o been achieved (warning: PDF)
o corpus-based machine learning of relational similarity
o Peter D. Turney
o without the verbal analogy questions
o analogy tests are among the best predictors of future performance
o the g factor
o mismeasure of man
o about a terabyte of natural language text
o Thor's
o Mjolnir
o Dedre Gentner
o summarized
o Principia Mathematica
o Bertrand Russell
o My Philosophical Development
o career-long interest in subsuming information systems in a relational paradigm
o E-Speak project
o Paul Allen's Interval Research
o Relation Arithmetic Revived
o isomorphism
o isomorphic groups
o Whitehead
o consilience
o g factor
o Charles Spearman invented factor analysis
o Francis Galton
o Victoria, BC health site on autistic savants
o Kolmogorov complexity
o analogy and metaphor along-side critical thinking
o The Mismeasure of Man
o against whites and for blacks and Hispanics
o against blacks and Hispanics and for whites
o analogy tests are highly predictive of future performance
o disputations of the validity of such research
o artificial intelligence quality


AI Breakthrough or the Mismeasure of Machine? | 171 comments (156 topical, 15 editorial, 0 hidden)
Fascinating article (3.00 / 4) (#6)
by esrever on Thu May 26, 2005 at 09:17:55 PM EST

I think that one of the main reasons for the acrimony in the debate over "What is AI" is the term "AI" itself.  Peter F. Hamilton neatly recognises and defuses this debate in his latest book Pandora's Star by referring to his intelligent programs as "SI" or Sentient Intelligence (and they are, actually, sentient), and the merely highly sophisticated programs as "RI" or Restricted Intelligence (whereas these are merely anthropomorphic).  This clearly delineates and removes the ambiguity around the word "Intelligence" which is at the root of most of the disagreement over the term "AI".

People associate Intelligence (rightly or wrongly) with sentience, and therefore denounce "AI" as a pipe-dream.  Meanwhile, many AI researchers and pundits are not much better; conflating the rise of Intelligent programs automatically with the concomitant rise of Sentience in said programs.  Which leads us to such ludicrously wrong-headed nonsense as "Should Intelligent Machines have Rights" (don't have link, but this made big news on Wired a year or so ago, IIRC).



A common theme in science fiction. (3.00 / 2) (#8)
by forgotten on Thu May 26, 2005 at 09:40:18 PM EST

Not just for computers, either. I've read stories where electronic copies of a human brain after death could be made either sentient (they believed they had been brought back to life) or non-sentient (could answer questions, etc., but had no concept of self).

I'm starting to sound like Sen here.

--

[ Parent ]

All this tells us is... (2.00 / 3) (#9)
by BJH on Thu May 26, 2005 at 10:05:40 PM EST

...that the SAT is not a useful measure of intelligence.
--
Roses are red, violets are blue.
I'm schizophrenic, and so am I.
-- Oscar Levant

How? (none / 0) (#10)
by Baldrson on Thu May 26, 2005 at 10:51:33 PM EST

How did you eliminate the possibility that this is a genuine breakthrough in artificial intelligence?

-------- Empty the Cities --------


[ Parent ]

Well... (3.00 / 3) (#12)
by BJH on Fri May 27, 2005 at 12:18:28 AM EST

Every few years, somebody says "We won't have true AI until computers can do X!", where X is some task that at the time was thought to be difficult to represent as a program.

And every now and then, somebody else comes along and says "We've created a computer that can do X! We've found AI! Hurray!", and every single time it's been found that the particular problem solved has very little bearing on general intelligence, thus requiring a new definition of X.

If you ask me to believe that true AI has been created, you'd better come up with something other than a program that can perform reasonably at a single, fairly well-defined task.
--
Roses are red, violets are blue.
I'm schizophrenic, and so am I.
-- Oscar Levant

[ Parent ]

Ah, I see... (none / 0) (#13)
by Baldrson on Fri May 27, 2005 at 12:24:05 AM EST

You're asking me to believe you when you say this is not genuine AI and when queried for justification you claim that I'm asking you to believe the converse. Brilliant.

-------- Empty the Cities --------


[ Parent ]

It's called "Occam's Razor". (none / 0) (#14)
by BJH on Fri May 27, 2005 at 12:56:56 AM EST

Look it up.
--
Roses are red, violets are blue.
I'm schizophrenic, and so am I.
-- Oscar Levant

[ Parent ]
Can't you just (3.00 / 6) (#15)
by monkeymind on Fri May 27, 2005 at 01:01:19 AM EST

Give me the simplest explanation of it?

I believe in Karma. That means I can do bad things to people and assume they deserve it.
[ Parent ]

Wrong! (3.00 / 2) (#11)
by TronTron on Fri May 27, 2005 at 12:00:54 AM EST

It also tells us that the average college-bound SAT taker is an incompetent moron.

[ Parent ]
Just wait ... (none / 1) (#19)
by Peahippo on Fri May 27, 2005 at 04:00:50 AM EST

... until these tests will have to be constructed for different states in the USA, to account for some populations that will not be educated in such wacky areas as evolution *. I'm sure in those states, general literacy levels will be equally depressed. I wonder what those tests will be called? "Red tests"?

* "Remember, class, evolution is only a theory. Now put away your thin biology texts and take out your Bibles for the mid-day Patriotic Rally for our beloved President, who -- by the grace of God -- keeps us safe from terrorists and Democrats!"


[ Parent ]
It could still be a good measure of human (none / 0) (#117)
by eraserewind on Sun May 29, 2005 at 10:29:15 AM EST

It could still be a good measure of (certain aspects of) human intelligence if you are guaranteed that the subject is human and not a perl script with an internet connection. Not that I think it is a particularly good measure, but there are better reasons why not than a computer being able to pass it.

[ Parent ]
AI is not about SATs (2.00 / 2) (#16)
by monkeymind on Fri May 27, 2005 at 01:03:56 AM EST

When the AI can go into a bar after the test and chat someone up, then you will have reached your goal, my son.

I believe in Karma. That means I can do bad things to people and assume they deserve it.

dude, (none / 1) (#82)
by sophacles on Sat May 28, 2005 at 01:30:32 PM EST

There are a lot of humans who can't do that.


[ Parent ]
Exactly [nt] (none / 0) (#92)
by monkeymind on Sat May 28, 2005 at 05:51:51 PM EST


I believe in Karma. That means I can do bad things to people and assume they deserve it.
[ Parent ]

AI is always chatting people up in bars (none / 0) (#87)
by livus on Sat May 28, 2005 at 04:27:36 PM EST

and getting them to pull its handle.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]
Maybe this just highlights a flaw in the tests (3.00 / 3) (#17)
by StephenThompson on Fri May 27, 2005 at 02:34:59 AM EST

The PDF uses this example as an analogy: mason:stone :: carpenter:wood. This relationship is trivial because it can be analyzed purely syntactically. The semantics of the words, such as what wood is, are irrelevant; just the syntax of the dictionary definition is enough to see the relationship. Thus, if 56% of the test is only simple syntactical substitution, we shouldn't be surprised by the results. Computers are excellent at syntax and humans aren't so much. How well would the system do when no amount of syntactical analysis can find the right answer, but any bubba [of the right age group hehe] would get the answer easily: Mounds:Almond Joy :: Loni Anderson:? a) Sybil b) Richard Simmons c) Burt Reynolds d) Mr Goodwrench

Your analogy tests something completely different (3.00 / 2) (#78)
by curien on Sat May 28, 2005 at 11:54:50 AM EST

The whole theory behind the argument that standardized tests can be biased is based on the non-universality of the subject matter. In brief, if a particular analogy question requires the knowledge of nautical terminology, rich white kids from the coast are much more likely to be able to deduce the correct answer than a poor American Indian from Wyoming.

In general, the implication of your argument is that you don't actually believe g to be a valid measure of intelligence. That's an interesting (and not completely unsupported) position, but it's a little out of the scope of this article.

--
This sig is umop apisdn.
[ Parent ]

no no no (none / 0) (#96)
by StephenThompson on Sat May 28, 2005 at 06:54:58 PM EST

You have misinterpreted the point. That the question requires that specific knowledge domain isn't important [it's something I just made up for k5 readers, not SAT takers]. What is important is that it uses a non-syntactical domain. Syntactical analogies are trivial, semantic ones are harder, and I'm saying computers will fail on these types of analogies because they only have a syntactical understanding. Thus, a human will pick Burt Reynolds, but a computer would probably pick Mr Goodwrench because he is both a man and deals with nuts.

[ Parent ]
That would involve (none / 0) (#100)
by trane on Sat May 28, 2005 at 09:35:30 PM EST

a pretty shallow semantic analysis. Harder might be if you changed Mr. Goodwrench to Mr. Goodbar...

[ Parent ]
I think you underestimate the complexity (none / 0) (#142)
by curien on Tue May 31, 2005 at 06:31:34 AM EST

For example, I went to a random SAT prep site, and got this one:
  tenet:theologian::
(A) predecessor : heir
(B) hypothesis : biologist
(C) recluse : rivalry
(D) arrogance : persecution
(E) guitarist : rock band

I think that requires more than just syntactical analysis.

--
This sig is umop apisdn.
[ Parent ]

that's always the case (none / 0) (#135)
by mpalczew on Mon May 30, 2005 at 12:02:30 PM EST

Rich white kids who bought a study guide and memorized certain words that are repeated every year in the SATs but are not used at all otherwise will do better (sorry for the run-on).  There is no way to unbias these tests.  Why do people consider memorization of vocabulary a measure of intelligence?
-- Death to all Fanatics!
[ Parent ]
This was somewhat addressed in the article (none / 0) (#141)
by curien on Tue May 31, 2005 at 06:22:49 AM EST

It's not just vocabulary, it's applied vocabulary, so there is some mitigation. The theory goes that while an analogy-based test might underreport g for some people, it will never overreport it.

Oh, and there are wordlists available for free. If nothing else, check your local (or school) library. But that still leaves room for the argument that a kid who has to work a job to help support the family doesn't have time to study the word list.

--
This sig is umop apisdn.
[ Parent ]

You missed.. (none / 0) (#171)
by StangDriver on Fri Jul 22, 2005 at 03:30:02 PM EST

The example used was " if a particular analogy question requires the knowledge of nautical terminology, rich white kids from the coast are much more likely to be able to deduce the correct answer than a poor American Indian from Wyoming".

By this, he didn't mean the rich white kids are better prepared to study, but that the question would be in line with their lifestyle. That would be unfair for a poor kid who has never seen a boat in person.

[ Parent ]

Some thoughts (3.00 / 4) (#18)
by strlen on Fri May 27, 2005 at 02:42:47 AM EST

Jeff Hawkins goes into that subject somewhat in his book On Intelligence, where he argues that the proper way to think about intelligence is as the ability for predictive reasoning, i.e. seeing the pattern, rather than mere memorization (he suggests that as the solution to the Chinese Room problem [google for it, for those who don't know it -- very widely known]).

This is where my idea about the SATs comes in: you can cram for the SATs by studying the vocab that goes along with them, and you can cram, through examples, for the verbal analogy questions.

Now if the verbal analogy questions were randomly generated and more complex, it would be a good test of intelligence. Also, from what I've heard they've replaced the verbal analogy section with critical thinking essays. Now, given the standard ways essays/papers are graded in college, this is likely going to be lax, but at least in theory writing a paper should be a more comprehensive test of analytic reasoning than simple analogy questions.

What is also ignored is that there are several kinds of intelligence. One needs mathematical reasoning to succeed in a computer science program, while one needs high verbal ability and overall good analytic (not necessarily purely mathematical) reasoning skills to succeed in a history program.
Given that people who take the SATs haven't even decided what they will be majoring in -- there is no way that the SAT can measure their capacity to succeed in any specific field of study (unlike the LSAT/GRE/MCAT), thus it may as well be pointless. Of course high school GPA is probably far more pointless, but that's a different story (Note to geeks reading this: you can gain entry into much more competitive universities and avoid the SATs altogether by transferring from a community college -- at least in California (and the UC system)).

Jeff Hawkins also discusses in his book that one of the problems behind AI is that it uses behaviorism, which is an outmoded psychological model. E.g. a machine that can ace the SATs may be behaviorally equivalent to an SAT-acing student when it comes to the SATs, but can such a machine be used for other intelligence-related tasks?

In short, I do tend to agree with the existence of the g factor, and with genetics playing a role in IQ. I do believe that proper analytic tests can be designed, and so on. What I don't believe is that A) IQ is fixed throughout an individual's lifespan (even excluding the obvious physiological factors such as dementia) or B) that there's one specific kind of intelligence (i.e. there are different sorts of aptitudes for different tasks). Both sides of the issue are deeply polarized: we have the Gould types (whom you seem to be referencing in the story title) on one side, fearing that any acknowledgement of genetics in intelligence would automatically mean justifying eugenics and racism, and on the other the types who take the idea of genetics having any influence to mean genetic determinism (as well as assuming that racial groups are going to be more or less homogeneous when it comes to intelligence -- ignoring both cultural (e.g. was the child trained with puzzles at an early age?) and physiological (e.g. diet, child rearing) factors).

Excuse the somewhat rambling tone of the comment; I am in the middle of procrastinating over exam study and merely wanted to express some [somewhat random] thoughts I've developed over time on this issue.

--
[T]he strongest man in the world is he who stands most alone. - Henrik Ibsen.

why do geeks get to define intelligence? (2.33 / 9) (#20)
by SIGNOR SPAGHETTI on Fri May 27, 2005 at 04:18:02 AM EST

Intelligence is by strange coincidence those cognitive skills possessed by people in society that have or serve power. That would be the technocrats currently, because the measure of civilization has become elaborate weapons, designer erections, ersatz sugars, ipods and the market mechanisms and computer functionaries in which to flog all this glittering bullshit. I for one have yet to meet a geek that was smarter than a cocktail waitress. Anyway, ONE.

--
Stop dreaming and finish your spaghetti.

How do you measure it then? (2.50 / 2) (#36)
by Have A Nice Day on Fri May 27, 2005 at 09:58:59 AM EST

You say intelligence is something possessed by the intelligent (surprise) and then that you've not met a geek as smart as a cocktail waitress?

What do you mean by "smart" when you don't mean intelligent? How is the waitress smarter than those who have got ahead in life and invented stuff?

Back up your bullshit, troll.

--------------
Have A Nice Day may have reentered the building.
[ Parent ]
Measure what? (2.33 / 6) (#54)
by SIGNOR SPAGHETTI on Fri May 27, 2005 at 06:13:19 PM EST

Here's what IQ measures: your position in the hierarchy of authority; your degree of impersonality, so that for example dispassionate voices in debate are given undue consideration for their "reason"; how well you conform to received rules of thought and conduct, so that for example (a) green activists and anarchists are morons for their relationship to property and (b) geeks will tirelessly remind stupid people that correlation is not causation even though outside classical rhetoric causality does not exist and correlation is all there is; promotion based on profitable achievement; how specialized is your niche in the division of labor, i.e. savant; finally, your efficiency in goal-oriented organizations designed around the rational principles of profit. In other words, if there is AI, we are becoming it.

There was a time, not so long ago but before standardized tests became an industry that today peddles billions of dollars worth of folklore annually, when people did not call each other stupid or rank their children by silly tests that purport to measure "intelligence", an arbitrary quality that exists only in the realm of invidious malarkey, but instead acknowledged their mutual and complementary _differences_. There literally was no word stupid -- there still isn't in many primitive cultures -- or if there was it meant "gratuitous insult" instead of "scientific description" or "members only".

--
Stop dreaming and finish your spaghetti.
[ Parent ]

You rule. [n/t] (none / 0) (#57)
by kcidx on Fri May 27, 2005 at 06:34:43 PM EST



[ Parent ]
-1 bullshit and waffle. (none / 1) (#75)
by Have A Nice Day on Sat May 28, 2005 at 07:55:54 AM EST

If there's no such thing as intelligence then your original post about not having met a geek as smart as a waitress is meaningless as "smart" is also meaningless.

But do keep trolling. Some people seem to find it entertaining.

--------------
Have A Nice Day may have reentered the building.
[ Parent ]
intelligence is a semantic quality (none / 0) (#85)
by SIGNOR SPAGHETTI on Sat May 28, 2005 at 03:10:05 PM EST

it means different things to different people and the word intelligence, though distinct, cannot accurately convey those meanings.

--
Stop dreaming and finish your spaghetti.
[ Parent ]

So now there's no way to convey the meaning (none / 0) (#137)
by Have A Nice Day on Mon May 30, 2005 at 05:15:42 PM EST

And still you haven't explained how the waitress is 'smarter'. First by saying intelligence is meaningless semantics and second by saying the word is ambiguous.

Basically your top post was a load of hot air and crap and your replies to my challenge have been more hot air and posturing. Well done.

--------------
Have A Nice Day may have reentered the building.
[ Parent ]
I was unclear, I see that now. (none / 0) (#152)
by SIGNOR SPAGHETTI on Tue May 31, 2005 at 08:07:23 PM EST

I didn't mean to give the impression I believed that which you keep referring to as thing was instead semantics and the word for which ambiguous. In fact I don't believe that is the case for intelligence. I believe that is the case for everything. It's just that in the instance and vicinity of 'chair', for example, we have democratically agreed its chair-ness (a flickering manifestation of the eternal ONE) were less important than teh sitting down on teh fucking thing. To sequester intelligence, presumably in order to make excuses for classist policies, is to reveal the fascist's inclination to suffocate others under the weight of his enormous buttocks. I'm afraid Local Roger has an unkind article in your future, Mr. Have a Nice Day.

--
Stop dreaming and finish your spaghetti.
[ Parent ]

Oh for god's sake man (none / 0) (#158)
by Have A Nice Day on Wed Jun 01, 2005 at 07:52:19 AM EST

I'm not asserting there is such thing as intelligence or smarts or anything, nor am I asserting the negative. I'm simply questioning how the hell you can say that you haven't met a geek "as smart" as a waitress immediately after having claimed there's no such quality.

--------------
Have A Nice Day may have reentered the building.
[ Parent ]
huh? (none / 0) (#161)
by SIGNOR SPAGHETTI on Wed Jun 01, 2005 at 05:36:11 PM EST

I didn't say I never met a geek as smart as a waitress, implying waitresses were smarter. I said I never met one smarter than a waitress, denying the hierarchy of smart. In the former construction there is no possibility a geek could be smarter than a waitress, whereas in the latter the smartest geek can still be as smart as the blondest waitress, but not smarter.

... immediately after having claimed there's no such quality.

I never met a waitress more spifferific than a geek, either.

--
Stop dreaming and finish your spaghetti.
[ Parent ]

Re: why do geeks get to define intelligence? (2.57 / 7) (#39)
by 175 4 7r4p on Fri May 27, 2005 at 10:42:44 AM EST

Actually, I have yet to meet a geek who has any real power. That is of course, using the more recent usage of the term geek that I've noticed - "one who spends the majority of his/her socializing time out-of-character", such as is done with telephones, all online messaging activity, forums/blogs, role playing/D&D, and computer games. Usually these types are too busy maintaining their own reality to be too concerned about this one.

If intelligence were defined by those in society who have power, then IQ tests would measure the ability to bullshit consistently and eloquently. It is the prerequisite for politics, law, and upper management, and is beneficial to some degree everywhere. Those who are adept at it will always do better than otherwise equal peers.

To me, intelligence is how quickly you learn, how quickly your learning speed decreases as a function of information complexity, and how well you can put learned information together to form new information and new ideas (analogies).

Learning complex material quickly doesn't imply geek to me, and v.v.

Although, assuming for a second that is what you actually meant, what other measure would we use? Academic aptitude? Typing speed? Number of bites per troll?

[ Parent ]
Ha!, (2.00 / 4) (#66)
by Sesquipundalian on Fri May 27, 2005 at 11:33:56 PM EST

Actually, I have yet to meet a geek who has any real power.

You might want to define "power" before you write that.

Do you mean physical prowess? A lot of geeks I know take martial arts. Is it sex appeal? I've fucked a lot of really hot women (who usually pay for everything, too). How about persuasive power? I've met some pretty persuasive I.T. sales reps. Money? Bill Gates. Weapons? You should see some of the home-brew ordnance my engineer friend makes. Perhaps you meant political connections; ever hear of Roger Penrose? Maybe you meant access to mindshare, or did you forget about Rusty? The ability to hurt people; how about Ted Kaczynski. Since most geeks love computers, you couldn't have been talking about electrical power. How about secret power? Geeks have been behind just about every secret society since ancient Egypt. Nothing beats the teachings of Buddha for sheer jerk-off power (look ma, no hands!). Anthony Robbins seems to have the "power of language" pretty much aced (he calls it Neuro Linguistic Programming if you can believe that).

These guys are all extremely technical in their approach, I find it hard not to call them geeks of one sort or another. So.. what did you mean by "power"?


Did you know that gullible is not actually an english word?
[ Parent ]
Political power (3.00 / 4) (#68)
by damiam on Sat May 28, 2005 at 01:59:11 AM EST

is more than just having some political connections. It is being in a position of power yourself. I know of no real geeks in the US House or Senate, in the Bush Administration, or on the federal bench. I assume things are pretty much the same in other countries.

As for economic power, Bill Gates is the exception. Most geeks aren't billionaire CEOs (and you could argue that Gates isn't much of a geek anyway). And if you want to argue "access to mindshare", I think Ted Turner and Rupert Murdoch have Rusty beat.

[ Parent ]

Oh, I get it! (none / 0) (#109)
by Sesquipundalian on Sun May 29, 2005 at 03:35:49 AM EST

You mean "born with a silver spoon in your mouth" kind of power (or maybe you should have come up with different examples).

See; problem is; that kind of power is just too easy to co-opt. Michel "cocaine and prostitution" Chretien anyone? Or ~heh, how about Roger Clinton?

See what happens to these guys is that some tough "acts real hip and connected kind of geek" geek from the projects comes along and co-opts that kind of power. It's just too easy to snow these losers, because they were usually raised as front line fodder in some kind of cult (which is basically how their parents got access to the kind of social capital that they could put the silver spoon into their gullible-assed mouths, in the first place).


Did you know that gullible is not actually an english word?
[ Parent ]
what a coincidence (none / 0) (#47)
by khallow on Fri May 27, 2005 at 02:56:21 PM EST

Intelligence is by strange coincidence those cognitive skills possessed by people in society that have or serve power.

It's not so strange when you think about it. And what about geek cocktail waitresses? Maybe we should have them rule the world?

Stating the obvious since 1969.
[ Parent ]

A see a distinction here. (3.00 / 4) (#21)
by A Bore on Fri May 27, 2005 at 04:56:42 AM EST

On one hand you have the adaptable human brain which, as a result of its general intelligence, is able to do these verbal analogy tests. On the other you have a computer expressly programmed to crack a particular problem - whether it be chess, verbal analogy, random, normal sounding conversation - which is not solving these problems as part of an indicator of general intelligence, but rather through number crunching and analysis of a single problem.

It kind of misses the point. Any test aims to measure intelligence by correlation. If you had an idiot savant vegetable and scored him on a variety of tests, you would have perhaps mathematics-based ones showing him as highly intelligent, social situations showing him retarded, etc. Until a computer is developed which scores across the board on a variety of different tests, even ones it has not been specifically designed for -- that is an actual measurement of AI.

Specific IQ tests are called the "next challenge for AI" because they are the most difficult to number-crunch a solution to. Well, someone has managed here. It isn't the breakthrough you present it as, any more than Deep Blue beating Kasparov was a breakthrough. It just showed that human ingenuity can eventually force computers to closely model human responses to some of the most complicated tests we can devise.

Autistic Savants (none / 1) (#26)
by Baldrson on Fri May 27, 2005 at 08:09:34 AM EST

Any test aims to measure intelligence by correlation. If you had an idiot savant vegetable and scored him on a variety of tests, you would have perhaps mathematics-based ones showing him as highly intelligent

This is the best line of argument I've seen yet against the "breakthrough" view of Turney's work.

I'll address it in the article thusly:

Quoting the Victoria, BC health site on autistic savants:

In all cases of savant syndrome, the skill is specific, limited and most often reliant on memory.

This sounds a lot like the objections raised by the opponents of the use of verbal analogies tests. Finding an autistic savant whose specialized skill was to do exceedingly well on verbal analogies would go a long way toward debunking the view that Turney's accomplishment is an AI breakthrough.

-------- Empty the Cities --------


[ Parent ]

Intelligence or imitation of a human? (none / 1) (#42)
by Ptyx on Fri May 27, 2005 at 01:22:33 PM EST

I also think that adaptability is a key criterion for intelligence -- however, if you want an AI to perform like a human on a board of tests designed for humans, do you ask it to be intelligent or do you ask it to be human?
-- "On voudrais parfois être cannibale, moins pour le plaisir de dévorer tel ou tel que pour celui de le vomir... " Cioran
[ Parent ]
For me (2.66 / 3) (#24)
by whazat on Fri May 27, 2005 at 06:53:00 AM EST

Some of the necessary but not sufficient things an AI has to do are:

Alter its code to improve the following:

  • Performance on a task

  • Robustness of important code

  • Regulation of energy usage and heat production, so that important code is more likely to be able to perform its task

So I think it is a mismeasure.

Bloom's Taxonomy (3.00 / 3) (#25)
by minerboy on Fri May 27, 2005 at 06:54:48 AM EST

One common way to rate the cognitive difficulty of problems is Bloom's Taxonomy, which categorizes tasks as knowledge, comprehension, application, analysis, synthesis, and evaluation. Analogies were thought to be fairly high on this scale, but I suspect that this is a mistake, and what the success of the AI on analogies shows is that analogies have been overrated wrt Bloom. I suspect that there are a lot of tasks thought to be "higher order thinking skills" that can simply be broken down into knowledge and algorithms.



Interesting perspective (none / 0) (#32)
by Baldrson on Fri May 27, 2005 at 09:19:12 AM EST

I added a couple paragraphs about rote memory, Kolmogorov complexity and critical thinking to address this general topic.

-------- Empty the Cities --------


[ Parent ]

Terabyte of text for a machine... (3.00 / 3) (#27)
by dimaq on Fri May 27, 2005 at 08:13:30 AM EST

how would you estimate the amount of text an average SAT participant has read in their entire life?

terabyte is like a thousand books... I certainly never read that much literature in my life, and I wonder if I ever read that much text of any kind in my life.

next difference -- what sort of memory (size, compared to a human) does the machine in question have, and what sort of efficiency is achieved in the storing algorithm?

i.e. what is the probability that an average person remembers something (word, phrase, semantics) they've read only once? and what is it for a machine? my bet is a machine could be programmed to remember a lot better, giving it an unfair advantage.

Terabyte of talk for a person... (2.66 / 3) (#29)
by Baldrson on Fri May 27, 2005 at 08:25:44 AM EST

Remember the machine in question doesn't have the verbal input of a human. SAT participants have each received at least a terabyte of verbal input throughout their lives.

-------- Empty the Cities --------


[ Parent ]

Yeah, but how much do they listen to ? (nt) (none / 0) (#33)
by minerboy on Fri May 27, 2005 at 09:19:32 AM EST



[ Parent ]
They do. (3.00 / 2) (#37)
by Dievs on Fri May 27, 2005 at 10:11:31 AM EST

 The basis of our knowledge is the sum of all the millions of little things that we hear and see during the early childhood years. While children often disregard advice for considering consequences, and do things that seem fun for the moment, they do listen to the answers of all the million 'WHY?' they ask.
  All this simple knowledge of how birds behave, what a car is, etc - it's a large factor of Turing tests. Every human child is a big hoard of such information that the current computers cannot readily access anywhere.

[ Parent ]
numbers fun (2.66 / 3) (#44)
by pauldamer on Fri May 27, 2005 at 01:53:47 PM EST

Assuming that you can record verbal input as plain text, here is a back-of-the-envelope calculation for verbal input:
12 sentences/minute
*
60 minutes/hour
*
16 hours/day  (you gotta sleep)
*
365 days/year
*
17 years (age when taking SAT)
*
100 bytes/sentence
-----
~= 7 gigabytes.

Nowhere near a terabyte.
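The arithmetic checks out; a quick script for anyone who wants to rerun it:

    # Back-of-the-envelope check of the estimate above.
    sentences = 12 * 60 * 16 * 365 * 17   # sentences heard by age 17
    print(sentences * 100 / 1e9)          # at ~100 bytes each: ~7.1 GB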

Of course you could argue that body language/tone/ambient humidity all factor into how a human learns language.  You might even be right.  But then comparing numbers with a digital computer is kind of silly, since a human processes at least a terabyte of 'data' every second.

[ Parent ]

Oops... you're right. (3.00 / 3) (#46)
by Baldrson on Fri May 27, 2005 at 02:30:59 PM EST

So this is a really interesting figure. If you do a more realistic calculation, it's conceivable that humans get about as much input, within an order of magnitude or so, from verbal input as they do from their genetic data (ignoring external genetic inputs like viruses, etc.).

-------- Empty the Cities --------


[ Parent ]

Inputs (3.00 / 2) (#83)
by levesque on Sat May 28, 2005 at 02:31:34 PM EST

Verbal input rests on input from all the rational senses and genetic code spins rational biochemistry. In this context descriptive verbal and genetic enumerative similarities, pale in comparison to the total rational processes of being. If machine intelligence is defined as the ability to model one aspect of humans: verbal responsive behavior, at a level approaching 50% of human ability then machines could be called intelligent.

With intelligence tests we are measuring a state and not a potential or implied limit of behavior and therefore even if it is reliable it is not in any sense descriptive beyond that the more you profile the more you observe profiled individuals.

How this reflects on the value of SAT tests is possibly linked to our perceived value of using a punctual measure of human verbal intelligence and its relative use in promoting future increases of coherence and well-being in human behavior in general.

[ Parent ]

Yes, but if it was in mp3 format, (3.00 / 2) (#65)
by Sesquipundalian on Fri May 27, 2005 at 11:08:22 PM EST

it would be 300,000 bytes per minute instead of 1,200, so that would total slightly less than 2 terabytes. That's the "self talk" factor, i.e. almost all people narrate what they read (if only in their head) in order to convert sentences into meaning.


Did you know that gullible is not actually an english word?
[ Parent ]
re: Terabyte of text for a machine... (3.00 / 2) (#98)
by interstel on Sat May 28, 2005 at 07:25:10 PM EST

terabyte is like a thousand books... I certainly never read that much literature in my life, and I wonder if I ever read that much text of any kind in my life.

Ah, you may not have read a terabyte. But my book collection (every item of which I've read at least once, and maybe 15% of it twice or more) is over 1,500 books in size. The rest of my library, comprised of periodicals, is probably over 4,000 issues in size, and I've read maybe 50% or more of every issue. And I have a comic book collection that stretches from 1970-1997 which is over 40,000 issues in size, and I've read every one of those.

The bulk of all of that reading was in the first 30 years of my life. In the last 7-10 I've read a lot less, because I've been online constantly reading, and I have no real idea how much information I've absorbed in that time. So I think I could rightly say that I've dealt with at least 4 terabytes of written information. If I spend the next 30 years at 25% of the pace of the first 30, I will end up with nearly 6.

Interstel

[ Parent ]
the estimate's wrong (none / 1) (#126)
by Polverone on Sun May 29, 2005 at 06:13:13 PM EST

1,000 books is not nearly a terabyte of textual information. War and Peace, for example, occupies ~3.1 MB. 1000 very long books would therefore occupy ~3.1 GB, but most books are considerably shorter than War and Peace. If you are literate by age 5 and die at 80, that means you have to read about 12 copies of War and Peace (or the equivalent) every day of your life to reach 1 TB by the time you die. This means reading ~6.5 million words per day. If you devote 16 hours a day to reading, you need to read at about 115 words per second. I got a little carried away but you can see that 1 TB is a lot of text.
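These numbers are straightforward to reproduce (the War and Peace word count is approximate):

    # Check of the reading-rate estimate above; War and Peace taken as
    # ~3.1 MB of text and roughly 560,000 words (approximate figures).
    TB = 1e12
    wap_bytes, wap_words = 3.1e6, 560_000
    days = (80 - 5) * 365                    # literate from age 5 to 80
    copies_per_day = TB / wap_bytes / days   # copies of War and Peace per day
    words_per_sec = copies_per_day * wap_words / (16 * 3600)
    print(round(copies_per_day, 1), round(words_per_sec))  # ~11.8, ~115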
--
It's not a just, good idea; it's the law.
[ Parent ]
1,000 Books (none / 1) (#131)
by vile on Mon May 30, 2005 at 06:52:36 AM EST

Only defines the textual input that you've received. You're not factoring in things such as relational mappings and other inputs the human mind creates while reading and associates with your input. If it were strict math -- humans take in x to understand y -- then yes, you would be correct. But it's not.

~
The money is in the treatment, not the cure.
[ Parent ]
but he was talking about textual input (none / 1) (#136)
by Polverone on Mon May 30, 2005 at 03:27:51 PM EST

I've dealt with at least 4 terabytes of written information -- he's talking about a few thousand books. This was based on an estimate that a book contains a gigabyte of textual information, when it doesn't even come close. Since human brains don't work like digital storage systems, I'm not sure how reasonable it is to quantify the effects of new inputs in terms of bytes, but I can certainly put some boundaries on the inputs themselves.
--
It's not a just, good idea; it's the law.
[ Parent ]
The brain doesn't encode information in binary... (none / 0) (#153)
by DonQuote on Tue May 31, 2005 at 08:29:52 PM EST

I don't really know if that's a very useful comparison, as humans and computers store information in vastly different ways. I'm sorry, I don't have a source for this, but on my psychiatry rotation we learned that memory is fuzzy. The doctor there was describing a study where they asked people relatively soon after Princess Diana got killed what they were doing that day, and then they went back and asked them the same questions a few years later. They discovered that many people gave very different answers than initially. Usually the general details were right (I was at home, etc.) but often the specific details (eg. how / from who you found out) were all wrong. The very interesting finding there was that how certain you were that you were remembering correctly had absolutely no bearing on whether you actually did remember correctly! Anyway, a computer completing a similar task would either give you back the same answer it had given you the first time or tell you that it didn't remember (the data had been erased to make room for other data). It wouldn't recall generalizations and interpolate details like humans do.

-DQé
... Use tasteful words. You may have to eat them.
[ Parent ]
it's possible both are true (3.00 / 6) (#40)
by Delirium on Fri May 27, 2005 at 11:29:34 AM EST

It's possible that simultaneously human performance on analogy tests is well-correlated with general intelligence, yet this performance by machine does not indicate a generally-intelligent machine.

In particular, it may be the case that humans who are able to build up the sort of mental structures to excel at analogy tests are in general able to solve many other reasoning problems as well. With a computer program specifically constructed for the analogy test, that may not be the case: It may be good at taking SATs and not good at the other things that, in humans, the SAT is a pretty good predictor for.

Excellent (2.50 / 2) (#43)
by vadim on Fri May 27, 2005 at 01:46:28 PM EST

I impatiently await the arrival of cute androids who say "Chii!" somewhere in the next 20 years ;-)

More seriously, I'm not sure how much practical stuff will come out of this. IMHO, intelligence is tightly bound to the environment, and pretty much impossible to separate from it.

So, I think that we'll only get real intelligence when we start with a human-like robot and try to engineer intelligence into it, but we probably will not get very far if we continue to obsess over one particular detail and ignore everything else.
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.

tightly bound to the environment? (none / 0) (#115)
by eraserewind on Sun May 29, 2005 at 10:25:07 AM EST

What do you mean exactly? The environment of the human body, or something else external?

[ Parent ]
I mean (none / 0) (#119)
by vadim on Sun May 29, 2005 at 11:03:29 AM EST

That intelligence is the mechanism that allows us to survive. It doesn't make sense for it to exist on its own, since there wouldn't be a point in it.

I think that in order to be able to relate to something, you both have to have many things in common. For instance, take my cat. I kind of understand my cat, but not completely. I don't feel exactly the same things as she does, so I often don't know what she's listening for, or what makes her nervous.

Same goes for AI, IMO. How would you talk to an AI that has never seen the same world as you? It might know that blue and red are colors, but it won't know what they really are unless it also can see.

Things like "blue" are pretty much impossible to explain. How do you explain to a person blind since birth what makes blue different from red? For a dog, blue is just a shade of gray, and for some other animals red could be two different colors.

The problem here is that "blue" is not a fixed concept, it's just our interpretation of a chunk of the spectrum that has absolutely nothing special about it.

So, I don't think we'll ever make something we can see as intelligent unless it perceives the world in a way very similar to our own. Even if you could make an intelligent program that ran on a laptop with no peripherals attached, how would you talk to something that can't see, hear, taste, feel or perceive the 3D space we live in, and whose environment is made of pages on the Internet?
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.
[ Parent ]

Even if their AI is at the Rainman level... (2.50 / 2) (#45)
by Russell Dovey on Fri May 27, 2005 at 02:25:46 PM EST

...that's not too bad. I'd expect the early AIs to seem like brain-damaged humans for quite a while. After that, they'll seem like weird, emotionally immature people. After that, they'll emulate humanity so well we won't know just how fucked-up crazy they really are.

"Blessed are the cracked, for they let in the light." - Spike Milligan

Until... (none / 0) (#56)
by kcidx on Fri May 27, 2005 at 06:27:10 PM EST

...they emulate humanity even more efficiently and kill us all.

[ Parent ]
Why should they??? (none / 0) (#74)
by spacebrain on Sat May 28, 2005 at 07:35:16 AM EST



[ Parent ]
Why should they??? - here comes the text, sorry (none / 0) (#76)
by spacebrain on Sat May 28, 2005 at 08:10:26 AM EST

Provided they will eventually be able to emulate humanity better than we do ourselves (which sounds pretty much like an oxymoron anyway), I think it's far more probable that they'll use us as their slaves, which in turn might not be that surprising for the people living in those times, since the transition will be gradual. I mean, our dependence on machines and technology in general started a LONG time ago already and has evolved pretty "far" by now...

Sure, you can argue that dependence is not necessarily slavery, the former being passive while the latter is active, but I see this as a purely rhetorical distinction. ;-)

And yet, I think that it's getting harder and harder to tell who/what is dependent/enslaved by whom/what as people are connected more and more closely by technological means.

IMHO human civilisation seems to be evolving into some kind of superorganism in which humans will play their role as GOFAI systems, and possibly androids or other autonomous agents will too, however "intelligent" they will ever be considered/perceived (again, by whom/what?).

[ Parent ]

the true mismeasure of machine (2.90 / 10) (#48)
by Polverone on Fri May 27, 2005 at 02:56:45 PM EST

Artificial intelligence is everything that (some) humans can do well that no machines or other animals can do well. As soon as you make a machine perform reasonably well on an AI task, it's discovered that the task has no bearing on intelligence.

Let's look at some of the things that it turns out are not at all intelligent:

-SAT verbal analogies
-Playing checkers, chess, or backgammon
-Optical character recognition
-Symbolic equation manipulation

This has some interesting consequences. It turns out that Go players must be more intelligent than chess players, because there are no really good Go machines. Likewise, polo players are even more intelligent than Go players, because the best polo-playing machines are worse than the best Go-playing machines. Polo and Go players of modest ability are both far more intelligent than the man who is merely good at symbolic algebra. If we really wanted to admit the creme de la creme to institutions of higher education, we would ignore machine-accessible pseudointelligence measures like verbal analogies and logical reasoning, and instead organize a giant polo championship.
--
It's not a just, good idea; it's the law.

It's because... (3.00 / 6) (#50)
by Znork on Fri May 27, 2005 at 04:14:29 PM EST

... many put similar values into 'intelligence' and 'magic'. It's magic until it's explainable by science.

Of course, until you accept that there is no magic, nor any magical intelligence, only as-yet undetermined and unreproducible mechanistic and physical phenomena, you'll keep moving the line, retreating to protect your emotional investment in something that gives you a sense of self-importance. People go to great lengths to that effect, to make their uniqueness unequalled, ranging from ideas about souls to neural quantum connections with the cosmos.

The human brain is a neural network of immense complexity, trained over decades of constant input processing before adulthood and inheriting the collected experience of a record-keeping society. Any machine or manipulated animal with a similar neural network and similar teaching can achieve the same level of intelligence at any specific task. The ability to reproduce pretty much any facet of human intelligence is not particularly surprising.

Like most 'intelligence' tests, the SATs are an indicator of how well an average person might perform elsewhere, based on their results. A program designed for one specific test, or a neural net trained for it, is not an average person, any more than a human kept in a box since birth and trained for that one specific test would be.

Machines beating humans at specific intellectual tasks means no more or less than machines beating humans at specific physical tasks. Complaining about the validity of the SATs on the basis of what a task-specific device can achieve is like complaining about the usefulness of the olympic sprint because a racing car could go faster (ooh, look, relation arithmetic).

[ Parent ]

Strawman (none / 0) (#73)
by ElMiguel on Sat May 28, 2005 at 07:28:36 AM EST

As soon as you make a machine perform reasonably well on an AI task, it's discovered that the task has no bearing on intelligence.

Not exactly. As soon as a machine performs reasonably well on a task that in humans is some sort of indication of intelligence, it's discovered that you can perform the same task in a different way without needing intelligence. Is it that surprising?

Let's suppose, for example, that you take an average student who has scored 56% on the SAT and ask him to explain how he arrived at his answers. Do you think this AI system is anywhere near being able to do that?

[ Parent ]

more a joke than a strawman (none / 1) (#88)
by Polverone on Sat May 28, 2005 at 05:17:54 PM EST

Not exactly. As soon as a machine performs reasonably well on a task that in humans is some sort of indication of intelligence, it's discovered that you can perform the same task in a different way without needing intelligence. Is it that surprising?

Once something is explained in sufficient detail that a machine can perform reasonably well at the task, it ceases to be called intelligent by many people (or at least people won't call the computer's behavior intelligent, even if they'd call a human doing the same thing intelligent). A lot of people apparently want intelligence to be solely open to humans and can't offer more abstract descriptions of intelligence that machines cannot encroach upon. Of course the much-maligned Turing test is one that humans do very well at, and computers very poorly, so it is a litmus test of intelligence that will remain unreddened by computers for a long time. Still, there are many things besides conversation that humans consider intelligent among their own kind, and I see no particular reason to debase those tasks just because computers can do them as well. If you discover that apparently intelligent tasks can be done without "real intelligence" (a human brain), this amounts to a constant moving of the goalposts for machine intelligence.

Let's suppose, for example, that you take an average student who has scored 56% on the SAT and ask him to explain how he arrived at his answers. Do you think this AI system is anywhere near being able to do that?

Look, you're moving the goalposts yourself. I agree that the machine wouldn't be able to offer explanations for its own reasoning, at least not in words. But that's not a part of the SAT.

I think it will be a very long time before machines can compete with human intelligence in a general way. But I also think it's funny how people are quick to pooh-pooh the significance of new computer achievements. Oh, the machine can summarize documents, complete word analogies, play chess, and solve equations? Well let's see it seduce and make love to a beautiful woman!
--
It's not a just, good idea; it's the law.
[ Parent ]

moving goalposts (none / 1) (#111)
by ElMiguel on Sun May 29, 2005 at 06:11:31 AM EST

Human intelligence works in a very specific way and we possess abundant information about it. In the case of SAT verbal analogies, that knowledge is used to design a test that correlates well with human intelligence, and that also has other unrelated advantages (such as being easily and objectively quantifiable). This test was not designed to be a good indicator of general (including non-human) intelligence, so it's no surprise that it does poorly at that.

Look, you're moving the goalposts yourself. I agree that the machine wouldn't be able to offer explanations for its own reasoning, at least not in words. But that's not a part of the SAT.

There is already an objective test that is designed to measure non-human intelligence: the Turing test. But since current AI systems don't do well in that one, the AI community prefers to focus on tests that were not designed for, and are not good at, measuring non-human intelligence. Who is moving the goalposts again?

It's true that explaining the reasoning behind the answers is not a part of the SAT, but why? Because it would be impractical to evaluate, and anyway it is expected that any human who can choose the correct answer can also explain why it is correct. If it were included in the SAT, I expect that human scores would be left without any substantial change while machine scores would plummet. Isn't that significant?

Now let me finish with the contrived and confusing analogy de rigueur: let's suppose there is a test, used in determining which horses are suitable for racing, that involves checking that they have healthy teeth. Then you have some dog breeder who says: "I'm breeding my dogs to be racehorses, and I'm making good progress, since all of them have excellent teeth!". But when he is asked to have one of the dogs compete in a race with the horses, he says: "No way! That's moving the goalposts!" See?

[ Parent ]

maybe (none / 0) (#120)
by Polverone on Sun May 29, 2005 at 03:41:30 PM EST

It's true that explaining the reasoning behind the answers is not a part of the SAT, but why? Because it would be impractical to evaluate, and anyway it is expected that any human who can choose the correct answer can also explain why it is correct. If it were included in the SAT, I expect that human scores would be left without any substantial change while machine scores would plummet. Isn't that significant?

Not to derail this conversation too much further, but I believe that people who score near the median on the verbal analogies of the SAT may not be able to explain all of their answers very well. I would expect to see a fair amount of "A and D were obviously wrong, so I took a chance and guessed C" and other mediocre reasoning that led to the correct answers. With some modifications, I expect the computer could explain its reasoning similarly: "it seemed that the following sentences from my training corpus were relevant to the question, and based on the probabilities determined by equations 1, 2, and 3, I picked C."

Of course I would expect a median human test-taker to sometimes offer much more concise and meaningful explanations of his actions, and I would expect a 90th-percentile human test-taker to often offer more concise and meaningful explanations.

Regarding dogs and horses, I think that (some) horse breeders and racers have said "no dog will ever keep up with race horses." Now a dog breeder has produced a dog that finishes in the middle of the pack of horses, at least in one particular type of race. The horse breeders are quick to point out that the dog is too small, has the wrong kind of teeth, can't hold a jockey, acts nothing like a horse, and utterly fails at many other types of racing. I think the dog breeder's accomplishments should be acknowledged as significant even though I also acknowledge a huge gulf between horse and dog.

Of course motorcycles are far faster than horses and dogs, so the racing metaphor isn't as exciting as the reality that people have produced and continue to produce machines that perform tasks formerly accessible only to human intelligence.
--
It's not a just, good idea; it's the law.
[ Parent ]

Hmm (3.00 / 3) (#49)
by SiMac on Fri May 27, 2005 at 03:46:16 PM EST

I work for the Center for History and New Media at George Mason University. Recently, one of my colleagues created a little tool to take multiple-choice tests using the information contained in Google. (See our H-BOT tool, which has a slightly different purpose, but is based on the same premise.) On the national proficiency tests in U.S. History, we got over 80%. These were fact-based questions, not analogies, but it's still a bit disturbing that school funding is being measured this way.

silly wabbit (none / 1) (#51)
by modmans2ndcoming on Fri May 27, 2005 at 04:31:38 PM EST

The statement about relations from A->B being 1-1 is not a generalization of isomorphism. Isomorphism is already the abstract concept of equivalence.

;-)

i think you misunderstood (none / 0) (#64)
by forgotten on Fri May 27, 2005 at 10:13:34 PM EST

the statement in the text was correct.

--

[ Parent ]

I did not say it was incorrect (none / 0) (#122)
by modmans2ndcoming on Sun May 29, 2005 at 03:56:14 PM EST

but isomorphism is already an abstract concept. You cannot abstract it any more than that.

[ Parent ]
yes you can! -nt (none / 0) (#133)
by forgotten on Mon May 30, 2005 at 09:07:19 AM EST


--

[ Parent ]

how? (none / 0) (#154)
by modmans2ndcoming on Tue May 31, 2005 at 09:22:43 PM EST

one object is isomorphic to another if there is a one-to-one and onto mapping.

How much more abstract can you get than that?
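
For reference, a standard statement of isomorphism in LaTeX (the group case: a bijection that also preserves the operation; other structures substitute their own operations or relations):

    \[
    f : A \to B \text{ is an isomorphism}
    \iff
    f \text{ is a bijection and } f(ab) = f(a)\,f(b) \text{ for all } a, b \in A.
    \]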

[ Parent ]

see the passage in the text. -nt (none / 0) (#155)
by forgotten on Tue May 31, 2005 at 09:36:23 PM EST


--

[ Parent ]

the passage in the text (none / 0) (#160)
by modmans2ndcoming on Wed Jun 01, 2005 at 04:55:13 PM EST

is isomorphism; it is not an abstraction of it.

[ Parent ]
wow! great article! (1.50 / 4) (#52)
by CodeWright on Fri May 27, 2005 at 04:44:09 PM EST

this is VERY relevant to my work.

--
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb? --clover_kicker

The true horror of it all (2.80 / 10) (#53)
by LilDebbie on Fri May 27, 2005 at 05:07:59 PM EST

The SATs are not a mismeasure of intelligence (however loosely defined) and AI is fast approaching humanlike capabilities. All the handwringing about rote memory and critical thinking results from a shared delusion and fear among humanity: we're beginning to discover that we're not as cool as we thought we were (or pretended to be).

Newsflash, people: you have no soul. You have no "higher self" either, for all you secular humanists out there. Your thoughts, emotions, and behavior are simply a complex data set, some of it hard-wired, some of it acquired, some of it reasoned out (and yes, machines can do this - play with Prolog some time if you want to see for yourself). The illusion of consciousness is merely another behavior that gives structure to the vast amount of knowledge stored in your cranium and genes.

Inspiration does not come from the divine. It comes from the right data accumulating in one organism (be it an individual, a research group, or society at large) so that a new conclusion can be drawn from that set. In the early years of man when we didn't know much, this happened frequently at the individual level. Men like Archimedes and Plato made mind-blowing discoveries. People started to get full of themselves and decided that they were part of the Divine, that we are the very image of God.

We're not. At best, all of creation is a reflection of God, but why should we care? As the data set becomes more complete and complex, discoveries by individuals become less and less frequent and all progress is driven by groups, eventually giving way to all progress being done by machine. We can only pray that the machines, like us, allow the previous iteration (ours being animals) some level of freedom and/or existence.

My name is LilDebbie and I have a garden.
- hugin -

Aye. (none / 0) (#163)
by Harvey Anderson on Thu Jun 02, 2005 at 04:36:06 PM EST

You touch on something I think is going to be very important: the fact that we are moving towards a world where there is such a thing as too much knowledge.

Do we want to live in a place where we are taught from birth that every emotion we have and every desire is 'simply' due to this or that? Do we want everyone to analyze deeply why they are how they are, or why they do what they do?

I can't imagine that being anything other than totally depressing, and it would probably spell the end of human progress. ('Why should I do X, when that's only because of this trigger Y?')

[ Parent ]

analogy questions (2.00 / 2) (#55)
by mpalczew on Fri May 27, 2005 at 06:23:20 PM EST

Those analogy questions are not a measure of intelligence.  I can honestly say that I and most people I know would easily get each of those questions right if we had memorized what every word in the English language actually meant.  That would take a huge amount of effort and would be quite useless except for getting good scores on the SAT and for trying to sound smart.  The way they pull words out of their ass right now, some of the questions may as well be in a foreign language.  It's a test of memorization.
-- Death to all Fanatics!
Latent Relational Analysis (none / 1) (#58)
by God of Lemmings on Fri May 27, 2005 at 06:35:25 PM EST

The question before us now is whether Latent Relational Analysis' human-level performance on verbal analogies truly represents an artificial intelligence breakthrough or whether it merely represents the mismeasure of machine.
How about neither? What we have here is a way for computers to understand what we mean on a much higher level. In my opinion, this represents just another misdirected effort towards trying to create a viable intelligence; however, it does have its uses in agents, expert systems, and even better parsers.

Breakthrough is relative (3.00 / 2) (#59)
by schrotie on Fri May 27, 2005 at 07:07:49 PM EST

If one doesn't believe in the immortal divine soul or some such tale, one has to acknowledge the possibility that intelligence is not a monopoly granted us by eating from the wrong (?) tree, but is only maybe a monopoly, and even that only by chance. AI will very likely be constructed some decade, century or millennium down the road - if we don't eradicate ourselves first. Currently there is no obvious reason why it should be in principle impossible to build a computer that has enough computing power to emulate dozens or millions of people in real time, synchronously.

Such a computer would seem vastly more intelligent than humans in terms of IQ, because IQ tests are usually conducted under a tight time limit. But is intelligence only processing speed? Maybe, I don't know. An interesting fact: people scoring high on IQ tests have less active brains for a given task than lower-scoring people - it seems to be at least partly about optimizing certain pathways. Anyway, a specialized computer beat humans. Again. Again it used brute force (terabytes is a hell of a lot).

I don't understand the technology used for the task. Indeed, I don't understand much of the story at all - even though I work in AI research, but then I'm from the bottom-up faction and don't know expert systems at all. I can't see how this helps with computer vision or any other sensor processing (it might help with sensor fusion and high-level analysis though). The technology might be a breakthrough like Hopfield networks or backpropagation, which were huge milestones in AI research. Or like Bayesian networks, which are also rather significant (and not just for filtering your spam). But neither made machines "intelligent" all of a sudden. They are tools for specific classes of problems, and this new technology will likely be one too. Thus we have pattern matching and complex dynamic attractors (Hopfield), plausibility estimation (Bayes) and what (Etter)? Correlation of causal topologies? Tile by tile, machines learn aspects of human intelligence. Remember that a "computer" used to be a human who calculated, not so long ago (it was another age though). Humans suck at calculating when compared to even humble computers. Same for logic, and increasingly many other tasks. It's kind of nice that the process is so slow. Copernicus, Darwin, Freud: they all took away our illusions by formulating one theory and throwing it in our faces. The AI crowd does it step by step. Thanks for that.

And don't expect interesting conversation from your toaster any time soon.

terabytes is only what it read (none / 0) (#61)
by Eight Star on Fri May 27, 2005 at 08:23:26 PM EST

It read terabytes of human language and inferred the relationships. The data it retained was probably much smaller.
This is the big reason I see this as a step forward and not just brute forcing: the program learned these relationships by reading. They weren't programmed in, they were learned.


[ Parent ]
terabytes is still a lot (none / 0) (#62)
by schrotie on Fri May 27, 2005 at 09:10:27 PM EST

I understand that. Short of super clusters there is no way of digging through terabytes in any reasonable time. But terabytes is an awful lot. I did no calculations, but I don't think any college student has had nearly as much training data. And humans do the correlation analysis in passing, while using the data for other purposes. Maybe that is the human magic trick: using the data rather than analysing it. Throwing 10,000 training samples at a Hopfield net to make it recognize flowers is brute force. Humans need far fewer samples (orders of magnitude fewer, like 10). Throwing terabytes of text at this correlation system is brute force. Don't get me wrong, I'm impressed by the result - if it is indeed true. The possibilities are fascinating; the whole thing smells conspicuously like reasoning. But the magic of intelligent humans is the ease and elegance. I have a two-year-old daughter who is rapidly learning to talk. It's a miracle. Humans are so amazingly good at finding patterns. My boss is fond of saying humans are association or correlation machines. If that is true, the SAT breakthrough might indeed be very significant. Time will tell. But it was done with brute force anyway. And why not? Evolution took its time to design us; I wouldn't expect a small team of researchers to beat evolution so fast and easily.

[ Parent ]
Depends (none / 0) (#67)
by Eight Star on Sat May 28, 2005 at 12:12:26 AM EST

I'd guess that most SAT takers have read less than a gigabyte, but in terms of raw data (vision mostly), the question is how many terabytes a human takes in per day. The useful number is somewhere in between. I agree that analyzing the data on the fly is helpful, but I think that's because it actually inflates the amount of 'text' we get. We tell stories about things that happen; we play. We don't have to read stories to know that balls fall down. That is a major disadvantage this program has: ANY program that has to learn only by reading, no matter how smart it is*, is going to need a lot more text than humans read to compensate.

*I can't be sure of that; if it were very, very smart, it might be able to infer a lot of things more quickly than we would guess.

[ Parent ]

There are two distinct issues here. (2.00 / 2) (#60)
by jd on Fri May 27, 2005 at 07:25:18 PM EST

First, if it is possible for a non-intelligent machine to score well on a test, the test is flawed. It should be impossible to pass an exam using only predicate logic. One possible split would be to specifically design tests where 25% is on logic and deductive reasoning, 25% on lateral thinking and interpretive reasoning, 25% on conceptualizing and modelling, and 25% on semantics.

The idea here is that pure rules-based engines should score an average of 25% and a maximum of 50%. So should individuals who apply rules but don't think. 75% should be achievable only by the application of ALL forms of intelligence, and 100% only by the mastery of ALL forms of intelligence.

This would be as true in the "hard sciences" as in the arts. If you can't apply all of your brain to the problem, then you cannot have all of the skills and therefore should not have all of the marks.

This leads on to the second issue, of "intelligent" machines. There is no reason proposed so far why machines could not become intelligent. However, the Turing Test is simply too vague to be a good measure of intelligence, and is useless when you want anything more than a yes/no answer or a study of non-humanlike intelligence.

The breakdown I proposed analyzes different forms of intelligence, not one form alone, and no one form can be used to compensate for the lack of another. This would allow you to test a machine's intelligence by studying its ability to reason on different levels and allow for a study of non-humanlike intelligence as it is not a relativistic system.

You would still need something akin to a Turing Test, this would not replace it, but rather it would extend it to allow you to get a measure of intelligence rather than a mere binary result.

IQ tests, as they stand, are useless for measuring intelligence, as different schools of thought use different types of test and different scales. There is no way to use the result to get a useful, understandable, measure.

The tests also tend to be very culturally-oriented, so different cultures will score differently on the same test. Americans will generally do badly on UK tests, and the British generally do badly on American tests. Who is smarter? Logically, neither, but a single test result would not enable you to prove that.

Why? (none / 0) (#71)
by JaxWeb on Sat May 28, 2005 at 06:10:18 AM EST

I don't see why a machine should not be able to score well. Surely when we think rationally and optimally, we think in a purely logical way. All the non-logical parts of our thinking are incorrect.

I think using predicate logic, with background information, should be enough to pass a test. Otherwise it must be asked, "What is the human doing that is better?" Logic is our most powerful and correct tool.

[ Parent ]

No, logic is A tool. (none / 0) (#94)
by jd on Sat May 28, 2005 at 06:31:39 PM EST

Lateral thinking (see virtually anything by Edward De Bono - no, he's not the lead singer of U2) is highly distinct from logical thinking, but allows the solving of problems that logic alone is quite incapable of solving efficiently. Logic is great when you can apply the data to hand to an algorithm to get an answer, lateral thinking is great when you have data but no algorithm.

Then there is heuristic thinking, which is used in cases where there is insufficient data. As such, it is really the mirror image of lateral thinking.

In order to think WELL, you MUST be capable of applying each of the tools (logic, lateral thinking and heuristic thinking), AND know when each is the correct method to apply to a problem.

For example, chess players think laterally, for the simple reason that a logical solution to chess is not computable in practice. Technically, it is a full-information game, so an algorithm exists, but nobody has the faintest idea what the algorithm is. What you solve are combinatorial problems. Combination plays, probability theory and opening books are the key to chess, not logic. Logic won't get you anywhere.

Now, if you want to solve a non-trivial maze (i.e. there are walls not connected to the outside wall and you are trying to get to some point other than another outside-wall point), then both logic and lateral thinking are useless. You have no data to work with, by definition, and as Sir Arthur Conan Doyle's fictional Sherlock Holmes repeatedly pointed out, logic is useless without data. Here, you apply algorithms to here-and-now situations. No data is required, other than what is immediately available. That is heuristics, an area of game theory that applies to these kinds of situations.

In mathematical terms, the breakdown is simple. Logic works with computational problems only. Nothing else. If it isn't computable, then it isn't solvable by this method. That rules out everything that is NP-Complete, which is 99% of life.

NP-Complete problems can then be broken down into two sub-categories - data-incomplete (the travelling salesman problem is an example of this when in a real situation, as you don't know the state of the roads and can therefore make no allowance for what is probably the biggest part of the problem), and algorithm-incomplete (the packing problem and chess are two examples of this, where you have all the data, but no algorithm to apply it to).

Quantum Mechanics is another area that is NP-Complete, but is both data-incomplete (the Uncertainty Principle prohibits you from having all the information) AND algorithm-incomplete (we have no QM models for gravity, for example). It doesn't help that the systems are ALSO chaotic (another branch of non-computable problems, where the sensitivity to initial conditions is so great that even when full information AND full algorithms exist, you STILL don't know enough.)

Any test of intelligence MUST take into consideration these three classes of thinking, as they are all Rational Thought, but very different forms of Rational Thought, involving very different techniques.

Any test that only applies to pure logic is purely testing your ability to handle the computable, and at this time, there isn't actually any solid proof that human intelligence is itself computable, which means that such a test is not even guaranteed to test that intelligence.

[ Parent ]

Point taken (none / 0) (#114)
by JaxWeb on Sun May 29, 2005 at 07:58:20 AM EST

Point taken that heuristic thinking does have a role (but a role that I think can and should be avoided in an ideal world). Lateral thinking I just think of as logic; I don't think it is really distinct.

From here on I disagree with the correctness of what you say.

"In mathematical terms, the breakdown is simple. Logic works with computational problems only. Nothing else. If it isn't computable, then it isn't solvable by this method. That rules out everything that is NP-Complete, which is 99% of life." - I'm not sure what you are talking about there? Logic works on non-computable stuff, for sure, and NP-Complete doesn't mean non-computable.

Quantum Mechanics is not NP-Complete. NP-Complete means it has been proved that a problem is in the class of problems which can be solved in polynomial time on a non-deterministic Turing machine, and that its solution would be equivalent to that of any other NP-Complete problem. Quantum Mechanics (not being a problem, but a field of Physics, for a start) certainly does not fall into this category.

Chaotic is certainly not non-computable (chaos is just a description of a behaviour some systems exhibit). If you know the algorithm for a chaotic system, that is enough. The problem is that, physically, it is mostly impossible to go from the data to the algorithm, since it looks so complicated.

[ Parent ]

Thanks for a nice article that got me thinking... (none / 1) (#63)
by Oldest European on Fri May 27, 2005 at 09:12:06 PM EST

After reading the article I couldn't get one question out of my mind: what is artificial intelligence, and why do we call it that?

And while I continued thinking about it, it seemed more and more obvious to me that artificial intelligence is just a euphemism for not intelligent.

All we have achieved in the field of artificial intelligence so far is basically the creation of more or less sophisticated tools.

Is a car intelligent because it has ABS?
No it isn't.

Is a computer program that can perfectly translate a text from one language into another intelligent?
No it isn't.

So what is lacking?

Self-awareness - and, coming with that, a will to survive.

As long as those 'artificial intelligences' don't have self-awareness, one shouldn't call them intelligent, but instead just highly sophisticated tools.

And another thing about intelligence: intelligence self-adapts to new situations and environments!

The electronic parts of my car will never learn how to play chess; they will never learn how to write a poem or how to be a good football player.

And someone or something either is or isn't intelligent.

And if a machine gets to the point where one can call it intelligent, I will call it truly intelligent not artificially intelligent.

And in that case a good term might be computer-based intelligence, or maybe silicon-based intelligence - in contrast to carbon-based intelligence.

I think there is a good chance that we will really see truly intelligent machines one day.

And this might also prove that we humans are not half as intelligent as we think we are, because if we create such machines, we will create predators that might just be responsible for our future extinction.

Or if we are a bit more lucky, we will simply become their slaves - wouldn't that be ironic?

Extinction or slavery (none / 0) (#77)
by spacebrain on Sat May 28, 2005 at 08:47:01 AM EST

See my comment http://www.kuro5hin.org/comments/2005/5/26/192639/466/76#76

[ Parent ]
Symbiosis vs slavery (none / 0) (#80)
by Baldrson on Sat May 28, 2005 at 01:02:50 PM EST

I find it a fascinating phenomenon that people usually deny ethnic genetic interests legitimacy on the grounds that symbiosis is the primary result of mixing differing human types -- the premiere example of such human ecological symbiosis usually being given as the gift of the Jews -- and that people about as frequently express fears that highly intelligent robots are likely to enslave us if they come to exist among us.

Isn't it more likely that a symbiotic relationship would arise between entities that have entirely different material foundations, even if one is profoundly more intelligent and powerful than the other, than between members of the same species, who by their very nature are going to contend over the resources upon which their very lives are founded?

-------- Empty the Cities --------


[ Parent ]

Semantics (none / 0) (#149)
by An Onerous Coward on Tue May 31, 2005 at 04:08:55 PM EST

The traditional definition of "artificial intelligence" has always been "teaching a computer to perform some task that,  if performed by a human,  would be considered clever."  We have plenty of examples of artificial intelligence.  What we lack is a general-purpose AI which can solve problems as varied as the ones solved by humans.

I think you've latched onto "artificial" as a dismissive term, which basically precludes an intelligence from being taken seriously. But "artificial" just means man-made, and I see no conflict between something being artificially intelligent and being truly intelligent.

A quick example:  An artificial reality is necessarily a fake reality.  But an artificial voice should probably be considered a real voice.

[ Parent ]

What is Artificial Intelligence? (none / 0) (#165)
by Mudlock on Fri Jun 03, 2005 at 02:48:37 PM EST

The prof for my undergrad AI course had what I consider to be the most accurate definition of AI I've ever heard. Paraphrased:

"People used to say 'We'll have AI when a computer can do X'. Well, we did 'X'. Then they said, 'Well, no, we meant "We'll have AI when a computer can do Y." Well, we did 'Y', and they said 'What we actually meant was...' Enough! AI is using a computer to do anything that no one else thinks you can do with a computer."
--
But everybody wants a rock to wind a piece of string around.
[ Parent ]

Simple Methodology to Solve Analogies (2.50 / 2) (#69)
by asolipsist on Sat May 28, 2005 at 04:43:30 AM EST

"Dr. Turney's group developed a technique called Latent Relational Analysis and used it to extract relational similarity from about a terabyte of natural language text. After reading a wide variety of documents, LRA achieved 56% on the 374 verbal analogy questions given in the 2002 SAT. The average college bound student score is 57%. These are statistically identical scores."
I'm not sure how impressive a feat this is. I developed a simple methodology in 15 minutes that scored 100% on the example SAT analogy questions listed at the following URL:
http://www.freesat1prep.com/sat/verbal/analogies/analogy_questions.htm
This methodology could be implemented using Google, a digital dictionary and probably 20 lines of Perl (10 allowing for obfuscation).
1.) BIRD : NEST ::
(A) dog : doghouse
(B) squirrel : tree
(C) beaver : dam
(D) cat : litter box
(E) book : library


The methodology is as follows:
Search for the first and second question terms in Google using wildcards, like:
"bird * * nest"
Find the first highlighted phrase that's in a sentence and includes at least a noun or a verb. In this case Google finds "bird built her nest". Search for matching cases using the answer choice terms.
In this case:
"dog built her doghouse"
"squirrel built her tree"
"beaver built her dam"
"cat built her litter box"
"book built her library"
In this case Google matched 0 for each term. Replace the pronoun 'her' with other pronouns and try the search again.
After the pronouns were replaced, the tally was:
a) 0
b) 0
c) 7
d) 0
e) 0
Since we found a positive result, choose the answer with the most hits: answer C.
If there are still 0 results, remove any adjectives and try again. If there are still 0 results, change any articles and try again.
I tried this methodology for the next two questions.
2.) DALMATIAN : DOG ::
(A) oriole : bird
(B) horse : pony
(C) shark : great white
(D) ant : insect
(E) stock : savings

First highlighted phrase in a sentence:
"Dalmatian is not an ideal dog."
No results on the first pass; try again with the adjective removed. A scores 1 hit, so choose A.
Question three search yields:
"Doctor from outside the hospital"
On the first pass C gets 6 hits, everything else 0. Choose C.
Score: 100%.

I'm sure this method will fail on some questions, but as simple as it is, it might beat Dr. Turney's technique, "Latent Relational Analysis". I'm sure that with some tweaking it could beat the hell out of it, since Dr. Turney is only getting 56%. What does that say about this type of problem; is it really a 'hard' AI problem? And what does it say if a 20-line Perl script can beat analysis you've decided to name with capital letters?
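
For concreteness, here is a minimal sketch in Python of the pronoun-substitution core of the method above. The hit_count function is a hypothetical stand-in for whatever exact-phrase search backend you have; no real Google API is assumed.

    PRONOUNS = ["her", "his", "its", "their"]

    def hit_count(query):
        # Hypothetical stand-in: return how many documents your search
        # backend matches for the exact phrase `query`.
        raise NotImplementedError

    def solve_analogy(choices, template):
        # `template` is the connecting phrase mined from the stem pair,
        # e.g. "built her" from the highlighted hit "bird built her nest".
        tally = {}
        for label, (a, b) in choices.items():
            hits = 0
            for pronoun in PRONOUNS:
                phrase = template.replace("her", pronoun)
                hits += hit_count('"%s %s %s"' % (a, phrase, b))
            tally[label] = hits
        best = max(tally, key=tally.get)
        # 0 hits everywhere means: strip adjectives, retry, and so on.
        return best if tally[best] > 0 else None

    choices = {"A": ("dog", "doghouse"), "B": ("squirrel", "tree"),
               "C": ("beaver", "dam"), "D": ("cat", "litter box"),
               "E": ("book", "library")}
    # With a backend behaving like the counts quoted above,
    # solve_analogy(choices, "built her") tallies 7 hits for "C"
    # and 0 for everything else, and so returns "C".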

I disagree (none / 1) (#72)
by JaxWeb on Sat May 28, 2005 at 06:16:08 AM EST

While it is nice that your method works, it is 'cheating' by searching the data whilst the test is in progress. It isn't actually having any idea about what is happening; it is just relying on what other people have written.

The odd question is: if it 'learnt' the whole of Google beforehand, and then used data from a stored version of Google, would it still be cheating? You could then just argue that it is using its knowledge, which is acceptable (you're not going to do very well in this test without a bit of knowledge, are you?)

[ Parent ]

But isn't this a "sentient" network? (none / 0) (#81)
by dr zeus on Sat May 28, 2005 at 01:14:57 PM EST

One of the proposed routes to developing super-AI is a sentient network - wouldn't a script like this be an appropriate beginning?

[ Parent ]
thats the point (none / 0) (#89)
by asolipsist on Sat May 28, 2005 at 05:22:06 PM EST

"Dr. Turney's group developed a technique called Latent Relational Analysis and used it to extract relational similarity from about a terabyte of natural language text. "

I'm not sure how their approach is much better, or 'real AI'; it seems like they're doing something similar, just with more up-front processing. My point is that if the problem can be solved by a dumb tool, i.e. Google, then it's not really that hard a problem, and it's hard to say if 'Latent Relational Analysis' is really doing anything very interesting.

[ Parent ]
Yea (none / 0) (#113)
by JaxWeb on Sun May 29, 2005 at 07:48:17 AM EST

Yes I suppose in that respect you are correct.

[ Parent ]
Measuring AIQ or Artificial Intelligence Quality (none / 0) (#91)
by Baldrson on Sat May 28, 2005 at 05:50:39 PM EST

You're ignoring the AI's AIQ, or artificial intelligence quality. It's a very severe problem that deserves a major technology prize award; it's at least as deserving of funding as any technology prize. I've devoted a web page to such a prize. I may write a separate article about the prize, but here's the text of that web page, available via the article's concluding link:

The C-Prize
The most crucial technology prize of all.
By Jim Bowery
Copyright May 2005
The author grants the right to copy and distribute without modification.

Since all technology prize awards are geared toward solving crucial problems, the most crucial technology prize award of them all would be one that solves the rest of them:

The C-Prize -- A prize that solves the artificial intelligence problem.

The C-Prize award criterion is as follows:

Let anyone submit a program that produces, with no inputs, one of the major natural language corpuses as output.

S = size of uncompressed corpus
P = size of program outputting the uncompressed corpus
R = S/P (the compression ratio).

Award monies in a manner similar to the M-Prize:

Previous record ratio: R0
New record ratio: R1=R0+X
Fund contains: $Z at noon GMT on day of new record
Winner receives: $Z * (X/(R0+X))

Compression program and decompression program are made open source.
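
To make the award arithmetic concrete, here is a worked example in Python with made-up numbers:

    # Hypothetical figures for illustration only.
    R0 = 5.0      # previous record compression ratio
    X = 1.0       # improvement, so the new record is R1 = R0 + X = 6.0
    Z = 100000.0  # dollars in the fund at noon GMT on the day of the record

    payout = Z * (X / (R0 + X))
    print(payout)  # 16666.66...: raising R from 5 to 6 claims 1/6 of the fund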

Explanation

A very severe meta-problem with artificial intelligence is the question of how one can define the quality of an artificial intelligence.

Fortunately there is an objective technique for ranking the quality of artificial intelligence:

Kolmogorov Complexity

Kolmogorov Complexity is a mathematically precise formulation of Ockham's Razor, which basically just says "Don't over-simplify or over-complicate things."

Any set of programs which purport to be the standards of artificial intelligence can be compared by simply comparing their Artificial Intelligence Quality. Their AIQs can be precisely measured as follows:

Take an arbitrarily large corpus of writings sampled from the world wide web. This corpus will establish the equivalent of an IQ test. Give the AIs the task of compressing this corpus into the smallest representation. The AIQ of an AI is simply the inverse of the sum of its length (remember, the AI is a program and each program has a length in bits) plus the length of its compressed representation of the corpus. Presuming the same AI is used to decompress as to compress the corpus, and the compressed corpus is included in the data section as a string literal, the resulting program has no inputs. The length of this program is, in fact, the formal definition of the Kolmogorov Complexity of the corpus as estimated by the AI. The lower the Kolmogorov Complexity estimated by the AI, the higher the quality of the AI.
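
Read literally, that measurement is a one-liner; a minimal sketch in Python (the names here are mine, not part of any spec):

    def aiq(decompressor, compressed_corpus):
        # The self-extracting program is the decompressor plus the compressed
        # corpus as a literal; its total length is the AI's estimate of the
        # corpus's Kolmogorov complexity, and AIQ is the inverse of that.
        return 1.0 / (len(decompressor) + len(compressed_corpus))

A smaller total -- whether from better compression or from a smaller, smarter decompressor -- scores a higher AIQ, so an entrant can't win by hiding a huge model inside the program.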

Mechanics

The C-Prize is to be modeled after the Methuselah Mouse Prize, or M-Prize, where people make pledges of money to the prize fund. If you would like to help with the setup and/or administration of this prize award, similar to the M-Prize, let me know by email.

-------- Empty the Cities --------


[ Parent ]

AIQ? (none / 0) (#95)
by asolipsist on Sat May 28, 2005 at 06:42:22 PM EST

How does Dr. Turney's solution have a significantly higher "AIQ" than my solution?

"Kolmogorov Complexity

Kolmogorov Complexity is a mathematically precise formulation of Ockham's Razor, which basically just says "Don't over-simplify or over-complicate things." "

Shouldn't I have just won your prize? My program is significantly less complex and performs at least 30% better.

Saying there is such a thing as AIQ, and that my solution doesn't address it, seems analogous to hand-waving. Didn't Dr. Turney's group use over a terabyte of seed data?

I think the onus is on a supporter of this research to show why:

1) the analogy problem is hard

This is going to be tricky, since I demonstrated a 20-line Perl program that can solve this problem 30% better than Dr. Turney's group.

2) the solution demonstrates 'intelligence' rather than brute force statistical analysis

Dr. Turney's group used over 1 terabyte of data?

I'm not saying this is not groundbreaking research; I'm pointing out that solving these analogies is easy given a large data set, and that the problem and solution need to be elucidated in a way that shows why Dr. Turney's method is interesting.

Your original statement was:

"If a computer program took the SAT verbal analogy test and scored as well as the average college bound human, it would raise some serious questions about the nature and measurement of intelligence.

Guess what?"

Well, guess what: I developed a method in 15 minutes that can solve SAT verbal analogy test problems significantly BETTER than the average college-bound human, and I'm still not sure why this is meaningful.


[ Parent ]

"with no inputs" (none / 0) (#97)
by Baldrson on Sat May 28, 2005 at 07:03:41 PM EST

Your program uses the world wide web as an input.

If you want, you can include the world wide web as part of your program and add to the size of the program accordingly.

-------- Empty the Cities --------


[ Parent ]

Ok (none / 0) (#101)
by asolipsist on Sat May 28, 2005 at 10:42:44 PM EST

I bet that if I restricted Google to a 500 GB pile of text, like the Library of Congress, my method would still do well; this is significantly smaller than the 1 terrabyte of data Dr. Turney used. Also, you didn't address any of the other points.

[ Parent ]
oops, terabyte (none / 0) (#102)
by asolipsist on Sat May 28, 2005 at 10:43:59 PM EST



[ Parent ]
His program is a lot less than a terabyte. (none / 0) (#103)
by Baldrson on Sat May 28, 2005 at 11:08:09 PM EST

The fact that Turney mined a terabyte of data to construct his program doesn't mean his program is a terabyte.

His program is on the same order of magnitude as a thesaurus.

Feel free to construct a similar program.

-------- Empty the Cities --------


[ Parent ]

I agree there is a large difference (none / 0) (#105)
by asolipsist on Sun May 29, 2005 at 12:02:50 AM EST

I understand the difference between Dr. Turney's method and my own, and why his is far more sophisticated and potentially interesting; I was being slightly facetious in the above posts.

My main point was that I was surprised by how trivial the original problem was to solve, especially considering your assertion in the article that solving this type of problem is a major breakthrough.

It is hard for me to deduce whether 56% on multiple-choice SAT analogies is a very significant or interesting result when a regex and some simple rules can beat it; it might be interesting, or the method might be on the same level as all the other symbolic logic systems, like ALICE, that are fairly useless. It would be interesting to see if this 'web of relationships' can do things that Google and regexes cannot.

[ Parent ]

Reread the article's mention of Kolmogorov (none / 0) (#107)
by Baldrson on Sun May 29, 2005 at 01:45:29 AM EST

You didn't comprehend the section on Kolmogorov complexity. I reiterated and explicated it for a very good reason, and you really should try to understand it, for it does address the point you raise.

Think about it like this:

Imagine a contest where the task was to compress the world wide web to a minimal program. How many lines of Perl would it take you to do it given your cheat?

-------- Empty the Cities --------


[ Parent ]

Kolmogorov complexity-schmexity (none / 0) (#108)
by asolipsist on Sun May 29, 2005 at 03:09:34 AM EST

I think AI researchers worry far too much about issues like compression and skip over real AI problems, such as a program that can surpass the general learning ability of a hamster or a program that can differentiate between a fence post and the road.

The above is a good example of a problem that isn't nearly as interesting as it seems unless a lot of conditions are met, such as the program really 'understanding' the relationships. Dr. Turney's method might produce some of this, but solving SAT analogy problems isn't much of a measure of real 'understanding' in and of itself.

[ Parent ]

Then you don't understand compression. (none / 0) (#116)
by Baldrson on Sun May 29, 2005 at 10:25:49 AM EST

Compression requires prediction. The better the prediction, the better the compression. This is a definition of information going back to Shannon. If you can predict what someone is going to say, you have modeled them -- hence their mental processes.
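
To put a number on that, here is the Shannon cost of a prediction under an optimal code, sketched in Python:

    import math

    def code_length_bits(p):
        # An outcome your model assigns probability p costs -log2(p) bits
        # under a code optimal for that model.
        return -math.log2(p)

    print(code_length_bits(0.5))   # 1.0 bit: a confident, correct prediction
    print(code_length_bits(0.01))  # ~6.64 bits: a poor prediction, longer encoding

The better the model predicts the next symbol, the fewer bits the encoding spends on it.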

-------- Empty the Cities --------


[ Parent ]

I understand Shannon and Nyquist all too well (none / 0) (#121)
by asolipsist on Sun May 29, 2005 at 03:51:20 PM EST

Focusing on compression is probably why AI is in the sorry state it is in today. I think it is you who fails to understand that the SAT analogy problem isn't in and of itself very hard, which was empirically demonstrated, and that it has about as much correlation with 'g' in machines as playing chess does. The interesting bit isn't that my method uses a program orders of magnitude larger; the interesting bit is that the original problem isn't much of a measure of 'intelligence'.

[ Parent ]
You have it precisely backwards... (none / 0) (#123)
by Baldrson on Sun May 29, 2005 at 04:06:24 PM EST

AI has not focused on compression -- it has focused on coming up with a priori rules from "experts" and the like, hoping to create a better model than they think you can get from the original data. Look at the Cyc project for a perfect example of this failure mode. Starting with the data and applying Ockham's Razor is exactly what they have not done.

-------- Empty the Cities --------


[ Parent ]

Who says intelligence has to be "hard"? (none / 0) (#124)
by Baldrson on Sun May 29, 2005 at 04:13:13 PM EST

There are many many examples throughout history of people barking up the wrong tree for a long time before they finally get a basic concept or two correct. A good example is the way physics was barking up the wrong tree until the concept of including momentum in the state of a moving body was created. Once that problem was resolved progress was very rapid. Relational similarity/congruence may be to AI as momentum was to mechanics. There is good reason to believe it is simply from the strong association with 'g' in human tests.

-------- Empty the Cities --------


[ Parent ]

Corpora (none / 0) (#104)
by KWillets on Sat May 28, 2005 at 11:09:36 PM EST

From a fellow latin-mutilator.

[ Parent ]
Losslessly? (none / 0) (#112)
by whazat on Sun May 29, 2005 at 06:33:04 AM EST

It sounds like you are expecting AI to be some super lossless compression technique. This approach, however, would fall foul of the mathematics of compression, which says you can't compress everything all of the time (a simple counting argument: there are 2^n strings of n bits, but only 2^n - 1 strictly shorter descriptions to map them to).

[ Parent ]
Yes losslessly but you miss the point. (none / 0) (#118)
by Baldrson on Sun May 29, 2005 at 10:32:24 AM EST

The point isn't to compress everything all the time. The point is to do so as well as or better than a human could have done.

Compression requires prediction. The better the prediction, the better the compression. This is a definition of information going back to Shannon. If you can predict what someone is going to say, you have modeled them -- hence their mental processes.

-------- Empty the Cities --------


[ Parent ]

I was seeking clarification (none / 0) (#125)
by whazat on Sun May 29, 2005 at 05:59:52 PM EST

So if the goal isn't to compress everything all the time, what should you compress, and why?

What makes the body of knowledge from the internet the correct arbiter of intelligence? Why is that a better test than, say, trying to compress Enigma signals?

I'm pretty sure I could design a program specialised for either task that would be smaller than one I would characterise as intelligent, that is, one able to alter itself (by interacting with other intelligences) so as to compress either.

[ Parent ]

Goal: Maximum compression ratio (none / 0) (#127)
by Baldrson on Sun May 29, 2005 at 10:09:23 PM EST

You compress text so as to achieve the maximum overall compression ratio. The text from the internet is simply a sampling of human knowledge which is the product of human thought processes. If it is biased in some way that should be reflected in the compression rules. Enigma signals aren't an attempt to articulate and communicate -- they are an attempt to obfuscate.

Think about it like this -- if you had a list of data describing the altitude of a falling object, coupled with the time of each corresponding altitude, you'd be able to compress it by "discovering" the law of gravity, referring to that, and then listing whatever residue is left. Discovering the law of gravity is important for the same reason discovering any law is important -- it lets you generalize your model more effectively and deal with the world beyond your sample.
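
A minimal sketch of that falling-object example in Python (the measurements are invented for illustration):

    import numpy as np

    # Made-up altitude-vs-time readings for a dropped object.
    t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])            # seconds
    h = np.array([100.0, 98.78, 95.12, 88.95, 80.41])  # metres, slightly noisy

    # "Discover the law": fit h(t) = h0 + v0*t - (g/2)*t^2 as a quadratic.
    coeffs = np.polyfit(t, h, 2)         # leading coefficient ~ -4.9, i.e. g ~ 9.8
    residue = h - np.polyval(coeffs, t)  # small residuals, kept verbatim as "noise"

    # Storing three coefficients plus tiny residuals takes far fewer bits
    # than storing the raw table: discovering the law buys the compression.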

-------- Empty the Cities --------


[ Parent ]

Hmm (none / 0) (#132)
by whazat on Mon May 30, 2005 at 07:06:47 AM EST

But both understanding text from the internet and understanding Enigma are acts of intelligence... even if the Enigma code is obfuscated. Understanding the internet is more common, but that doesn't mean much to me.

How would you cope with over-fitting? Would you have test sets and training sets? We do not try to losslessly predict gravity; our prediction is very much a lossy proposition. As I expect intelligence in general will be.

Also bear in mind that as the length of string tends to infinity it become losslessly uncompressible.

Are you familiar with the Goedel machine? You might find it interesting.

[ Parent ]

Try decryption if you like... (none / 0) (#138)
by Baldrson on Mon May 30, 2005 at 10:17:51 PM EST

The problem isn't to maximally compress everything to the theoretical Kolmogorov complexity, but to provide a metric that allows one to rate AIs against each other on their quality. Sure, if you come up with a decryption algorithm for Enigma strings, and there are a lot of Enigma strings in the sample for some reason, you'll win -- all else being equal. But this is really getting tendentious.

And I have no idea what you're referring to when you say "as the length of string tends to infinity it become losslessly uncompressible". Are you sure you worded that correctly?

-------- Empty the Cities --------


[ Parent ]

You get what you measure for (none / 0) (#143)
by whazat on Tue May 31, 2005 at 10:04:48 AM EST

So, like the Turing test, which gets you chatterbots, this test would get you compression algorithms suited to the domain you are testing on (and so not general AIs). And that is what I meant by overfitted: you couldn't be sure they would be any good in other fields, compressing motor impulse sequences for example.

[ Parent ]
Domain of the Web (none / 0) (#144)
by Baldrson on Tue May 31, 2005 at 11:10:17 AM EST

What is the "domain" of the web?

-------- Empty the Cities --------


[ Parent ]

Well (none / 0) (#151)
by whazat on Tue May 31, 2005 at 08:03:56 PM EST

Static Text, Static Video, Static Sounds

Unless you are somehow getting it to play web games as well.

As such you are missing

Smells, Reactive Video (as in turning the head affects the video stream), Reactive Text (as in conversations), Reactive Sounds, Motor control signals (both human and otherwise) and Touch Sensor feedback. And a gamut of other scientific data that we don't generally put on the web.

[ Parent ]

Overfitting vs Lossless Compression (none / 0) (#139)
by Baldrson on Mon May 30, 2005 at 10:30:57 PM EST

When you do lossless compression you aren't "overfitting"... you are fitting. Ockham's Razor doesn't say to throw out data. You are allowed to keep all the residue around, labeled as "noise" or "error", if you like.

-------- Empty the Cities --------


[ Parent ]

A damn fine article (2.50 / 2) (#70)
by Scrymarch on Sat May 28, 2005 at 05:46:47 AM EST

Usually this comment would be redundant and bad form, but I've been very critical of Baldrson's bias and lack of disclosure in the past.  For what it's worth, this article is both fascinating and scrupulously evenhanded.  Thanks for writing it.

I blame it on Descartes (none / 0) (#79)
by dollyknot on Sat May 28, 2005 at 12:43:14 PM EST

Plato said "One should separate reason from passion"; because of that, I suspect he would take issue with Descartes's 'Cogito Ergo Sum'. We do not *think* alive - we *feel* alive (therein lies the lonely mystery of qualia :)

Instead of looking for artificial intelligence, we should be looking for artificial consciousness. We think we are capable of measuring intelligence when we don't seem to have a clear idea of what intelligence is.

Personally I think intelligence relates to *how* whereas wisdom relates to *why*.

Human beings are very much goal-orientated - to eat, defecate, procreate, assimilate, and so on and so forth. These desires motivate our actions and shape how we interact with others and the environment. The fact that some people appear to be more efficient at these processes than others (I mean efficient in terms of taking more than one gives - sort of an economic power-to-weight ratio :)) will not, I suspect, lead us to true AI or AC :) I fear that it will instead lead to a more efficient enslavement of ordinary people by the multinationals, and to more efficient killing machines with which to wipe out those who do not conform to the hegemony of consumerised, brainwashed capitalism, motivated by profit and profit alone.

BTW Baldrson, very nice article. Perhaps you will find an analog for AI, but beware you do not end up with Spock, AKA Leonard Nimoy.


They call it an elephant's trunk, whereas it is in fact an elephant's nose - a nose by any other name would smell as sweetly.

Hrmm... (none / 0) (#90)
by araym on Sat May 28, 2005 at 05:43:28 PM EST

I don't know if I'd call defecating one of my goals, though it does seem to happen fairly often.

-=-
SSM

[ Parent ]
Don't do it for a while (none / 0) (#110)
by monkeymind on Sun May 29, 2005 at 03:46:57 AM EST

Then see how quickly it becomes a goal you eagerly want to complete.

I believe in Karma. That means I can do bad things to people and assume they deserve it.
[ Parent ]

define consciousness so it can be found [nt] (none / 0) (#93)
by boxed on Sat May 28, 2005 at 06:02:02 PM EST



[ Parent ]
Index2, Nervous system, 2.5 D, rational, ~D (none / 0) (#99)
by levesque on Sat May 28, 2005 at 08:11:40 PM EST

/

[ Parent ]
great article (none / 0) (#84)
by transient0 on Sat May 28, 2005 at 02:55:55 PM EST

but a statistical nitpick.

"Statistically identical scores" is a really sloppy term. I know it's too late to edit the article, but I would have liked to see some reference to the standard deviation, even if only in a footnote.
---------
lysergically yours

I'm not impressed. (none / 0) (#86)
by SIGNOR SPAGHETTI on Sat May 28, 2005 at 03:50:16 PM EST

What's next, artificial ONE? Preposterous!

--
Stop dreaming and finish your spaghetti.

The elephant in the living room (none / 1) (#106)
by Fen on Sun May 29, 2005 at 01:36:57 AM EST

As usual, the elephant is not mentioned or considered. That being English. It is an ambiguous, deeply flawed language -- like every other natural language. There is an alternative: Lojban. It can be parsed like C or Java.
--Self.
ROFL. (none / 0) (#140)
by Lisa Dawn on Tue May 31, 2005 at 02:42:46 AM EST

Yeah, that's what we need, a language susceptible to Godel Incompleteness, as a feature.

Nay, ambiguity is the feature. Pedantry is not what we need, except in court.

[ Parent ]

Syntax & Inane Turing Worship (none / 0) (#128)
by twestgard on Sun May 29, 2005 at 10:24:26 PM EST

I can already see that there will be a period of "adolescence" in these machines, where they'll be used to try to "predict" and "deduce" things in real life. Police and prosecutors will get ahold of them, and their output will be used to determine who gets arrested and whose houses get searched. But they won't really be all that good. People will get arrested for using figurative speech, like jokes or sarcasm, that the computer didn't understand properly. But I guess that's not really any more random than skin color or family income, so maybe it's not worse than the current system.

But that brings me to my other point - I'm mystified by this Turing Worship. This is unscientific navel-gazing. Turing was a smart man with an amazing career coupled to a compelling and ultimately tragic personal story. Fascinating historical figure. But if the Limeys hadn't driven him to suicide, I guarantee you he wouldn't be spending all this time wondering if a particular set of tools were "intelligent" by this or that standard. Nor would he want be seen as someone who inspired that kind of pointlessness. Turing was a very practical man. He'd be coming up with interesting ways to use these tools to help people. Taking an expired version of the SAT is about the least useful application one could devise. So the premise of this story is about the same as "Caveman bangs stick against rock until one breaks." Waste of energy, time, sticks & rocks. Caveman should have been making a stone axe. What should we be making? Not this.

Thomas Westgard
Illinois Mechanics Liens

Now, Here's a Real Task for AI! (none / 0) (#129)
by twestgard on Sun May 29, 2005 at 11:31:08 PM EST

On one Kuro5hin page is an article about some halfwit who spent god knows how much time teaching a machine to take an expired version of the SAT. Teaching a machine is a fine idea, but what a pointless task this machine was assigned!

On another Kuro5hin page is an article whose author sees cell phones responding to stimuli in the real-world environment, but he expects those stimuli to be preprogrammed. What's missing here!?

I want a cell phone that uses AI to analyze unprogrammed real-world stimuli. Now, that's a goal! How about a phone that provides advice like this: "Last time your blood pressure was this high and you had this little sleep, you ended the day by breaking two years of sobriety." Or, "The person in the green shirt by the potted plant is staring at you and their body language indicates sexual receptivity." Now, that would be an advance in technology.

Thomas Westgard
Illinois Mechanics Liens
[ Parent ]

AI and Turing (none / 0) (#130)
by zakalwe on Mon May 30, 2005 at 05:24:18 AM EST

The idea of AI would be around irrespective of Turing, and probably rightly so, given both the philosophical and practical ideas involved. For that same reason, it would also have the same share of bogus predictions and misrepresented results.

Turing's sole contribution to AI is the Turing test, which I think is still a pretty sensible measure. The only problem is that people confuse a test with a signpost. The Turing test might tell us when we're there, but it doesn't give any clue as to how to get there. Unfortunately some people think that creating a better chatterbot or solving some restricted subdomain of the problem means that they are getting closer, when fundamentally they're wandering down dead ends (possibly practically useful dead ends, but still dead ends as far as AI is concerned).

The fact that performance in such analogy tests is indicative of human intelligence is completely irrelevant. Performance in mental arithmetic is probably also indicative, but no one would claim the computer's superiority to man there is evidence of intelligence. Ultimately this says nothing about either progress towards AI or about the analogy tests' relevance as a measure of intelligence.

[ Parent ]

Turing test reasonably useful, but not an end. (none / 0) (#134)
by HuguesT on Mon May 30, 2005 at 11:37:23 AM EST

From a distant perspective, the idea of a machine able to pass the Turing test sounds reasonable and useful.

However, researchers have tried to come up with machines whose only goal is to pass that test, and they try to do it by brute force (learning language patterns) and deception (changing the subject, cute answers, etc.), instead of trying to develop "true AI" (whatever that means).

Therefore people have started to realize that the Turing test is by itself not that useful as a means of measuring progress in machine intelligence. However, we would not have known that had people not tried to essentially cheat on that test.

However I'm not mystified myself by the interest in the TT. This is because the TT carries a prize and an annual competition, and it is perhaps better than doing nothing. If you have any productive idea on how to better proceed then please share it with the AI community!

[ Parent ]

Measurement of Intelligence is Useless (none / 0) (#145)
by RadiantMatrix on Tue May 31, 2005 at 11:40:39 AM EST

The "exact measurement" of intelligence is a pipe dream, and it is likely to remain so.  It's like measuring how sexy someone is -- it depends far too much on a definition that requires subjectivity.

The question "what does it mean to be intelligent?", or more appropriately, "what does it mean to be more (or less) intelligent?" has no objective answer.  And, if we try to assign an objective answer, not only are we likely using circular reasoning but we are defining intelligence in a way that almost everyone will disagree with to some extent.

"Intelligent" is a subjective adjective, like "big", "small", or "sexy".  People who drive a Hummer think my VW Jetta is "small", but people who drive a Geo Metro think the same car (the Jetta) is "big".  Lots of people think Kate Winslet is sexy, I think she's boring and not sexy at all.  It's all subjective, and it has a lot to do with what the speaker's experiences are.  Intelligence is the same way: how many times have you heard (especially in arts) someone say "that person is either a genius or an idiot"?

I consider my wife and me to both be of above-average intelligence.  I can code well in several languages; my wife has a bit of trouble with any kind of programming - on first meeting, many geeks would think she wasn't that bright.  However, she is a highly talented Classical musician as well as a budding composer; those in her field think she is very smart and that I'm a bit of a twit (I can barely even play an instrument).

Who is right?  Are we both intelligent?  If so, why do we have so few common mental abilities?  If not, which one of us is "really" intelligent?

The answer is, not surprisingly, that we are both intelligent in our own way.  Any attempt to ultimately define intelligence as anything more than an abstract concept is a waste of time.  Not only that, but such a definition will ultimately lead to the repression of talented individuals -- someone we deem to be "unintelligent" will be denied opportunities, and their potential may never be realized.  The arts -- and the sciences -- are full of stories about discoveries that almost never were.  The lesson we should learn from these stories is that everyone should be given opportunity, because we have no way to know what someone is capable of.
--
I'm not going out with a "meh". I plan to live, dammit. [ZorbaTHut]

AI is far from human brain (2.00 / 9) (#146)
by Kitch on Tue May 31, 2005 at 12:16:34 PM EST

You know, all this stuff is just models of how the human brain works, but nobody really knows what's going on inside it. And obviously we can't build such a model. Why? The answer is simple: because of the non-algorithmic basis of the brain. How can we linearize something that is non-linear without data loss?

Which is it? (none / 0) (#148)
by An Onerous Coward on Tue May 31, 2005 at 03:50:51 PM EST

First,  I would point out that nobody is actually claiming that the program in the story is performing its achievement by mimicking the way the human brain would go about it.

You say that nobody knows how the human brain works,  yet you claim to know for certain that the human brain is "non-algorithmic".  The two claims are logically contradictory.  In order to prove that the brain was non-algorithmic,  it would have to be shown that the processes of the brain cannot be performed on a Turing machine.  I don't see how you could prove such a thing without understanding what the brain is actually doing.

The second bit,  about linear vs. non-linear... well,  I'm not really sure what you're getting at there.  Care to go into detail?

[ Parent ]

linear nonlinear (none / 0) (#150)
by dollyknot on Tue May 31, 2005 at 05:02:54 PM EST

I would guess linear/nonlinear is a mathematical construct, not a linguistic construct. Some equations resolve, some don't. PI is an equation that never resolves, therefore it could be said to be a non-linear equation and irrational.

Before the advent of computers, when people used pencil and paper, they routinely rounded decimal fractions up and down as tho' the twiddly bits to the far right of the decimal point were largely irrelevant.

The first man who kind of got an inkling was Edward Lorenz - have a look here.

Peter


They call it an elephant's trunk, whereas it is in fact an elephant's nose - a nose by any other name would smell as sweetly.
[ Parent ]

yfi (none / 0) (#156)
by curien on Wed Jun 01, 2005 at 02:04:11 AM EST

PI is an equation that never resolves...

Pi is a number not an equation.

Some equations resolve, some don't.

What the fuck does that mean?

Edward Lorenz

Oh... you're one of those. Look, chaos is great for modeling some types of complex systems (like turbulence and weather). For other types of systems, it really sucks. We got to the moon using nothing but Newtonian physics (relativistic effects weren't even accounted for, according to my physics teacher). If we had tried to model the trajectory chaotically, we'd still have smart folks sitting in a room with a drawing board at what's now the LTA annex at Langley AFB.

--
This sig is umop apisdn.
[ Parent ]

Pi (none / 0) (#159)
by dollyknot on Wed Jun 01, 2005 at 11:17:26 AM EST

Pi is a number not an equation.

Divide the circumference of a circle by its diameter to derive Pi; this is in no way as simple as it sounds. You can crudely approximate Pi with a fraction like 22/7. So, as you say, Pi is a number, but you will only ever approximate Pi with such a number - unless you would like to go down the road of the people who came up with this:

In 1897 a bill to fix the value of pi at 4 was passed by the House of the Indiana state legislature (73). The bill was referred to the state senate, where it seemed sure of passage, but in the meantime various newspapers had made the legislature such an object of ridicule that the Senators decided to shelve the bill indefinitely. This turn of events especially disappointed the State Superintendent of Public Instruction, who had been hoping to secure the use, free, of the new copyrighted value of pi for his state textbooks. "The case is perfectly simple. If we pass this bill which establishes a new and correct value of pi, the author offers our state without cost the use of this discovery and its free publication in our school textbooks, while everyone else must pay him a royalty."

from here

There are many different *equations* for generating Pi, but all of them only give an approximation. I just felt that the term equation gave more of the flavour of Pi than saying it is a number, because it is impossible to actually state what the number Pi is - or perhaps you could tell us what number Pi is.
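
For example, the Nilakantha series is one of those equations, and any finite number of terms only gives an approximation (a minimal sketch; the choice of series is mine, purely for illustration):

    # Nilakantha series: pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ...
    # Every partial sum approximates pi; none of them *is* pi.
    def nilakantha(terms):
        total, sign = 3.0, 1
        for k in range(1, terms + 1):
            n = 2 * k
            total += sign * 4 / (n * (n + 1) * (n + 2))
            sign = -sign
        return total

    for t in (1, 10, 100, 1000):
        print(t, nilakantha(t))  # creeps toward 3.14159265...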

Some equations resolve, some don't.

What the fuck does that mean?

Meaning you can calculate some things for ever and ever, such as the square root of two.

Edward Lorenz ... Oh... you're one of those. Look, chaos is great for modeling some types of complex systems (like turbulence and weather). For other types of systems, it really sucks. We got to the moon using nothing but Newtonian physics (relativistic effects weren't even accounted for, according to my physics teacher). If we had tried to model the trajectory chaotically, we'd still have smart folks sitting in a room with a drawing board at what's now the LTA annex at Langley AFB.

I beg your pardon - "one of those"? You do not know anything about me, so how can you say that, whatever "one of those" might be?

Multi-cellular biological systems have a non-linear structure, i.e. they are fractals. A good example of this is the blood circulation system: the aorta emerges from the heart, then bifurcates into two arteries, which then bifurcate into four arteries, eight, sixteen, and so on and so forth down to the capillaries, ensuring a blood supply for every cell in the body. The blood circulation system is fractal because it has self-similarity. Also, the ratio of bifurcation conforms to Feigenbaum's ratio, 4.669. Feigenbaum's constant is also irrational, just like Pi. You can see a picture of Feigenbaum's diagram here

Another very good example of a biological fractal is the lungs. Because of the fractal order evident in biological systems, I would be very surprised if a fractal-type order was not present in our brains.
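
The Feigenbaum ratio mentioned above is easy to see numerically (a minimal sketch; the bifurcation thresholds of the logistic map below are standard published values, not something this sketch derives):

    # Successive period-doubling thresholds of the logistic map
    # x -> r*x*(1-x); the spacing between them shrinks by a factor
    # approaching Feigenbaum's delta ~ 4.669.
    bifurcations = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]

    for i in range(len(bifurcations) - 2):
        d1 = bifurcations[i + 1] - bifurcations[i]
        d2 = bifurcations[i + 2] - bifurcations[i + 1]
        print(f"delta estimate {i + 1}: {d1 / d2:.3f}")
    # prints roughly 4.751, 4.656, 4.668 - closing in on 4.669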

I think it arrogant of humanity to think that humanity created order. "Human beings are the most intelligent beings on the planet" is a statement one often comes across. Who sez this then? Why, human beings, of course - bit of a coincidence, don't ya think? We are not even intelligent enough to be able to understand our own intelligence.


They call it an elephant's trunk, whereas it is in fact an elephant's nose - a nose by any other name would smell as sweetly.
[ Parent ]

You're a complete retard (none / 0) (#162)
by curien on Thu Jun 02, 2005 at 01:53:28 AM EST

Divide the circumference of a circle by its diameter to derive Pi; this is in no way as simple as it sounds.

Sure it is. You measure the circumference to a certain number of sig figs, you measure the diameter to a certain number of sig figs, and you divide them to get pi to a certain number of sig figs.

That's roughly the same as how I'd, say, determine the area of a rectangle. Measure length, measure width, multiply to get area. Are you suggesting that determining the area of a rectangle is difficult?

You can crudely approximate Pi with a fraction like 22/7.

You can crudely approximate the number sqrt(2) with the fraction 7/5. What's your point?

So as you say Pi is a number, but you will only approximate Pi with that number,

No, pi is a number. It would take you an infinite amount of time and space to write it in a traditional numbering system... but so what?

Look, there are lots of numbers that are hard to write. The above-mentioned sqrt(2) is one of them, as is 1.1010010001000010000010000001... . Just because it's hard to write in decimal or rational form doesn't mean it's not a number.

I feel like an Indian trying to convince a Roman that zero really is a number.

--
We are not the same. I'm an American, and you're a sick asshole.
[ Parent ]

*hint* (none / 0) (#169)
by ph317 on Tue Jun 07, 2005 at 04:59:17 PM EST


My gift to mankind - wisdom shrouded in an obscure comment in a kuro5hin article that will never see enough of the light of day to produce useful results for society:

The secret of the power of the human brain is twofold, and in turn these two things explain the failure of AI attempts to date and the directions in which to look for the answers:

  1. Non-determinism. There are processes in the brain which are driven by truly random input, which affect everything the brain perceives and acts out to (at least) some small degree. Without random input, one can never break free of the paradoxical looping conditions outlined by Hofstadter. Humans do not get stuck in loops, except in extremely pathological cases of neurological disorder. That is because every decision has a small truly random element to it, which breaks repeatability. It also conveniently explains why it requires so much practice and concentration for a human to do anything, even a simple motor sequence, consistently the same every time. Autonomic responses are less affected by this, if not completely unaffected. The real question lies in whether "muscle memory" responses (say, a karate block that has been practiced for years until it becomes a thoughtless reaction) have random input, or behave more like autonomic actions.
  2. Genetic evolution of the superstructure. One cannot do a final intelligent brain design from scratch by human effort. The structure, the interconnectedness of various subsets of neurons, or lack thereof, must be evolved by natural selection. We already have genetic algorithms for accomplishing this, just not on the broad scale necessary for a neural network as powerful as a human brain. This may prove to be the ultimate roadblock to real AI - evolving a highly intelligent AI "brain" may take so long in real-world terms that it is almost as slow as nature was originally. After all, for every iteration, the AI brain must be taught from birth through at least young adulthood, just as you would a child, before its true fitness can be estimated for natural selection. We can optimize away selections that seem neurologically dysfunctional for the stage of evolution we're on at any given time, but it will still take a very long time. (A toy sketch of such an evolutionary loop follows below.)
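
As a toy illustration of point 2, here is the shape of such an evolutionary loop, evolving a tiny 2-2-1 network to compute XOR. Population size, mutation rate, and fitness function are arbitrary choices for the sketch - nothing like brain scale:

    # Mutation-plus-selection loop: no hand design of the weights,
    # just random variation filtered by a fitness measure.
    import math
    import random

    CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def forward(w, x):
        # 9 weights: 2x2 weights + 2 biases (hidden), 2 weights + 1 bias (output)
        h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
        h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
        return 1 / (1 + math.exp(-(w[6] * h1 + w[7] * h2 + w[8])))

    def fitness(w):
        # negative squared error over the four XOR cases (higher is better)
        return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

    def mutate(w, rate=0.2):
        return [wi + random.gauss(0, rate) for wi in w]

    population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
    for generation in range(1000):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                    # truncation selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(40)]  # mutated offspring

    best = max(population, key=fitness)
    for x, y in CASES:
        print(x, y, round(forward(best, x), 2))  # usually ~0/1 by now

The point of the sketch is the ratio of effort: the loop itself is trivial, but each fitness evaluation here is a handful of arithmetic operations, whereas evaluating a brain candidate means raising and educating it - which is exactly the roadblock described above.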


[ Parent ]
No. (none / 0) (#147)
by CAIMLAS on Tue May 31, 2005 at 01:09:56 PM EST

Definitively wrong. Using an arbitrary test to assess the supposed intelligence of a human is very wrong-headed to begin with. Add to that the fact that the questions on SAT exams are worked out to be mathematically optimal in many fashions. This doesn't work well - at all - for a test: with each question offering only five possible answers, a random number generator would do "better than average", statistically speaking - particularly since most students don't finish, rush through, and end up answering off the cuff when they're not sure. You could theoretically do well on the SATs (IIRC - it might have been the ACTs) simply by answering 1 question from each section correctly and leaving the rest blank. There are a lot of ways you can cheat the system.

I've heard of many people who have simply filled out random blocks (or patterns) and subsequently scored well above average on the SAT. I did so myself on one of the tests (I just wanted to get it over with, and I was already admitted to the school I wanted), and I got (IIRC) into the top 6%. So, a machine could theoretically do this without any problem. Randomly.


--

Socialism and communism better explained by a psychologist than a political theorist.

You're an idiot (none / 0) (#157)
by curien on Wed Jun 01, 2005 at 02:09:05 AM EST

No, it wouldn't. Try learning something about the fucking test before you start flinging oral feces.

If you answer a question correctly, you get 1 point. If you leave a question blank, you get zero points. Now pay attention, because here's the kicker. If you answer a question incorrectly, you lose a quarter point.

So your random number generator will, on average, get a score of ZERO regardless of how many more questions it answers than the real person.
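
Don't take my word for it - simulate it (a quick sketch; the 374-question count is borrowed from the article just for concreteness):

    # Random guessing under the scoring rule described above: +1 for a
    # correct answer, -1/4 for a wrong one, five choices per question.
    import random

    trials, n_questions = 10_000, 374
    total = 0.0
    for _ in range(trials):
        total += sum(1.0 if random.randrange(5) == 0 else -0.25
                     for _ in range(n_questions))

    print(total / trials)  # hovers around 0: random guessing earns nothing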

--
This sig is umop apisdn.
[ Parent ]

AI vs. Information Analysis (none / 0) (#164)
by bobej on Fri Jun 03, 2005 at 11:21:33 AM EST

This is information theory, not AI. I have to put this into the category of the AI fakers. AI is stimulus and response.

Regarding the cultural problems of testing for intelligence: obviously any test involving human languages will be biased. Period. Perhaps someone could work out a test that evolves its own unique language from first principles for each test (evening out the field), but invariably such a test would need to use a human language to instruct the test taker, thereby re-introducing bias.

So do we throw out standardized tests? Nope, they are still useful when we want to empirically measure a candidate's suitability for a task. For college and work, skewed results due to cultural influence might even be appropriate (a candidate for an American company should have a good grasp of English).

Where this gets sticky is when social institutions like police or government use such tests to determine policy. Universities are borderline in my view, since it's easy to switch universities, not so easy to change your government.

tells you little (none / 1) (#166)
by jcarnelian on Sat Jun 04, 2005 at 04:04:25 AM EST

The SAT is correlated with scholastic achievement, but it does not measure it directly. By analogy, you can tell a lot about a person's health status from their age, height, and waist, but producing a lump of wood with the same age, height, and waist doesn't make a healthy person. For a computer program to score well on the SAT is a decent achievement in information retrieval, but it has nothing to do with artificial intelligence.

standardized tests (none / 0) (#167)
by fourseven on Mon Jun 06, 2005 at 02:22:24 PM EST

If anything, this reveals how inadequate the current tests are at evaluating human capability. Mechanistic and standardized, they are a manifestation of the shortcomings of the "educational" "system". No wonder a computer system is scoring well -- computer systems are used in preparing, collating and generating these tests. We should be asking whether existing tests are capable of measuring uniquely human potential in any useful way.

On the other hand, the accomplishments of Turney et al are an interesting step forward -- we're now a little closer to being able to use the word "like" as an element of the human-machine interface.

Overall, a great article. Thanks for the write-up.

The Flip Side (none / 1) (#168)
by bobej on Mon Jun 06, 2005 at 07:33:12 PM EST

This seems relevant. This is a guy who purposely set out to get every question wrong on the SAT: link.

What sweet irony. AI makes great strides in taking human evaluation tests, human makes great strides in failing them utterly and completely.

Tic-tac-toe (none / 0) (#170)
by eodeod on Sun Jun 12, 2005 at 12:42:27 PM EST

How is this any different from a program playing tic-tac-toe? There is a set of rules and conditions, and it follows them.
