The Blank Slate

By localroger in Technology
Tue Nov 12, 2002 at 07:48:21 AM EST
Tags: Hardware

In a recent comment I advanced a personal theory of consciousness which strongly rejects genetic determinism. I was not too surprised by the usual range of responses, or by this.

The thing is, a comment isn't the place to put forth an idea that is so commonly thought extraordinary, and thus in need of extraordinary evidence. Allow me to make a more substantial case for the idea: genetics has almost nothing to do with intelligence.


Currents of Desire

I am a contrarian by nature. This does not mean I am stupid; it often happens that when everyone believes a certain thing, they are right. But not always. I am frankly very suspicious of ideas with universal currency, precisely because the belief is so common and pervasive that nobody asks the hard questions.

Within living memory it was believed that geological change never happens at a rapid pace, that it took millions of years for the dinosaurs to disappear, that the idea of asteroids hitting the Earth and causing noticeable disturbance was ridiculous, that the continents certainly did not move around, and that anybody who thought otherwise was some kind of radical or fool.

The reasons for these supposedly scientific beliefs were rooted in politics. The suggestion that catastrophic change could drop out of the sky at random said as much about the stability of civilization and empire as it did about the formation of fossils. The sea change in geology which has occurred since 1970 was made possible by political forces. The clues were always there; any child can tell the continents fit together like a puzzle, and the K-T iridium layer was a klaxon waiting for any listening ear. But the hard-liners had to die or retire, and the commonly held metaphors had to soften up to the idea. A culture shocked by Watergate and weary of Vietnam found the idea of catastrophic change much easier to swallow than the culture that had stood fast against Hitler and electrified the Tennessee Valley.

My own belief in the tabula rasa is not politically motivated. It is an extension of a personal project to develop strong Artificial Intelligence. Being a contrarian, I approached the problem from the assumption that, since nobody is making any progress, everybody must be wrong. Nobody ever knows where such assumptions will lead; if the Universe were constructed a bit differently, the few people who remembered an odd chap named Einstein would know him only as an oddball with a crazy obsession about the speed of light. Not to say I am right on Einstein's scale -- the jury is still out -- but the conjecture has been very fruitful. I will explain how shortly.

The pervasive cultural belief in genetic determinism, however, most certainly is politically motivated. Any scientist who establishes (or claims to establish) a genetic link to any abstract behavior can be guaranteed headlines and grant money to do further research. It may be crass, but people who have a lot of money and power like to be told that they deserve it. They also like to be told that it's not worth wasting money on down-and-out losers because they can never make anything of themselves, anyway. And because they have the money and the power they hold the purse strings and have a lot of influence over who gets research funding and who gets published. This atmosphere poisons the entire field in ways that affect even honest researchers.

Twin Studies

We've all seen the articles. Two people are pictured, twins ripped asunder at an early age yet years later showing the same interests, same physique, same talents, in one case literally wearing the same number of rings on the same fingers.

Twin studies were pioneered and popularized by Cyril Burt. Burt was a brazen fraud, one of the most disgustingly successful in the history of science: he made up test subjects; he made up colleagues, published their made-up letters lauding his own accomplishments in journals he edited, and took their decidedly non-made-up salaries for himself; and after being the toast of Britain, being knighted, and dying peacefully at an advanced age, he got away with it all. He was safely dead when his frauds were uncovered.

Burt was a profoundly evil man who left millions of victims in his wake. His research was used to advance policies of institutionalized racism, test-score marginalization, denial of educational opportunities, and even forced sterilization of the "undesirables" whose genetic inferiority was made so "obvious."

While Burt was alive there were already doubters, but they dared not contradict the grand old master of their field. Only when he was dead could they really investigate; and when they did, the mood was one of shock. Nobody doubted him fully enough to suspect the true extent of his fraud.

Nobody suspected, as one researcher found in the early 1980's, that every single twin study ever done was similarly either fraudulent or so poorly conducted as to have meaningless results.

As for the fabulous twins pictured in Time and Scientific American, it turned out a lot of them had been separated at much more advanced ages than "birth" -- in one case eleven years. And given that N% of the population goes into any given field, given a first twin in that field there is always an N% chance that the other twin will also drift that way by pure random chance. Given several hundred million people in America alone, that leaves at least several million pairs of twins. If none of them remotely resembled one another, it would be just as startling as if they all wore the same kind and number of rings.
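
To put illustrative numbers on that (mine, purely for scale, not the author's): with M twin pairs in the population and a field that employs a fraction p of people, the expected number of pairs where both twins land in that field by pure chance is roughly

$$ M p^{2}, \qquad \text{e.g. } M = 3\times10^{6},\ p = 0.01 \;\Rightarrow\; 300\ \text{pairs}, $$

and given one twin already in the field, the other matches by luck with probability p. Trawl millions of pairs for coincidences and the magazine-ready ones take care of themselves.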

The question that floats to mind is, why do people keep doing twin studies?

Think about it. If every person who precedes you into that vast unknown has been a charlatan, why do you follow? Surely you must understand why someone like me hears the phrase "twin study" as "attempted fraud." Why set yourself the amazing uphill task of proving to the world that you're not just another fraud or incompetent? Surely there are easier and more satisfying ways to earn a living.

But some people just want so badly to believe that they will keep throwing money at the subject, will throw their own lives and credibility at it, because the whole idea is so seductively simple. Nobody does a twin study to disprove the idea of genetic determinism. If you don't really, really want to believe it, the experiment wouldn't seem necessary. It would be like deliberately stabbing yourself in the abdomen to prove that it causes peritonitis.

If you do a twin study, it's obvious that you want a certain result. And it doesn't take much fraud or sloppiness to get that result. And you will be rewarded if you get that result.

So I am not impressed by twin studies. Next topic.

Reflexes

A reflex is a pattern of activity which does not have to be learned. Humans have reflexes. Aha, say the determinists, a smoking gun!

The extreme deterministic viewpoint (which nobody will admit to believing, unless they are schmoozing up some obvious Nazi for grant money) is that consciousness itself is just a big old wad of reflexes too complicated to reverse engineer, but no more "learned" than breathing. The middle ground is that there are "tendencies" which can be inherited, such as a "tendency" to violence or a "tendency" to emotion over intellection or a "tendency" to like rings putting pressure on your fingers. (Really, I did hear that on TV once.) The problem is defining that magic word "tendency."

If you have the misfortune to be 24 years old and male, even with a perfect driving record and every possible plus, you will pay triple the auto insurance of a woman the same age with four accidents. This is because of your male "tendency" to get in accidents. It's horribly unfair to the individuals thus targeted, and a society interested in fairness or justice wouldn't let insurance companies get away with this crap. (And for anyone who suspects sour grapes, I've been out of that group for longer than I like to think about. I get even better rates than the 24-YO girl at this point in my life, but it's still wrong.)

Nobody will admit to believing that testicles automatically make you a hothead with a lead foot, but the thing is, they make you pay even if you have your testicles under control. Somehow it always ends up working out that way.

Human reflexes are not very complicated. This is in direct contrast to some other animals. Precocial birds and mammalian herbivores are born knowing how to walk. They can walk within hours of birth, and they walk with the gait they will have for their entire lives. They cannot learn a different way to walk. This has real consequences. The reason the race horse Secretariat's sperm is worth more than weapons-grade plutonium by weight is that his genes carry the trait for a very efficient galloping gait. A horse that inherits that trait might win the Triple Crown; one that doesn't cannot be helped.

Humans and a lot of other mammals don't do it that way. We do have a walking reflex; in fact, it's probably the same one that lets the quail and the gazelle follow Mom around. But it doesn't work for us, not least of all because we're bipedal. It also doesn't seem to work for a lot of other mammals, including dogs and cats, which can kind of sort of walk at birth but with nothing like the grace of a day-old gazelle. We must lose that inborn walking reflex before we can learn to really walk, the way we will as adults. And we can change our gait, both from moment to moment and by learning a new gait at an advanced age. Hell, we can learn to dance.

This does not mean no reflexes at all are involved in walking. But the specific "walking reflex," which any competent pediatrician can test, reliably goes away at the age of a few months, just like the Moro reflex. Human babies display other precocial traits which we also lose before we learn the "adult" way of doing things. This seems to be a pattern in all human behavior. We are of course animals, and we have the usual range of baggage associated with that. But our genius as a species has been the ability to move tasks normally done by hard-wiring in the brainstem into the cerebral cortex, our field-programmable tabula rasa. Of course not everyone does this. I, for example, could not dance for you now if my life depended on it. But other people can, and I'm sure I could too if I were motivated and put the effort in. We can make our feet do things Nature never intended. Other parts of our body, too -- we can even learn to control "autonomic" functions like our blood pressure. It's not easy and few of us ever bother, but the ability is there.

What makes us so different from animals?

Hearing biologists use this as a defense against ideas like mine is a great weird-out. Where is this sentiment when People for the Ethical Treatment of Animals is organizing?

Seriously, it should be kind of obvious that there is some difference between us and the rest of the order Mammalia. None of the others is busy building skyscrapers, ocean liners, or atomic bombs. We consider it a triumph of quiet genius if they manage to teach another of their kind to use a stick as a tool to dig termites, while we use supercomputers to catalogue their success.

In more productive terms, we have not just a large cerebral cortex, but most likely a cerebral cortex with a few extra instructions. Biologists have a love-hate relationship with this crowning achievement of human brain-growing; it takes something like 20% of the energy we get from food just to keep it alive, so it must be doing something for us (and must have been doing so long before we got to the level of building skyscrapers and atomic bombs). There is an obvious 1:1 correspondence between the one (1) species with this anomalously large cortex and the one (1) species that builds the aforementioned skyscrapers and atom bombs. Yet when they try to figure out how it works, things keep coming up wonky.

In some parts of the brain, there is an obvious correspondence between location and function, though the cortex is physically as homogeneous as a potato. Touch an electric or chemical stimulator here, and you will get a memory of Mom, a taste of apple pie, or a forty-five degree purple line segment in the upper right hand corner of the visual field. Elsewhere a particular muscle will twitch, or you are filled with unease about the future. In many places, though, there is no obvious pattern. The area beneath the rear crown of the head seems to be largely concerned with mapping 2-D visual inputs into a 3-D model of reality, and no two people seem to map it the same.

Such probes into the cortex never elicit pain, and the other emotions they can elicit are subdued. Emotions do not live in the cortex.

One of the epiphanies that got me started on this little project was an essay by Stephen Jay Gould about some clever fellows researching bee-hunting wasps. One experiment they did was meant to figure out how the wasps find the nests they dig while they are out hunting bees; the humans waited for the wasp to leave, then moved all the landmarks around the hole a few inches in the same direction. The wasp, upon arriving, landed a few inches from her nest hole, displaced in that same direction. There ensued a period of confused searching, after which she finally found her nest; then she spent several minutes hovering, obviously scanning the landscape as if to make sure her memory would not fail her again.

What struck me about this account, even through all the behaviorist language, was that the wasp had reacted exactly as a human would if a sufficiently godlike being pulled a similar trick on one of us. The wasp had a model of the world in its, uh, head, maybe a smaller and lower-resolution model than the one we make but similar in principle. It used this model the way we use ours and reacted the way we would if we found ours in conflict with reality. Consciousness, I realized, was a very old thing not requiring anything as complicated as a human to express it. It was a thing computers might already be able to do, if one could only sort out the algorithm that made it happen.

The C-Word

Another researcher, IIRC Erich Harth, made a point of calling consciousness the "C-word" because it was unspeakable in neurological circles; not being quantifiable or measurable, it must not "really exist" in the sense that quasars and bacteria do. That is changing a little -- the plate tectonics guys aren't being laughed at so much -- but the attitude is still ascendant that consciousness is a murky, subjective, unscientific thing that can't even be defined.

Consciousness (n.): the use of a certain class of "hill climbing algorithm" to evaluate the state of the world according to some arbitrary set of criteria, evaluation of how various manipulative devices might be brought to bear to optimize its state, and occasional use of those devices to attempt to change its state based on these evaluations.

Now, that wasn't so hard, was it? A "hill climbing algorithm" is any answer to the generic problem of finding the highest point in the local terrain without the ability to see. That is, you can tell your altitude and whether you are getting higher or lower as you move around, but you can't see any "peaks" other than the one you're on; your task in this fog is to get to the highest peak as fast as possible. This problem is used as a generic model for optimizing any system with incomplete information. For example, you might have the controls of a machine with ten knobs, none of which is labelled, and you must maximize its throughput; the only information you have is what happens when you twiddle the knobs. This is a perfect metaphor for what living organisms do with their brains. Emotions are the feedback that let us know how well the machine is performing, and we begin to see how they might be quantified.
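
To make the knob-twiddling concrete, here is a minimal sketch in Python (my own illustration, not any published algorithm): the climber can read only a single throughput number, never the shape of the terrain.

    import random

    def hill_climb(throughput, knobs, steps=10000, step_size=0.1):
        # Blindly optimize `knobs` using only the scalar feedback from
        # throughput() -- no gradient, no map, just your current altitude.
        best = throughput(knobs)
        for _ in range(steps):
            i = random.randrange(len(knobs))   # twiddle one knob at random
            old = knobs[i]
            knobs[i] += random.uniform(-step_size, step_size)
            new = throughput(knobs)
            if new >= best:
                best = new                     # we climbed; keep the change
            else:
                knobs[i] = old                 # we went downhill; undo it
        return knobs, best

    # Ten unlabelled knobs on an unknown machine whose hidden peak is at
    # settings 0.1, 0.2, ... 1.0; the climber finds it entirely by feel.
    machine = lambda k: -sum((v - (i + 1) / 10) ** 2 for i, v in enumerate(k))
    settings, score = hill_climb(machine, [0.0] * 10)

The catch is that a greedy climber like this stops on the first local peak it stumbles onto, which is exactly why noise-injecting variants like the one described below are interesting.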

The particular "hill climbing algorithm" used by living things is probably closely related to one patented by the aforementioned Dr. Harth, called "alopex," which is interesting in its use of random thermal noise to create certain favorable characteristics (which also happen to resemble what real people and animals do an awful lot). You can learn more by reading his seminal paper (not online, alas) in Science, vol. 237, p. 184, "The Inversion of Sensory Processing by Feedback Pathways: A Model of Visual Cognitive Functions." Or you can read his more accessible and somewhat flawed popularization The Creative Loop.
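
For flavor, here is a sketch of an alopex-style update, reconstructed from published descriptions of the algorithm; the constants and the exact handling of the correlation term are my assumptions, not Harth's specification.

    import math, random

    def alopex_step(x, dx_prev, dE_prev, T, delta=0.01):
        # One alopex-style move: every parameter steps +/-delta at once,
        # biased by the correlation between its own previous step and the
        # previous change in the global objective E. The temperature T sets
        # how often thermal noise overrides that bias.
        dx = []
        for i in range(len(x)):
            c = dx_prev[i] * dE_prev                # did my last move help?
            p = 1.0 / (1.0 + math.exp(-c / T))      # bias toward repeating it
            step = delta if random.random() < p else -delta
            dx.append(step)
            x[i] += step
        return x, dx

    # The caller re-measures E after every step, feeds the change back in,
    # and anneals T downward; at high T the walk is pure thermal noise.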

If the wasps awakened my interest in duplicating consciousness, it was Harth who convinced me it was possible. Here were actual algorithms, implemented and tested on actual computers, making the same mistakes and over-generalizations that people do -- without being told to, but because such misbehavior arises naturally from flaws in the relatively simple algorithm that produces such fabulously complicated results.

Brains and Computers

Another weird-out comparable to the PETA-friendly invocation of our relationship to animals is the assertion that it is crazy, unscientific, or just plain wrong to use information theory to describe what happens in the brain.

It is true that some people get a little over-enthusiastic with the metaphor, but what is crazy and unscientific is thinking that anything in the Universe, including a brain, somehow functions outside of a fundamental thing like information theory. It's no less crazy than thinking that living things can't possibly be made of mere molecules.

Brains do not have registers and Von Neumann binary addressed memory, but they most certainly do process and store information. How they do this is not even much of a mystery. Neurons compete for inputs which form repeatable patterns, and they form synaptic connections with those input sources so they can detect those patterns ever more efficiently in the future. Feedback sources like emotions and activity level can encourage or inhibit this process.
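
A cartoon of that competition, in the same hedged spirit (a generic winner-take-all Hebbian rule, not a claim about the actual cortical mechanism):

    import random

    def competitive_hebbian(patterns, n_neurons=4, epochs=50, rate=0.1):
        # Neurons "compete" for repeatable input patterns: whichever neuron
        # responds most strongly to a pattern strengthens its connections to
        # exactly the inputs that drove it, so it detects that pattern more
        # efficiently next time. A global feedback signal (emotion, arousal)
        # could scale `rate` to encourage or inhibit the process.
        dim = len(patterns[0])
        w = [[random.random() for _ in range(dim)] for _ in range(n_neurons)]
        for _ in range(epochs):
            for p in patterns:
                responses = [sum(wi * xi for wi, xi in zip(ws, p)) for ws in w]
                winner = responses.index(max(responses))    # the competition
                w[winner] = [wi + rate * (xi - wi)
                             for wi, xi in zip(w[winner], p)]
        return w

    # e.g. two repeatable input patterns competing over four input lines:
    detectors = competitive_hebbian([[1, 0, 0, 1], [0, 1, 1, 0]])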

It is interesting to note that one of the most intense emotions possible may involve this learning process. A state of epiphany is reached when one makes a great deal of new connections all at once, realizing how entire patterns of thought fit together in a previously unsuspected grand scheme; the feeling is more intense than an orgasm but, alas, also a lot more rare. There is some research which causes me to think the neurotransmitter dopamine is involved in this process. It seems to be intimately involved in the process of forming new synaptic connections, and we are wired to positively reinforce such experiences. Without such a mechanism we might come to regard learning as a generally negative experience, what with the reason we generally have to do it and all, and seek to avoid it.

The psychoactive drug cocaine works by blocking the brain's reuptake of dopamine, letting it pool in the synapses. The cocaine high may be an artificial epiphany, though I'm not curious enough to try it and compare it with the natural experience.

The pattern detectors which form in this way then serve as the pattern library for a multi-level hill climbing optimizer whose driving engine is not in the cortex at all, but in the thalamus. By "multi-level" I mean that it functions in stages of abstraction, starting out with raw inputs and progressing away from the parts of the thalamus and cortex where the inputs are wired, to areas which code for patterns of higher abstraction and less detail. Each layer of abstraction has its own optimizer, using the lower one as input and providing an output to the next one up the line. Cutting across the top of your head is a line of special cortical areas that are hard-wired to inputs (in the back) and outputs (in the front). These back-to-back I/O regions are strongly associated with parts of the body in a consistent and detailed mapping. These are where the lowest abstraction patterns are stored. Working toward the back of the head and down the sides we find less consistent and more abstract maps, until we reach the muddle of the parietal regions. The visual areas are mapped separately, to the very lower rear of the cortex, and work upward until they reach this same parietal muddle.

Working from the outputs forward, we reach "staging" areas which light up in PET scans when we "rehearse" a movement but before we actually perform it; then again higher levels of abstraction representing increasingly complex movements, until we reach an ill-defined muddle between this mess and the prefrontal cortex, which is another kind of muddle entirely.

If one examines the interareal wiring of the cortex, one finds that areas of similar abstraction (according to the plan I've just described) are wired together between the back (input) part of the cortex and the front (output) part. This is consistent even in the areas we can't map because they are muddles, if one works just by distance from the areas we do understand.

What seems obvious enough to me is this: As the hill-climbing algorithms connected to the back of the brain evaluate our position in life by firing pattern detectors which correspond to things that are going on, those in the front are evaluating how to modify it. In the back information moves from areas of low abstraction to high; in the front it moves from areas of high abstraction to low. At each level it is evaluated, and if the net effect seems to be a gain based on several competing scales it's passed on to the next less abstract output. Finally, if the system finds an idea good enough to spend energy implementing, it reaches the motor homunculus and the relevant motions -- now broken down into specific muscle movements -- get sent down to the brainstem, where they are sharpened by more reflexive modifiers and eventually expressed as bodily movements.
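
Since that is the load-bearing paragraph, here it is again as a cartoon in code; every function is a placeholder of mine, and the whole thing restates the prose above rather than anything tested.

    def perceive(raw, detectors):
        # Back of the brain: run raw input up through levels of abstraction,
        # from concrete patterns toward high-level summaries of the world.
        states = [raw]
        for detect in detectors:               # low abstraction -> high
            states.append(detect(states[-1]))
        return states

    def act(intention, refiners, net_gain):
        # Front of the brain: refine an abstract intention downward into
        # concrete movements, passing a plan on only if the evaluator at
        # each level scores it as a win on its competing scales.
        plan = intention
        for refine in refiners:                # high abstraction -> low
            plan = refine(plan)
            if not net_gain(plan):
                return None                    # not worth the energy; drop it
        return plan                            # off to the motor homunculus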

This model doesn't explain everything, but it explains a hell of a lot. My problem at the moment is nailing down the mechanism by which the pattern detectors are programmed; it must be simple enough for cells to do it (and individual cells are stoooopid) and it must be self-regulating for the level of chaos we exhibit in everyday life. Harth's own alopex algorithm, requiring careful adjustment of feedback parameters, fails on this point, but it's a great starting point.

How Computers Work (according to biologists)

The "obvious" paragraph above represents a thing you will never find in any serious biology text: An explanation, however tentative, of how the system starts with neurons firing and ends up doing what humans and animals do.

By comparison, suppose you read the following explanation of how computers work:

Computers are made of transistors, which allow small amounts of electricity to switch larger amounts. Transistors can be grouped to perform logical functions such as gates and flip-flops. Through the magic of modern technology it is possible to put a billion transistors on a silicon wafer. When you put enough transistors on a chip and wire them just right, you get a computer.

Someone who had never known a computer simpler than their Win98 box might not look askance at that last sentence, but fortunately we do know that mere humans built the first computers, that you do not need a billion transistors to do it, and most importantly that only a few more sentences are needed to flesh out the essential details about what makes a computer work. It's true that a Pentium IV is complicated, but the essential thing that makes it a computer isn't, and the quote above is structured to hide an ignorance that is not really forgivable in anyone who claims to have a clue.
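
In that spirit, here are the missing "few more sentences" in executable form: a toy stored-program machine of my own devising, not any real architecture. The essence is a fetch-decode-execute loop with a conditional jump, and the jump is what separates a computer from a big lookup table.

    def run(prog):
        # Fetch an instruction, decode it, execute it, repeat.
        reg = {'a': 0, 'b': 0}
        pc = 0                                     # program counter
        while True:
            op = prog[pc]; pc += 1                 # fetch
            if op[0] == 'halt':
                return reg
            elif op[0] == 'set':                   # load a constant
                reg[op[1]] = op[2]
            elif op[0] == 'addi':                  # add a constant
                reg[op[1]] += op[2]
            elif op[0] == 'jnz' and reg[op[1]] != 0:
                pc = op[2]                         # conditional jump

    # 5 x 4 by repeated addition; the jnz at address 4 closes the loop.
    print(run([('set', 'a', 0), ('set', 'b', 4),
               ('addi', 'a', 5), ('addi', 'b', -1),
               ('jnz', 'b', 2), ('halt',)]))       # -> {'a': 20, 'b': 0}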

The Plan and the Cathedral

However it really works, the brain contains on the order of 10^14 interconnections. Those connections actually get made somehow. They are real physical things that could be mapped. At some point they do not exist, and then as we grow it turns out they do; and unless they are wired totally at random something has to direct them.

For genetic determinists, that guiding principle is the genetic code, all of seven gigabytes or so of instruction on how to grow hair, how to build a pancreas, how to metabolize fat, how to heal scrapes and make blood clot and somewhere in all of that how to wire up the brain. This leaves us with a serious case of eight pounds of shit in a five pound bag, as the genetic code -- even if it were entirely devoted to brain-growing -- is nowhere near as complicated as the brain which grows under its direction. By something like five orders of magnitude, at least.
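
For the arithmetic-minded, a back-of-envelope version of that claim, using standard textbook figures rather than taking my seven-gigabyte number on faith (order-of-magnitude only):

$$ \underbrace{3\times10^{9}\ \text{bp} \times 2\ \text{bits/bp}}_{\text{genome}} \approx 10^{10}\ \text{bits}, \qquad \underbrace{10^{14}\ \text{connections} \times \log_{2}10^{11}\ \text{bits/target}}_{\text{explicit wiring list}} \approx 4\times10^{15}\ \text{bits}, $$

a shortfall of five to six orders of magnitude before the genome budgets a single instruction for pancreases or blood clotting.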

There is a great deal of structure in the brain, most of which we share with other animals that do not share our skyscraper and atom-bomb-making prowess. The crowning achievement of our humanhood, that massive cortex which we alone possess, is maddeningly homogeneous under the microscope, except for a very slight thickening at the visual area V1. While it lights up in spectacular patterns under a PET scanner depending on what we are doing, the structure itself seems no more specialized than that of dynamic RAM. (Ooooh, a misplaced computer metaphor.)

Also, apart from an extra layer or two and its greater surface area, our cortex is not noticeably different from that of cats, dogs, and even birds.

While the microstructure seems almost defiantly unspecialized, the areas of the cortical sheet are wired together in a specific pattern both through the sheet itself, and via interareal nerve bundles. This wiring is obviously controlled by the genome, and is the same in everybody who is not massively deformed.

Within the last 30 years or so we have acquired a model for how systems like living things can turn relatively simple inputs into outputs of great complexity; it is called chaos theory and its most singular expression is the fractal, a surprisingly complex (often beautiful) pattern formed by an unexpectedly simple expression. It is obvious that the brain (the entire body, in fact, and possibly the entire Universe) is a fractal. This is how so little genome grows so much and such complicated brain. It is really the only explanation science has to offer, if one does not want to start invoking pixies and elves, so we had better pay some attention to it.
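
For the flavor of "unexpectedly simple expression, surprisingly complex pattern," the classic chaos game will do (a standard demonstration, nothing brain-specific):

    import random

    # Chaos game: from wherever you are, jump halfway toward a randomly
    # chosen corner of a triangle, forever. The attractor of this one-line
    # rule is the Sierpinski triangle, in all its infinite filigree.
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
    x, y = random.random(), random.random()
    points = []
    for _ in range(50000):
        cx, cy = random.choice(corners)
        x, y = (x + cx) / 2, (y + cy) / 2
        points.append((x, y))

    # Scatter-plot `points` and the triangle appears. Change the 1/2 to a
    # 1/3 and you do not get a slightly different triangle; the whole
    # attractor changes character. More on that in a moment.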

Without going into a lot of detail, the important thing about fractals is that it is not possible to make a small change in one. If you change the generative algorithm even slightly, you will not get a slightly different fractal; you will get a massively and consistently different result. This is what happens in human deformities like Down's Syndrome, the smallest kind of point mutation possible in a fractal system; the amazing thing is that Down's victims can survive at all. There are other similar errors which are not so fortunate. Some grow very thin cortexes that obviously don't process right; some grow very smooth cortexes with too little surface area and probably too few areas. People who have these defects are amazingly consistent, just as normal people are consistent in the convolutions of our cortexes, which are an emergent property like protein folding. If you make a small change in the code, you don't throw a monkey wrench into the works, you throw a nuke.

Back to the tendency tendency

A very good point was made in the last discussion about smaller damage, like ion channels that don't form right, distorting feedback pathways. Let's consider these changes that don't affect the basic wiring, but may affect how it programs itself.

I'm going to go out on a limb here, because to my limited form of common sense nothing else makes any damn sense, and say that there is such a thing as a "properly working brain." That is, a brain which is properly nourished, free from genetic or teratogenic formative defects, with all the chemical messenger systems functioning nominally.

It is possible for that brain to fuck up, and for reasons that are totally out of our control.

Since the cortex is not -- can't possibly be -- programmed by the genome with its inadequate array of instructions, it must acquire its fine programming through experience. This can only occur as the relevant areas are myelinated, a process that happens only after birth because of the logistical problem of getting our fat heads out of Mom without killing her. As it happens we can chart baby's progress easily; the eyes learn to focus and track, and later the hands learn to grasp what the eye sees. Most parents notice when baby figures out that objects hidden from view still exist. Later on we add language and the beginnings of reason, the R-word animals so dramatically lack.

This programming is fraught with peril. At every level of abstraction we risk forming an incomplete or skewed set of symbols, which will in turn affect the patterns that can be coded at the next higher level with the limited inputs that will be forwarded. In extreme cases people may not reach what we consider a standard equilibrium with the world; they may be withdrawn or excessively expressive.

Abuse and neglect increase the chances of this happening. A full range of experience decreases it, but I don't think there are ever any guarantees.

Now given this "properly functioning brain," there are a range of insults one can throw at the system which will increase its chances of misprogramming. Certain genetic defects figure in here, because they may interfere with emotional feedback paths, or directly inhibit our ability to maintain electrical activity long enough to form a detectable pattern, or to form connections when the activity is there to stimulate them. These failures will be by and large indistinguishable from insults such as child abuse that cause the same problems.

It might be possible to screen for some of these genetic insults. This might even have some benefits, though the more likely result is that you are not only 24 and male, you have an acetaldehyde metabolism defect which makes your insurance rates even higher, and there's no cure. Does this mean that alcoholism is genetically determined?

No, it doesn't. It's just a red herring. Alcoholism is a pattern which may or may not be encouraged by certain knocks we take along the road of life; but ultimately it's a thing that can happen to anybody. Just as anybody might turn out to have the willpower or alternate interests to make it irrelevant or unlikely.

The danger here is not just that of creating social injustice, but of missing the real and possible explanations by focusing on some tangential cofactor. I've given one explanation of how consciousness works here, though it is incomplete and largely unsubstantiated; and it is one more explanation than I have ever gotten from any source outside of myself, anywhere.

If you didn't know how cars worked and spent your life cataloguing the various colors of drip which appeared under them, and meticulously cross-referencing them with symptoms of ultimate car failure, you would be able to draw some very general predictive rules about fluid drips and failure. What you would never do is figure out how a car works. You must do that by working from the other direction -- why is flammable fuel required? You might correctly identify this as the source of the heat, noise, and forward motion. You would have to speculate on mechanisms by which fire could be used to make propulsion. You would have a lot of clues. Your mechanism has to be loud, has to vibrate, has to do certain things under conditions of load and idling. Working from that direction it probably wouldn't take you long to reinvent the internal combustion engine. But you have to give up on the fucking leaks. They're a side issue. While we concentrate on taking all the cars with red fluid leaks off the road, nobody is figuring out how the drive train works so we can make the cars with the red fluid leaks safe or identify their problems at a stage when they can still be fixed.

Humans and Animals

One last thought. What is it that gives humans our skyscraper and atom-bomb making prowess? I think the answer to that is a few extra layers of abstraction in the prefrontal cortex, in a place where no other animal has them. Adding a few layers like this is exactly the sort of thing one would expect a point mutation to do; it's the opposite of the missing instructions that give us things like Down's Syndrome. It is not just brain mass but this extra depth of abstraction which allows us to form plans involving the far future and distant dreams, to plan and execute vast enterprises across a span of lifetimes.

One might wonder what humans would be like without our prefrontal cortex. But being human we don't have to just wonder; ever clever with our little tools, we can confidently know.

The Blank Slate | 188 comments (177 topical, 11 editorial, 4 hidden)
How can (2.00 / 3) (#2)
by medham on Mon Nov 11, 2002 at 11:26:18 PM EST

You post a story like this without mentioning Pinkie?

The real 'medham' has userid 6831.

Because (1.00 / 1) (#47)
by CodeWright on Tue Nov 12, 2002 at 11:09:16 AM EST

Pinker is an unmentionable. Like your underwear.

--
"Humanity's combination of reckless stupidity and disrespect for the mistakes of others is, I think, what makes us great." --Parent ]
pretty long, sloppy thinking (3.16 / 6) (#3)
by Arthur Treacher on Mon Nov 11, 2002 at 11:38:18 PM EST

I love it.  +1FP

"Henry Ford is more or less history" - Bunk
too many intuitions that seem wrong to me (5.00 / 6) (#4)
by speek on Mon Nov 11, 2002 at 11:42:05 PM EST

For genetic determinists, that guiding principle is the genetic code, all of seven gigabytes ... even if it were entirely devoted to brain-growing -- is nowhere near as complicated as the brain which grows under its direction. By something like five orders of magnitude, at least ... Since the cortex is not -- can't possibly be -- programmed by the genome with its inadequate array of instructions ...

It is obvious that the brain (the entire body, in fact, and possibly the entire Universe) is a fractal. This is how so little genome grows so much and such complicated brain

So which is it? Yes the genome can grow a brain, or no it can't? And does the fact that humans exist with their tiny genomes and big brains enter into this at all? Why does it have to be fractal? Non-repeating, non-fractal "patterns" can also result from a small instruction set.

Since the cortex is not -- can't possibly be -- programmed by the genome with its inadequate array of instructions, it must acquire its fine programming through experience

Why do you imagine genes only get you so far, after which experience takes over 100%? You do realize that genes play a big part in the creation of all your proteins, till the day you die, right?

I'm also not sure why you talk about "point mutations" and Down's syndrome. People with Down's have a lot of extra genetic material. I'd hardly call it a point mutation.

And, I thought your whole point was about the failure of AI, so I expected more talk about current progress in AI. What are you suggesting they do that they aren't currently doing?

--
al queda is kicking themsleves for not knowing about the levees

Obviously didn't make it clear (5.00 / 1) (#22)
by localroger on Tue Nov 12, 2002 at 06:58:27 AM EST

So which is it? Yes the genome can grow a brain, or no it can't?

Yes it can, but there are limits on how the instructions can be modified by mutations. You have no idea how many idiots have seriously tried to convince me that there is a genetic basis for really small details of personality. My argument is there is no way to code for small details in such a way that they can evolve.

People with Down's have a lot of extra genetic material. I'd hardly call it a point mutation.

Actually it is a point mutation, but in the copy operation rather than in the details of the data being copied. You get an extra copy of a whole chromosome, but that doesn't imply the chromosome you get an extra one of is itself defective or unusual.

I can haz blog!
[ Parent ]

subject (3.00 / 1) (#40)
by speek on Tue Nov 12, 2002 at 10:46:41 AM EST

But you and I, who are not identical twins, have similar brains and intelligence (within +- 50 IQ pts, I'd guess). How is that possible? It seems, according to you, that any change in the genetic code, even a "point mutation" (such as a different base pair in one sequence), must have drastic effects. It seems to me that you are arguing that the differences in our genetic code have zero effect on anything to do with our brains after they've been initially formed.

But that's crazy. You even point out your belief in the importance of dopamine, yet where do you think dopamine comes from? The genetic code is playing a part, continually, in the creation and distribution of all the various neurochemicals.

This is not to say there's necessarily going to be a 1:1 correspondence between a gene and a personality trait, but it's just as silly to argue that it is not a factor.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

oh yeah (2.80 / 5) (#6)
by raaymoose on Mon Nov 11, 2002 at 11:56:41 PM EST

I love the smell of crack-pottery in the morning. Can't wait to vote this up.

I am interested in strong digestion, myself. (4.37 / 8) (#7)
by Noam Chompsky on Tue Nov 12, 2002 at 12:16:56 AM EST

I have an almost working algorithmic model of digestion. Once my computer gains twenty pounds the mind/stomach duality will become a part of language--er, reality. I meant reality.

---
"They are in love. Fuck the war."

You people are still wasting neurons over this? (3.00 / 8) (#10)
by Estanislao Martínez on Tue Nov 12, 2002 at 04:53:36 AM EST

God.

Talking about "blank slates", I saw Steven Pinker talk about his new book a few weeks ago. Strawman from beginning to end; well, except when he vaguely accuses all the people who supposedly believe in blank slates (since he didn't name a single contemporary that does) of being guilty by association with Stalin and Mao. That bit was ad hominem, and a pretty stupid one (if the extreme version of blank-slatism is Stalinism, the extreme version of innatism is Nazism).

--em

And your criticism of my article is...? (5.00 / 5) (#21)
by localroger on Tue Nov 12, 2002 at 06:53:48 AM EST

Oh, wait, you didn't criticize my article at all, you erected a straw man and criticized that. Oh dear, and I was starting to think you were really smart and all that. Or is this supposed to be some really clever postmodern self-referential joke?

I can haz blog!
[ Parent ]

That was his point... (4.66 / 3) (#49)
by jmzero on Tue Nov 12, 2002 at 11:41:41 AM EST

I think it was pretty clear from his post that he didn't want to bother.  Insulting him, which you did, will probably ensure that he never wants to bother with you.  From the way you dismissed thoughtful comments by iGrrl and others at the beginning of your article as "just the usual responses" or whatever, I'm guessing many people have given up or will give up on you.  

There's a certain compulsion among humans to think "What if everyone else is wrong about...?"  It's made worse by thinking about the times when that has been the case, when one renegade thinker was right.  The reality, though, is that almost always the "renegade thinker" is wrong - especially if that thinker has not taken the time to become a true expert in the field.  And while you may feel you are an expert, reading your discussions with someone like iGrrl makes it clear that you aren't.

In the case here, I don't think that you're really all that much of a renegade thinker.  The "mainstream" seems to have a spectrum of views on this issue - and your view is only "towards one end".  

There is a real breakthrough to be made in AI.  Somehow, I think, there is a better approach to learning that hasn't been explored.  Perhaps you'll find it.  Perhaps you'll revolutionize our knowledge of the brain.

However, you'll be much more likely to do so if you lose the "the man is keeping the truth down" attitude and make a tremendous effort to learn the field.
.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

Pinker bashing (none / 0) (#54)
by Dogun on Tue Nov 12, 2002 at 12:42:46 PM EST

Yeah, I don't respect Pinker either - you're not alone. I was pissed when I got one of his books for Christmas, because the picture it painted was ludicrous - I know, from taking a very basic neurobiology course, that his theories on language are pure nonsense - which was confirmed this summer by his reaction to the finding of the gene that gives enhanced control over throat muscles, which he immediately labelled the "language gene". As for the strawman I've just constructed, refer back to the article for my argument against Pinker. I've thought very similarly for years, and am glad to see that someone else had the courage to post a view that is not very popular these days due to celebrities like Pinker.

[ Parent ]
Pinker's Book (none / 0) (#149)
by bryaninnh on Wed Nov 13, 2002 at 07:45:02 PM EST

Have you actually read the book? Or just "saw him talk about it", as you state in your comment?

[ Parent ]
I'm not sure (3.50 / 2) (#14)
by dr k on Tue Nov 12, 2002 at 05:51:37 AM EST

what there is to discuss here. You haven't really asked any interesting questions, just blocked off some tiresome threads. It is more interesting to say: what elements of intelligence are/are not determined by genetics? More interesting than passively claiming one "has little to do" with the other.

If genetics is just a recipe, then what is the bread that it makes to allow for intelligence? If genetics just makes us monkeys, then what happened to those monkeys over there?


Destroy all trusted users!

Today is my "bitch at liberal arts" day (4.81 / 11) (#15)
by a2800276 on Tue Nov 12, 2002 at 06:00:32 AM EST

... so excuse me, I really think that it's a nicely written article, a tad longish, but it displays a number of pet peeves I have about people from a liberal arts background (such as myself) writing about science. I'm just going to mention one or two examples and will vote it up anyway.

Ok, first off, it's way too long and not focused on the subject. By the time I've finished reading the first section (5 paragraphs), concepts such as universal currency, the extinction of the dinosaurs, Einstein, the K-T Iridium Layer (WHATEVER that is), and how a generational gap between WWII-generation and Vietnam War experiences influences one's view of such things have all been trotted out. Oh yeah, this was supposed to be about genetic determinism of intelligence.

Then it goes on to criticise twin studies. The chief argument brought up against them is the fact that the pioneer in the field was a charlatan and that later studies in the field saw themselves in Cyril Burt's tradition.

Some pseudo-statistics are brought up to argue that it's perfectly normal that some twins should exhibit the same characteristics, but unfortunately, you don't provide any numbers to back that up and the schema you present doesn't make particular mathematical sense even if you could back it up with numbers.

The critique of twin studies is concluded by claiming that all studies are worthless because the researchers make a point of only looking for evidence that will support their cause, viz. genetic determinism. Now I'm no expert in the field of twin studies. I'm sure that there are a number of valid points to make against them, although you don't make the effort to name any concrete cases with the exception of Cyril Burt, who died three decades ago - not exactly contemporary research.

You don't make any concrete mention of research that is systematically incorrect because - I suspect - it would be rather difficult to find such research. As I'm sure you know, since at least the 50's the basis for empirical science has been Popper's Falsification Principle, i.e. you don't look to verify your theories, you look to falsify them, and only when a theory has withstood every effort to falsify it has it reached any merit.

All scientific research has to meet those criteria, because any other methodological procedure is not going to receive funding or peer recognition for long. Verification is a problem in scientific research that every scientist is aware of; it's not something that only now came to your attention.

The car insurance example is another gross misunderstanding of statistics. No insurance company claims to judge you on your personal driving skills. They stuff you into a representative group. You're a 16 year old male with bad grades? Sorry, you lose, even if you have excellent driving skills. It's as simple as that. The discounts you get because you've had so and so many years without an accident don't reflect that the insurance company now trusts that you personally are a better driver; you're simply moved to another statistical slice, e.g. 28 year old males who haven't had an accident recently. They don't know if that's because you never drive, or because you're a defensive driver, or because you're just lucky.

What I also just realized looking over the "Reflexes" section again is that I fail to see the relation between car insurance and the complexity of human reflexes.

The "What makes us so different from animals" section: I hope you agree that the jury is still out on that topic. Your take on the subject is pretty vague, but you seem to suggest that it has to do with the larger size of the human cortex. There are of course other opinions that are just as plausible. My favourite being the purely physical ability that humans possess that allows them to use verbal language, coupled with the fact that humans are also capable of preserving language, e.g. in writing. I'm surprised you don't mention it, because it would back your case.

Ok, this is getting kind of long, so I'll try to hurry up: In Brains and Computers you treat using information theory to describe the brain as unscientific. To back your claim, you resort to statements such as:

A state of epiphany is reached when one makes a great deal of new connections all at once, realizing how entire patterns of thought fit together in a previously unsuspected grand scheme; the feeling is more intense than an orgasm but, alas, also a lot more rare.
Do you not consider that to be "scientific"?

Next point:

This model doesn't explain everything, but it explains a hell of a lot. My problem at the moment is nailing down the mechanism by which the pattern detectors are programmed; it must be simple enough for cells to do it (and individual cells are stoooopid) and it must be self-regulating for the level of chaos we exhibit in everyday life. Harth's own alopex algorithm, requiring careful adjustment of feedback parameters, fails on this point, but it's a great starting point.
You claim that the "obvious" paragraph cited above explains "how the system starts with neurons firing and ends up doing what humans and animals do." I fail to see where it explains any such thing to any degree, sorry.

So then you go on criticising a simplistic explanation of how computers function (you never cite the source of the computer text, by the way), having only a few sentences before crammed Harth into the space of, what, two paragraphs. Maybe you should consider applying the same standards to your own writing.

This leaves us with a serious case of eight pounds of shit in a five pound bag, as the genetic code -- even if it were entirely devoted to brain-growing -- is nowhere near as complicated as the brain which grows under its direction. By something like five orders of magnitude, at least.
Boy, I'd like to see some justification of those numbers. Just what metric are you using to measure complexity here, and what exactly do you mean by 5 orders of magnitude? I don't see what differentiates the above statement from saying that a car is orders of magnitude more complicated than the blueprints of the car, because the blueprints can't possibly contain all the places where you can drive the car, so it's more complicated by a factor of 7.

Then you go on to claim

It is obvious that the brain (the entire body, in fact, and possibly the entire Universe) is a fractal.
without ever having explained the much-abused term "fractal" to any satisfactory extent. Only that it takes a simple input which it transforms into a complex and "often beautiful" output. Apart from the fact that I find "beautiful" to be a rather unscientific concept, when you so much value the scientific process, what you provide does not to any degree describe the concept of fractals.

At every level of abstraction we risk forming an incomplete or skewed set of symbols, which will in turn affect the patterns that can be coded at the next higher level with the limited inputs that will be forwarded.
Eh?! You call using information theory to explain the human brain/consciousness "plain wrong" just a few paragraphs back? I have serious trouble trying to figure out what that sentence is supposed to mean, but I do see a lot of buzzwords from just the field of information theory.

You're certainly going out on a limb claiming that alcoholism is not genetically influenced, because I have yet to hear of any scientific study suggesting that. Sure, it's not purely genetic...

So then it kind of fades out; you leave off with a vague reference to lobotomy, only in the form of a link, and I don't see how your piece concludes that intelligence is not genetically determined. As a final note, as far as I know, a lobotomy doesn't so much affect the pure "intelligence" of a person as leave them emotionally crippled and apathetic, so that's the final point I have difficulty understanding.

But then again, maybe I'm just not intelligent enough to understand fancy writin'. Sorry, both my parents are stupid. :)

Misread (3.00 / 1) (#18)
by localroger on Tue Nov 12, 2002 at 06:49:51 AM EST

You call using information theory to explain the human brain/consciousness "plain wrong" just a few paragraphs back?

No, I said that mainstream thought in the area calls it wrong. I most certainly do believe information theory is necessary. I think we mostly agree, but you read it a bit quick and mistook a couple of things I am criticizing for things I believe myself.

Granted, I muddled the writing in a few places. It was getting late, and it's hard to distill seven years of thought into an article this length without losing something.

I can haz blog!
[ Parent ]

Absolutely (4.50 / 2) (#23)
by a2800276 on Tue Nov 12, 2002 at 07:25:50 AM EST

... you're right, I did misread that part. It was still kind of early here! Sorry.
Another weird-out comparable to the PETA-friendly invocation of our relationship to animals is the assertion that it is crazy, unscientific, or just plain wrong to use information theory to describe what happens in the brain.
But ... what's a weird-out, and who makes assertions denying the legitimate use of information theory in describing the function of the brain? "Real" scientists, I presume. And they're operating in an empirical, concrete realm where analogies are not appropriate.

The whole concept of information theory applied to human interaction, consciousness, etc. is in itself at most of philosophical interest, because people like Shannon never meant for it to be applied in that domain. Shannon was an engineer.

As an aside: another one of those misappropriated concepts that's popular in that domain is "entropy". I could puke whenever that term comes up in a non-physics context.

[ Parent ]

But... (none / 0) (#170)
by bjlhct on Thu Nov 14, 2002 at 08:27:50 PM EST

But the theory says the information on entropy and consciousness are both fractal...therefore...you are wrong!
*

kur0(or)5hin - drowning your sorrows in intellectualism
[ Parent ]

It's just philosophy (4.60 / 5) (#28)
by MrSpey on Tue Nov 12, 2002 at 09:03:35 AM EST

Today is my "bitch at liberal arts day", ... so excuse me, I really think that it's a nicely written article, a tad longish, but it displays a number of pet peeves I have about people from a liberal arts background (such as myself) writing about science.

I've found that for the most part, when people without science backgrounds argue against some type of established scientific theory, they usually base their arguments on the principle of, "The science just doesn't make sense to me." While I usually encourage the questioning of untested scientific ideas specifically and the questioning of established conventions in general, the questioning of science needs to be backed up with more science, not with rhetoric. Unfortunately, most liberal arts majors who argue strongly against science have nothing more than good rhetoric to back them up, which can be tough to defend against if one is a scientist whose specialty is not the topic of debate. "It's been peer reviewed and established for years," is a weak defense against what sounds like solid reasoning from a charismatic person.

Personally, I just consider articles like this to be philosophic in nature instead of scientific and read them for entertainment instead of education.

Mr. Spey
Cover your butt. Bernard is watching.

[ Parent ]
Voilà un crackpot. (3.63 / 11) (#16)
by Estanislao Martínez on Tue Nov 12, 2002 at 06:01:11 AM EST

Brains do not have registers and Von Neumann binary addressed memory, but they most certainly do process and store information.

You are confusing the theories of information and computation. Different things, having to do with different topics (of course they are related, it's math after all, but that doesn't make them the same thing, hell, far from it, they're really different, dammit). Computers don't "process information" (what the hell is that supposed to mean, anyway), they compute functions, and even that only over discrete objects. (And don't give me some story that comes down to redefining "computation". That term is defined by Turing machines, period. A CONNECTIONIST NETWORK IS NOT A COMPUTER, dammit.)

A state of epiphany is reached when one makes a great deal of new connections all at once, realizing how entire patterns of thought fit together in a previously unsuspected grand scheme; the feeling is more intense than an orgasm but, alas, also a lot more rare.

Are you trying to troll us?

Consciousness (n.): the use of a certain class of "hill climbing algorithm" to evaluate the state of the world according to some arbitrary set of criteria, evaluation of how various manipulative devices might be brought to bear to optimize its state, and occasional use of those devices to attempt to change its state based on these evaluations.

Yes, you must be trying to troll us. That "definition" certainly has as one of its inevitable consequences the little tiny puny "stream of thought" that I experience in my waking hours, the difficulty of explaining which constitutes the problem of consciousness.

--em

You are wrong. (2.00 / 10) (#17)
by tkatchev on Tue Nov 12, 2002 at 06:39:18 AM EST

You forget one thing:

Both the Von-Neumann and the Turing machines have absolutely nothing at all to do with modern computing.

Both models were invented fifty or so years ago, back when "computer science" as a field did not even exist. Both are useful only insofar as they help prove completely abstract mathematical theorems. More importantly, both are taken to be axioms and have no solid grounding in reality.

Look, the only thing that differentiates a computer from an extremely large table of logarithms is the fact that a computer has memory. In effect, the ability to shift data in and out of memory (i.e. the ability to process data) is what makes a computer a computer.

"Computing a function" is simply the ability to match one set of values to another; a simple table printed on a sheet of paper is just as good at "computing a function" as a computer.

P.S. Please don't argue with me, as this is a field that I'm waay more experienced in than you.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

P.S. Don't sound so arrogant (4.00 / 2) (#29)
by Shovas on Tue Nov 12, 2002 at 09:20:42 AM EST

You come off as mightily pickish with your P.S. comment. What qualifications do you have? What experience do you have?
---
Join the petition: Rusty! Make dumped stories & discussion public!
---
Disagree? Post. Don't mod.
[ Parent ]
And you? (none / 0) (#51)
by tkatchev on Tue Nov 12, 2002 at 12:34:10 PM EST

What qualifications do you have to criticise my criticism, huh?

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

he or she asked you first :) (nt) (none / 0) (#56)
by ethereal on Tue Nov 12, 2002 at 01:02:56 PM EST


--

Stand up for your right to not believe: Americans United for Separation of Church and State
[ Parent ]

The thing is, (none / 0) (#59)
by Shovas on Tue Nov 12, 2002 at 01:36:56 PM EST

Any crackpot can come off the street and make any assertion they want. Outrightly laying down a rule that one cannot argue with you immediately raises a red flag and makes you look like a lunatic. The epitome of knowledge is the question, and it is by questioning one another that we gain knowledge. By saying one shouldn't even bother attempting to argue, you're basically saying whatever point they bring up is already wrong. It's just a very "cheap" thing to do and portrays a negative sense of arrogance.

Best thing to do is say what you want to say and let the debate fly. It'll make you look much more professional and, besides, the truth will sort things out.
---
Join the petition: Rusty! Make dumped stories & discussion public!
---
Disagree? Post. Don't mod.
[ Parent ]
Glad we agree. (none / 0) (#98)
by tkatchev on Wed Nov 13, 2002 at 02:04:42 AM EST

I think Turing is an obvious crackpot.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

No you are (as well as arrogant) (4.00 / 1) (#60)
by the womble on Tue Nov 12, 2002 at 01:40:52 PM EST

Turing machines have absolutely nothing at all to do with modern computing.

Except that a Turing machine can compute any function that any conventional computer can

Both are useful only insofar as they help prove completely abstract mathematical theorems.

Which apply to all digital computers, because a Turing machine can compute anything any digital computer can.

both are taken to be axioms and have no solid grounding in reality.

A criticism that applies to the whole of mathematics! How devastating!

Look, the only thing that differentiates a computer from an extremely large table of logarithms is the fact that a computer has memory. In effect, the ability to shift data in and out of memory (i.e. the ability to process data) is what makes a computer a computer. "Computing a function" is simply the ability to match one set of values to another; a simple table printed on a sheet of paper is just as good at "computing a function" as a computer.

Everything a computer can do can be replicated by a table - it just needs to contain every possible state that the computer can have and the appropriate transition for any input (on a really BIG piece of paper), not very different from a function implemented as a table with an entry for every possible set of parameters.
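
To make the table idea concrete, here is a minimal sketch (Python; the two-state machine and its transition table are invented for illustration):

    # A toy "computer" reduced to a pure transition table.
    # Each entry maps (current_state, input_bit) -> (next_state, output_bit).
    TRANSITIONS = {
        ("even", 0): ("even", 0),
        ("even", 1): ("odd", 1),
        ("odd", 0): ("odd", 1),
        ("odd", 1): ("even", 0),
    }

    def run(bits, state="even"):
        # Step the table machine over a list of input bits.
        out = []
        for bit in bits:
            state, o = TRANSITIONS[(state, bit)]
            out.append(o)
        return out

    print(run([1, 1, 0, 1]))  # [1, 0, 0, 1]

Note that the whole "program" is the table; the loop merely looks entries up.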

P.S. Please don't argue with me, as this is a field that I'm waay more experienced in than you.

The pinnacle of rational discussion, an unsubstantiated appeal to authority. It's a pity Plato (and Newton, Russell, Gödel etc.) never thought of that - think how all those tedious books and papers could have been edited down by adopting your approach. With your approach you just state your conclusion and you're done.

[ Parent ]

Tables don't compute. Period. (5.00 / 1) (#66)
by Estanislao Martínez on Tue Nov 12, 2002 at 02:55:21 PM EST

Everything a computer can do can be replicated by a table - it just needs to contain every possible state that the computer can have and the appropriate transition for any input (on a really BIG piece of paper), not very different from a function implemented as a table with an entry for every possible set of parameters.

The tape on a Turing machine is infinitely long. You can only make a table finitely large.

Not to mention the huge conceptual difference between a table mapping inputs to outputs and a Turing machine computing the same mapping. From the perspective of a completed table, every function you can represent with it looks the same: you can write a random set of outputs in the table, or write the very same value as the output for all inputs, and you'll get a table of the same size, which will take equally long to use to compute either function.

From the perspective of a Turing machine, however, these two functions (the one whose outputs follow no real pattern, and the one whose output is always the same constant value) are vastly different. The Turing machine needed for the first one is far more complex, as it needs to store the table. The one needed for the second one is trivial: it just outputs the correct value without looking at the input. In the lingo of algorithmic complexity theory, the second table is highly compressible; the program that outputs it is much shorter than the table itself.

So paper tables are crucially different from Turing machines, not only in that the Turing machine can compute more functions by virtue of having an infinite tape, but simply in that only the Turing machines actually compute, i.e. use a small repertoire of minimal actions whose combination can compute all effectively computable functions. Turing machines do some work to find a value, and that work can be measured. Tables are inert paper.
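
To put the compressibility point in concrete terms, a sketch (Python; the table size and values are arbitrary):

    import random

    N = 1000

    # A function with patternless outputs: its shortest description is,
    # in effect, the completed table itself.
    random_table = {i: random.randrange(256) for i in range(N)}

    # A constant function: its completed table is just as big...
    constant_table = {i: 42 for i in range(N)}

    # ...but the program that regenerates that table is a one-liner.
    program = "constant_table = {i: 42 for i in range(%d)}" % N

    print(len(repr(random_table)))    # thousands of characters
    print(len(repr(constant_table)))  # same order of size
    print(len(program))               # a few dozen characters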

--em
[ Parent ]

Turing (4.00 / 1) (#70)
by ucblockhead on Tue Nov 12, 2002 at 03:19:36 PM EST

According to Turing, this doesn't particularly matter. For instance, if your machine, with obviously finite storage, periodically requests a magnetic tape for additional storage when it needs it, its storage can be considered to be effectively infinite. At least, that's what Turing himself thought.

Nice troll, though.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

*yawn* (none / 0) (#106)
by Estanislao Martínez on Wed Nov 13, 2002 at 03:24:19 AM EST

According to Turing, this doesn't particularly matter. For instance, if your machine, with obviously finite storage, periodically requests a magnetic tape for additional storage when it needs it, its storage can be considered to be effectively infinite. At least, that's what Turing himself thought.

So change "infinitely long tape" to "unlimited storage" if it makes you feel happy. Doesn't make a difference; e.g. if we're talking about the lambda calculus, the equivalent property is just the fact that lambda terms have no upper bound on their length. The real point is that the set of machine/tape state pairs is infinite.

--em
[ Parent ]

And the point flies right over em's head. (nt) (none / 0) (#132)
by ucblockhead on Wed Nov 13, 2002 at 12:53:44 PM EST


-----------------------
This is k5. We're all tools - duxup
[ Parent ]
News flash: (none / 0) (#99)
by tkatchev on Wed Nov 13, 2002 at 02:08:20 AM EST

Math is supposed to deal with completely abstract things.

CS, on the other hand, is all about getting your boss' database to scale and distribute properly.

Pray tell, how is Turing's infinite tape going to help me distribute Oracle over a global grid system?

All this just shows how horribly lacking modern CS education is. Personally, I think all tenured CS professors must be fired; we need to start with a clean slate. Too many crackpots like Turing have put too much garbage into too many people's heads.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Can't...stop...feeding...troll (4.00 / 1) (#143)
by Boronx on Wed Nov 13, 2002 at 04:17:56 PM EST

You are a troll, but whatever. They may have called it computer science where you went to school, but it wasn't computer science.
Subspace
[ Parent ]
The only "troll" here is staring back at (1.00 / 1) (#157)
by tkatchev on Wed Nov 13, 2002 at 10:48:23 PM EST

...from the mirror.

Go back to school, kid, before you try arguing with the big boys.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Look up a dictionary (none / 0) (#166)
by the womble on Thu Nov 14, 2002 at 02:58:34 PM EST

You do not know the difference between science and engineering.

[ Parent ]
You need to review computation theory (4.00 / 1) (#64)
by Estanislao Martínez on Tue Nov 12, 2002 at 02:37:16 PM EST

Both the Von-Neumann and the Turing machines have absolutely nothing at all to do with modern computing.

Please explain when exactly "modern computers" managed to compute functions beyond those computable by these models.

Both are useful only insofar as they help prove completely abstract mathematical theorems. More importantly, both are taken to be axioms and have no solid grounding in reality.

So just go ahead and solve the halting problem on your computer. Everybody will be amazingly impressed.
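
(If a demonstration helps, the standard diagonal argument fits in a few lines of Python; halts_claimed is a hypothetical stand-in for any would-be halting oracle:)

    def halts_claimed(f):
        # Pretend this is a working halting oracle. Any total
        # implementation substituted here is wrong about at least
        # one program -- namely contrarian, below.
        return True

    def contrarian():
        # Do the opposite of whatever the oracle predicts about us.
        if halts_claimed(contrarian):
            while True:
                pass  # loop forever, refuting the "it halts" verdict

    # If halts_claimed(contrarian) is True, contrarian() loops forever;
    # if it is False, contrarian() halts at once. Either way, the oracle lies.
    print(halts_claimed(contrarian))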

Look, the only thing that differentiates a computer from an extremely large table of logarithms is the fact that a computer has memory.

Try again. Tables of logarithms are finite; the tape on a Turing machine is infinitely long.

In effect, the ability to shift data in and out of memory (i.e. the ability to process data) is what makes a computer a computer.

"Processing data" is not a defined term in computation theory. And depending on what you call "memory", that is not essential to computing-- witness the lambda calculus.

"Computing a function" is simply the ability to match one set of values to another; a simple table printed on a sheet of paper is just as good at "computing" a function" as a computer.

The table is finite, therefore can only match finitely many inputs to finitely many outputs.

--em
[ Parent ]

P-b-P rebuttal: (none / 0) (#97)
by tkatchev on Wed Nov 13, 2002 at 01:58:51 AM EST

Please explain when exactly "modern computers" managed to compute functions beyond those computable by these models.

And this is Turing's achievement exactly HOW?

Also, sorry to disappoint you, but the halting problem has absolutely no bearing on the "real world" or on any "real world" problem.

The only function of the halting problem is to make first-year CS majors feel good about themselves. ("Oh look mommy, we have real theorems and stuff just like the big mathematics boys!")

Sorry, but CS is not math. If you crave "theorems", I suggest you buy yourself a math textbook instead.

CS is about writing programs and drawing ridiculous diagrams, not about "the halting problem".

"Computation theory" is a crackpot science, staffed by crackpot professors on crackpot crappy CS faculties. It is as much a "science" as psychoanalysis or torsion field theory.

If, for some reason, you feel an urge to study something "scientific", please go study some math and some physics instead.

P.S. My table of logarithms is infinite; it's printed on the same paper that Turing's tape is made of.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Subtle, crucial point. (5.00 / 1) (#111)
by Estanislao Martínez on Wed Nov 13, 2002 at 04:11:24 AM EST

P.S. My table of logarithms is infinite; it's printed on the same paper that Turing's tape is made of.

And how does one figure out which numbers to put on it?

--em
[ Parent ]

What's the problem here? (none / 0) (#115)
by tkatchev on Wed Nov 13, 2002 at 06:06:12 AM EST

The same way as in Turing's infinite tape.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Precisely. (none / 0) (#152)
by Estanislao Martínez on Wed Nov 13, 2002 at 09:21:14 PM EST

What does this tell you about your suggestion that a table can map inputs to outputs, then?

--em
[ Parent ]

It tells me... (1.00 / 1) (#155)
by tkatchev on Wed Nov 13, 2002 at 10:46:57 PM EST

...that Turing is a troll and an ugly crackpot.

You still haven't succeeded in showing to me why Turing should be taken seriously.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

What do you think about complexity theory? (n/t) (none / 0) (#119)
by gzt on Wed Nov 13, 2002 at 08:53:41 AM EST

 

[ Parent ]
Positive. (1.00 / 1) (#136)
by tkatchev on Wed Nov 13, 2002 at 01:39:17 PM EST

Complexity theory is actually useful; well, at least a small subset of it.

Even though most of it uses Turing's wacky ideas as a foundation. (Nothing special here about Turing -- we could replace all references to "Turing machine" with "Athlon XP processor" and nothing would change.)

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

What CS is and is not (5.00 / 1) (#128)
by Three Pi Mesons on Wed Nov 13, 2002 at 11:54:56 AM EST

I really like "CS is about drawing ridiculous diagrams"! Though I would add "thinking about programs" to your definition, and the halting problem is something to think about...

There are plenty of (practical) areas of CS where undecidable problems are relevant. For example, various problems with verifying high-reliability systems sound very, very similar to the kind of abstract issues raised in computation theory.

:: "Every problem in the world can be fixed with either flowers, or duct tape, or both." - illuzion
[ Parent ]

Newsflash. (1.00 / 1) (#135)
by tkatchev on Wed Nov 13, 2002 at 01:36:36 PM EST

"Real world" CS doesn't "verify" anything.

"Verification" is just another form of intellectual masturbation for ossified tenure professors.

The "real world" operates in terms of "acceptable risk", not in terms of "correctness".

Besides, in "real world" CS, the phrase "unsolvable problem" is really a code word for "nondeterministic polynomial".

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

The real world (5.00 / 2) (#139)
by Three Pi Mesons on Wed Nov 13, 2002 at 03:22:58 PM EST

You may think that verifying the correctness of a program is something that only happens for ten-line toy problems, as a homework exercise. This is not the case. When constructing high-integrity, safety-critical systems, these kinds of processes are vitally important. We need to know that a design does what is intended, and that a particular system fulfils that design. "Verification" is a good word for this, and it's one that you'll find used not only by professors, but also by designers, engineers and programmers working on "real" problems.

Admittedly, we can't check the correctness of a large program as fully as we might like, but the problems are of the same kind. You are right to identify acceptable risk as a factor in deciding testing strategies and so forth, but correctness is part of this too. We're deciding how much of the correctness issue can be safely ignored; when writing flight control software, for example, the tolerance for risk is much lower, so the formal analysis aspect is more evident.

:: "Every problem in the world can be fixed with either flowers, or duct tape, or both." - illuzion
[ Parent ]

I concede. (3.00 / 1) (#158)
by tkatchev on Wed Nov 13, 2002 at 10:50:32 PM EST

OK, "verification" might be useful -- but only in terms of a broader risk analysis process. If you don't have enough man-power for proper testing, I guess verification might be useful to some degree.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

alright Mr. Knowitall, (4.00 / 1) (#65)
by twi on Tue Nov 12, 2002 at 02:38:46 PM EST

> Both the Von-Neumann and the Turing machines have absolutely nothing at all to do with modern computing.

Saying "have absolutely nothing at all to do with" if somebody else just claimed that they do is almost always wrong and this is no exception. If you are so mightily experienced then tell me, what IS the vast principle difference between the machines which Von Neumann convieved and the Athlon XP on your desk ?

> Both models were invented fifty or so years ago, back when "computer science" as a field did not even exist.

So what ? You don't need a label to think clever thoughts about something.

> Both are useful only insofar as they help prove completely abstract mathematical theorems. More importantly, both are taken to be axioms and have no solid grounding in reality.

There being no way to "prove" things like e.g. Church's Thesis does in no way mean that they have no solid grounding in reality. It just means that they contain intuitive concepts for which we do not (yet?) have a nice definition.

> "Computing a function" is simply the ability to match one set of values to another; a simple table printed on a sheet of paper is just as good at "computing a function" as a computer.

And what's wrong with that? You could print out a huge memory-map of your computer which shows it running quake. Build that into a table and look up the next frame. Then paint the pixels on another piece of paper with your pen. You have then computed the quake-function by hand.

> P.S. Please don't argue with me, as this is a field that I'm waay more experienced in than you.

If you want people to just shut up and take your word for it (which is generally inappropriate (and insulting) in a forum such as this) you'd better offer some better explanations than you did.

[ Parent ]

There is no "quake function". (none / 0) (#96)
by tkatchev on Wed Nov 13, 2002 at 01:49:36 AM EST

Functions map from one set of values to another set of values. (Review your high-school algebra textbook if you're confused.)

Now, pray tell me, what is the input value set for Quake?

Well, there isn't one; and guess why? Because the current output of Quake depends very heavily on the sequence of previous output values.

Which means that you've come back to "data processing".

P.S. Yes, you are right that Von-Neumann architecture is still important today.

But only for hardware manufacturing. Modern compilers and operating systems have all but abandoned Von Neumann's model. Most use some sort of stack-based model. Just look at Java, for example.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Q(x) (5.00 / 1) (#116)
by twi on Wed Nov 13, 2002 at 06:55:41 AM EST

> Now, pray tell me, what is the input value set for Quake?

It is the state of the complete memory of the machine running it, including memory-mapped IO for the current user input. That it stores its own input is not a problem, because the stack machine you mentioned yourself could do this, and that is no more powerful than the Turing machine from some posts ago.

> Well, there isn't one; and guess why?

See above.

> Because the current output of Quake depends very heavily on the sequence of previous output values.

Only insofar as those previous states are represented as bits in memory. If they are not stored, they are forgotten. The memory is of fixed size and its contents unambiguously (apart, perhaps, from randomized bot-AI, which wouldn't really be needed) determine the state of the next frame.
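
In code, the idea is just a state-transition function. A toy sketch (Python; the dict is a stand-in for the real memory image and the dynamics are invented):

    def next_frame(memory, user_input):
        # Pure function: (complete machine state, input) -> new state.
        # Everything the game "remembers" lives inside 'memory'.
        return {"frame": memory["frame"] + 1,
                "player_x": memory["player_x"] + user_input}

    state = {"frame": 0, "player_x": 0}
    for key_press in [1, 1, -1, 0]:
        state = next_frame(state, key_press)
    print(state)  # {'frame': 4, 'player_x': 1}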

> P.S. Yes, you are right that Von-Neumann architecture is still important today.
> But, only for hardware manufacturing.

Or for OS- and compiler-developers. At least it should be, as long as it reflects the hardware. There are reasons why java still routinely sucks. (Especially in my current field of activity, which is MIDP.)

[ Parent ]

Processing data (4.00 / 1) (#127)
by Three Pi Mesons on Wed Nov 13, 2002 at 11:46:59 AM EST

In effect, the ability to shift data in and out of memory (i.e. the ability to process data) is what makes a computer a computer. "Computing a function" is simply the ability to match one set of values to another; a simple table printed on a sheet of paper is just as good at "computing a function" as a computer.

Shifting data in and out of memory is nothing more than what I do with a filing cabinet. Computers are more powerful, because they can make decisions - I don't mean "intelligent" decisions, but the ability to select alternative sequences of actions under certain conditions. "If-then" is the key ingredient to the computational power of a Turing machine, to the lambda calculus and its generalisations, and to actual, physical computing machines.

In some sense, yes, computing a function is just a "lookup" task: we can define f: A -> B as a certain subset of A × B. But that's only a theoretical foundation, to make sure we have some reasonable definition of functions in general. There are all kinds of interesting questions connected with how functions can be computed, and that's in the domain (sorry) of Computer Science.
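
A toy contrast, for what it's worth (Python; parity is just a convenient example):

    # The same function two ways: as an inert table and as a computation.

    # As a completed table, f is a subset of A x B written out in full:
    parity_table = {0: "even", 1: "odd", 2: "even", 3: "odd"}

    # As a computation, the "if-then" does work to answer for ANY input:
    def parity(n):
        return "even" if n % 2 == 0 else "odd"

    print(parity_table[3], parity(3))  # both say "odd"
    print(parity(10**9))               # the table has no entry here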

:: "Every problem in the world can be fixed with either flowers, or duct tape, or both." - illuzion
[ Parent ]

"Alternative sequences". (none / 0) (#134)
by tkatchev on Wed Nov 13, 2002 at 01:31:36 PM EST

The phrase "alternative sequences of actions" makes sense only if have some sort of memory access scheme in place.

As a rule of thumb, the easiest way is to use two stacks. (One is not enough -- what you want is really random access to state variables.)

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

What I meant (4.00 / 1) (#140)
by Three Pi Mesons on Wed Nov 13, 2002 at 03:27:23 PM EST

I meant that having a memory access scheme is not enough. You also need to have computational processes going on, using that memory in whatever form it is available. Otherwise, you don't have a computer: you have an automated filing cabinet.

:: "Every problem in the world can be fixed with either flowers, or duct tape, or both." - illuzion
[ Parent ]
Table = Function Computation? (4.00 / 1) (#142)
by Boronx on Wed Nov 13, 2002 at 04:14:29 PM EST

If you insist.

Show me a table that computes the addition function for all pairs of integers.
Subspace
[ Parent ]

Come on, you can do better than that. (5.00 / 1) (#156)
by Estanislao Martínez on Wed Nov 13, 2002 at 10:48:13 PM EST

Ask him to show you how he might go about constructing even a finitely large addition table.

The notion of a completed addition table of any size presupposes that we have an effective procedure for addition that allows us to construct said table. How else could it be an addition table?

--em
[ Parent ]

Sir, you are a clown. (none / 0) (#164)
by tkatchev on Thu Nov 14, 2002 at 07:40:46 AM EST

As far as "computation" is concerned, where exactly you get your values from is completely irrelevant.

For all it matters, you can just make them up as you go along. (That, indeed, would be a pseudo-random-number function; a perfectly valid function as far as functions go.)

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

connectionist networks (none / 0) (#69)
by ucblockhead on Tue Nov 12, 2002 at 03:13:36 PM EST

Rumelhart showed in the mid-eighties that you could train a connectionist network to perform an exclusive-or using back propagation. With this proven, he showed that it was theoretically possible to build a traditional Von Neumann Architecture machine entirely out of connectionist networks.
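
For the curious, the XOR result is easy to reproduce today. A minimal sketch (Python with numpy; the layer sizes, learning rate and iteration count are arbitrary choices, not Rumelhart's originals):

    import numpy as np

    rng = np.random.default_rng(0)

    # XOR truth table as training data.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One small hidden layer is enough to separate XOR.
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

    lr = 1.0
    for _ in range(5000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: squared-error gradients via the chain rule.
        d_out = (out - y) * out * (1.0 - out)
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        # Gradient-descent updates.
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

    print(out.round(2).ravel())  # approaches [0, 1, 1, 0] for most seeds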

Both computers and brains are devices that take input and produce output based on that input. How similar they are otherwise is, of course, an open question.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]

The brain does not compute a function (none / 0) (#107)
by Estanislao Martínez on Wed Nov 13, 2002 at 03:32:07 AM EST

Both computers and brains are devices that take input and produce output based on that input.

This is not an empirical statement. This is one way that one can think about brains and computers. And I would say it is not a very insightful one when it comes to brains, each of which is involved as a whole in a feedback loop with its surroundings. This is a different idea from computing a function. (And don't reply with something like "but it can be modeled as a function from times to brain/environment state pairs"; the fact that you can model an X as a Y simply does NOT mean that an X is a Y.)

Computation theory, having been invented by mathematicians, is overly concerned with functions, and thus with "inputs" and "outputs". In the major work, nonterminating processes are regarded as junk; there is some small amount of work that develops notions of the "work" that such a process does, but this is a minority pursuit.

Anyway, thinking about brains as "input/output devices" is a profoundly misleading idea smacking of behaviorism.

--em
[ Parent ]

Quirks and Quarks (none / 0) (#24)
by lonesmurf on Tue Nov 12, 2002 at 07:31:49 AM EST

I listen to a really cool radio show, Quirks and Quarks, online every week. This week on the show, the author of The Blank Slate was on and gave an interview. Sounded like a pretty cool guy with his feet on the ground and his head full of good, hard science and ideas. I intend to order the book; it sounded fascinating.

They also had a cool interview with a woman doing research in baby babbling. The woman sounded like a ditz, but her ideas were neat. Like, did you know the majority of people speak out of the right side of their mouths because it is the left side of the brain which controls language? How cool is that? (Try it!)


Rami

I am not a jolly man. Remove the mirth from my email to send.


More than that (none / 0) (#100)
by Kalani on Wed Nov 13, 2002 at 02:11:46 AM EST

I once read a study (I don't recall the name of it offhand but I think that the name of the man who did the study is "M. S. Gazzaniga") that studied patients who had their corpus callosum severed. In one part of the study, he performed an experiment in which the patient was shown a sequence of shapes and first asked to name the next shape and then to write the name of the next shape on a piece of paper with his left hand. The sequence of shapes was set up such that the patient's right eye could only see up to the next-to-last shape and the left eye could see the last shape. So if the next-to-last shape was a circle and that meant that the last shape was a triangle, the patient would say that the next shape would be a triangle. At the same time, the patient would actually write "square" (or whatever the shape after triangle would be) with his left hand. I think that he went on to do experiments with more common objects and so on, but that's the main thrust of the project that I remember. It would be interesting to know if this is true and if the work has been duplicated since then (mid/late 50s, I think).

-----
"Images containing sufficiently large skin-colored groups of possible limbs are reported as potentially containing naked people."
-- [ Parent ]
insurance (4.00 / 1) (#26)
by tps12 on Tue Nov 12, 2002 at 07:36:34 AM EST

Insurance companies have an economic incentive to make their predictions as accurate as possible. The only way they could make it "fair" w/r/t men and women would be to charge some other group more to help cover the risk they'd be taking on from young men.

you presented only half of the idea (none / 0) (#34)
by nex on Tue Nov 12, 2002 at 10:13:16 AM EST

> Insurance companies have an economic incentive to make their
> predictions as accurate as possible.

Actually, insurance companies have an economic incentive to make their predictions as accurate as possible, and then to map them not to rates that mirror those predictions, but to rates that will generate maximum revenue.

[ Parent ]

true, but... (none / 0) (#35)
by tps12 on Tue Nov 12, 2002 at 10:27:27 AM EST

Competition then drives the rates down to the minimum that the companies can afford. While mandatory insurance laws artificially inflate rates (by effectively shifting the risk burden of "uninsurable" drivers to everyone else), they do so uniformly. Auto insurance is highly competitive, so if any of these companies could figure out a way to offer lower rates to any demographic, it's a safe bet that they would do so.

[ Parent ]
true, but... (none / 0) (#36)
by nex on Tue Nov 12, 2002 at 10:40:53 AM EST

> Competition then drives the rates down to the minimum that
> the companies can afford.
True, but those rates are not necessarily fair for the individual customer. There is lots of competition, but every competitor wants to maximise income, not to have fair rates.

You're actually right: in general, competition brings rates to very realistic values, because if everyone charged, say, women too much, it wouldn't take long before a new company offering cheap insurance for women was founded; lots of women would switch to that company and everyone else would lose customers.

It's just that the situation is biased. Many people who own a Lexus that is worth five times as much as your car are able and willing to pay rates that are ten times as high as yours, without really causing twice as many accidents... Oh, well, damn, I can't find a better example.

[ Parent ]

nice try (none / 0) (#41)
by tps12 on Tue Nov 12, 2002 at 10:48:31 AM EST

Many people who own a Lexus that is worth five times as much as your car are able and willing to pay rates that are ten times as high as yours
The old rich-people-ruin-capitalism-for-the-rest-of-us argument. Try again.

[ Parent ]
totally wrong (none / 0) (#43)
by nex on Tue Nov 12, 2002 at 11:03:05 AM EST

umh, that was not the point i tried to convey. and it's not the old rich-people-ruin-capitalism-for-the-rest-of-us argument at all; quite to the contrary: it implies that richer people help less wealthy people pay their rates, not that they do them harm.

[ Parent ]
oh (none / 0) (#46)
by tps12 on Tue Nov 12, 2002 at 11:09:02 AM EST

I thought you were saying that bad companies stayed in business because rich people would pay uncompetitive prices. My bad.

But...the wealthy pay more than the poor? Do you mean that expensive cars are disproportionally more expensive to insure than less expensive ones? I can't imagine that this would be the case, unless you are talking about sports cars or SUVs or something that are statistically more likely to be involved in accidents.

[ Parent ]

Expensive cars (none / 0) (#53)
by BCoates on Tue Nov 12, 2002 at 12:42:36 PM EST

The wealthy pay more for insurance because:

a) they probably drive a more expensive car, and get theft/comprehensive/uninsured driver coverage, which pays the higher replacement cost on that lexus vs. the gremlin that's probably totaled for $500 in damage.

b) they have more to lose in a liability situation. If I run into a bus and get hit with $1 million in chiropractor bills, they'll get my legal minimum $25,000 in insurance, and nothing more, because I don't have much money for them to take; but if I had that million dollars to lose, I'd want more coverage so the insurance company gets to pay instead of me if something like that happened.

--
Benjamin Coates

[ Parent ]

insurance rates... (none / 0) (#44)
by gregbillock on Tue Nov 12, 2002 at 11:06:59 AM EST

>  True, but those rates are not necessarily fair for the individual customer. There is lots of competition, but every competitor want to maximise income, not to have fair rates.

If it were more efficient to assess you as an individual, insurance companies would do so. (Look at all the sub-sub-sub-categories and rate incentive packages you get now; this is a direct result of trying to get close to that goal.)

Even if the insurer had the time to assess you individually, it wouldn't be worth it. Why? Because I'd start a company which wouldn't, but would just give you rates slightly lower than company X which did. I'd let them do the hard, expensive work of figuring out your best rate, then steal you as a customer.

As a consequence, actuarial tables, as prejudgmental as they might be, are probably the best you're going to get, and when coupled with incentives that already exist, like cheaper rates when you don't have accidents or go to college, are probably as tuned for the individual as it is possible to do with today's level of invasion-of-privacy tech.

In the days before insurance companies watch you 24/7, gross signaling characteristics, like Y chromosomes, college attendance, and the like, are the best you've got going. The question of whether it is 'fair' or not is irrelevant. It may not be fair to you personally, but it would be unfair to the insurance company to be forced to give you a more personalized rate (or, at least that's the free market theory).

[ Parent ]

that's what i wanted to say :-) (none / 0) (#79)
by nex on Tue Nov 12, 2002 at 07:07:27 PM EST

that's perfectly true. just in case i didn't get the message across as intended: i didn't want to complain about the insurance companies being unfair. i just wanted to clarify that their primary incentive is maximising profit, not charging everyone a perfectly fair rate (which would be impossible anyway, as you explained very nicely).

[ Parent ]
An extra thought for the morning (5.00 / 3) (#27)
by localroger on Tue Nov 12, 2002 at 08:09:04 AM EST

It's been about ten years since I read the wasp essay, and seven since I picked up The Creative Loop at a used-book fair. If the article is long and rambly, it's a large and rambly problem where important clues must be taken where they are found.

I don't really expect this article to convince anyone that my theory is valid, or even that the mainstream view is all that wrong (though I think it is). The idea is to present a way of looking at the problem, and some of the data points that have shaped my thinking. There are other data points I've not bothered to mention, of course, and some of them would be even harder to document than the ones I've bothered to include here.

The main thing I hope to accomplish is to show that another way of thinking about the problem is possible, and other conclusions are reasonable. Meanwhile, every year or two I go back to the medical library to see if any new research has been done that I might find useful, but usually it's just more analysis of dripping oil and low rumbly sounds. Nobody has had the internal-combustion-engine idea yet.

One of the most tragic results of the mainstream view probably involves Dr. Erich Harth himself. His theory explains a great deal of human behavior and misbehavior -- I was especially impressed because I was neck-deep in the casino environment when I found him -- and leads to all sorts of really obvious and useful suggestions, like: consciousness and your current state of mind are mediated by the thalamus, not the cortex; the cortex houses the feature extractors; and features of the hill-climbing algorithm show up in human behavior at all sorts of levels of abstraction. Harth draws none of these conclusions in his writing, though one gets the sense that he understands the importance of his own work.

But Harth is an important neuroscientist with a reputation and a career to protect. He can't point out things like that merely because they are obvious and probably right. After all, look what happened to the guy who pointed out that the continents fit together like a jigsaw puzzle.

I can haz blog!

mainstream view? (4.00 / 6) (#42)
by speek on Tue Nov 12, 2002 at 10:54:20 AM EST

I think you could take some comfort in recognizing there really isn't this "mainstream view" that is blocking out all others. It's rather open and unknown. Just a few weeks ago, I was reading a Discover article trying to convince the "mainstream" that we aren't just a product of our environments, that genetic predisposition plays a bigger role than most people think. It would seem to me that extremists (like yourself, sorry but true) like to invent a mainstream strawman so they can justifiably attack it with radical ideas.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

For the record (4.00 / 1) (#75)
by krek on Tue Nov 12, 2002 at 05:35:44 PM EST

I like your theory.

I have been having thoughts along these lines for a couple of years now, but, unfortunately, only a couple years, and thus, I do not really think that I can add much to your analysis. All I can do is point out the reasons that I came to these sorts of conclusions.

The first being that we are not really very different from any other mammal around; we just have a bigger and better cortex. This occurred to me as an extrapolation of analysing racism as a natural human instinct, combined with an extrapolation, in jest, of the vegan viewpoint that 'animals have rights too' to 'what about wheat, does wheat have rights?'.

And second, my belief that God did not create humanity in His image but that we created God in humanity's image led me to generalise; perhaps we create everything in our image. Internal combustion engines and computers definitely seem to have many parallels to our biological systems. It may be simply due to our amazing powers of pattern recognition fooling us into seeing patterns where none exist, but I do not think so. It stands to reason that when faced with a problem that needs a solution, our pattern recognition circuits kick in and give us solutions based on what we know, even if we do not consciously know it. In this manner we might be able to study ourselves by studying the tools and devices that we have created, ostensibly, in our image. Thus, the "misinformed" computer analogies may not be so misinformed after all, just misunderstood.

I have no idea if my comment will be welcome to you, since it is not exactly based on solid scientific method, and hardly any reading of official texts has been done, but, there it is. Besides, it is my opinion that scientists have lost their way recently, becoming too dependent on the scientific method while leaving empirical speculation behind, as if it were useless. With only the scientific method to guide you, it would be very difficult to make any kind of progress, and near impossible to find new ideas to study. The scientific method is extremely important because it gives us eventual certainty (as much as we can get in this world anyway), but without speculation, certainty in regards to what? Left to its own devices, Scientific Method would wander round and round in ever tightening circles, lacking any kind of inspiration at all.

And as a last note, I would like to know what literature you have been reading. The only book that I can offer up on this topic is "A General Theory of Love", a book regarding the proper rearing of children from a neurological point of view, disguised as a self-help type book about love.

[ Parent ]
sorry (5.00 / 1) (#30)
by NKcell on Tue Nov 12, 2002 at 09:33:12 AM EST

but I have a problem with your comment:

Since the cortex is not -- can't possibly be -- programmed by the genome with its inadequate array of instructions, it must acquire its fine programming through experience

Why not? Why do you think it's inadequate? Is this a gene-number argument, or is it that you think it's too complicated a system?

Also, why can't it be both experience and genetic-based? There are intricate patterns and circuits formed in the cortex genetically- this has been proven. Why can't the crude template be created intricately, with the remaining refinement done through experience? Thus intelligence would be a product of both genetics and experience.

--------------------------------------------------------

Black holes are where God divided by zero. -Steven Wright

clarification of my point (5.00 / 2) (#31)
by NKcell on Tue Nov 12, 2002 at 09:49:31 AM EST

I think we both agree there is refinement of some sort for neuronal circuits; I think you just believe that most complex neuronal pathways are not preprogrammed (correct me if I'm wrong).

However, all major senses and their neuronal pathways seem to be genetically programmed. Take the most complex sense so far, the olfactory system: 1,000 receptors detecting more than 10,000 distinct odors. All those neuronal pathways are predetermined, as are their connections to higher brain centers. It doesn't take that much of a leap to consider that much higher processing is preprogrammed as well (such as the genes involved in language ability that were just discussed in Nature/Science).

In short, while genetics isn't the only determinant of intelligence and brain organization, it is a major factor. To deny it would be incorrect and illogical.

--------------------------------------------------------

Black holes are where God divided by zero. -Steven Wright
[ Parent ]

Reason (5.00 / 1) (#87)
by localroger on Tue Nov 12, 2002 at 07:39:23 PM EST

Is this a gene-number argument?

Basically, yes. It's like fitting the instructions to build a race car on a postage stamp.

Also, why can't it be both experience and genetic-based? There are intricate patterns and circuits formed in the cortex genetically- this has been proven.

Long story short, genes obviously route the bundles of cables that go here and there, and probably do more low-level programming in the spine and brainstem. Within the cortex, I think all genes do is route the interareal connections, sprinkle the intraareal connections randomly, and start looking for inputs that repeat themselves, reinforcing them.

Since human behavior is dominated by the cortex far more than most or even all other animals, this means our behavior is much less regulated by genetic influences and much more plastic.

I can haz blog!
[ Parent ]

genes create millions of proteins (5.00 / 3) (#90)
by atsmyles on Tue Nov 12, 2002 at 10:34:35 PM EST

While it is true that genes do not appear to account for our complexity in itself, genes create millions of proteins. These proteins are not only three-dimensional, which allows for variation of use, but also affect each other temporally (when proteins are created could have as much of an influence as what is made). We have just become aware of this problem, and have not even made a dent yet in understanding how these proteins relate to each other.

[ Parent ]
Genetic Determinism in the field of Psychology. (5.00 / 3) (#32)
by faets on Tue Nov 12, 2002 at 10:00:25 AM EST

First of all, it's good to see such an old debate being carried out in such a new medium. This is the old "nature" vs. "nurture" debate rearing its head again.

I'm currently doing a double major in CS and Psych (right in the middle of exams, hence surfing the web!) and one of the main topics covered in the Genetics section of my Developmental Psych unit was, you guessed it, "Genetic Determinism". So naturally this article piqued my interest.

A large focus of the Genetics debate in Psychology has been on intelligence, which is nothing new; psychologists are _obsessed_ with intelligence. The two main competing theories in the area of the development of intelligence are "Socialization" theory (representing the nurture camp) and "Behaviour Genetics" theory (in the nature corner).

Socialization theory attributes parental rearing style as having the largest influence on the development of intelligence. This camp associates different parental styles with varying levels of intelligence (e.g. "authoritarian" styles are theorised to lead to low IQ).

Behaviour Genetics on the other hand attributes genes as being the biggest influence in the development of intelligence. This point of view doesn't completely disregard the effects of environment on intelligence, however it states that genetics ultimately influences environment as well. This actually makes a certain amount of sense when you sit down and think about it. People will treat children of different genetic make-ups differently (say, the aggressive child in comparison to the inquisitive child) and specific genetic make-ups will seek out specific environments (say, the introverted yet highly technically competent make-up will seek out the "Computer Science department" environment <g>).

Anyway, that's an overview of the dominating theories (there are heaps more, trust me, and every new paper takes its own spin on old theories). This leads me to my point... You unfairly criticise Twin Studies. Both of the dominant theories, while having completely different causal models, predict very similar correlations of intelligence in biological families, and it is absolutely necessary to look at Twin and Adoption studies to determine which theory is closer to the mark.

By looking at adopted monozygotic ("identical") twins you take the effects of a shared environment (including things like parental rearing style) out of the results. Monozygotic twins raised together will have the effects of both shared environment AND genetics acting on their development. On the other hand, monozygotic twins raised apart will only have genetics in common. This way you can look at the relative effects of genetics and environment (plus things like their interaction) in a purely empirical way.

I have some relevant research in my notes and I'd have to say all the empirical research to date definitely backs up the Behaviour Genetics viewpoint. The correlation in IQ between identical twins raised together is 0.86, while for twins raised apart it is 0.76 (very high; don't forget experimental error is in there too). This was conducted with 1,300 and 137 pairs of twins respectively (while 137 doesn't sound like much, it is more than enough for the sensitive mathematical tools that are typically used, e.g. ANOVA). This is all from Sandra Scarr's (1992) review of heritability data.

At the moment behaviour genetics is the theory of the month (or rather of the decade; it has been the "most popular" theory since around the early 90s), but this hasn't always been the case. As in any other scientific discipline, debate is always raging, and psychologists are a fickle crowd. As soon as another better theory comes along with some solid research backing it up they will abandon BG, no worries.

Personally, I think the truth lies somewhere in between BG and the nurture side. I don't think genetics can account for ultimately causing ALL of the environmental effects which can influence development (particularly in early life) but I do think genetics does account for large slices of the pie. I stress that THIS view isn't backed up by any data in particular, however.

localroger, I suggest that if Genetic Determinism really interests you, you should take some Psych units or at least pick up some modern Developmental Psych books. These will give you good pointers to the relevant researchers etc. and the theories that they respectively propound. You will find many of the arguments that you represent as being used as pro-genetic determinism don't hold water under proper scientific examination. I have found Psychology a nice "half-way" point between what I consider the liberal arts "airy fairy" argue-any-point practices and hard science grounded in _empirical_ scientific research.

errata (5.00 / 1) (#33)
by nex on Tue Nov 12, 2002 at 10:07:02 AM EST

while i consider the article high-quality FP material, there are some specific points on which i will now pick.

> ... even with a perfect driving record and every possible plus you will pay triple the auto
> insurance of a woman the same age with four accidents ... It's horribly unfair ...
with four accidents? are you sure you're not exaggerating? Anyway, your opinion that this is unfair is very arguable. Sure, on one hand, you could say that having gender-specific rates is sexist, but on the other hand a woman who has to pay as much as a man, even though she's less likely to be involved in an accident, could also find her rate unfair. If the insurance company's statistics don't take your plusses and assets into account properly, the statistics are flawed, but not the general idea of having different rates for different people.

[ The paragraph about the disoriented wasp. ]
You talk about how consciousness is much less complicated than it is believed to be and that computers might already be given consciousness if anyone knew a nifty algorithm. However, you don't define the term 'consciousness' even roughly and you don't show how it relates to the rest of the story (you don't really need consciousness to find your nest). And you don't say why you believe that a nifty algorithm implementing consciousness could possibly exist. You don't talk about how most people think that consciousness works completely differently. Well, you give a definition a bit below, but it's one that doesn't fully apply to the wasp story.

> ... the genetic code -- even if it were entirely devoted to brain-growing -- is nowhere
> near as complicated as the brain which grows under its direction. By something like five
> orders of magnitude, at least.
orders of magnitude of what measure exactly? you seem to measure complexity purely as the amount of information needed to store the physical structure---you're talking about gigabytes---which is clearly wrong. consider a huge poster that shows a certain portion of the mandelbrot set. you may need gigabytes to store those pixels, but you could store an algorithm that generates them in a few hundred bytes. so, the brain itself is not all that complex. a particular brain that was shaped by years of learning, that's what needs lots of information to describe. but pretending to know exactly how much information and speaking of x orders of magnitude without stating the measure is rather ridiculous.

to make a long rant short: just because it looks huge and complex, it doesn't necessarily need to be complex; maybe it's just the underlying pattern that's too complicated to understand (you couldn't reverse-engineer a plot of the mandelbrot set back to the algorithm that generated it without knowing how it was made). we don't know how many bits would be needed to store the information a brain stores. this is an extraordinarily difficult question, because the brain doesn't store bits. a more appropriate question would be how many bits you'd need to store the information a brain stores inside a binary machine that emulates said brain rather perfectly---that's also hard to answer.
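
for instance, the entire rule behind those gigabytes of poster pixels fits in a few lines (python; the grid size and iteration limit are arbitrary):

    # the whole mandelbrot "poster" comes from this tiny rule:
    # z -> z*z + c, starting from z = 0, for each point c of the plane.
    def in_mandelbrot(c, max_iter=50):
        z = 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                return False
        return True

    # render an arbitrarily large "poster" from a few hundred bytes of code
    for im in range(11):
        row = ""
        for re in range(31):
            c = complex(-2 + re * 0.1, -1 + im * 0.2)
            row += "#" if in_mandelbrot(c) else "."
        print(row)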

> It is obvious that the brain (the entire body, in fact, and possibly the entire Universe)
> is a fractal.
Wrong. A fractal is a pattern composed of shapes that, when magnified or reduced to the same size, are similar to larger or smaller parts of the pattern. When you magnify the "apple man" (a plot of the mandelbrot set), you get an uncountable number (in strict mathematical terms it might actually be countable, i just mean: really many) of smaller, similar-looking "apple men". When you look at the human body at different levels of magnification, you see totally different patterns. The brain is not a fractal, and the whole organism is most certainly not.
The universe is in some aspects similar to a fractal (planets circling suns just like electrons circle atom cores), but nothing more than similar.

> This is how so little genome grows so much and such complicated brain.
It depends on your point of view whether the 10GB output of a 500-byte algorithm is really more complex than the algorithm itself, or just much more redundant. The brain (not counting all the connections made through experience, learning etc., which aren't pre-determined in the genome anyway) isn't all that complex after all---you explained this fact yourself, describing how simple and homogeneous the structure is.

Well, of course the body consists of lots of cells that are similar to each other, and there are tons of other similarities. So a full-grown body is a really, really redundant thing, while the genome is a very compressed representation of this thing. But we're not dealing with a fractal here, as I explained above.

> Without going into a lot of detail, the important thing about fractals is that it is not
> possible to make a small change in one.
This sentence is quite fuzzy. What you mean is that it's not possible to make a change that is local to a little portion of the picture. Humans are different from that, there are people who look really similar and just have a different eye colour. Anyway, it is possible to make small changes in fractals. For example, if you have one that draws coloured areas of certain shapes, you could change the red shapes to blue ones. That would be a small change. It just wouldn't be local to a small portion of the image, as I said above.

> If you make a small change in the code, you don't throw a monkey wrench into the works, you
> throw a nuke.
Wrong. It's not true that the genome is so vulnerable in general. A large part is never interpreted at all and can be altered to no effect. Certain mutations are detected and repaired. Other mutations cause the body to look different, but work as advertised. And then there are those that really fuck things up. Some of them are rather likely to happen, resulting in quite a number of people exhibiting the same symptoms. Those are genetic defects that can be diagnosed, such as Down's Syndrome.

Oh, I just noticed that at some point in my comment I started using capitals. I didn't do so at the beginning. Well, anyway...
Those are not the result of a nuke that was thrown at the works, but rather of a tiny monkey wrench that happened to hit a very sensitive spot.

By the way, it's not that Down's Syndrome is caused by a tiny error in the genome. IIRC, those people are lacking a whole chromosome?

And after all, we are just animals. The minimal differences you're pointing out between our physiology and that of other animals make us different from apes like tomtits are different from chickadees. It's just that this particular difference proved to have dramatic consequences, e.g. a rather largish impact on nuclear weapons research.

Down's syndrome (5.00 / 1) (#73)
by bytesmythe on Tue Nov 12, 2002 at 04:27:54 PM EST

By the way, it's not that Down's Syndrome is caused by a tiny error in the genome. IIRC, those people are lacking a whole chromosome?

Actually, it's an extra. They have a 3rd copy of chromosome 21.

[ Parent ]

thanks for correction (none / 0) (#83)
by nex on Tue Nov 12, 2002 at 07:13:42 PM EST

Oh! Right. I learned that at school too long ago... Thanks for the info!

[ Parent ]
Definition of Consciousness (4.75 / 4) (#37)
by Arevos on Tue Nov 12, 2002 at 10:41:11 AM EST

If I may deviate a little from the above topic, what exactly is consciousness?

Consciousness (v.): the use of a certain class of "hill climbing algorithm" to evaluate the state of the world according to some arbitrary set of criteria, evaluation of how various manipulative devices might be brought to bear to optimize its state, and occasional use of those devices to attempt to change its state based on these evaluations.

Ok; that's the definition given in the above article, but what evidence is there that this is right? Our brains use (or are theorised to use) a system like this, with an interconnected selection of neurons being the base of the "hill climbing algorithm", but does this mean that this is the only system that can produce consciousness? And isn't this definition just a little broad?
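
(As an aside, the "hill climbing" being invoked is an ordinary optimization loop. A bare-bones sketch in Python, with an invented one-dimensional "world state" and scoring function:)

    import random

    def score(state):
        # Invented evaluation function: how "good" the world looks.
        return -(state - 7) ** 2

    def hill_climb(state, steps=100):
        for _ in range(steps):
            candidate = state + random.choice([-1, 1])  # nudge a "device"
            if score(candidate) > score(state):         # keep improvements
                state = candidate
        return state

    print(hill_climb(0))  # almost always 7, the peak of this toy landscape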

Well, it depends on what you mean by consciousness. If we take the above definition, every animal on earth is conscious -- even simple insects, which have less problem-solving ability than modern-day computing equipment. Since we seem to be talking about human intelligence here, I'll assume the consciousness you're after is the kind we possess. In short, being self-aware.

Now, this is a small pet theory of mine, which follows at least some logic from what I generally think the phrase "self aware" means. By "self", I will assume it means the interconnected neurological activity that makes up our thoughts. So a definition of being "self aware" might be that we are capable of having thoughts about our thoughts. In short, though this is far from a complete definition, a conscious, self-aware mind must have the ability for self-analysis. Which in turn means that there might have to be some sort of abstract compression; that we are able to think about the whole of our mind without thinking of every detail. Like having a computer program which, when executed, will output its own code. Without compression of some sort it would not be possible, so perhaps a factor (not by any means the whole answer) of intelligence is the ability to reduce any abstract source of information to a compressed form.

So the mind could be considered a highly complex machine for destroying information. That might sound a little counter-intuitive, but we do it all the time. We take a picture of, say, a dog, discard all the billions of "pixels" that make up the picture (or at the very least file them away in some remote area of the mind, in case you're someone who believes the mind is capable of perfect recall), and remember it in a compressed form that picks out the details that enable us to recognise it later.
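
As a toy illustration of that information-destroying move (an invented example, not a model of real vision), here's an 8x8 "picture" collapsed to a 2x2 summary by averaging blocks; nearly all the "pixels" are discarded, but a coarse, recognisable shape survives:

#include <stdio.h>

int main(void)
{
    int img[8][8], sum, by, bx, y, x;
    /* An 8x8 "image": a bright blob in the upper-left corner. */
    for (y = 0; y < 8; y++)
        for (x = 0; x < 8; x++)
            img[y][x] = (x < 4 && y < 4) ? 9 : 1;
    /* Destroy information: collapse each 4x4 block to its average,
       reducing 64 numbers to 4. */
    for (by = 0; by < 2; by++) {
        for (bx = 0; bx < 2; bx++) {
            sum = 0;
            for (y = 0; y < 4; y++)
                for (x = 0; x < 4; x++)
                    sum += img[by * 4 + y][bx * 4 + x];
            printf("%2d ", sum / 16);
        }
        printf("\n");
    }
    return 0;
}

The four remaining numbers are enough to "recognise" the blob, and not nearly enough to reconstruct the original: lossy compression in miniature.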

The solutions to puzzles, too, can be thought of as compression: taking a near-infinite selection of ways to proceed and filtering it down to the most efficient path. Though I admit this is on far, far shakier ground than the previous example of memory. That said, this is perhaps a less intuitive way of looking at consciousness, and anything which gives another way of looking at things has to be good.

Now I'll stop before I embarrass myself further :)

Anyone else have any good/weird ideas about what consciousness/being self aware actually means?

I read an interesting book last week (none / 0) (#76)
by sully on Tue Nov 12, 2002 at 06:24:57 PM EST

I recently finished Consciousness in Four Dimensions by Richard M. Pico. The book was pretty in-depth and provided a good deal of supporting evidence for the author's idea of consciousness.

To summarize (and run the obvious risk of completely mangling the central concept of the book): The essential characteristic of consciousness is the (apparent) temporal continuity of our mental processes. You know, the whole stream of thought thing.

The book provides some details of the goings-on in the prefrontal cortex, where we're processing information at the highest levels of abstraction available to us. Supposedly, as our focus, or internal dialogue, or whatever, shifts from idea to idea, the area of peak activity in the prefrontal cortex moves around. This selective focus is controlled by various mechanisms used to hamper activity in unrelated areas of the brain.

According to Pico, the thing that differentiates humans from other animals with similar brain structures, and thus the "key" to consciousness, is the robustness of connectivity between the different areas of our prefrontal cortex. Basically, at the highest levels we are processing the input from external stimuli combined with the output of our last peak computation. The incoming connections from other areas of the brain carry the external stimuli, while the connections between different areas of the prefrontal cortex carry the output of computations from region to region, as our focus shifts.

So in humans, supposedly, these intracortical pathways carry more than a certain threshold of information, so that each peak computation has enough input from the previous computation to make for an essentially continuous process. In non-sentient beings, while they do of course receive input from memories, each peak computation is performed almost independently of the last, because not enough information is passed between them.

That's the best I could do off the top of my head - if it doesn't make any sense, I suggest you read the book. This is an area of pretty intense (amateur) interest for me, so I've read several other theories, but so far this one has to be my favorite. I look forward to seeing any other good ideas in this thread.



[ Parent ]
quines (5.00 / 1) (#77)
by kubalaa on Tue Nov 12, 2002 at 06:35:10 PM EST

It's quite common to write computer programs which output themselves without any compression whatsoever. Do not be fooled into thinking that fitting an entire program within itself would require an infinite regress. That's the point of its being a program -- it is able to dynamically generate part of itself from an alternate representation. See http://www.eleves.ens.fr:8080/home/madore/computers/quine.html
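
For the record, a minimal C quine along the lines described there (my own sketch, not taken from that page) fits in three lines; the string s is the "alternate representation" from which the program regenerates itself:

#include <stdio.h>
char*s="#include <stdio.h>%cchar*s=%c%s%c;%cint main(void){printf(s,10,34,s,34,10,10);return 0;}%c";
int main(void){printf(s,10,34,s,34,10,10);return 0;}

The program carries its own text exactly once, as data, and prints it in two roles: once as the string literal and once as the surrounding code. Whether the %c escapes count as "compression" is exactly the argument that follows below.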

[ Parent ]
Compression (none / 0) (#186)
by Arevos on Thu Nov 21, 2002 at 11:43:17 AM EST

By definition, some sort of compression must be used. For example, the page about quines gives a step by step explanation of how to create a quine (very interesting actually - thanks for that link!). One of the steps is:

char *s="#include <stdio.h>\n\nint\nmain (void)\n{\n";
printf(s);  printf("char *s=\"%s\";\n",s);

Observe the compression. I'm not talking about zips or gzips or any standard compression program, I'm talking about substituting the variable s for the string "#include <stdio.h>\n\nint\nmain (void)\n{\n". That's compression. It's reducing 40 or so characters to 1. It can do this because data is duplicated.

The quine itself:

#include <stdio.h>

int
main (void)
{
  char *s1="#include <stdio.h>%c%cint%cmain (void)%c{%c";
  char *s2="  char *s%c=%c%s%c;%c  char *s%c=%c%s%c;%c";
  char *s3="  char n='%cn', q='%c', b='%c%c';%c";
  char *sp="  printf(";
  char *s4="%ss1,n,n,n,n,n);%c";
  char *s5="%ss2,'1',q,s1,q,n,'2',q,s2,q,n);%ss2,'3',q,s3,q,n,'p',q,sp,q,n);%c";
  char *s6="%ss2,'4',q,s4,q,n,'5',q,s5,q,n);%ss2,'6',q,s6,q,n,'7',q,s7,q,n);%c";
  char *s7="%ss2,'8',q,s8,q,n,'9',q,s9,q,n);%ss2,'0',q,s0,q,n,'x',q,sx,q,n);%c";
  char *s8="%ss3,b,q,b,b,n);%ss4,sp,n);%ss5,sp,sp,n);%c";
  char *s9="%ss6,sp,sp,n);%ss7,sp,sp,n);%ss8,sp,sp,sp,n);%c";
  char *s0="%ss9,sp,sp,sp,n);%ss0,sp,sp,n,n,n);%c  return 0;%c}%c";
  char *sx="--- This is an intron. ---";
  char n='\n', q='"', b='\\';
  printf(s1,n,n,n,n,n);
  printf(s2,'1',q,s1,q,n,'2',q,s2,q,n);  printf(s2,'3',q,s3,q,n,'p',q,sp,q,n);
  printf(s2,'4',q,s4,q,n,'5',q,s5,q,n);  printf(s2,'6',q,s6,q,n,'7',q,s7,q,n);
  printf(s2,'8',q,s8,q,n,'9',q,s9,q,n);  printf(s2,'0',q,s0,q,n,'x',q,sx,q,n);
  printf(s3,b,q,b,b,n);  printf(s4,sp,n);  printf(s5,sp,sp,n);
  printf(s6,sp,sp,n);  printf(s7,sp,sp,n);  printf(s8,sp,sp,sp,n);
  printf(s9,sp,sp,sp,n);  printf(s0,sp,sp,n,n,n);
  return 0;
}

Uses a lot of compression, as you can probably see from the selection of variables it uses. Just because the compression isn't obvious, and the program was compressed by human reasoning rather than a set algorithm, doesn't mean it is not compressed in some way.

For example:

printf("Hello World!\nHello World!\nHello World!\nHello World!\nHello World!\n");

And:

int i;for(i=0;i<5;i++){printf("Hello World!\n");}

The second is a compressed version of the first.

[ Parent ]

compressed != encoded (none / 0) (#187)
by kubalaa on Sat Nov 23, 2002 at 09:36:47 PM EST

Sometimes the encoded version can be longer than what it encodes.

[ Parent ]
But... (none / 0) (#188)
by Arevos on Sun Nov 24, 2002 at 05:00:12 PM EST

But when it's shorter, it's compression, by definition. The same amount of data is in a smaller space. How else would you define compression?

[ Parent ]
good, but a bit biased and sloppy in places... (4.25 / 4) (#38)
by pb on Tue Nov 12, 2002 at 10:42:09 AM EST

I don't think you did justice to twin studies--as interesting as your anecdotes were, they didn't convince me that twin studies weren't viable, just that they haven't been conducted well in the past. Perhaps broader and more statistically rigorous studies could turn up more useful information.

Also, your assertion that genetics couldn't possibly code for as much information as people think it does is weakened by your mentioning fractals--which can produce a great deal of complex information from a very small amount of data. Now, obviously people learn.  I don't think anyone (or at least not many people) is arguing that point. But let us do a little thought experiment and see where the fractal analogy can get us.

Let's say that our chromosomes code for some sort of algorithm that generates the fractal pattern that is our brain.  Now, as you mentioned, our brains are built up over a period of time, so naturally this is an iterative process, and fractals are very sensitive to initial conditions.

So although early development will play a part in the formation of a healthy brain, so will genetics.  And small changes in the algorithm itself (random variation in its parameters, for example) have the potential to create much greater differences in the final outcome.
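
To see how touchy such iterative processes can be, here is a throwaway demonstration (using the logistic map as a stand-in for whatever growth algorithm a genome might encode, so nothing here is biology): two parameter values differing in the fifth decimal place, iterated from the same starting point, end up in visibly different states.

#include <stdio.h>

/* One step of the logistic map: x -> r * x * (1 - x). */
double step(double r, double x) { return r * x * (1.0 - x); }

int main(void)
{
    double a = 0.5, b = 0.5;        /* identical initial conditions  */
    double ra = 3.9, rb = 3.90001;  /* nearly identical "algorithms" */
    int i;
    for (i = 0; i < 60; i++) {
        a = step(ra, a);
        b = step(rb, b);
    }
    printf("r = %.5f -> x = %.6f\n", ra, a);
    printf("r = %.5f -> x = %.6f\n", rb, b);
    return 0;
}

In the chaotic regime the tiny difference in r compounds every iteration, which is the "small change in the algorithm, big change in the outcome" effect in its purest form.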

All that having been said, I doubt the fractal analogy is a perfect one, but it's an interesting start.  And I approve of your approach--that of taking a hard look at the messy details and reasoning from there as opposed to treating the entire system as a black box for no good reason. But I think your bias is showing.  :)
---
"See what the drooling, ravening, flesh-eating hordes^W^W^W^WKuro5hin.org readers have to say."
-- pwhysall

Don't conflate size with complexity (4.91 / 12) (#39)
by ghjm on Tue Nov 12, 2002 at 10:44:05 AM EST

Properly, the term "information theory" refers to a branch of physics that deals with the energy states necessary to represent information, the limits to the speed of propagation of given quantities of information, and so forth. You are correct that nothing in the observable Universe can escape from this, or any other branch of physics.

However, the term "information theory" also popularly refers to the characteristics of actual computers we have built, and their capabilities and limitations. This is totally inapplicable to biology, and the reason biologists don't like "information theory" is that they spend far more time debunking its misapplication than learning from it.

It is ironic that you make claims for the broad applicability of information theory - even accusing "some people" (presumably not including yourself) of getting it wrong - and then, a few paragraphs later, proceed to make the worst sort of befuddled misapplication: You claim that the brain is "something like five orders of magnitude" more complex than the human genome, because it contains 10^14 interconnections while the human genome contains merely seven gigabytes (7 x 10^9) of data.

There is so much wrong with this statement I barely know where to begin.

At the most basic level, you are comparing 10^9 bytes with 10^14 interconnects. How much data is stored in an interconnect? Is it much more or much less than one byte? Are these values even a measure of the same or a similar physical characteristic, or are you asking a question like: How many meters are there in five seconds?

Similarly, human DNA sequences are plainly, as a simple observational matter, not composed of seven gigabytes of binary data. The best you can say is that our descriptions of human DNA, which we have in fact entered into a binary computer, occupy seven gigabytes of storage space therein. But the description is not the artifact. I can represent a tree in four ASCII bytes: "TREE". Does that mean that anything that requires more than four bytes, like "CIRCLE", must inherently (as required by "information theory") be more complex than a tree?

And of course, if you had actually studied information or complexity theory, you would know that complexity is orthogonal to size. Things can be very small and yet very complex; for example, the equations describing the Mandelbrot set are representable in a few tens of bytes of ASCII. You even refer to the complexity-concentrating power of fractals later in the article! Given this explanation, how can you support the argument that the brain "must" be less complex than the genome?
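
The point is easy to make concrete. Here is a quick sketch of the standard escape-time rendering of the Mandelbrot set (resolution and iteration count picked arbitrarily); the whole recipe is shorter than this comment, yet the boundary it draws is endlessly intricate:

#include <stdio.h>

int main(void)
{
    int px, py, i;
    for (py = 0; py < 22; py++) {
        for (px = 0; px < 64; px++) {
            /* Map this character cell to a point c in the complex plane. */
            double cr = -2.0 + 2.6 * px / 64.0;
            double ci = -1.1 + 2.2 * py / 22.0;
            double zr = 0.0, zi = 0.0, t;
            /* Iterate z -> z^2 + c until it escapes or we give up. */
            for (i = 0; i < 50 && zr * zr + zi * zi < 4.0; i++) {
                t  = zr * zr - zi * zi + cr;
                zi = 2.0 * zr * zi + ci;
                zr = t;
            }
            putchar(i == 50 ? '#' : ' ');  /* '#' = presumed in the set */
        }
        putchar('\n');
    }
    return 0;
}

Whether that intricacy deserves the name "complexity" is exactly what the replies below argue about.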

For that matter, you assert that the brain, the body, and the Universe are fractal. You say this is obvious, as if merely to observe the brain, the body or the Universe is to perceive its fractal nature. Yet the brain, the body and the Universe are - as a matter of simple observation - NOT self-similar; levels of magnification CAN be determined from context. As far as I can tell, your argument is that fractals are cool, therefore it would be cool if everything turned out to be a fractal. I do not find this persuasive.

If it is not possible to make slight changes in fractals, what about the n-dimensional space of closely related Julia sets? If massive change is an unavoidable result of any "point mutation" (whatever that is), then how do you explain away the observed fact of vast numbers of very minor birth defects? Oh, and while we're on the subject - what does this paragraph have to do with anything else in the article?

Sigh.

-Graham

i just learned a new english word (4.50 / 2) (#81)
by nex on Tue Nov 12, 2002 at 07:11:03 PM EST

sorry, unfortunately i didn't have time to read this whole comment, but i give it a 5 just for the title. this point is really missing (or rather, plain wrong) in the article. i tried to express something similar in my long-winded comment above, but this is really an elegant summary:

Don't conflate size with complexity

[ Parent ]

I was with you until you lost it (5.00 / 4) (#85)
by celeriac on Tue Nov 12, 2002 at 07:26:22 PM EST

True, there is a big problem estimating the brain's encoding capacity from the number of interconnects; it's tabloid science at best. And information theory, while compelling, is very difficult to apply to complex biological systems. However, your comment derailed somewhere halfway through, and in it your understanding of information or complexity theory is not apparent. I'm not making the post to attack you or defend localroger, but to extend and clarify some things in your highly-rated comment that could have been said much better.

First, there's the tendency, which I'm going to call "Wolfram syndrome," (not to say that Wolfram is its first victim, but he is certainly one of its most well-known sufferers,) to throw around the word "complexity" as though it means something in particular. It's a C-word, much like "consciousness," and no one agrees about it to begin with. For instance, the assertion that the Mandelbrot set has any appreciable complexity is problematic. According to what definition of complexity? There is exactly one formal, mathematically treatable definition of complexity (Kolmogorov complexity) that I know of, and it is quite adamant that the output of a Mandelbrot-producing program is no more complex than the length of the program itself. However, this is not a definition favored by pundits who prefer nebulous philosophy to math, so one could be forgiven that omission...if one had any acceptable thing to replace it with.

Then there's the issue of the amount of information, or lack thereof, contained in the genome, but you kind of sidestepped the point -- information is not a quantity that is inherent to a bitstring. You need, at the very least, two systems, one to measure and one that can be predicted based on the measurement. If I am hit in the head with four ASCII bytes, the fact that they are "TREE" gives no information. But if I have a black box and the knowledge that the four ASCII bytes describe what's in the box, then the contents of the bytes have information, or more properly, the bytes and the box have some amount of mutual information. Now, it is a very basic result that the amount of mutual information between a system and a binary string cannot be greater than the length of the string itself! Reading "TREE" may provide anywhere between zero and 32 bits worth of information about the box, depending on how efficient the coding scheme is--but no more than that!
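
In standard notation (this is textbook information theory, not anything original here): for a string X of n bits and any other system Y,

I(X;Y) \le H(X) \le n

so four ASCII bytes can carry at most 32 bits of mutual information about the box, no matter how clever the coding scheme is.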

This is where you might have missed the boat when you said that size is "orthogonal to" (gotta love that nerdly malapropism!) complexity. It isn't. The fact is, if we believe the scientists, the contents of the genome can be entirely specified using no more than 7 gigabytes (or whatever the actual number is), plus whatever knowledge it takes to be able to construct some DNA (if we restrict ourselves to asking questions about DNA-based organisms only, this can be ignored). This provides us with an upper bound on the extent to which the genome can determine behavior. If we find an idiot savant who can memorize and recite back more than 7 gigabytes' worth of uncorrelated data, then bam, that's explicit proof that behavior is not determined entirely by genetics. Really, the point is quite simple: Either the relation of the brain's inputs to its outputs is less complex (in the Kolmogorov sense) than the length of the genome, or the brain's development depends on information from other sources (namely, the environment.) If you think (as localroger appears to) that the complexity of mental processes dwarfs the length of the genome, some might take that as a big point in favor of tabula rasa, but it really isn't--all the environmental information might just go to change your behavior in the specific event that you are asked to rattle off some random digits for a while.
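
Stated as an inequality (my paraphrase of the dichotomy, with K(.) for Kolmogorov complexity, f the brain's input-output relation, G the genome, and E the environmental input):

K(f) \le |G| + K(E) + O(1)

If K(f) demonstrably exceeded |G| plus a modest constant, the excess would have to be charged to E -- that is the entire content of the either/or above.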

Anyway, application of information theory to the problem of consciousness is kind of futile, inasmuch as it's an enormous, ill-defined, and computationally intractable problem. I think that it is much more useful in genetics--for instance, it makes experimentally verifiable predictions about the rate of deleterious mutations that occurs in populations. It is also applicable to basic (i.e. not hand-wavy) neuroscience--the information about a stimulus that is carried by a neuron can be directly measured, for instance, and hypotheses about coding schemes can be thereby directly tested.


[ Parent ]

Complexity (3.66 / 3) (#89)
by sigwinch on Tue Nov 12, 2002 at 10:03:40 PM EST

> ...it is quite adamant that the output of a Mandelbrot-producing program is no more complex than the length of the program itself.

Recall the definition of the Mandelbrot set: an infinite number of points, each iterated upon an infinite number of times. Whether a point is a member of the set is undecidable by a computer.

> There is exactly one formal, mathematically treatable definition of complexity (Kolmogorov complexity) that I know of,...

I find it amusing that you talk about the Mandelbrot set without mentioning the Hausdorff-Besicovitch dimension.

One could also reduce the process to a function, take the Fourier transform of that function, and ask how probable it is for that spectrum to be produced by white noise with the same standard deviation. Equivalently, you can examine the function's autocorrelation.

There are lots of perfectly meaningful ways to measure complexity, and most of them boil down to asking "How similar is this system to itself?" The trouble is that the detailed form of that question depends on the system under consideration. The generalized three-body problem in Newtonian mechanics is complex, as is Conway's Game of Life, but useful descriptions of their complexity will be radically different.
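
As a sketch of that "how similar is this system to itself?" question in its simplest form, here is a naive normalized autocorrelation over a one-dimensional signal (the signal is invented for the example):

#include <stdio.h>
#include <math.h>

#define N 256

int main(void)
{
    double x[N], mean = 0.0, var = 0.0, c;
    int i, lag;
    /* An invented signal: a slow oscillation plus a faster one. */
    for (i = 0; i < N; i++)
        x[i] = sin(i * 0.2) + 0.3 * sin(i * 1.7);
    for (i = 0; i < N; i++) mean += x[i] / N;
    for (i = 0; i < N; i++) var += (x[i] - mean) * (x[i] - mean);
    /* Values near 1 mean the signal strongly resembles a shifted
       copy of itself at that lag; white noise would hover near 0. */
    for (lag = 0; lag <= 32; lag += 8) {
        c = 0.0;
        for (i = 0; i + lag < N; i++)
            c += (x[i] - mean) * (x[i + lag] - mean);
        printf("lag %2d: %+.3f\n", lag, c / var);
    }
    return 0;
}

As you say, though, the detailed form of the question changes with the system; this particular estimator would tell you nothing useful about the Game of Life.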

> Either the relation of the brain's inputs to its outputs is less complex (in the Kolmogorov sense) than the length of the genome, or the brain's development depends on information from other sources (namely, the environment.)

You're conflating two things:
  1. The complexity of the information that describes how to construct and operate a machine.
  2. The amount of information that can be stored by the machine.
They have no necessary relation. Any process that creates an information processing system can be repeated to create a system that is arbitrarily large. Every time you add one more bit (binary digit) of information capacity to the system, you double the number of states it can be in.

--
I don't want the world, I just want your half.
[ Parent ]

is 'conflate' the word of the day on some website? (5.00 / 1) (#102)
by celeriac on Wed Nov 13, 2002 at 02:23:46 AM EST

One of the hallmarks of Wolfram syndrome is that one never actually says what definition of "complexity" one is operating under, or, even worse, picks one according to the whims of the moment. I can quite easily and simply define an object that has whatever Hausdorff dimension I want; thus I fail to see how it's an appropriate definition of complexity for this topic. YMMV.

Extending the concept of decidability from the integers to the reals or the complex plane, as required if you're going to talk about the Mandelbrot set, is tricky. As it turns out, the set is undecidable only on boundary points which have non-algebraic values, and that's only if you use Blum, Shub, and Smale's notion of algebraic computability. The jury is still out on whether M is decidable in the sense of the more conventional theory of computability over the reals. In any case, it's a moot point if you're only talking about a computer program that just draws fractals.

> You're conflating two things: 1. The complexity of the information that describes how to construct and operate a machine. 2. The amount of information that can be stored by the machine.

Not in the sentence you quoted--the relation of the system's inputs to its outputs would be expressed by a Turing machine, which has all the information capacity you could want. However, my example about the idiot savant did suffer from that flaw. Half my point was that it's a completely impractical test anyway. I'll have to think about whether a correct test could ever be tractable.

[ Parent ]

just curious (5.00 / 1) (#123)
by speek on Wed Nov 13, 2002 at 10:50:25 AM EST

> If we find an idiot savant who can memorize and recite back more than 7 gigabytes' worth of uncorrelated data, then bam, that's explicit proof that behavior is not determined entirely by genetics.

I'm wondering how important the "bam" part is in this argument. Is it a premise? Or was it a step in the proof? I know I haven't studied my John Madden logic system enough, but maybe you could clarify:
Does the argument work without "then bam"?

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

actually it doesn't work ;) (5.00 / 1) (#126)
by celeriac on Wed Nov 13, 2002 at 11:37:48 AM EST

just like when Emeril says it, "Bam" is used to cover up the fact that I forgot what I was doing (see below).

[ Parent ]
no way (5.00 / 1) (#129)
by speek on Wed Nov 13, 2002 at 12:16:59 PM EST

(see below)

You just keep your pants on ya perv.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

You're not making any sense (5.00 / 2) (#147)
by ghjm on Wed Nov 13, 2002 at 06:47:22 PM EST

First of all, where did math come into the discussion? Apparently excluding yourself, all participants in the discussion are in fact "pundits who prefer nebulous philosophy to math." Why? Because we are not talking about math. The topics in question are simply not amenable to mathematical analysis. We are not manipulating or describing a formal system, we are discussing the philosophy of mind. To do so, it is necessary to introduce concepts of consciousness, complexity, and various other "C-words."

However, your point about Kolmogorov complexity is extremely disingenuous. As a mathematician you should be very aware of the difference between symbols and interpretations of those symbols. It is self-evident that four arbitrary bytes cannot contain more than 32 bits of information, in isolation. But an arbitrary 32 bits can easily refer to vast quantities of information not contained within the four bytes themselves. The word "TREE" is a reference to a vast amount of information regarding trees: What they look like, what experiences you have had in or near trees, the references to trees you have seen in the literature you have read, cultural attitudes towards and understanding of trees, political beliefs and preferences related to trees, etc, etc, etc.

So your third paragraph is quite wrong. I quote: "... if I have a black box and the knowledge that the four ASCII bytes describe what's in the box, then the contents of the bytes have information, or more properly, the bytes and the box have some amount of mutual information." The problem is that you are taking "describe" to mean "exhaustively specify". This is simply not what the word means. The four bytes in the word "TREE" undoubtedly describe a tree, in the sense that they refer to an object that is clearly not an anteater. But it would be ludicrous to then look at an actual tree growing in your back yard and conclude that it must contain no more than four bytes of information.

On to your fourth paragraph, where you continue to confuse the word "TREE" with an actual tree. The much-referred-to seven gigabytes of data contained within a genome map describe the makeup of human DNA but certainly do not fully specify it. Suppose you have a DNA sequence of GTTACAGT. I just described it in eight bytes. Is it valid to conclude that this part of the actual DNA molecule can therefore contain no more than eight bytes of any sort of information? No, because "G" is no more a full specification of the guanine molecule than "TREE" is of an actual tree. A full specification of the human genome would include not only the seven gigabytes of genome map, but also a full specification of all chemical and physical properties of adenine, guanine, thymine, and cytosine; the physical properties of the other, (allegedly) non-information-bearing components of DNA (the phosphate and sugar components); and a full unambiguous specification of all of physics and chemistry (or at least, those portions for which you can't provide a proof that they have nothing to do with DNA replication).

You say: "Either the relation of the brain's inputs to its outputs is less complex (in the Kolmogorov sense) than the length of the genome, or the brain's development depends on information from other sources (namely, the environment.)" This is quite correct, except that the length of the genome has nothing to do with it; you need to compare to the complexity of the total specification of the genome - which is unknown. Also unknown is the maximum potential complexity of the relation of the brain's inputs to its outputs - this is another way of saying, "what is the most complex thought a human being can have." By comparing these two unknowns, we derive exactly zero new information about how human behavior is determined.

In your fifth paragraph, you state that application of information theory to the problem of human consciousness is futile. If I may be permitted to ask, what then were you doing for your first four paragraphs?

-Graham

[ Parent ]

Out of my depth, but: (none / 0) (#150)
by Control Group on Wed Nov 13, 2002 at 08:04:33 PM EST

Since the "context" of the genome (i.e., the specifications of the makeup and behaviors of the bases) doesn't change from person to person or from DNA-based species to DNA-based species, isn't it irrelevant to determining the differences in intelligence levels from person to person or from species to species? (With apologies for the run-on sentence) While the context is certainly critical for determining the whole of brain design, it shouldn't be (as far as I can conceive, anyway) pertinent to the differences between brains.

So the question then becomes whether or not the differences between levels of intelligence amount to more information than can be encoded in the human genome. Since I have no metric by which to measure the complexity of the difference between my brain and, say, a platypus', I can't comment on that. (Well, I could comment, but it would be crap).

The problem is that the article is comparing only the variable parts of DNA with the entirety of the brain, which is (as you point out) incorrect. However, the fact that this comparison is incorrect does not necessarily mean that the act of comparing is incorrect. There may be value in comparing the length of the genome and the distinguishing characteristics of the brain; simply not in this fashion.

***
"Oh, nothing. It just looks like a simple Kung-Fu Swedish Rastafarian Helldemon."
[ Parent ]

You're tilting at windmills (none / 0) (#153)
by celeriac on Wed Nov 13, 2002 at 10:04:28 PM EST

> If I may be permitted to ask, what then were you doing for your first four paragraphs?

Demonstrating that it was futile. Duh. You seem to have gotten the point, aside from your obvious cluelessness about the "information theory" that you claim biologists keep having to "debunk."

[ Parent ]

OK (5.00 / 2) (#86)
by localroger on Tue Nov 12, 2002 at 07:32:32 PM EST

> How much data is stored in an interconnect?

Since the brain is not universally wired it is considerably less than the log base 2 of 10^14. I would guess it to be in the 4 byte range in practical terms, based partly on a squinty observation of biology and partly on my tentative attempts to build the data structure backward. This implies that the average neuron has a few billion potential targets within its natural range of termination. It might, however, be more in the 24-bit range depending on the exact mechanism by which the feature extractors program themselves, the main focus of my attention right now. When I work that out to my satisfaction, I will begin writing the emulator :-)
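
A back-of-envelope check of those figures (just arithmetic, not an endorsement of the estimate):

\log_2(10^{14}) \approx 46.5 \text{ bits} \approx 6 \text{ bytes}, \qquad 2^{32} \approx 4.3 \times 10^9

So a fully general address into 10^14 connection points would cost about six bytes per connection, while the guessed four bytes gives each neuron a few billion distinguishable targets, which matches the "few billion potential targets within its natural range of termination" above.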

> How many meters are there in five seconds?

It depends on how fast you're going, of course. The question is not as meaningless as you make it out to be.

> Given this explanation, how can you support the argument that the brain "must" be less complex than the genome?

I don't, of course; the very argument is that the only place this complexity can come from is (a) an automatic process like the fractal generators (or irrational number generators like pi, if you prefer), or (b) experience. (a) has implications for how mutations can propagate.

> Yet the brain, the body and the Universe are - as a matter of simple observation - NOT self-similar; levels of magnification CAN be determined from context.

Only because the universe is not a pure fractal of infinite depth. As a matter of fact it can be quite difficult to tell images taken with an electron microscope from those taken of moons from space. And even when we can tell by certain details that we are looking at neurons rather than seaweed, there are great similarities at different scales. If we were not familiar with neurons and seaweed -- for example, if we were hot gaseous beings who somehow lived in the cores of stars -- it might be quite difficult for us to tell the scale of either photograph.

> If it is not possible to make slight changes in fractals, what about the n-dimensional space of closely related Julia sets?

Julia sets are a very special case among fractals, and even there you only see the interesting behavior in certain regions of the Mandelbrot fractal. A much better question is what happens if you make a small change to the algorithm that generates the Mandelbrot fractal itself.

It is the disparity between input complexity and output complexity -- the output (brain) certainly being a lot more complex than the input (genome) -- which makes it impossible to render minor changes in the output by tweaking the input. This actually synergizes with another ascendant theory, evolution by punctuated equilibrium. Change occurs not by the piling on of minute changes but by the occasional big one that happens to work out.

I can haz blog!
[ Parent ]

How many meters are there in 5 seconds? (5.00 / 1) (#104)
by xriso on Wed Nov 13, 2002 at 02:45:07 AM EST

approximately 1.5x10^9 :-)
--
*** Quits: xriso:#kuro5hin (Forever)
[ Parent ]
My two pennies! (4.00 / 1) (#45)
by bukvich on Tue Nov 12, 2002 at 11:07:31 AM EST

1.) this article is interesting. Probably more in the category of cafe metaphysics than science, but the same could be said of nearly everything written on the topic of human consciousness.

2.) my examination of as much of the basic data as I had time for in a couple of afternoons in the stacks of my University library (primarily the Journal of Geophysical Research) suggests that the theorized impact event at the end of the Cretaceous is still moderately far-fetched. Most of the people in the field may be convinced, but they haven't convinced me yet.

So your best example of a Kuhnian paradigm shifting of the conventional wisdom falls flat for me.

Are you familiar with Warren Carey's expanding earth theory?

I suppose I am rambling here, but I would suggest that a skeptical viewpoint is indispensable regarding any science that cannot be done on a laboratory bench top. Which of course leads you into the infinite loop that dogmatic skepticism is a self-refuting viewpoint.

3.) I very much appreciate the bits on the twin studies. Educational for me.

4.) Regarding the politics of science and what gets funded, etc. This is almost like original sin. One guy working by himself really can't do squat. Every day I observe a dozen of my fellow humans' delusions up close; they really can't be helped. To have delusions is human. The more useful question, in all likelihood, is what are mine?

Well, I used to have one about twin studies. Now I don't. Thanks.

But please don't be betting any of your investment capital on your blank slate theory.

B.

K-T Impact (4.00 / 1) (#82)
by localroger on Tue Nov 12, 2002 at 07:13:25 PM EST

The crater was recently found, and the existence of the event itself is no longer in question.

There are still some questions, treated quite fairly in the linked article, about how and whether the K-T impact itself could have caused the extinctions. If it was not a major causative factor, then it is surely one of the most bizarre coincidences in Earth's history.

I can haz blog!
[ Parent ]

those are cartoons not data (none / 0) (#120)
by bukvich on Wed Nov 13, 2002 at 09:00:59 AM EST

The best data is around 120 degrees of a circular arc shaped magnetic anomaly. It corresponds to those groundwater spring dots on that web site you linked to. The other 240 degrees of the circle are not on the magnetic potential maps. The seismic lines (much higher resolution than magnetic potential data) I have seen are not conclusive.

This impact event is no more than a theory. Unless you are inclined to put faith in the word of experts. You don't need to do this; go to the library and look at the data for yourself.

[ Parent ]

The best data (5.00 / 1) (#145)
by localroger on Wed Nov 13, 2002 at 06:30:16 PM EST

...The best data are not the seismic or imaging data at all, even though those make the prettiest pictures. The best data are the shocked quartz microgranules. The consensus among the people who found the crater (by following the shocked quartz) is that nothing but a nuclear explosion or asteroid impact could cause the effect. The crater was discovered by modeling the likely effects of the impact and working back from areas in the Caribbean and Pacific where shocked quartz was found. The model is quite detailed and convincing.

The nearest point where the K-T boundary emerges from the sedimentary overlay has also been explored; it is deep in the jungle and required some effort to reach. The K-T layer there is many feet thick and consists of folded over layers, as the seabed was hurled back over itself by the peripheral effects of the impact. One book I read on it had some dramatic pictures. Again, very convincing and not leaving much doubt.

I linked a relatively intro-level site because it's obvious you aren't very familiar with the hunt for the crater. Follow the links or do a google search for shocked quartz. There is absolutely no doubt anymore in any serious geologist's mind that the impact occurred and had roughly the scale discussed. The only outstanding question is how the disaster became general enough globally to account for the absence of fossils in the layer above the boundary.

I can haz blog!
[ Parent ]

meta (none / 0) (#165)
by bukvich on Thu Nov 14, 2002 at 09:57:33 AM EST

Your argument is appeal to authority, which is rather ironic as one theme of your article is that all the consciousness theory authorities are wrong. I know hundreds of geologists. Every one I have discussed this issue with agrees with you. But one. He is reluctant to talk about this. Like, if I were to name him and he found out he would be somewhat mortified.

I assert the rest are all wrong. The quartz impact microstructures are in the same universe as blood spatter patterns in terms of reliable evidence, in my opinion.

You have in the past rather eloquently described instances of the human propensity to find patterns in random data. Just WTF do you think most geology is?

The best evidence is 1/3 of a circular arc in the potential data. I am 1/3 convinced, i.e. not yet convinced. Calling it a fact is (metaphorically) a rape of the word "fact".

[ Parent ]

You are not arguing geology (none / 0) (#169)
by localroger on Thu Nov 14, 2002 at 07:18:36 PM EST

I assert the rest are all wrong. The quartz impact microstructures are in the same universe as blood spatter patterns in terms of reliable evidence, in my opinion.

This is physics, and you are wrong.

I can haz blog!
[ Parent ]

tendencies and determinism (4.50 / 2) (#48)
by gregbillock on Tue Nov 12, 2002 at 11:26:34 AM EST

> there is such a thing as a "properly working brain." That is a brain which is properly nourished, free from genetic or teratogenic formative defects, with all the chemical messenger systems functioning nominally.

It might help to remember that there is a broad range of "properly working." Indeed, the fantasy that there is a narrowly defined "proper function" of the brain is what leads to a lot of the social dysfunction around brain science of which you are suspicious.

Also, learning isn't really that associated with "myelinization".

There's another option to believing that the brain centers that train early critically affect those that train later. It could happen in reverse: brain centers that train later could be remarkably resistant to small "errors" (read, "variation") in those that trained early. There are some examples of both: it appears that the phoneme detection learning shuts down around age 2, and if you aren't exposed to some phonemes before that age, they'll simply be clustered together by language centers. On the other hand, there is obviously a lot of plasticity in language generation cortex, because adults can learn to speak other languages with basically native fluency, although they may never overcome early training to their larynx and mouth fine motor control, and so always have a perceptible accent.

> Alcoholism is a pattern which may or may not be encouraged by certain knocks we face in the road of life; but ultimately it's a thing that can happen to anybody. Just as anybody might turn out to have the willpower or alternate interests to make it irrelevant or unlikely.

I think you are chasing a red herring yourself, here. No-one claims that if you have gene X, then you'll be an alcoholic. The most aggressive position is that there is some gene that causes alcoholism when combined with other factors. I think a more typical position is that there is a genetic complex statistically linked to alcoholism, but it may not even be causative and certainly (heavens!) isn't "for" that as if it were selected for that purpose.


Blank Slate or Scantron Form? (4.50 / 10) (#50)
by jubilation on Tue Nov 12, 2002 at 12:19:20 PM EST

While I love the idea of the blank slate, I am unable to believe in it, for several reasons.

1.  Folks with Down Syndrome.  This is a genetically based disease which results in decreased cognitive ability.  That suggests prima facie that there is a strong link between genetic factors and (at least) whether you have "normal" intelligence.  The big open question is whether the genetic contribution is additive or subtractive.  That is, the original poster (implicitly) seems to believe that there is One True Level of human cognition, which can be lowered by genetic damage.  But what if it is additive?  That is, your ability to think clearly is added to by many small genetic factors.  Maybe you got a little more serotonin than I did, and think *just a tad* faster.

2.  Statistics and cultural anthropology.  There are so many human behavioral universals -- love, mother-love, marriage [this one may be a little controversial, heh heh], competition among males for status, and others -- that it doesn't make sense to believe that every society independently and arbitrarily decided to act that way.  What I'm getting at here is that we have an inescapable animal heritage, and that our free-will self-actualization activities must be built on top of the basic package.

3.  Observation.  Some people are just plain dumb.  I like to think I'm not swelling their ranks, but early returns are inconclusive.  ;)

4.  Idiot-Savantism & Wild Abilities.  People with photographic memories.  People who are lightning calculators, but can't spell.  Musical prodigies.  Obviously, certain people have different cognitive abilities from birth; nobody trains for eidetic memory.  I speculate that these conditions arise from minor mutation strains causing a different balance of neurotransmitters.  If only I could prove it!

5.  People *want* to believe in the blank slate.  It fits in nicely with our egalitarian notions  (I will concede there is a counter-faction who believes that every bad habit or fault is genetically programmed and Not My Fault... I have a problem with them too).  The blank slate arguments often smack of special pleading.

6.  People *have* to believe in the blank slate.  In our current philosophical climate, any suggestion that cognition is linked to genetic factors brings up way too many specters... sterilization of the retarded; the Bell Curve; Gattaca ;).

Please note that I am not espousing that we all knuckle under to our genetic overlords here...  I like to believe that people's brainpower capacity is large enough that training and focus can overcome most gaps in the "inborn" stuff.

Anyway, these are just my silly opinions.


Down's Syndrome (none / 0) (#138)
by epepke on Wed Nov 13, 2002 at 03:18:26 PM EST

Down's syndrome isn't really genetic, in the sense that the term is usually meant. Only about 1% of Down's Syndrome cases result from heredity. The majority of cases result from three copies of Chromosome 21, which is a result of an error in meiosis. So it's a developmental disorder, albeit one that occurs before conception.


The truth may be out there, but lies are inside your head.--Terry Pratchett


[ Parent ]
Quantum entangled twins? (2.20 / 5) (#52)
by wytcld on Tue Nov 12, 2002 at 12:34:12 PM EST

In another forum someone recently suggested that similarities between twins might be explained by quantum entanglement rather than by genetics. The twins, after all, start from a single egg, so they should have that.

Those intrigued by the notion that quantum physics may be necessary to explain consciousness - and perhaps human freedom - may want to attend Quantum Mind 2003. This is not light-weight stuff, and goes far beyond the mechanistic, Newtonian views which are foundational to the genetic determinist arguments. It's not a flat Earth; it's not a Newtonian physical universe; there is continental drift.... In any case, the twin studies don't prove a thing about genetics unless you accept a view of the world in which genetics and enculturation between them exhaust the possible causes of correlation. That view of the world is not consistent with modern physics - which after all is the benchmark among sciences.

A hill-climbing algorithm, however, while we may utilize it at some level, doesn't explain a thing about how consciousness seems the way it does to us. It's another example of how pre-quantum, mechanistic views of the world don't explain why the shape of mind - of the conscious mind - fits so well to the shape of reality, despite the appearance of an ocean of difference between them.

What kind of crack are you smoking? (5.00 / 1) (#71)
by spcmanspiff on Tue Nov 12, 2002 at 03:37:07 PM EST

Twins never start from the same particle.

You're right that it's not light-weight stuff, so I suggest you try to approach it with some rigor before you're brainwashed by the next new-agey zirconium crystal salesperson that stops by.

Apologies for being unreasonably cranky.

 

[ Parent ]

Quantum Physics is not magic, plzthx. (none / 0) (#103)
by xriso on Wed Nov 13, 2002 at 02:40:08 AM EST

CELLS ARE BIG. quantum effects are small. Entanglement is rare. Entanglement doesn't even matter to a cell!

Let's discuss classical versus quantum theories: Classical Physics is valid, within certain limits. Quantum Physics is valid, within certain limits (limits that have not been really found, but still limits).

Guess what? PEOPLE ARE WITHIN CLASSICAL LIMITS.

Mechanistic deterministic bla bla bla ... BIG DEAL. Why does this stuff matter at all? Oh wait, it doesn't.
--
*** Quits: xriso:#kuro5hin (Forever)
[ Parent ]

Quotes from Darwin and Hamilton (2.50 / 2) (#55)
by Baldrson on Tue Nov 12, 2002 at 12:59:05 PM EST

In the distant future I see open fields for far more important researches. Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation.
-- Charles Darwin, The Origin of Species, 1859, p. 449.

The tabula of human nature was never rasa and it is now being read.
-- William D. Hamilton,  1997

-------- Empty the Cities --------


Brain is just viscera (3.00 / 2) (#57)
by Pig Hogger on Tue Nov 12, 2002 at 01:12:57 PM EST

The brain is just another organ, like any other, and most especially like the muscles.

If you are lucky to get the genetics that gives you muscles, and if you use them, you'll get a nice set of them.

If you are lucky to get the genetics that gives you a brain, and if you use it, you'll have a nice smart one.

Use it or lose it.

Genetics can help a bit, but once you're born, all the (right) DNA in the known universe won't help one wee bit. It's up to you to use your brain and have it develop nicely.
--

Somewhere in Texas, a village is missing it's idiot

Muscles and Brains (none / 0) (#148)
by Korimyr the Rat on Wed Nov 13, 2002 at 06:52:17 PM EST

An important corollary to this is that muscles develop better, both stronger and more pleasingly shaped, when used in specific ways and within a system of specifically controlled intensities, based upon both the current ability and the desired goal.

Carrying this analogy further, brains would develop best, not simply with increased input and demands for output, but with a system of increasing intensities of activity, structured in a way to stress different facets of "intelligence", alternated with activities designed to improve coordination between those facets.

--
"Specialization is for insects." Robert Heinlein
Founding Member of 'Retarded Monkeys Against the Restriction of Weapons Privileges'
[ Parent ]

Too many influences (4.20 / 5) (#58)
by dcheesi on Tue Nov 12, 2002 at 01:23:52 PM EST

There are at least four types of influences on brain (and general) functioning:

1) genetics

2) prenatal development

3) postnatal environment

4) random (or probabilistic) factors

Any attempt to explain individual behavior without acknowledging all of these factors is overly simplistic, as is any attempt to do away with all of them. The reason we need to treat people as equals is not because we are all the same, but because the variation between individuals is too complex and multifaceted to be predicted, even as a "tendency".

what about... (4.00 / 2) (#62)
by Fen on Tue Nov 12, 2002 at 02:10:46 PM EST

the nonquantized storage of infinite previous quantized information? I'd think that has something to do with it.
--Self.
[ Parent ]
"what makes us so different from animals?&quo (4.00 / 3) (#61)
by Fen on Tue Nov 12, 2002 at 02:07:12 PM EST

That's like saying "what makes Microsoft so different from corporations?" Or "what makes roses so different from plants?" I'm sick of hearing this illogical phrase. I don't know if you're some religious wacko who thinks the Earth is a few millennia old, but I don't know if I want to delve further to find out.
--Self.
All that's missing... (4.00 / 1) (#63)
by RadiantMatrix on Tue Nov 12, 2002 at 02:30:40 PM EST

When using any of the questions you list, what's missing is an "implied other". For example, "what makes us so different from [other] animals?" "What makes Microsoft so different from [other] corporations?" And so on.

And, if you actually read the subheading, you'd see that the poor grammar in the phrasing of that question doesn't continue throughout its analysis.

--
$w="q\$x";for($w){s/q/\:/;s/\$/-/;s/x/\)\n/;}print($w)
[ Parent ]

Sir, (none / 0) (#94)
by tkatchev on Wed Nov 13, 2002 at 01:41:20 AM EST

...come back to me when your cat breaks away from your household and starts his own independent cat republic.

Until then, what you're spouting is simply illiterate gibberish that you've decided to take on blind faith because you're too dumb to think for yourself.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

A couple things. (4.60 / 5) (#67)
by ucblockhead on Tue Nov 12, 2002 at 03:05:52 PM EST

First, the idea of the "Blank Slate" is hardly new to artificial intelligence:

> Presumably the child-brain is something like a note-book as one buys it from the stationers. Rather little mechanism, and lots of blank sheets.

That's from Turing's Computing Machinery and Intelligence, the founding document of the Artificial Intelligence field. So you can hardly claim that this is any sort of new idea.

Turing believed that by building a computer with enough memory and by providing it with some sort of minimal program, you could raise it as a child of sorts, and bring it to "intelligence" this way. No one in the AI community believes this now, not because of any sort of authoritarian conspiracy but for the simple reason that all attempts in that direction failed utterly.

In terms of the complexity of the human brain and genetics, let me point out a couple of things.

First, the same complexity issues exist with all mammals. A whale has even more neurons than a human. Many animals have brains that are within an order of magnitude of a human's. So if you are using the complexity argument to say that it couldn't possibly be genetic, then you have to do so for all mammals.

Second, the complexity issue exists for other features as well. The immune system is also wildly complicated. It can rearrange itself to deal with any of potentially trillions of different targets. Yet despite all that complexity, it is fairly easy to show that parts of the immune system are genetically controlled.

Not all complex patterns are fractals. It is very easy to build simple mathematical systems that create extremely complex-appearing patterns. Yes, some of them create wild patterns in which a slight change of input creates wild differences. But others are extremely robust, creating similar patterns in many different situations.
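
A concrete instance of the first kind (a standard elementary cellular automaton, here rule 110, chosen purely as an illustration): the entire update rule is one byte, yet the pattern it grows is famously intricate.

#include <stdio.h>

#define W 64
#define STEPS 24

int main(void)
{
    unsigned char cell[W] = {0}, next[W];
    int t, i, rule = 110;  /* all eight neighbourhood outcomes, packed into one byte */
    cell[W - 2] = 1;       /* start from a single live cell */
    for (t = 0; t < STEPS; t++) {
        for (i = 0; i < W; i++)
            putchar(cell[i] ? '#' : '.');
        putchar('\n');
        for (i = 0; i < W; i++) {
            /* Each new cell depends only on its three neighbours. */
            int k = (cell[(i + W - 1) % W] << 2) | (cell[i] << 1) | cell[(i + 1) % W];
            next[i] = (rule >> k) & 1;
        }
        for (i = 0; i < W; i++) cell[i] = next[i];
    }
    return 0;
}

Swap rule 110 for, say, rule 250 and the same machinery settles into a perfectly regular expanding triangle -- an example of the robust kind.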

The thing you have to realize about Down's Syndrome is that it is no tiny trivial tweak. It is a massive monkey wrench thrown into the machinery. It is an extra chromosome. Of the twenty-odd human chromosomes, most duplications cause miscarriages; Down's Syndrome is the mildest case of this.

Finally, twin studies are by no means the only way that genetic factors affecting intelligence can be found. There are many, many mental traits whose genetics are studied through the use of family trees.
-----------------------
This is k5. We're all tools - duxup

Just one point... (4.75 / 4) (#68)
by MyrddinE on Tue Nov 12, 2002 at 03:09:24 PM EST

> If you have the misfortune to be 24 years old and male, even with a perfect driving record and every possible plus you will pay triple the auto insurance of a woman the same age with four accidents.

Insurance companies couldn't give a flying fuck about genetics or tendencies. They only deal with statistics, chance, and probability. Statistically, a 24 year old male will cost them more than a 24 year old female.

Depending on the insurance company, some will take different inputs for their statistical engine. Some may take into account the kind of accident... others may ignore accidents until they hit certain thresholds of cost. This is part of why different insurance agencies will give such different results.

The point is that Insurance companies have NOTHING to do with your argument, and this entire paragraph or two is invalid and unrelated.

Now back to your regularly scheduled rant.

Insurance (4.80 / 5) (#78)
by localroger on Tue Nov 12, 2002 at 06:50:25 PM EST

This keeps coming up, so let me explain.

If you have an accident your rates should go up. You should drive more carefully.

If you buy an expensive sports car your rates should go up. We all know why you wanted it.

If you live in a place where theft is common, your rates for theft should go up. You can always move.

If you live in a city versus the country, your rates should probably be different. See previous paragraph.

If you happen to be black, white, male, female, young, old, short, tall, or whatever, it should not affect your insurance rates. Period. It is an outrage that we let insurance companies factor these things in. We don't let the grocery store charge you more for steak because you're a man, and the same should go for other services.

I have known 16-year-old boys who were extremely careful and safe drivers, and middle-aged women who should never be allowed behind the wheel of a tricycle. If you are a dangerous driver it will show in factors that are within your control. You should not be charged outrageous premiums simply because people who happen to look like you have done something stupid.

This whole thing arises from the distressingly universal tendency to want to classify things in convenient groups and treat them as members rather than dealing with them as the individuals they are. The insurance example was perfect for my purposes, since this same desire is, of course, at the heart of the nature side of the nature/nurture debate.

I can haz blog!
[ Parent ]

Insurance (none / 0) (#112)
by apteryx on Wed Nov 13, 2002 at 04:45:04 AM EST

Insurance is a money-making scheme pure and simple. The companies will do everything they can get away with to minimise risk and then everything they can get away with to avoid paying out.

From that standpoint I can't see how charging more for males of a certain age is morally different to charging more for someone who has _had_ an accident, or any of your other examples.

All these things are imperfect predictors of risk that rely on assumptions of causality.

Having an accident may mean I drive more carefully afterwards; I may have bought a sportscar entirely for its looks...

I agree that it's unfair that insurance should penalise people on age or gender basis, but it's entirely consistent with the capitalist economic model.

For what it's worth, the only acceptable insurance to me would be where the risk is spread absolutely evenly over all subscribers. And state run so that the gains or losses are further shared by the whole community.

[ Parent ]

Insurance and Genetics (5.00 / 1) (#110)
by The Solitaire on Wed Nov 13, 2002 at 03:58:41 AM EST

Damn straight they care about genetics. I have a family history of an autosomal dominant kidney disorder. That means (since my father is positive) I have about a 50% chance of developing the disorder. If I do, I'll need a kidney transplant somewhere between 50 and 60.

Let's just say I'm lucky I live in Canada. But even here my life insurance premiums are fucked.

I need a new sig.
[ Parent ]

Another single point of disagreement... (4.33 / 3) (#72)
by MyrddinE on Tue Nov 12, 2002 at 03:53:33 PM EST

> Seriously, it should be kind of obvious that there is some difference between us and the rest of the order Mammalia. None of the others is busy building skyscrapers, ocean liners, or atomic bombs. We consider it a triumph of quiet genius if they manage to teach another of their kind to use a stick as a tool to dig termites, while we use supercomputers to catalogue their success.

You state that humans have some basic hardcoding that we overcome as we grow and learn. Yet you seem to believe that this is completely UNTRUE for primates. Take a bunch of children between 1 and 3, put them in a controlled environment like a cave that provides food, heat, and interesting objects to interact with, but no language or writing.

Do you think they will be building skyscrapers when they grow up?

Chimpanzees and other primates are not as intelligent as humans, but neither are they as stupid as cows. However, teaching other creatures a new trick is not something only primates do... animals like cats and dogs teach each other as well.

I have witnessed this firsthand. In my household, we had 2 cats, age 1 and 5. We got a new young tabby, a quite intelligent cat. He learned to get the milk in the bottom of a glass by dabbing it with his foot and licking his paw. Neither of the other cats had done this in their lifetime (the older one tended to jam his head into the glass until he looked like a furry snake :-), but both were able to understand and imitate this behavior after seeing it done only a couple of times.

All animals can learn. Learning is not unique to man. Your comment is as harmful a view of other creatures as your feared 'intelligence = genetic' worldview is of people.

You argue that people will justify eugenics, controlled breeding, and blue blooded 'right to rule' ideas by linking genetics to intelligence. YOUR viewpoint, that humans are genetically superior to all other animals, is similarly specious. Chimpanzees raised by humans are quite stupid compared to most children... but then, there are some very stupid children out there.

Genetics obviously plays a role in intelligence there, yet you discount this by 'classifying' primates differently than humans. How is that better than a noble 'classifying' peasants differently than nobility?

If your theory does not carry over across the continuum of intelligence, from very smart humans through stupid humans, primates, mammals, and down, then your theory is probably wrong.

stuff (4.00 / 2) (#74)
by beefman on Tue Nov 12, 2002 at 04:34:15 PM EST

>If you do a twin study, it's obvious that you
>want a certain result.

Better to leave out this kind of stuff.

>It's horribly unfair to the individuals thus
>targeted, and a society interested in fairness or
>justice wouldn't let insurance companies get away
>with this crap.

Ditto, but incidentally, I don't agree.  Insurance
is the distribution of risk.  If the danger is not
distributed evenly in a population, why should the
insurance rates be?

>Consciousness (v.): the use of a certain class of
>"hill climbing algorithm" to evaluate...

This is a very broad definition, in a sense
equivalent to:

Consciousness- Whatever humans do that we can't
currently understand.

This is traditional in philosophy, but doesn't
mesh well with how people actually use the word.
Cognitive Psych. can do better -- the 'sensory
gating' definition, for example.

Perhaps "intelligence" would be a better word for
this definition.

>Neurons compete for inputs which form repeatable
>patterns, and they form synaptic connections with
>those input sources so they can detect those
>patterns ever more efficiently in the future.
>Feedback sources like emotions and activity level
>can encourage or inhibit this process.

That's interesting and would be well worth
simulating in a CA to see what happens.  Do you
have a source for this or were you just making it
up?
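
Something like this toy sketch would be a start --
my own minimal winner-take-all formulation of the
quoted rule, not anything taken from the article:

    import numpy as np

    # Units "compete" for repeating input patterns; the winner
    # strengthens its connections to the pattern it detected.
    rng = np.random.default_rng(0)
    patterns = np.array([[1, 1, 0, 0, 0, 0],
                         [0, 0, 1, 1, 0, 0],
                         [0, 0, 0, 0, 1, 1]], dtype=float)
    weights = rng.random((3, 6))                  # random initial synapses
    weights /= weights.sum(axis=1, keepdims=True)

    for step in range(300):
        x = patterns[rng.integers(len(patterns))]
        winner = np.argmax(weights @ x)           # most-excited unit
        weights[winner] += 0.1 * (x / x.sum() - weights[winner])

    # With luck each unit now fires for a different pattern; when one
    # unit hogs them all, a "conscience" term is the usual fix.
    for p in patterns:
        print(np.argmax(weights @ p))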

>The psychoactive drug cocaine works by causing
>the brain to release its stores of Dopamine.

Cocaine is primarily a dopamine reuptake
inhibitor.

>The cocaine high may be an artificial epiphany,
>though I'm not curious enough to try it and
>compare it with the natural experience.

It's nothing like epiphany.

>The pattern detectors which form themselves so
>then serve as the pattern library for a multi-
>level hill climbing optimizer whose driving
>engine is not in the cortex at all, but in the
>thalamus...

This paragraph seems dropped into the article from
nowhere.  If you deleted the rest of the article
and made this paragraph into something that made
sense, you might have wound up with something
really cool.

>This model doesn't explain everything, but it
>explains a hell of a lot.

Really?  Show us!

>...simple enough for cells to do it (and
>individual cells are stoooopid)...

The behavior of single neurons is extremely
complex, and not well understood.  It takes a lot
of cycles on a digital computer to simulate what
we do understand about one.  Check out the GENESIS
package.

>the genetic code ... is nowhere near as
>complicated as the brain which grows under its
>direction.

Kurzweil agrees (making many of the same errors
that Graham correctly catches), and says the rest
is randomly wired.  What randomness is and where
it comes from, he doesn't say.  A 1-D cellular
automaton can generate 7GB of data...

Until we can build a brain, we should lay off
saying what size of description is sufficient.
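
(That last point is easy to demonstrate.  An
elementary 1-D CA is a one-byte "program" that
will emit rows for as long as you let it run.  A
minimal sketch, rule 30 chosen arbitrarily:)

    def elementary_ca(rule=30, width=64, steps=8):
        # The whole "genome" is one byte: the rule number.
        table = [(rule >> i) & 1 for i in range(8)]
        row = [0] * width
        row[width // 2] = 1                   # one live cell to start
        for _ in range(steps):
            yield row
            row = [table[(row[i - 1] << 2) | (row[i] << 1)
                         | row[(i + 1) % width]]
                   for i in range(width)]

    for r in elementary_ca():
        print(''.join('#' if c else '.' for c in r))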

>Within the last 30 years or so we have acquired a
>model for how systems like living things can turn
>relatively simple inputs into outputs of great
>complexity; it is called chaos theory

Chaos theory is much older than that, and is the
study of sensitivity to initial conditions.
You're speaking of information theory, which goes
back farther still, to the 19th century
(thermodynamics).

The "complexity theory" branch of information
theory goes back only 20 years, but has its roots
in the cybernetics of the 50's.

>If you make a small change in the code, you don't
>throw a monkey wrench into the works, you throw a
>nuke.

True for many compressed descriptions, but you
don't show how this ties into what you're trying
to say.

BTW, fractals sometimes appear more complex than
their descriptions, but not always.  The
fractional dimension part is superfluous to what
you're getting at.  The digits of pi are a good
example -- believed normal in base 2, but there's
a spigot algorithm for them!
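
(The spigot claim is real: the BBP formula of
Bailey, Borwein, and Plouffe yields the n-th hex
digit of pi without computing the earlier ones.
A rough float-precision sketch, trustworthy only
for smallish n:)

    def pi_hex_digit(n):
        # (n+1)-th hex digit of pi after the point, via BBP.
        def series(j):
            # fractional part of sum over k of 16^(n-k)/(8k+j)
            s = 0.0
            for k in range(n + 1):
                s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
            k, t = n + 1, 0.0
            while True:
                term = 16 ** (n - k) / (8 * k + j)
                if term < 1e-17:
                    return (s + t) % 1.0
                t, k = t + term, k + 1
        x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
        return int(x * 16)

    print(''.join('%x' % pi_hex_digit(i) for i in range(8)))  # 243f6a88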

>This can only occur as the relevant areas are
>myelinized...

Uh... there's lots of things neurons can do.
Receptor expression/regulation... look up "cascade
effects" and you'll see how complex the picture
is, and how poorly it is understood.

A nice article, but next time, why not include
some code and/or results?

-Carl

Cocaine.... (none / 0) (#105)
by vile on Wed Nov 13, 2002 at 02:56:08 AM EST

http://www.kuro5hin.org/comments/2002/11/11/225953/70/95#95

~
The money is in the treatment, not the cure.
[ Parent ]
insurance stuff (none / 0) (#176)
by NFW on Thu Nov 14, 2002 at 10:52:57 PM EST

>>It's horribly unfair to the individuals thus
>>targeted, and a society interested in fairness or
>>justice wouldn't let insurance companies get away
>>with this crap.
>
>Ditto, but incidentally, I don't agree.  Insurance
>is the distribution of risk.  If the danger is not
>distributed evenly in a population, why should the
>insurance rates be?

Exactly.  If the danger is not distributed evenly
among sub-24-year-old males, why should the
insurance rates be?

I suppose the answer is "because it would be hard,"
and hope you'll all forgive me for not finding that
a satisfying explanation. :-)


--
Got birds?


[ Parent ]

Brain power (4.00 / 1) (#80)
by X-Nc on Tue Nov 12, 2002 at 07:10:33 PM EST

There's lots of things I could say about the article but most of it is already being said. The idea that intelligence is not a combination of everything is shortsighted. However, I'd rather focus this comment on the poll.

It seems that the most selected choice is "Chemical Imbalances". I think that there's a good number of people who are choosing this because of its drug-related connotations and for other anti-establishment reasons. There's one thing that I thought might make a good research paper, though. How many people in jobs or industries that are considered to require high intelligence are on medication that has mood- or cognition-altering properties? Most of the techies I know and work with are on different meds (or just live with their manic depression). It seems that most "genius" is not associated with a balanced mind (see John Nash). Anyone in here on some kind of meds? It doesn't have to be for depression or anything; I take a ton of drugs for the Fibromyalgia I have. Most of them have an effect on the mental faculties. It's a miracle I can write this post at all.

--
Aaahhhh!!!! My K5 subscription expired. Now I can't spell anymore.

More comments (ok, way too many comments). (4.00 / 1) (#84)
by wumpus on Tue Nov 12, 2002 at 07:18:28 PM EST

This whole claim seemed to be long on handwaving and short on evidence. Without a good explanation of how instincts (whose existence you admit, even if only in children) are created, this whole article is not convincing.

A bit of strong evidence for your case can be found in the "we only use 10% of our brains" cliche. If children (two years or younger, IIRC) can develop normally after up to 90% of their brain is destroyed, it should follow that if the remaining brain were "genetically" organized, it would be organized in the wrong patterns and the child would be hopeless. Instead the children grow up normally. The only reference I could find was http://www.urbanlegends.com/science/10_percent_of_brain.html

First, you fail to define "intelligence". While this may be a wise choice, you have implied elsewhere that you believe that variations in intelligence are not due to genetics. First, I'll assume that "intelligence" is a somewhat useful tool for determining the state of our surroundings. The idea is that it varies between people.

My first instinct was to attack this from an evolutionary standpoint. If intelligence were not genetic, it could not evolve. On the other hand, those who follow Stephen Jay Gould (who favored environmental influences) should note that if humanity were in a period of stasis, then intelligence would be selected neither for nor against (which is likely only possible if intelligence does not vary genetically). I find this hard to believe, but was struck by one line in A Primate's Memoir by Robert M. Sapolsky: "A chimpanzee is what a baboon wants to be". If increasing degrees of chimpanzee intelligence could increase baboon reproductive chances (the scientists who studied them thought that it would rocket them straight to the alpha male spot), I would expect baboons to evolve into chimps. Apparently baboons just won't listen to me. While I personally think Stephen Jay Gould should have stuck to attacking creationists rather than Richard Dawkins, you might find a lot of evidence there.

Second, I deal with "intelligence", meaning the means by which we determine our opinions and behaviors. This follows your developmental arguments.

You bring up the notion that we are born with instincts, but then imply that a functioning adult no longer needs them. This implies that reproductive strategies are non-genetic. An easy way to prove this is to show various societies where people raise children in ways that do not benefit their "selfish genes". I doubt you will be able to do this. Fear of being cuckolded is universal among males (there are groups that do not monopolize women; they typically inherit matrilineally and have a maternal uncle act as "dad"). Women are especially protective of their own children. Men prefer women in their reproductive prime (or who appear to be, thus beautiful), and women prefer men with high status ("nice guys" moan about women wanting jerks; jerks act like "high status males"). If you can explain this type of thing remotely as well as Matt Ridley does (for genetic and evolutionary means) in The Red Queen you might have a chance of convincing people.

Your entire argument (aside from strawmen and ad hominem attacks) seems to be that the brain is too complex for the genes to hold and that it cannot be influenced by the genes. While the brain is likely formed to meet a complex self-generating pattern (like a fractal), this does not mean that modifying part of the brain afterwards modifies the entire brain. If experience can somehow alter the brain, genes can too. If you already admit that children have instincts, there is obviously some means to alter the brain. Instincts can be rather subtle. Consider language. Even ignoring Chomsky (probably for the best), it would be hard to explain why the "brain fractal" created specific parts that are always used for language. Also, how in the world does a fear of snakes become an instinct? Consider how many "a priori" concepts are required to implement that. While the tabula may need a whole lot written on it (growing up and parenting are important parts of our lives), it never was rasa.

Wumpus

driven to fixation (5.00 / 1) (#91)
by danny on Tue Nov 12, 2002 at 11:31:07 PM EST

Genetics have almost nothing to do with intelligence

First of all, this is not literally correct - no one without a functioning haemoglobin gene will ever exhibit any signs of "intelligence", to take one clear-cut reductio as an example.

But the question is of course about variance and heritability. While variation in individual genes may be linkable to specific features of intelligence, this is not likely to be "usable" variation - e.g. if it were possible to "flip" all the "smart" genes in someone on, they would almost certainly have incompatible epigenetic effects - the result would not be coherent, and might be fatal. (The most "obvious" changes I can think of involve those in the growth and timing of neural development, where extra growth in one area seems prima facie likely to reduce growth in others - pace that "we only use 10% of our brains" nonsense.)

If there were any kind of straight-forward genetic variation underpinning intelligence, that could be selected for without detrimental effects elsewhere, why wouldn't selection have eliminated that variance? Whatever the debates about the meaning of "intelligence" or "intelligences", I think most people would agree there should be (other things being equal) a correlation with survival and success.

Danny.
[900 book reviews and other stuff]

driven to fixation? (none / 0) (#151)
by merkri on Wed Nov 13, 2002 at 08:24:43 PM EST

e.g. if it were possible to "flip" all the "smart" genes in someone on, they would almost certainly have incompatible epigenetic effects - the result would not be coherent, and might be fatal.

I'd just like to note that you may or may not be correct. What you say is intuitively appealing, but there's no real reason to think it has to be that way. If I recall correctly, recent meta-analyses suggest that nonadditive genetic variance does account for part of the phenotypic variance in IQ. But not all of it; neither is it clear how the nonadditive portion is operating.

Why wouldn't selection have eliminated that variance?

It's a good question that has evoked debates among very intelligent people. Some possible explanations:

(1) Nonadditivity. The same thing that you propose would cause us all to drop and die is the same reason why "all the genes might not be turned on." Turn too many on, and you die; turn too many off, and you're dumb. Why not equilibrium? Maybe because the nonadditivity is too complex, maybe partially because of the following reasons. (A toy simulation after this list shows one way variance survives selection.)

(2) Pleiotropy on phenotypes that are selected for in opposite directions. If a gene influences one trait that is positively selected for, but also another trait that is negatively selected for, it may be that the net effect of the gene is small enough that there is polymorphic heterogeneity at the locus.

(3) Sexual selection on the trait. Maybe sexual selection on the trait induces variation despite the benefits and costs associated with natural selection. This is certainly possible with intelligence; both men and women consider intelligence one of the most important characteristics of a potential mate.
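
To make the flavor of (1) concrete, here is a toy: under heterozygote advantage (one simple form of nonadditivity), the standard selection recursion drives the allele frequency to an interior equilibrium rather than fixation, so selection itself preserves the variance. The fitness numbers below are invented for illustration.

    # Heterozygote advantage: Aa beats both AA and aa, so selection
    # keeps BOTH alleles around instead of fixing one.
    w_AA, w_Aa, w_aa = 0.9, 1.0, 0.8        # invented fitnesses
    p = 0.01                                # initial frequency of A
    for gen in range(200):
        q = 1.0 - p
        w_bar = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa   # mean fitness
        p = (p*p*w_AA + p*q*w_Aa) / w_bar          # standard recursion
    print(round(p, 3))                      # ~0.667 = 0.2/(0.1+0.2)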

[ Parent ]

Correlation (none / 0) (#168)
by roystgnr on Thu Nov 14, 2002 at 07:09:03 PM EST

why wouldn't selection have eliminated that variance? Whatever the debates about the meaning of "intelligence" or "intelligences", I think most people would agree there should be (other things being equal) a correlation with survival and success.

"Most people" have been wrong before.  Today, at least, reproductive rates (which are all the "success" natural selection cares about) are inversely proportional to educational attainment.

[ Parent ]

the present is an exception (none / 0) (#171)
by danny on Thu Nov 14, 2002 at 08:30:22 PM EST

The current control over reproduction is exceptional (looking at human history over the long-term) and has not been around long enough to have evolutionary effects.

But yes, there is an "all other things being equal" caveat with any attempt to link "intelligence" to evolutionary fitness... (And as you can probably tell, I have grave doubts about unitary notions of intelligence anyway.)

Danny.
[900 book reviews and other stuff]
[ Parent ]

That was just an example (none / 0) (#175)
by roystgnr on Thu Nov 14, 2002 at 10:37:58 PM EST

My intent wasn't to suggest that modern ideas about family planning have shaped human intelligence, but to point out that reproductive drawbacks aren't necessarily obvious.

If you want a paleolithic example, it's not hard to imagine possibilities.  Genes that increase intelligence along with brain/head size may cause more deaths in childbirth than additional children from intelligence-derived success.  Genes that increase intelligence along with brain complexity might cause an increase in mental illness that outweighs the gain in mental acuity.  Increased intelligence even without any direct negative side effects may have caused social isolation; maybe fewer cavewomen would mate with cavenerds.  ;-)

Any particular supposition like these might be unlikely, but I don't think it's implausible to suppose that some detrimental side effect of human intelligence exists.


[ Parent ]

I knew it! I knew it! (none / 0) (#92)
by kholmes on Wed Nov 13, 2002 at 01:08:57 AM EST

But now it's official!

"A state of epiphany is reached when one makes a great deal of new connections all at once, realizing how entire patterns of thought fit together in a previously unsuspected grand scheme; the feeling is more intense than an orgasm but, alas, also a lot more rare."

Philosophy is better than sex! :)

If you treat people as most people treat things and treat things as most people treat people, you might be a Randian.

what about having sex with a philosopher? (none / 0) (#108)
by d0ink on Wed Nov 13, 2002 at 03:47:43 AM EST



[ Parent ]
Epiphany vs. Philosophy (none / 0) (#146)
by Korimyr the Rat on Wed Nov 13, 2002 at 06:38:33 PM EST

Philosophy is only one field in which new neural connections can be rapidly made-- all of the sciences tend to promote this kind of experience, as unrelated theories and non-associated patterns fall into place.

 In fact, almost any intellectual pursuit will allow for this kind of rapid connection making-- though, as the original poster observed, such intense epiphanies tend to be pretty rare.

--
"Specialization is for insects." Robert Heinlein
Founding Member of 'Retarded Monkeys Against the Restriction of Weapons Privileges'
[ Parent ]

Science is a subset of philosophy (none / 0) (#159)
by kholmes on Thu Nov 14, 2002 at 12:04:26 AM EST

It would actually be difficult to prove the statement false.

If you treat people as most people treat things and treat things as most people treat people, you might be a Randian.
[ Parent ]
Philosophy (1.50 / 2) (#161)
by Korimyr the Rat on Thu Nov 14, 2002 at 02:00:17 AM EST

Philosophy is an intellectual pursuit related to the nature of existence itself, and the nature of consciousness. As such, it does try to explain some of the same things science does-- but it does so in a less rational way and relates to things that are not measurable.

Philosophy deals with beliefs and values.

It has next to nothing in common with science, and with the possible exception of fundamental religion, is probably the exact opposite of scientific inquiry. Calling science a subset of philosophy is absolutely ridiculous.

--
"Specialization is for insects." Robert Heinlein
Founding Member of 'Retarded Monkeys Against the Restriction of Weapons Privileges'
[ Parent ]

Great Article.. but.. (none / 0) (#93)
by vile on Wed Nov 13, 2002 at 01:15:22 AM EST

... one thing I did notice is that when you mentioned why society puts up with insurance companies who practice the ideology of guilty until proven innocent.. it's because we're required to, *by LAW*. Doesn't that suck? People make our minds up for us, not the other way around.

~
The money is in the treatment, not the cure.
Dopamine... (4.00 / 1) (#95)
by vile on Wed Nov 13, 2002 at 01:44:58 AM EST

I would relate an 'epiphany' to the feeling provided by Ecstasy, a drug that releases the brain's reserves of serotonin (among other things, I'm sure). The first experience (and thus the most important and enhanced experience) one could have with this drug is one that involves a simple, mere intellectual conversation for 6 hours. One is able to experience things on a 'metaphysical' level, as well as on a higher level of consciousness than one is normally used to.

Honestly, enlightenment is a good synonym for the experience that this wonderfully damaging drug provides.. and I would suggest the term enlightenment to be a synonym of epiphany.

Cocaine didn't do much to me.. Methamphetamines are a different story..

The lessons learned from meth take time to unlearn.. but nevertheless they were good lessons. I once shot 6 perfect pool games in a row while experimenting with this drug. I didn't miss once -- which is something that I'm not used to. My experience was at such an acute level that I was able to concentrate *perfectly*.

Concentration would be a perfect topic for another article from you. People tend to forget that they tend to forget.. a.k.a... lose concentration.

~
The money is in the treatment, not the cure.
Paul Erdos (none / 0) (#118)
by kaibutsu on Wed Nov 13, 2002 at 07:27:41 AM EST

Paul Erdos was undoubtedly the greatest combinatorialist of the last century (and he worked harder than ANYONE else right up until the day he died five or six years ago). It's well known that for the last forty years of his life, he was continuously on some kind of stimulant or another, sleeping something like two hours a night. He beat everyone at ping-pong - they said his "reflexes were just faster than everyone else's." Methamphetamines can have ridiculous effects on an intellectual. In fact, in light of Erdos, one wonders why the shit isn't more common in these branches of academia.
-kaibutsu
[ Parent ]
sleep (none / 0) (#162)
by dr k on Thu Nov 14, 2002 at 02:05:02 AM EST

"In fact, in light of Erdos, one wonders why the shit isn't more common in these branches of academia."

Because it is nice to be able to go to sleep. Perhaps that is old fashioned.


Destroy all trusted users!
[ Parent ]

He slept.. (none / 0) (#183)
by vile on Sun Nov 17, 2002 at 05:01:24 AM EST

... all he needed to. :)

~
The money is in the treatment, not the cure.
[ Parent ]
Interesting.. (none / 0) (#173)
by vile on Thu Nov 14, 2002 at 09:22:37 PM EST

http://www.paulerdos.com/1.html

Great Article.

Coffee is an interesting stimulant.. if you know how to work it. If you don't, most likely it'll hamper your ability to concentrate on a single thought pattern. However, if you have the ability to, hmm, embrace the characteristics that come with it (such as an acute level of alertness, paranoia, etc.), then it can become a very useful tool.

Amphetamines are also great tools. The increased rate of neural firing is damn near amazing. One can jump from one thought to the next, and back to the same thought (connection pattern, I like to believe) in next to no time at all. This tool provides one with the ability to keep in mind many subjects, multitask them, and dive into the detail of each -- without losing sight of anything pertaining to the thought pattern.

With both of these tools, concentration is enhanced. Your mind isn't spinning a hundred scattered thoughts (how's my wife, the electric bill, etc.).. but is focused, entirely. I can relate to how Paul Erdos was able to attain his near-genius levels of brilliance.

There is also the feeling that both of these drugs provide. I believe the feeling to be a crucial part of the way these drugs work. I can relate this to the dopamine effect, the reward chemical that your brain releases. I tend to believe that the dopamine effect can, in and of itself, increase the level of concentration that someone could achieve. Drugs are shortcuts.

I can relate all of this fairly easily to my pool table experience. I was able to focus on my goal. I was able to dive into the detail of angles, speed, and accuracy (which also deals with math, no less!).. and not even really think about it. I just played! There was also a rhythm to my playing.. and, too, there was the feeling of not missing one thought, of playing a perfect game.

I'm trying to limit this train of thought to playing a game of pool, but there have been other occasions where I was able to do a week's worth of work (intellectual work, programming, etc) with a few hours of dedicated concentration on projects.

In my last comment, I stated that there were lessons from both of these drugs (coffee not so much, but amphetamines most definitely) that I had to unlearn. If you do all of this work on amphetamines, or other forms of stimulants, the feeling isn't the same when you return to it. The reward is much less, thus, you don't enjoy it as much.

Some people may be different.. but in my experience, my experimentations provided intellectual levels that were far superior to the levels I normally operate on. But you're right, it's interesting to think why we don't embrace these tools more often and learn to work with them, as opposed to, well, opposing them. Ahh, one must admire the greatness of propaganda and being born a kid.

Check out the Paul Erdos link.. and thanks for providing the name!

~
The money is in the treatment, not the cure.
[ Parent ]
A couple of things... (5.00 / 2) (#109)
by The Solitaire on Wed Nov 13, 2002 at 03:51:39 AM EST

It's too bad that I only got to read this at 3am... I figured I would make a couple of quick comments now, and hopefully, time permitting, produce some deeper insights tomorrow, when I'm fully conscious (pun intended). Before I make any criticisms, I want to say two things. First, I love seeing people in one discipline (I'm assuming you are a computer science type) take a stab at the problems of another. On the other hand - you have to be careful... you can get a very wrong impression of a field by reading intro stuff... a lot of it is way out of date, or grossly oversimplified. I once thought I had a pretty good grasp of what was going on in the brain, but my significant other (a neuroscience researcher) rather thoroughly stomped those delusions about a year ago.

The first thing that jumped out at me is that you set all of this up as if it is in some way new. None of it is especially new. From what I can tell it borrows somewhat from traditional "symbolic" cognitive science, connectionism, and dynamic systems theory - the three major "camps" in cognitive science today.

A related point is that I can't really discern what you are trying to argue for... you stated that you are against "genetic determinism", but I'm not really sure what "genetic determinism" is. I can only assume that this has something to do with the "nature/nurture" debate... what exactly, I'm not sure.

Second, you make a lot of unsubstantiated claims about the structure and composition of the human brain. One example is "individual cells are stoooopid" - I think that you might be surprised at just how "smart" individual neurons can be. I don't have a really good example on-hand (and I'm too tired to look any up right now), but you should be able to find plenty of examples in an advanced neuroscience textbook. On top of that, there are lots of things we still have yet to learn about the workings of individual neurons.

Don't feel too bad about this criticism, however... the neuroscience background of many people working with connectionist networks (thinking, wrongly, that they resemble biological neural nets) is often deplorable (and I include myself in that statement... and I would say that I know more than many).

On to comment number three - I think you are grossly underestimating the problem of consciousness. One thing you haven't addressed at all is what David Chalmers calls "the hard problem of consciousness". Very loosely paraphrased the question is "why is there a subjective?" - this is a real problem. And incidentally, it may be a question about which science has nothing to say at all, since science deals (more or less exclusively) with the objective. Incidentally, Chalmers has a great bibliography of work done on consciousness from all perspectives.

When you discussed the differences between us and "the animals", I found that you missed what is likely the single most unique thing about humans. Language. Not just any old communication (obviously most, if not all animals have that), but generative language, using arbitrary symbols linked syntactically together. This is arguably the most important distinction between humans and animals (I hate saying that because, as far as I am concerned, humans are animals). If you want to discredit nativism - you're going to have to tackle its chief champion on his home turf sooner or later - might as well get a head start.

Finally, if you are interested in research that attempts to bridge the gap between small groups of neurons and high-level behaviour, Chris Eliasmith is doing some pretty interesting research into computational neuroscience here at Waterloo.

I need a new sig.

the subjective (4.00 / 1) (#117)
by kaibutsu on Wed Nov 13, 2002 at 07:23:23 AM EST

Admittedly, it's not a question I've thought much about, this of the existence of the subjective. But it seems to me that if we are 'wired' to recognize useful generalizations (such as 'chair' or 'female' or 'conscious being'), the idea of an 'I' falls directly into this class. It looks to me like the much more likely algorithm starts with this concept-building and *then* builds self-consciousness within that framework. After all, the I is just as ambiguous as any other class of objects we commonly recognize. Maybe this would explain why I don't remember my early childhood - none of these things happened to me, they only happened to my brain. The 'me' didn't exist yet...
-kaibutsu
[ Parent ]
The Self Symbol (4.00 / 1) (#121)
by The Solitaire on Wed Nov 13, 2002 at 09:15:08 AM EST

Some researchers (Daniel Dennett for sure) have referred to the "self-symbol" which is pretty much what you have described. On the surface it might seem to work, but I don't think it answers the "hard problem". Sure, we have a mental representation of the self. But - why is there an "I" to experience the self in the first place?

Think about it this way - I could write a program that has a symbol (or set of symbols) to represent the program as a whole, and the various parts of that program. Indeed, there are such programs out there - any program that has a self-diagnostic routine could be said to have a self symbol. But that doesn't (intuitively) make the program conscious. Why are we conscious, but not the program?
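
To make that concrete, here is a made-up toy program that has a "self symbol" by the above standard, and yet is plainly not conscious:

    class Robot:
        # Keeps a model of its own parts and reports on its own state.
        def __init__(self):
            self.self_model = {'motors': 'ok', 'sensors': 'ok',
                               'battery': 'ok'}

        def self_diagnostic(self):
            broken = [part for part, state in self.self_model.items()
                      if state != 'ok']
            return ('All systems nominal.' if not broken
                    else 'Fault in: ' + ', '.join(broken))

    print(Robot().self_diagnostic())   # "All systems nominal."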

One can bite the bullet and say that the program is conscious, or (like Dennett) say that we aren't conscious. Actually, Dennett's view is more accurately that the idea of consciousness isn't coherent, and that as science progresses, we'll eventually realize that we were just thinking sloppily and find that there is no need for "consciousness". I must admit, I'm not doing Dennett much justice here - better to go read Dennett himself for an accurate characterization of his view.

I need a new sig.
[ Parent ]

The Subjective = Soul (4.00 / 1) (#141)
by Boronx on Wed Nov 13, 2002 at 04:06:58 PM EST

I was thinking about the question of the Subjective yesterday, because it has always bothered me and my materialistic philosophy. "Why am I me and not somebody else?" is the way I like to ask it ... or just "Who am I?".

It occurred to me yesterday that to seriously ask the question is to hold on to the idea of the soul, that there is some separate otherness that is me.

I don't believe there is a soul. I think biting the bullet on this question means believing that your subjective experience is just another emergent behaviour of the mass of neurons that is you.
Subspace
[ Parent ]

Perhaps (5.00 / 1) (#144)
by The Solitaire on Wed Nov 13, 2002 at 04:40:33 PM EST

I think that you are right in a sense. I myself think that to think of the self as a "thing" is wrongheaded. One thing that has often bothered me is what I call the "container metaphor" - that is, that the mind is a container that holds a bunch of beliefs and desires (and emotions, etc). I tend to identify the self with the holding of those beliefs. It frames the question differently, even though I do not think that it is a particularly new way of thinking about the mind.

But I don't think that admitting to having a mental life amounts to accepting the idea of a soul. Without a doubt, we have thoughts and feelings - I think that any theory that denies this is patently absurd (n.b. I don't think that Dennett's eliminativism denies this). I still think it seems very odd to attribute a different property (or aspect or something) to brains than the properties we attribute to every other physical object in the universe. The odd thing isn't the answer to the question of "Who am I?", but more why such questions even make sense to us.

I obviously don't expect this problem to get solved here (or anywhere else for that matter). But I do think that it is important to understand that it is there, lurking in the background of all the conversations about mind.

I need a new sig.
[ Parent ]

Statistics versus Tendencies (4.00 / 1) (#113)
by ShiteNick on Wed Nov 13, 2002 at 05:35:50 AM EST

If you have the misfortune to be 24 years old and male, even with a perfect driving record and every possible plus you will pay triple the auto insurance of a woman the same age with four accidents

I disagree. Insurance companies are there to make money and they try to do that as best as they possibly can. Statistically the chance of a young male having a life threatening accident is much, much higher than the chance of a young female driver having a similar accident.

I am in the age group where it hurts and I think it's unfair, but then much of life is.

Ask the average young girl (and I did say average - include school teachers, nurses, and not just the few geeks) what kind of car she'd like, and then ask a young man the same question. Men dream of Ferraris. Women don't. (Again - mostly; there are plenty of exceptions, but this is about human behaviour.)



another couple... (4.00 / 1) (#114)
by apteryx on Wed Nov 13, 2002 at 05:45:18 AM EST

First, I appreciate the article. I don't think it's too long, and if it's not as tight as some others would like, I'd rather see ideas out there than imagine a perfectly concise thesis stuck in someone's head!

I particularly like your contrarian stance. This echoes John Ralston Saul's call to "Doubt everything". Certainly as I get older, I increasingly distrust people who are very sure of themselves. Life and everything is bloody complex...

One thing I would challenge however, is the implication that intelligence is a) usefully definable and therefore b) a useful measure of anything much at all.

I recently read a book on animal intelligence (sorry, I've forgotten the title _and_ the author!) that showed, through numerous examples, that many behaviours that would be considered signs of 'intelligence' in higher primates are exhibited by animals traditionally considered as thick as planks. One example that comes to mind is a species of spider that has a phenomenal ability to spot prey from a long distance and apparently plan quite complex routes and strategies to get within striking distance. This isn't learned behaviour, and it is accomplished with a number of neurons that could just about fit on the head of a pin. Other species of spiders entirely lack this ability...

So, my feeling is that in all likelihood there are numerous factors influencing how we become mentally what we are - genetics, developmental influences, direct experience, culture, etc. It's the ratios that are unknown...

Chemical Imbalances... (5.00 / 2) (#122)
by The Solitaire on Wed Nov 13, 2002 at 09:33:14 AM EST

Wow... I just took the poll, and realized that one of the options isn't as jokey as it seems! Our brains really do run on chemical imbalances!

I'm sure I'm being simplistic here, but the firing of a neuron is dependent on concentration gradients of sodium and potassium on either side of the cell membrane. When neurotransmitters interact with the membrane, more channels open, allowing more sodium into the cell. Eventually, a critical point is hit, and the neuron blows open all the channels (allowing Na+ to flow inward, and K+ to flow outward), creating an electrical signal (aka an "action potential") that travels down the axon. Afterwards, the cell needs some time to pump all the Na+ back out and the K+ back in, which results in a "refractory period", during which the cell is relatively unresponsive. This leads to the "pulsing" behaviour of neurons.

So, if you accept sodium and potassium (also chloride and calcium are extremely important, but beyond the scope of this post) as chemicals, our brains really do run on chemical imbalances!
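
Here's the same story as a leaky integrate-and-fire caricature, with all the ionic machinery collapsed into a single "voltage" variable (a cartoon of a cartoon, but it pulses):

    # Input current charges the cell, a threshold fires a "spike",
    # and a refractory period stands in for the pumps restoring the
    # Na+/K+ gradient.
    v, threshold, refractory, spikes = 0.0, 1.0, 0, []
    for t in range(200):
        if refractory > 0:            # pumps busy; cell unresponsive
            refractory -= 1
            continue
        v += 0.08 - 0.02 * v          # steady input minus leak
        if v >= threshold:            # "action potential"
            spikes.append(t)
            v, refractory = 0.0, 5    # reset, then refractory period
    print(spikes)                     # evenly spaced pulses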

I need a new sig.

Memory is Inherited? (none / 0) (#124)
by Perpetual Coming on Wed Nov 13, 2002 at 10:53:20 AM EST

Couldn't memories, or at least "muscle memory", just be passed on genetically? That would make it seem like it's not just a "blank slate", even when in theory it is, in the sense that things could be unlearned, or at least reorganized.

The reason why I choose to believe that all people are equal (hence a sympathy for the blank slate theory) is a very pragmatic one. Even the person with Down Syndrome, as one poster mentioned, could have outstanding intelligence in a way that I cannot understand. For example, they have much, much lower cancer rates than the rest of the human population. So, the best way for me to improve my own intelligence is to seek the intelligence that I do not understand. If I had a presumption as to the best kind of intelligence, it would undermine my best understanding of intelligence in the first place.

This also causes me to appreciate genetic diversity, as it's the only way that intelligence can improve. There are so many examples in genetic history of improvements that appear to us in retrospect to have had only marginal benefit, and that pay off only when they achieve some more significant order, such as with the eye. The person with Down Syndrome may be a container for some amazing genetic connection down the road.

This doesn't mean that, if I had the opportunity to get rid of certain diseases or genetic conditions that cause suffering, I wouldn't. Instead it means that every life is an experiment, which has an infinite amount to be learned from while it's in process, and it's unadvisable to try to get rid of imperfections because each person is imperfect and therefore unqualified to make such decisions. So, I'd rather just humble myself and think that all people are equal, because it's a belief that I believe will improve the world so long as the world is not perfect.

IMHO (2.50 / 2) (#125)
by r00t on Wed Nov 13, 2002 at 11:37:28 AM EST

Everything is genetic.. EVERYTHING. Wild animals don't possess higher-level intelligence because they don't have the genes to code for a better brain. They don't have morals, or values, just instinct. It is a result of their biochemistry, and that is a result of their genes.

-It's not so much what you have to learn if you accept weird theories, it's what you have to unlearn. - Isaac Asimov

the blank slate (4.50 / 2) (#130)
by bumhat on Wed Nov 13, 2002 at 12:31:38 PM EST

(first post, hello everyone) Apologies if this has been suggested already, as I haven't had time to read all the comments posted. You might find it useful to read Steven Pinker's book "The Blank Slate" (which I've just finished), which sets out to refute the idea of the "blank slate" and restore balance to the debate about genetic determinism. In particular he discusses the heritability of personality, and deals with the standard objections to the idea of genetic determinism in a highly readable way. His discussions about politics, sociobiology, race and family are illuminating. It's not necessary to agree with everything he says (in particular his discussion of the arts echoes a rather standard and outmoded dismissal of "modern art") but it's highly recommended nonetheless.

I like lists. (none / 0) (#167)
by bjlhct on Thu Nov 14, 2002 at 06:05:50 PM EST

  1. The 'males tend to have more accidents' thing has evidence, statistics, and a mechanism, culture. What more are you looking for here?
  2. Bacteria were here long before people and will be around long after people are gone. There are more bacteria than people by number and weight. Who really won?
  3. A "hill climbing algorithm" as you think of it doesn't work so well when the hill to climb is as dynamic as ours (see the sketch after this list).
  4. Being a contrarian tends not to work as well with science (as it really is, not as you read about it in the McNews) as it does with other things. (Such as stocks.) I don't think your different philosophy does nearly enough to compensate for you not being an expert.
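
For point 3, a toy of my own construction: a plain hill climber chasing a peak that keeps drifting out from under it.

    import math, random

    def height(x, t):
        peak = 5.0 * math.sin(t / 10.0)   # the hill itself wanders
        return -(x - peak) ** 2

    x = 0.0
    for t in range(101):
        candidate = x + random.uniform(-0.5, 0.5)
        if height(candidate, t) > height(x, t):
            x = candidate                 # climb the current hill
        if t % 25 == 0:                   # climber vs. moving peak
            print(t, round(x, 2), round(5.0 * math.sin(t / 10.0), 2))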

*

kur0(or)5hin - drowning your sorrows in intellectualism

Too bad I couldn't -1 in time (5.00 / 1) (#172)
by Hobbes2100 on Thu Nov 14, 2002 at 09:14:45 PM EST

Given the extensive discussion of this in philosophical and scientific literature, I find it interesting that you don't cite one single thing.

So, in effect, you're arguing from 1) first principles, 2) personal experience, 3) your obvious authority, and 4) some weird combination of anecdotes and pseudo-citations of "well known things".

Welcome to a waste of time, boys and girls.

Regards,
Mark

P.S. Among other things, "The pervasive cultural belief in genetic determinism, however, most certainly is politically motivated" is not something I am willing to take based on your vast prior credibility -- neither its existence nor its political motivation.
Sed quis custodiet ipsos custodes? --Iuvenalis
But who will guard the guardians themselves? -- Juvenal

Welcome Day 3 Trolls (3.50 / 4) (#174)
by localroger on Thu Nov 14, 2002 at 10:09:25 PM EST

Welcome to my story! Glad you could drop by. As usual, you waited until the story was half-way down the FP and the real discussion pretty much over, except for a few die-hard threads. What a great time to drop a couple of trolly toplevel comments that, due to the default K5 reader setup, will be the first thing anybody sees for the rest of the history of the story!

What a convenience it is that you don't have to worry about the dozens of people who have commented here challenging you. Your comments stand like the statues of Stalin outside of Russian cities, making sure any future visitors are greeted first by your spotless and unchallenged take on the situation.

As opposed, of course, to those of us who let our ideas (however malformed or defective they may be) ride the toboggan of the voting queue and invite everyone in the world to pick over our bones.

Yes, the banquet hall is empty but there are always a few scraps of the buffet left for the vultures. Enjoy, my friends, and don't let the screen door hit your ass on the way out.

I can haz blog!
[ Parent ]

Can't even think of a witty title response (none / 0) (#181)
by Hobbes2100 on Sat Nov 16, 2002 at 02:02:09 PM EST

You know, I knew you had a high opinion of yourself (given the tone of your article, your dismissal of several communities of really smart people ... a PERSONAL project to develop strong AI ??? they're wrong because they couldn't whip out a 10 line perl script that demonstrated strong AI?).

However, I'm quite sorry that I don't sit clicking "reload" on K5 (and the submission queue) incessantly. I read your article when I got around to it. And, I commented on it when I saw it was a piece of trash (which, admittedly only took about T-10 seconds after I started to read it).

I'll try to be a little faster next time. Perhaps I'll have a script send me an email (with the text of your articles) when you post them.

You're posting in a public forum. Deal with it. You could always post another response saying "please view responses in order of Rating" (oh, you can do that? what a beautiful concept). Or, you could "view them from oldest to newest" (oh, you can do that too?).

I didn't think about the fact that my comment would be first. I didn't even think about that fact that it was several days (out of touch). But, you know what. Thanks for pointing it out. I disagree with you. My points stand. And, hell, hopefully some people will be saved a waste of their time.

Regards,
Mark

PS Perhaps if you feel so strongly about this problem, you should make a submission proposing that the default display mode be "random" (or even rating ranked). Gee, that would be constructive.
Sed quis custodiet ipsos custodes? --Iuvenalis
But who will guard the guardians themselves? -- Juvenal
[ Parent ]

[fair as opposed to snide] response (none / 0) (#182)
by localroger on Sat Nov 16, 2002 at 07:58:01 PM EST

You know, I knew you had a high opinion of yourself (given the tone of your article, your dismissal of several communities of really smart people ... a PERSONAL project to develop strong AI ??? they're wrong because they couldn't whip out a 10 line perl script that demonstrated strong AI?).

No, they're wrong because they're asking the wrong questions and making the wrong assumptions. This is not the first time a large community of very smart people has been entirely wrong about something fundamental. Look up the history of plate tectonics sometime, or the germ theory of disease and the use of hygienic practice in medicine.

Granted, such universal fuckups aren't the usual pattern in science, but when a field has been dead in the water for a certain amount of time, one's suspicions get aroused. And yes, I do think the problem is solvable on a personal level, not by a 10-line perl script but by a project that may take me a number of years. It has already taken seven, though not spent working continuously with funding and deadlines and all that.

However, I'm quite sorry that I don't sit clicking "reload" on K5 (and the submission queue) incessantly.

Alright, maybe that wasn't fair, but I have noticed this as a pattern when a controversial topic is discussed. It doesn't bother me that much, but nearly everything you bring up has been flogged to death already twenty comments down. You might want to try at least scanning the comment tree when it's this deep to see if your points have already been beaten into the ground.

I can haz blog!
[ Parent ]

Continuing with fairness (none / 0) (#184)
by Hobbes2100 on Sun Nov 17, 2002 at 04:48:52 PM EST

Ok,

I admit it. I was hasty, harsh, and, well, downright bitchy. I apologize.

Based on the conversation generated, this article did provide some people with good food for thought. It was not billed as a "definitive work" (though I tried to shoehorn it as such).

The points I mentioned were scattered around the earlier comments -- but don't you think it's nice to have a quick summary of flaws? *wink*

Look, I guess my real gripe is this:

You seem to have (and I'll grant that) a "good idea" buried in your head. You want to convey that to us. I'd like that idea conveyed to me.

So, what is the best way to convey it? I believe that writing carefully, doing background research, referencing ideas (both to determine their correctness and to determine your understanding of them), and placing your "new, good idea" into the context of what has come before, is the way to go.

In general, an article that is not written at the highest level (of excellence) will not get my 1) credibility and 2) attention. Why? Because the level of excellence is a heuristic as to whether or not I should spend my time reading it.

Granted, I do waste my time reading enough junk in a day. But, when the article is about something I know a bit about, I will skip the PC Magazine and go to the Knuth any day of the week (so to speak).

Regardless, we're generally here to exercise our thinking muscles. And you helped do that. Thank you.

Regards,
Mark
Sed quis custodiet ipsos custodes? --Iuvenalis
But who will guard the guardians themselves? -- Juvenal
[ Parent ]

...and in retrospect... (none / 0) (#185)
by localroger on Sun Nov 17, 2002 at 05:22:07 PM EST

...I probably should have sat on it a bit and made my argument stronger. But I haven't had a lot of free time lately, and the subject has come up in several unconnected contexts recently and it's just been getting on my nerves.

But, when the article is about something I know a bit about, I will skip the PC Magazine and go to the Knuth any day of the week (so to speak).

Well, of course; but my belief is that Knuth hasn't been born yet, or at least hasn't written The Art of Computer Programming yet. Actually Alan Turing is daydreaming about a hypothetical machine that can be used to make an interesting point in maths.

Let me make an analogy for you personally (since nobody else is likely to read it :-) illustrating my position much better. I'd have included this in the article if I'd given it more time to gel.

Suppose that the only computers any human being had ever seen were loaded Pentium IV multimedia machines prepackaged with Windows XP, dropped on our planet by a cadre of benevolent aliens. We would, of course, wonder how they work. We would eventually wonder how to fix them when they break (since the aliens dropped a limited number) and make improvements.

To this end we might have a lot of very smart people study the machines we have. I imagine a committee might spend several years reverse engineering the IDE interface. Theories would abound about exactly how data are stored on the spinning platters found inside dismantled hard drives. (Proving that the hard drive is the place where nonvolatile file storage occurs would be a non-trivial task, in the absence of all documentation.)

Eventually, with lifetimes of hard work by very talented people, we might reach a point where broken IDE cables could be resoldered, and even the occasional chip transplant could be performed. We might have performed some experiments demonstrating the possibility of magnetic data storage, on huge rotating drums instead of the compact disks supplied by the aliens, but at least proving the principle. We might test the bandwidth of signals all over the machine and form elaborate theories about how the data on each line are encoded and what they mean. Some lines (resets) are simple enough to understand, but others are a blazing mishmash of high-frequency data.

If all we have to do our analyses are, say, ca. 1930's vacuum tubes we could not even really reduce this data to bits, since we couldn't achieve the frequencies in regular use. The dominant theory for many years might be that data on lines in and out of the CPU are encoded in frequency bands (the "gestalt theory of CPU operation").

What would be missing from all of this is any clue about what makes a computer a computer, a thing that can actually be built with ca. 1930's technology if you put enough effort in. We would be blinded by the "need" for a hard drive, DVD player, and color monitor; all these frills obscure the real heart of the computer, which isn't even what we call a "CPU" today but a very small part of the CPU that could be a lot smaller if our machines weren't built by performance fetishists.

What I am looking for is the equivalent for animal brains -- that operating principle which you see in common between humans, other mammals, birds, and even some insects, and which you don't see in computers. Although the word is often used in a human-centric way I call it consciousness. I believe it is simple, fundamental, evolutionarily old, robust, and readily reproducible by machines we can build today. It is the elephant, or IP/ALU/memory/sequencer/microcode architecture that is being missed because everyone involved is obsessed with the tail, trunk, leg, reflex, frequency response, memory curve, and so on.

I certainly did not mean to insult people like iGrrrl who have invested themselves in the field, since the work is important and I may very well be wrong. I have even modified my position somewhat based on their responses to my older comment tree. Perhaps I have been a bit too contrarian at times.

But it really pisses me off that a field this incomplete and tentative is regularly mined for justifications that can be used in policymaking that almost always end up causing real misery and suffering. It is no sin to try and understand the working of things, but it is a sin to let your work be misused by others whose real interest is aggrandizing themselves at the expense of others who are already suffering enough. I always want to ask these people, when you find what you think is the gene for X, what do you think will be done about it? Because, humans being humans, something will be done. And so far the "solutions" implemented have not been pretty.

I can haz blog!
[ Parent ]

Here is a review by a real AI researcher (4.50 / 2) (#177)
by exa on Fri Nov 15, 2002 at 12:26:27 PM EST

Greetings!

I don't have time to read your posts in detail, but I've done my fair share of AI research back at the university, and I'll offer my first impression of your writings (sniff, sniff). I hope this is the feedback you are looking for.

I favor your skepticism of the views held by the majority of "researchers" in fields connected to Cognitive Sciences. My belief is that this is the right attitude towards the truth in the most complicated puzzle in science, especially when we are clouded by ignorance, confused/incompetent theories, experts with little knowledge, and false facts! It is very important to understand that there has been little progress on the fundamental questions in this area! (We have been able to identify and solve only the less important problems!)

I don't expect you or somebody else to come up with "the" answer but I want to make you feel more confident in your research since there have been many reputable scientists sharing your stance in certain ways.

The theory you outline unfortunately misses many of the technical details that would make it concrete enough for me, but it still addresses an array of philosophical issues that I find worthwhile. It has some similarity to my take on the computational view of mind (not much, though).

First off, a significant number of thinkers of the analytical tradition in philosophy do believe that computation is the primary mechanism for a "mind". So you are not alone in that respect against all sorts of wishy-washy vitalists and religious defenders of ancient beliefs. It's perfectly okay to analyze the function of the brain with mathematical methods.

Now, the "blank slate" hypothesis was in fact expressed beautifully in Turing's original "AI" paper published in 1950. He said that when a baby was born its brain was much like a "blank sheet of paper", later to be filled by learning.

Since then, we have tried to figure out what that "learning" process must be like. Your theory is that the connections are initially random and that the brain, in the end, assumes a fractal structure.

The first part, being initially random, actually makes sense wrt computer science. It is a permissible strategy to start with a random position in a complex search problem....

Marvin Minsky last said that he believed the brain to be operating with some sort of an "infinite" genetic algorithm. At first I disliked that idea (since I hate biologists who think the computers *should* work like the evolution of DNA), but then I understood exactly how Minsky thought the brain should be operating (I won't say it here!).

Now if any sort of GA is effective in learning, then it is perfectly reasonable that the initial connections are random! I repeat that again so you  can be proud!
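
To see why a random start is reasonable, here is a bare-bones GA on a toy problem (maximize the number of 1-bits): the initial population is pure noise, and the search converges anyway. No claim that this resembles Minsky's idea.

    import random

    def fitness(bits):
        return sum(bits)

    pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]
    for gen in range(60):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                    # keep the fitter half
        children = []
        for _ in range(10):
            a, b = random.sample(parents, 2)
            cut = random.randrange(32)
            child = a[:cut] + b[cut:]         # one-point crossover
            child[random.randrange(32)] ^= 1  # point mutation
            children.append(child)
        pop = parents + children
    print(fitness(pop[0]))                    # close to 32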

However, the "fractal" hypothesis should not even be stated without an accompanying mathematical theory that solves a known machine learning problem. Sorry but I have to say that! Otherwise you fall in the same realm as numerologists, astologists, etc.

Why am I saying such a harsh thing? Because it sounds as if you do not really know what a fractal is, or, even if you do, you have not studied how a parallel biological system might imitate a fractal! If you look at all the well-studied chaotic series resulting in fractals, you will see that they require the computation of a recurrence which cannot be done easily without a computer. And in your case the brain does not only compute it, but it also re-arranges its structure according to the output of that algorithm, *and* the resulting fractal is a computational device that is _intelligent_. Note that what you say requires theoretical proof and hard empirical evidence of such a _biologically plausible parallel fractal algorithm with a property of "learning"_. That's actually very difficult to prove. Besides, you seem to have been affected by the tendency of the 1970's to explain every "mysterious" thing by using a fractal. Now a sea-shell may be similar to a certain fractal, but we cannot readily assume that the brain's architecture is also a fractal!

Also note that a fractal is the mathematical concept of "self-similarity", NOT HIGH COMPLEXITY. Actually a fractal has a very small Kolmogorov complexity, since it can usually be generated with a few lines of ML code!
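
The Kolmogorov point in action -- the "infinitely detailed" Mandelbrot set from a description a few hundred bytes long (Python here rather than ML, but the byte count is the point):

    # A whole fractal from a tiny program: small description,
    # arbitrarily much detail if you crank up the resolution.
    for i in range(24):
        row = ''
        for j in range(64):
            c = complex(-2.0 + 2.8 * j / 64, -1.2 + 2.4 * i / 24)
            z, n = 0j, 0
            while abs(z) < 2.0 and n < 30:
                z, n = z * z + c, n + 1
            row += '#' if n == 30 else ' '
        print(row)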

Setting the fractal offering aside, I think your broad and well-presented skepticism will guide you to more refined results! In doing so, however, you must take the mathematics into account.

Good luck with your research!

__
exa a.k.a Eray Ozkural
There is no perfect circle.

GA & Complexity, musings, etc... (none / 0) (#179)
by Nascent0 on Sat Nov 16, 2002 at 12:42:25 AM EST

Skepticism of anything that is not complete should be a given for anyone (at least, anyone with a notion of truth)!

Set up a genetic algorithm to find the most general concept in the universe. If it finds the right one, it will find both the most complex thing and the simplest thing: not a fractal, but a natural complexity factor that resembles one, one that explains ever more as more details are added, evaluated, verified...

Complexity leads to simplicity: if you get it into the correct formation, it becomes information -- simple and repeatable. A tendency toward overblown complexity is almost a requirement for seeing the simple base patterns, but not always. Any human who understands the perfection of One is capable of seeing the general patterns and, with change, learning the details, but may never fully grasp how the unifying principle operates. They don't really have to; their whole brain operates on the principle, but it sure would give them an advantage when it comes to awareness and adaptability.

The thing that gets me about the whole "AI" research scene is just how much people over-generalize into bland and nullifying conceptualizations ("leaky abstractions", to reuse a similar phrase). For instance:

How certain are you of the uncertainty of quantum-level dynamics?
Do you believe in infinite, endless pursuits ("infinite infinities", or "endless universe(s)")?
Non-negating naughtism that leads to what?

In other words: since you are a finite, mortal being, what do you believe in, and how do you justify your beliefs and behaviors?

It is in those questions that you should be able to recognize the pointlessness of certain commonly held "facts", and how simple concepts can lead people in the wrong direction, with infinity as the only escape... It's far too typical of today's "science", which is mostly lost in infinities!

__
Every last one is in awe but non-ones will never know.

[ Parent ]

Great comment, thanks (none / 0) (#180)
by localroger on Sat Nov 16, 2002 at 08:36:43 AM EST

One quibble: you misunderstood why I brought fractals into it. I don't believe that fractals are the source of the complexity behind intelligence. I believe that consciousness requires a platform which can store and process data, and in biological systems that platform is constructed by algorithms similar to those used to build fractals. This is not the only way to do it, of course, but it's the one Nature uses.

Of course there are points where the brain's structure diverges from fractal form -- nothing in the universe can be a "real" fractal, since everything in it has limited resolution. (For that matter, nothing in the universe can be a "real" number either, but we find them highly useful for some reason.)

The fractal argument is how I explain how 7 gigabytes or so of instructions wire up 10^14 or so interconnects. There simply is no other way for it to happen consistent with the rest of science. Of course the result isn't really complex and doesn't contain any more information than the code; that is, in part, the point. You can't make a small change in that initial structure by making a small change in the code. The only kinds of changes you can make are very, very large, and usually we call them birth defects.
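
A rough sketch of what I mean, using an L-system to stand in for the biological growth program (the rewrite rule is purely illustrative, not anything from actual biology): the rule stays one line long while the structure it specifies grows exponentially, and there is no way to edit the rule that changes only one small part of the output.

    # One rewrite rule, applied repeatedly, specifies an exponentially
    # large branching structure -- small code, enormous wiring.
    RULE = {"F": "F[+F]F[-F]F"}      # illustrative branching rule

    def grow(axiom="F", depth=8):
        s = axiom
        for _ in range(depth):
            s = "".join(RULE.get(c, c) for c in s)
        return s

    structure = grow()
    print("rule size:", sum(len(k) + len(v) for k, v in RULE.items()))
    print("structure size:", len(structure))   # ~976,000 symbols from 12
    # Change one character of RULE and the *entire* structure changes;
    # there is no local edit, which is why changes to the code look like
    # birth defects rather than tweaks.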

On the other hand the structure thus formed is capable, somehow, of storing and processing vast quantities of information. The analogy with a RAM chip is exact; the chip itself is just a simple structure repeated over and over, perhaps a million times, but the information it can hold is much more complex than the instructions for building the chip in the first place.
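
The arithmetic behind the analogy is worth spelling out (the cell count below is an arbitrary illustration): the build instructions are a constant-sized loop, while the number of distinct states the finished array can hold grows as 2 to the number of cells.

    import math

    CELLS = 1000000                  # a million one-bit cells, say

    def build_chip(cells=CELLS):
        # The entire "fabrication program": repeat one trivial cell.
        return [0] * cells

    chip = build_chip()
    print("blueprint: one line, repeated", CELLS, "times")
    print("storable states: 2 **", CELLS,
          "(about 10 **", int(CELLS * math.log10(2)), ")")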

Once the platform is built I would not argue that this storage and processing capacity has anything to do with fractals, any more than the information stored in your computer's RAM has to do with microelectronic fabrication techniques. (Other elements of chaos theory do come into play when one analyzes how the system interacts with the world and what qualities it must possess for us to consider it "intelligent.")

I can haz blog!
[ Parent ]

The role of genetics (4.00 / 1) (#178)
by exa on Fri Nov 15, 2002 at 12:52:18 PM EST

Here is what I have to say about the role of genetics in the brain: it needs only the instructions to build a "learning machine".

The brain, however, is not the "optimal" learning machine. Several genetic traits come together to achieve that function, and so it has been over the course of evolution. Countless accidents in the book of life occurred before the first DNA instance with a clue of "thinking" was generated. That instance carried with it a means to achieve more complex behavior by taking advantage of the "outside world" to improve its control mechanisms. It was a major advance in the field of "living robotics" when evolution discovered that reflex-based control mechanisms weren't good enough.

Those events, in their most abstract form, are comprehensible to people outside the field of biology. However, one must not forget that the brain is _not_ above genetics. It is part of the incredibly complex machinery that is a human being. Therefore some behavior-genetics relations might actually be correct. We know of several chemical communication mechanisms in the brain, and it is not hard to see that those mechanisms might malfunction, or operate differently, in certain individuals of the same species. Therefore we must come to accept that some people might be more "aggressive" than others simply because of their bloodline.

That's a sad fact, but it is true nonetheless. We should see that genetics does play a role in people's lives, though not to the extent that simple-minded geneticist bigots think [*]. Your destiny is _not_ determined by your genes. We have witnessed many times the offspring of a fascist family who rejects the family's absurd principles and indulges in artistic leanings. That is _not_ mutation.

On the other hand, don't forget that evolution means "variety", and it is a fact of life that individuals are different.

If I were presenting your thesis, I would say that "two brains are more alike in their mental capacity than not" [+]. I would not advance a claim as strong as yours.

Regards,

[*] I would almost replace "simple-minded" with "less-evolved" but that would not be precise!

[+] I think that "more" is something like 90-95%, but note that this is in "mental capacity", an aspect of a normal human brain. I don't include disabled individuals or abnormal brains (there are many pathological cases) in this picture.
__
exa a.k.a Eray Ozkural
There is no perfect circle.
