Currents of Desire
I am a contrarian by nature. This does not mean I am stupid; it often happens
that when everyone believes a certain thing, they are right. But not always.
I am frankly very suspicious of ideas with universal currency, precisely
because when a belief is that common and pervasive nobody asks the hard questions.
Within living memory it was believed that geological change never happens at
a rapid pace, that it took millions of years for the dinosaurs to disappear,
that the idea of asteroids hitting the Earth and causing noticeable disturbance
was ridiculous, that the continents certainly did not move around, and that
anybody who thought otherwise was some kind of radical or fool.
The reasons for these supposedly scientific beliefs were rooted in politics.
The suggestion that catastrophic change could drop out of the sky at random
said as much about the stability of civilization and empire as it did about
the formation of fossils. The sea change in geology which has occurred since
1970 was made possible by political forces. The clues were always there; any
child can tell the continents fit together like a puzzle, and the K-T iridium
layer was a klaxon waiting for any listening ear. But the hard-liners had to
die or retire, and the commonly held metaphors had to soften up to the idea.
A culture shocked by Watergate and weary of Vietnam found the idea of catastrophic
change much easier to swallow than the culture that had stood fast against Hitler
and electrified the Tennessee Valley.
My own belief in the tabula rasa is not politically motivated. It is
an extension of a personal project to develop strong Artificial Intelligence.
Being a contrarian, I approached the problem from the assumption that, since
nobody is making any progress, everybody must be wrong. Nobody ever knows
where such assumptions will lead; if the Universe were constructed a bit
differently the few people who remember an odd chap named Einstein would
know only an oddball with this crazy obsession about the speed of light.
Not to say I am right on Einstein's scale -- the jury is still out -- but the
conjecture has been very fruitful. I will explain how shortly.
The pervasive cultural belief in genetic determinism, however, most certainly is
politically motivated. Any scientist who establishes (or claims to establish)
a genetic link to any abstract behavior can be guaranteed headlines and grant
money to do further research. It may be crass, but people who have a lot of
money and power like to be told that they deserve it. They also like to be told
that it's not worth wasting money on down-and-out losers because they can never
make anything of themselves, anyway. And because they have the money and the
power they hold the purse strings and have a lot of influence over who gets
research funding and who gets published. This atmosphere poisons the entire
field in ways that affect even honest researchers.
We've all seen the articles. Two people are pictured, twins ripped asunder at
an early age yet years later showing the same interests, same physique, same
talents, in one case literally wearing the same number of rings on the same
fingers.
Twin studies were pioneered and popularized by Sir Cyril Burt.
Burt was a brazen fraud, one of the most disgustingly successful in the history
of science; he made up test subjects, made up colleagues and published their
made-up letters lauding his own accomplishments in journals he edited, took
their decidedly non-made-up salaries for himself, and after being the toast of Britain,
being knighted, and dying peacefully at an advanced age he got away with it
all. He was safely dead when his frauds were uncovered.
Burt was a profoundly evil man who left millions of victims in his wake.
His research was used to advance policies of institutionalized racism,
test-score marginalization, denial of educational opportunities, and even
forced sterilization of the "undesirables" whose genetic inferiority
was made so "obvious."
While Burt was alive there were already doubters, but they dared not contradict
the grand old master of their field. Only when he was dead could they really
investigate; and when they did, the mood was one of shock. Nobody doubted
him fully enough to suspect the true extent of his fraud.
Nobody suspected, as one researcher found in the early 1980's, that every single twin
study ever done was similarly either fraudulent or so poorly conducted
as to have meaningless results.
As for the fabulous twins pictured in Time and Scientific American,
it turned out a lot of them had been separated at much more advanced ages than
"birth" -- in one case eleven years. And given that N% of the population goes
into any given field, given a first twin in that field there is always a N%
chance that the other twin will also drift that way by pure random chance. Given
several hundred million people in America alone, that leaves at least several
million pairs of twins. If none of them remotely resembled one another it would
be just as startling as if they all wore the same kind and number of rings.
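That back-of-envelope arithmetic is easy to run; the population and field numbers below are my own illustrative assumptions, not census data:

```python
# If N% of the population drifts into a given field, then for any twin
# pair the chance that BOTH land there by pure luck is (N%)^2.
# All the numbers here are round illustrative assumptions.

twin_pairs = 4_000_000   # assumed twin pairs in a population of hundreds of millions
base_rate = 0.01         # assumed N% = 1% of people in some given field

# Expected pairs where both twins end up in the field by chance alone:
expected_matches = twin_pairs * base_rate * base_rate
print(expected_matches)  # hundreds of "amazing" coincidences, for this one trait
```

Run the same arithmetic over hobbies, jobs, and jewelry habits and the coincidences multiply; it would be startling if none showed up.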
The question that floats to mind is, why do people keep doing twin studies?
Think about it. If every person who precedes you into that vast unknown has been
a charlatan, why do you follow? Surely you must understand why someone like me
hears the phrase "twin study" as "attempted fraud." Why set yourself the amazing
uphill task of proving to the world that you're not just another fraud
or incompetent? Surely there are easier and more satisfying ways to earn a
living.
But some people just want so badly to believe that they will keep throwing
money at the subject, will throw their own lives and credibility at it, because
the whole idea is so seductively simple. Nobody does a twin study to disprove
the idea of genetic determinism. If you don't really, really want to believe it,
the experiment wouldn't seem necessary. It would be like deliberately stabbing
yourself in the abdomen to prove that it causes peritonitis.
If you do a twin study, it's obvious that you want a certain result. And
it doesn't take much fraud or sloppiness to get that result. And you will
be rewarded if you get that result.
So I am not impressed by twin studies. Next topic.
A reflex is a pattern of activity which does not have to be learned. Humans
have reflexes. Aha, say the determinists, a smoking gun!
The extreme deterministic viewpoint (which nobody will admit to believing, unless
they are schmoozing up some obvious Nazi for grant money) is that consciousness
itself is just a big old wad of reflexes too complicated to reverse engineer,
but no more "learned" than breathing. The middle ground is that there are
"tendencies" which can be inherited, such as a "tendency" to violence or a
"tendency" to emotion over intellection or a "tendency" to like rings putting
pressure on your fingers. (Really, I did hear that on TV once.) The problem is
defining that magic word "tendency."
If you have the misfortune to be 24 years old and male, even with a perfect driving
record and every possible plus you will pay triple the auto insurance of a woman the
same age with four accidents. This is because of your male "tendency" to get in
accidents. It's horribly unfair to the individuals thus targeted, and a society
interested in fairness or justice wouldn't let insurance companies get away with
this crap. (And for anyone who suspects sour grapes, I've been out of that group
for longer than I like to think about. I get even better rates than the 24-YO girl
at this point in my life, but it's still wrong.)
Nobody will admit to believing that testicles automatically make you a hothead with
a lead foot, but the thing is they make you pay even if you have your testicles
under control. Somehow it always ends up that way.
Human reflexes are not very complicated. This is in direct contrast to some other
animals. Precocial birds and mammalian herbivores are born knowing how to walk.
They can walk within hours of birth, and they walk with the gait they will have
for their entire life. They cannot learn a different way to walk. This has effects.
The reason the race horse Secretariat's sperm is worth more than weapons-grade
plutonium by weight is that his genes carry the trait for a very efficient galloping
gait. A horse who inherits that trait might win the Triple Crown; one that doesn't
cannot be helped.
Humans and a lot of other mammals don't do it that way. We do have a
walking reflex; in fact, it's probably the same one that lets the quail and the
gazelle follow Mom around. But it doesn't work for us, not least of all because
we're bipedal. It also doesn't seem to work for a lot of other mammals, including
dogs and cats, which can kind of sort of walk at birth but with nothing like the
grace of a day-old gazelle. We must lose that inborn walking reflex
before we can learn to really walk, the way we will as adults. And we
can change our gait, both from moment to moment and by learning a new gait at an
advanced age. Hell, we can learn to dance.
This does not mean no reflexes at all are involved in walking. But the specific
"walking reflex" which any competent pediatrician can test reliably goes away
at the age of a few months, just like the Moro reflex. Human babies
display other precocial traits which we also lose before we learn the "adult" way
of doing things.
This seems to be a pattern in all human behavior. We are of course animals, and
we have the usual range of baggage associated with that. But our genius as a
species has been the ability to move tasks normally done by hard-wiring in the
brainstem into the cerebral cortex, our field-programmable tabula rasa.
Of course not everyone does this. I, for example, could not dance for you now
if my life depended on it. But other people can, and I'm sure I could if I
were motivated and put the effort in. We can make our feet do things
Nature never intended. Other parts of our body, too -- we can even learn
to control "autonomous" functions like our blood pressure. It's not easy and
few of us ever bother, but the ability is there.
What makes us so different from animals?
Hearing biologists use this as a defense against ideas like mine is a great
weird-out. Where is this sentiment when People for the Ethical
Treatment of Animals is organizing?
Seriously, it should be kind of obvious that there is some difference
between us and the rest of the order Mammalia. None of the others is busy
building skyscrapers, ocean liners, or atomic bombs. We consider it a triumph
of quiet genius if they manage to teach another of their kind to use a stick as
a tool to dig termites, while we use supercomputers to catalogue their success.
In more productive terms, we have not just a large cerebral cortex, but most likely
a cerebral cortex with a few extra instructions. Biologists have a love-hate
affair with this crowning achievement of human brain-growing; it takes something
like 20% of the energy we get from food just to keep it alive, so it must be doing
something for us (and must have been doing so long before we got to the level of
building skyscrapers and atomic bombs). There is an obvious 1:1 correspondence
between the one (1) species with this anomalously large cortex and the one (1)
species that builds the aforementioned skyscrapers and atom bombs. Yet when they
try to figure out how it works things keep coming up wonky.
In some parts of the brain, there is an obvious correspondence between location
and function, though the cortex is physically as homogeneous as a potato. Touch
an electric or chemical stimulator here, and you will get a memory of Mom, a
taste of apple pie, or a forty-five degree purple line segment in the upper right
hand corner of the visual field. Elsewhere a particular muscle will twitch,
or you are filled with unease about the future. In many places, though, there
is no obvious pattern. The area beneath the rear crown of the head seems to be
largely concerned with mapping 2-D visual inputs into a 3-D model of reality,
and no two people seem to map it the same.
Such probes into the cortex never elicit pain, and the other emotions they can
elicit are subdued. Emotions do not live in the cortex.
One of the epiphanies that got me started on this little project was an essay
by Stephen Jay Gould about some clever fellows researching bee-hunting wasps. One
experiment they did was meant to figure out how the wasps find the nests they
dig while they are out hunting bees; the humans waited for the wasp to leave then
moved all the landmarks around the hole a few inches in the same direction. The
wasp, upon arriving, landed a few inches in the same direction from her nest hole.
There ensued a period of confused searching, after which she finally found her nest;
then she spent several minutes hovering, obviously scanning the landscape as if
to make sure her memory would not fail her again.
What struck me about this account, even through all the behaviorist language, was
that the wasp had reacted exactly as a human would if a sufficiently godlike being
pulled a similar trick on one of us. The wasp had a model of the world in its,
uh, head, maybe a smaller and lower-resolution model than the one we make but
similar in principle. It used this model the way we use ours and reacted the way
we would if we found ours in conflict with reality. Consciousness, I realized,
was a very old thing not requiring anything as complicated as a human to express
it. It was a thing computers might already be able to do, if one could only sort out
the algorithm that made it happen.
Another researcher, IIRC Erich Harth, made a point of calling consciousness the "C-word"
because it was unspeakable in neurological circles; not being quantifiable or
measurable, it must not "really exist" in the sense that quasars and bacteria do.
That is changing a little -- the plate tectonic guys aren't being laughed at so
much -- but the attitude is still ascendant that consciousness is a murky, subjective,
unscientific thing that can't even be defined.
Consciousness (n.): the use of a certain class of "hill climbing algorithm" to evaluate
the state of the world according to some arbitrary set of criteria, to evaluate
how various manipulative devices might be brought to bear to optimize that
state, and to occasionally use those devices to attempt to change that state
based on these evaluations.
Now, that wasn't so hard, was it? A "hill climbing algorithm" is any answer to
the generic problem of finding the highest point in the local terrain without
the ability to see. That is, you can tell your altitude and whether you are
getting higher or lower as you move around, but you can't see any "peaks" other
than the one you're on; your task in this fog is to get to the highest peak as
fast as possible. This problem is used as a generic model for optimizing any
system with incomplete information. For example, you might have the controls of
a machine with ten knobs, none of which is labelled, and you must maximize its
throughput; the only information you have is what happens when you twiddle the
knobs. This is a perfect metaphor for what living organisms do with their
brains. Emotions are the feedback that let us know how well the machine is
performing, and we begin to see how they might be quantified.
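The knob-twiddling story above translates almost directly into code. This is a generic sketch of blind hill climbing, not any particular published algorithm; the "machine" and its peak are invented for the demo:

```python
import random

random.seed(0)  # make the demo repeatable

def hill_climb(throughput, knobs, steps=2000, step_size=0.1):
    """Blind hill climbing: nudge one unlabelled knob at a time and keep
    the change only if the machine's throughput goes up."""
    best = throughput(knobs)
    for _ in range(steps):
        i = random.randrange(len(knobs))         # pick a knob at random
        old = knobs[i]
        knobs[i] += random.uniform(-step_size, step_size)
        score = throughput(knobs)
        if score > best:
            best = score                         # higher ground: keep the twiddle
        else:
            knobs[i] = old                       # going downhill: undo it
    return best

# Hypothetical machine whose throughput peaks when every knob sits at 0.5.
machine = lambda knobs: -sum((k - 0.5) ** 2 for k in knobs)
best = hill_climb(machine, [random.random() for _ in range(10)])
```

The climber never sees the terrain, only its own altitude; that is the whole constraint the fog imposes.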
The particular "hill climbing algorithm" used by living things is probably
closely related to one patented by the aforementioned Dr. Harth, called "alopex,"
which is interesting in its use of random thermal noise to create certain favorable
characteristics (which also happen to resemble what real people and animals do an awful
lot). You can learn more by reading his seminal paper (not online, alas) in
Science, vol. 237, p. 184, "The Inversion of Sensory Processing by
Feedback Pathways: A Model of Visual Cognitive Functions." Or you can read his
more accessible and somewhat flawed popularization The Creative Loop.
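The flavor of the thing can be sketched in a few lines. This is my own loose reconstruction of the Alopex idea, not the patented algorithm; the parameter values and the toy response surface are guesses made for the demo:

```python
import math
import random

random.seed(2)  # deterministic demo

def alopex(response, x, delta=0.05, T=0.001, steps=3000):
    """Simplified sketch after Harth & Tzanakou: every parameter jumps
    +/-delta on every step, and thermal noise (temperature T) biases each
    jump toward repeating whatever direction correlated with a rise in
    the response."""
    dx = [random.choice((-delta, delta)) for _ in x]
    r_old = response(x)
    for _ in range(steps):
        for i in range(len(x)):
            x[i] += dx[i]
        r_new = response(x)
        dr = r_new - r_old
        r_old = r_new
        for i in range(len(x)):
            # Positive correlation -> probably keep direction; noise does the rest.
            p = 1.0 / (1.0 + math.exp(-(dx[i] * dr) / T))
            dx[i] = delta if random.random() < p else -delta
    return x

# Toy response surface peaking at the origin.
found = alopex(lambda v: -sum(t * t for t in v), [random.uniform(-1, 1) for _ in range(5)])
```

Note the role of the noise: it is not a nuisance to be filtered out but the thing that keeps the search from freezing, which is exactly the resemblance to living behavior that makes the algorithm interesting.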
If the wasps awakened my interest in duplicating consciousness, it was Harth who
convinced me it was possible. Here were actual algorithms, implemented and
tested on actual computers, making the same mistakes and over-generalizations
that people do -- without being told to, but because such misbehavior
arises naturally from flaws in the relatively simple algorithm that produces
such fabulously complicated results.
Brains and Computers
Another weird-out comparable to the PETA-friendly invocation of our relationship
to animals is the assertion that it is crazy, unscientific, or just plain wrong
to use information theory to describe what happens in the brain.
It is true that some people get a little over-enthusiastic with the metaphor,
but what is crazy and unscientific is thinking that anything in the Universe,
including a brain, somehow functions outside of a fundamental thing like
information theory. It's no less crazy than thinking that living things can't
possibly be made of mere molecules.
Brains do not have registers and Von Neumann binary addressed memory, but they
most certainly do process and store information. How they do this is
not even much of a mystery. Neurons compete for inputs which form repeatable
patterns, and they form synaptic connections with those input sources so they
can detect those patterns ever more efficiently in the future. Feedback
sources like emotions and activity level can encourage or inhibit this process.
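A minimal sketch of that story: a single unit strengthens synapses to inputs that keep arriving together, with normalization standing in for competition. The pattern, rates, and sizes here are invented for illustration:

```python
pattern = [1, 1, 1, 0, 0]        # a repeatable input pattern
weights = [0.1] * len(pattern)   # weak, undifferentiated synapses
rate = 0.05                      # learning rate (an assumption)

for _ in range(100):             # the pattern recurs again and again
    activity = sum(w * x for w, x in zip(weights, pattern))
    if activity > 0:             # the unit fires, so active synapses grow...
        weights = [w + rate * activity * x for w, x in zip(weights, pattern)]
        norm = sum(w * w for w in weights) ** 0.5
        weights = [w / norm for w in weights]   # ...while total strength is capped

# The synapses now mirror the pattern: strong where input was active,
# withered where it was silent.
```

The normalization is the "competition": since total synaptic strength is bounded, connections that don't participate in the pattern wither as the participating ones grow.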
It is interesting to note that one of the most intense emotions possible
may involve this learning process. A state of epiphany is reached
when one makes a great deal of new connections all at once, realizing how
entire patterns of thought fit together in a previously unsuspected grand
scheme; the feeling is more intense than an orgasm but, alas, also a lot
more rare. There is some research which leads me to think the neurotransmitter
dopamine figures in this process. It seems to be intimately involved in
the formation of new synaptic connections, and we are
wired to positively reinforce such experiences. Without such a mechanism
we might come to regard learning as a generally negative experience, what
with the reason we generally have to do it and all, and seek to avoid it.
The psychoactive drug cocaine works by blocking the brain's reuptake of
dopamine, flooding the synapses with it. The cocaine high may be an artificial epiphany, though
I'm not curious enough to try it and compare it with the natural experience.
The pattern detectors which form themselves in this way then serve as the pattern
library for a multi-level hill climbing
optimizer whose driving engine is not in the cortex at all, but in the thalamus.
By "multi-level" I mean that it functions in stages of abstraction, starting
out with raw inputs and progressing away from the parts of the thalamus and
cortex where the inputs are wired to areas which code for patterns of higher
abstraction and less detail. Each layer of abstraction has its own optimizer,
using the lower one as input and providing an output to the next one up the
chain.
Cutting across the top of your head is a line of special cortical areas that
are hard-wired to inputs (in the back) and outputs (in the front). These
back-to-back I/O regions are strongly associated with parts of the body in
a consistent and detailed mapping. These are where the lowest abstraction
patterns are stored. Working toward the back of the head and down the sides
we find less consistent and more abstract maps, until we reach the muddle of
the parietal regions. The visual areas are mapped separately, to the very
lower rear of the cortex, and work upward until they reach this same parietal
muddle.
Working from the outputs forward, we reach "staging" areas which light up
in PET scans when we "rehearse" a movement but before we actually perform
it; then again higher levels of abstraction representing increasingly complex
movements, until we reach an ill-defined muddle between this mess and the
prefrontal cortex, which is another kind of muddle entirely.
If one examines the interareal wiring of the cortex, one finds that areas
of similar abstraction (according to the plan I've just described) are wired
together between the back (input) part of the cortex and the front (output)
part. This is consistent even in the areas we can't map because they are
muddles, if one works just by distance from the areas we do understand.
What seems obvious enough to me is this: As the hill-climbing algorithms
connected to the back of the brain evaluate our position in life by firing
pattern detectors which correspond to things that are going on, those in
the front are evaluating how to modify it. In the back information moves
from areas of low abstraction to high; in the front it moves from areas of
high abstraction to low. At each level it is evaluated, and if the net
effect seems to be a gain based on several competing scales it's passed
on to the next less abstract output. Finally, if the system finds an idea
good enough to spend energy implementing, it reaches the motor homunculus
and the relevant motions -- now broken down into specific muscle movements --
get sent down to the brainstem, where they are sharpened by more reflexive
modifiers and eventually expressed as bodily movements.
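The cartoon version of that front-of-brain descent might look like this. Everything here is schematic; the function, the levels, and the gain test are mine, not neuroscience:

```python
def descend(plan, elaborators, gain):
    """Walk a plan down through levels of abstraction.  Each level
    elaborates the plan into something more concrete, and forwards it
    only if the estimated gain still beats the energy cost."""
    for elaborate in elaborators:
        plan = elaborate(plan)
        if gain(plan) <= 0:
            return None          # judged not worth the energy; the idea dies here
    return plan                  # reaches the "motor homunculus"

# Toy levels: an abstract intention elaborated into ever more concrete terms.
levels = [lambda p: p + " -> reach", lambda p: p + " -> arm muscles"]
executed = descend("grab apple", levels, gain=lambda p: 1)   # always worth it
vetoed = descend("grab cactus", levels, gain=lambda p: -1)   # never worth it
```

The point of the sketch is the veto at every level: most candidate actions never survive all the way down to muscle movements.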
This model doesn't explain everything, but it explains a hell of a lot.
My problem at the moment is nailing down the mechanism by which the pattern
detectors are programmed; it must be simple enough for cells to do it
(and individual cells are stoooopid) and it must be self-regulating
for the level of chaos we exhibit in everyday life. Harth's own alopex
algorithm, requiring careful adjustment of feedback parameters, fails on
this point, but it's a great starting point.
How Computers Work (according to biologists)
The "obvious" paragraph above represents a thing you will never find in
any serious biology text: An explanation, however tentative, of how
the system starts with neurons firing and ends up doing what humans and
By comparison, suppose you read the following explanation of how computers
work:
Computers are made of transistors, which allow small amounts
of electricity to switch larger amounts. Transistors can be grouped to
perform logical functions such as gates and flip-flops. Through the magic
of modern technology it is possible to put a billion transistors on a
silicon wafer. When you put enough transistors on a chip and wire them
just right, you get a computer.
Someone who had never known a computer simpler than their Win98 box might
not look askance at that last sentence, but fortunately we do know that
mere humans built the first computers, that you do not need a billion
transistors to do it, and most importantly that only a few more sentences
are needed to flesh out the essential details about what makes a computer
work. It's true that a Pentium IV is complicated, but the essential thing
that makes it a computer isn't, and the quote above is structured to hide
an ignorance that is not really forgivable by one who does have a clue.
The Plan and the Cathedral
However it really works, the brain contains on the order of 10^14 interconnections.
Those connections actually get made somehow. They are real physical things
that could be mapped. At some point they do not exist, and then as we grow
it turns out they do; and unless they are wired totally at random something
has to direct them.
For genetic determinists, that guiding principle is the genetic code, all of
a gigabyte or so of instruction on how to grow hair, how to build a pancreas, how to
metabolize fat, how to heal scrapes and make blood clot and somewhere in all
of that how to wire up the brain. This leaves us with a serious case of eight
pounds of shit in a five pound bag, as the genetic code -- even if it were
entirely devoted to brain-growing -- is nowhere near as complicated as the
brain which grows under its direction. By something like five orders of
magnitude, at least.
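The orders-of-magnitude claim is easy to check on the back of an envelope; all figures below are round estimates:

```python
import math

# Human genome: ~3.2 billion base pairs at 2 bits apiece.
genome_bytes = 3.2e9 * 2 / 8    # roughly 0.8 gigabytes

# Brain wiring: ~10^14 connections, charging just one byte to specify
# each -- an absurdly generous lower bound.
wiring_bytes = 1e14 * 1

shortfall = math.log10(wiring_bytes / genome_bytes)
print(round(shortfall))         # about 5 orders of magnitude short
```

And that is before subtracting the part of the genome spent on pancreases, hair, and blood clotting.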
There is a great deal of structure in the brain, most of which we share with
other animals that do not share our skyscraper and atom-bomb-making prowess.
The crowning achievement of our humanhood, that massive cortex which we alone
possess, is maddeningly homogeneous under the microscope, except for a very
slight thickening at the visual area V1. While it lights up in spectacular
patterns under a PET scanner depending on what we are doing, the structure
itself seems no more specialized than that of dynamic RAM. (Ooooh, a misplaced
computer metaphor!)
Also, apart from an extra layer or two and its greater surface area, our cortex
is not noticeably different from that of cats, dogs, and even birds.
While the microstructure seems almost defiantly unspecialized, the areas of
the cortical sheet are wired together in a specific pattern both through the
sheet itself, and via interareal nerve bundles. This wiring is obviously
controlled by the genome, and is the same in everybody who is not massively
deformed.
Within the last 30 years or so we have acquired a model for how systems like
living things can turn relatively simple inputs into outputs of great complexity;
it is called chaos theory and its most singular expression is the fractal, a
surprisingly complex (often beautiful) pattern formed by an unexpectedly simple
expression. It is obvious that the brain (the entire body, in fact, and possibly
the entire Universe) is a fractal. This is how so little genome grows so much
and such complicated brain. It is really the only explanation science has to
offer, if one does not want to start invoking pixies and elves, so we had better
pay some attention to it.
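A one-line chaotic generator shows the point; the logistic map is my stand-in example, while the essay's claim is the general one:

```python
# The logistic map x -> r*x*(1-x) is about the simplest chaotic generator
# there is.  Nudge the generative parameter r by one part in ten million
# and the trajectories soon have nothing to do with one another.

def trajectory(r, x=0.5, steps=60):
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = trajectory(3.9)            # the original "genome"
b = trajectory(3.9000001)      # the point mutation
divergence = max(abs(p - q) for p, q in zip(a, b))
# divergence is of order 1 -- not a slightly different result at all
```

There is no knob setting that yields a "slightly different" output; the tiniest change to the rule is amplified until the two products share nothing but their family resemblance.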
Without going into a lot of detail, the important thing about fractals is that
it is not possible to make a small change in one. If you change the generative
algorithm even slightly, you will not get a slightly different fractal; you will
get a massively and consistently different result. This is what happens in
human deformities like Down's Syndrome. This is the smallest kind of point
mutation possible in a fractal system; the amazing thing is that Down's victims
can survive at all. There are other similar errors which are not so fortunate.
Some grow very thin cortexes that obviously don't process right; some grow
very smooth cortexes with too little surface area and probably not enough areas.
People who have these defects are amazingly consistent, just as normal people are
in the convolutions of our cortexes, which are an emergent property like protein
folding. If you make a small change in the code, you don't throw a monkey wrench
into the works, you throw a nuke.
Back to the tendency tendency
A very good point was made in the last discussion about smaller damage, like
ion channels that don't form right, distorting feedback pathways. Let's consider
these changes that don't affect the basic wiring, but may affect how it programs
itself.
I'm going to go out on a limb here because to my limited form of common sense
nothing else makes any damn sense, and say that there is such a thing as a
"properly working brain." That is a brain which is properly nourished, free
from genetic or teratogenic formative defects, with all the chemical messenger
systems functioning nominally.
It is possible for that brain to fuck up, and for reasons that are totally out
of our control.
Since the cortex is not -- can't possibly be -- programmed by the genome with
its inadequate array of instructions, it must acquire its fine
programming through experience. This can only occur as the relevant areas are
myelinated, a process that happens only after birth because of the logistical
problem of getting our fat heads out of Mom without killing her. As it happens
we can chart baby's progress easily; the eyes learn to focus and track, and
later the hands learn to grasp what the eye sees. Most parents notice when
baby figures out that objects hidden from view still exist. Later on we add
language and the beginnings of reason, the R-word animals so dramatically lack.
This programming is fraught with peril. At every level of abstraction we risk
forming an incomplete or skewed set of symbols, which will in turn affect the
patterns that can be coded at the next higher level with the limited inputs
that will be forwarded. In extreme cases people may not reach what we consider
a standard equilibrium with the world; they may be withdrawn or excessively
aggressive.
Abuse and neglect increase the chances of this happening. A full range of
experience decreases it, but I don't think there are ever any guarantees.
Now given this "properly functioning brain," there are a range of insults one
can throw at the system which will increase its chances of misprogramming.
Certain genetic defects figure in here, because they may interfere with
emotional feedback paths or directly inhibit our ability to maintain
electrical activity long enough to form a detectable pattern, or to form connections
when the activity is there to stimulate it. These failures will be by and
large indistinguishable from insults such as child abuse that cause the same
kind of misprogramming.
It might be possible to screen for some of these genetic insults. This
might even have some positive benefits, though the more likely result is
that you are not only 24 and male, you have an acetaldehyde metabolism
defect which makes your insurance rates even higher, and there's no cure.
Does this mean that alcoholism is genetically determined?
No, it doesn't. It's just a red herring. Alcoholism is a pattern which
may or may not be encouraged by certain knocks we take on the road of
life; but ultimately it's a thing that can happen to anybody. Just as
anybody might turn out to have the willpower or alternate interests to
make it irrelevant or unlikely.
The danger here is not just the one of creating social injustice, but of missing
the real and possible explanations because of focusing on some tangential
cofactor. I've given one explanation of how consciousness works here, though
it is incomplete and very unsubstantiated; and it is one more explanation
than I have ever gotten from any source outside of myself, anywhere.
If you didn't know how cars worked and spent your life cataloguing the various
colors of drip which appeared under them, and meticulously cross-referencing
them with symptoms of ultimate car failure, you would be able to draw some
very general predictive rules about fluid drips and failure. What you would
never do is figure out how a car works. You must do that by working from the
other direction -- why is flammable fuel required? You might correctly
identify this as the source of the heat, noise, and forward motion. You
would have to speculate on mechanisms by which fire could be used to make
propulsion. You would have a lot of clues. Your mechanism has to be loud,
has to vibrate, has to do certain things under conditions of load and idling.
Working from that direction it probably wouldn't take you long to reinvent
the internal combustion engine. But you have to give up on the fucking leaks.
They're a side issue. While we concentrate on taking all the cars with red
fluid leaks off the road, nobody is figuring out how the drive train works so
we can make the cars with the red fluid leaks safe or identify their problems
at a stage when they can still be fixed.
Humans and Animals
One last thought. What is it that gives humans our skyscraper and atom-bomb
making prowess? I think the answer to that is a few extra layers of abstraction
in the prefrontal cortex, in a place where no other animal has them. Adding a
few layers like this is exactly the sort of thing one would expect a point
mutation to do; it's the opposite of the missing instructions that give us
things like Down's Syndrome. It is not just brain mass but this extra depth
of abstraction which allows us to form plans involving the far future and
distant dreams, to
plan and execute vast enterprises across a span of lifetimes.
One might wonder what humans would be like without our prefrontal cortex. But
being human we don't have to just wonder; ever clever with our little tools, we