Is the Brain Equivalent to a Turing Machine?

By ZeuS572 in Technology
Mon Mar 17, 2003 at 02:00:56 PM EST
Tags: Science

From NewScientist.com:

"The world's first brain prosthesis - an artificial hippocampus - is about to be tested in California. Unlike devices like cochlear implants, which merely stimulate brain activity, this silicon chip implant will perform the same processes as the damaged part of the brain it is replacing.
The prosthesis will first be tested on tissue from rats' brains, and then on live animals. If all goes well, it will then be tested as a way to help people who have suffered brain damage due to stroke, epilepsy or Alzheimer's disease."



This brings up the key question that this article will focus on: is the brain a Turing-complete machine?

Description of a Turing Machine

Alan Turing was a mathematician and logician of the 20th century. He is often considered the pioneer of computer science as we know it today, and he helped the British Government crack German codes during World War II. His world-changing contribution to society was the Turing machine.

For a history of Alan Turing, the man, see: http://www.turing.org.uk/turing/index.html

Introduced by Alan Turing in 1936, a Turing machine is a very simple kind of computer whose operations are limited to reading and writing symbols on a tape, or moving along the tape to the left or right. This tape is divided into squares, any square of which may contain a symbol from a finite alphabet, with the restriction that there can be only finitely many non-blank squares on the tape. The Turing machine can only read or write on one of these squares at once--the square located directly below its "read/write" head.

In addition to a finite alphabet, the Turing machine has a finite set of states. The machine is guided by a table of instructions that defines the following: given the current state and the current symbol, write a new symbol, move to a new state, and move the head either left or right. These instructions make up the machine's state table.
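To make the state table concrete, here is a minimal Turing machine simulator, sketched in Python. The machine, the blank symbol "_", and the sample "flipper" program are invented for illustration:

    # A minimal Turing machine simulator (illustrative sketch).
    # The table maps (state, symbol) -> (new state, new symbol, head move).
    def run_turing_machine(table, tape, state="start", head=0, max_steps=1000):
        tape = dict(enumerate(tape))            # sparse tape; unset squares are blank
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, "_")        # "_" is the blank symbol
            state, tape[head], move = table[(state, symbol)]
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    # Example program: flip every 0 to 1 and vice versa, halting at the first blank.
    flipper = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt",  "_", "R"),
    }
    print(run_turing_machine(flipper, "0110"))  # -> "1001_"

The table is the entire program: change the table and you change what the machine computes.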

Turing machines are important because they capture what computers can and cannot do: any deterministic algorithm can be mapped to a Turing machine algorithm. If a system is described as "Turing complete," it can simulate any Turing machine, and can therefore run any such algorithm.

The Implications

If our brain or a self-contained part of the brain could be mapped to a Turing machine, what would that signify? There are some significant developments that could come out of such a discovery.

We could start with the creation of Artificial Intelligence (artificial only in the sense that it's not in a living human; it would otherwise be a copy of intelligence from a real brain). If we could map the operations of the brain to a Turing machine, then we could replicate this behavior in a machine. We would have a non-organic human mind, living in a computer. Not only would it be fascinating to have such a mind, but it could serve several useful purposes.

For example, brains could be sustained and utilized with fewer resources than traditional humans need in order to function; we could have three brains operating for the price of one human. Secondly, we could modify certain functions and see how the overall model of the human brain was affected. The discoveries coming from this research could lead to medical advances that aid real humans. For example, we could replicate an Alzheimer's brain, keep modifying it until it no longer suffered from the ailment, and then replicate those steps in humans actually suffering from the disease. We could use these models to find enhancements that could be made to the human brain (beyond simply curing diseases). And finally, we could find out what a human could be like with more processing power: we could speed up a human brain, find out how much more it can process, and see whether that changes its personality or inherent nature.

Besides the discoveries in computation, we could also learn to modify the state tables of humans. Perhaps we could implant memories in humans (a la the movie "Total Recall"). Perhaps every child would be implanted with the knowledge that current adults have, and would start off with this base knowledge at an early age.

Clearly, the possibilities are endless (within the limits of human imagination!).

The Counter Argument - Non-Computational States

A showstopper argument often given is that the brain has states that cannot be finitely measured, or that follow patterns that cannot be described mathematically. If this were the case, we would not be able to map the brain to a Turing machine, and thus would not be able to map it to a computer in the way that we currently create algorithms and programs.

The existence of true statements that cannot be proved within a formal mathematical system was first demonstrated by Kurt Gödel in his incompleteness theorem, written in 'response' to Bertrand Russell's Principia Mathematica.

The well-known author Douglas Hofstadter writes in his beautiful book Godel, Escher, Bach: An Eternal Golden Braid that these brain patterns arise from self-referencing loops (what he calls "strange loops"). One example he gives of a self-referencing statement that creates a loop is to evaluate whether the following meaningful statement (it could be an axiom) is true or false: "This statement is false." The statement simply makes an assertion; but because of its self-referencing nature, it creates a state which is neither true nor false (or possibly both true and false). If it were a true statement, it would be false, since it proclaims itself to be false (and what it proclaims is true); if it is false, then it must be true, since it then tells the truth about itself.
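The loop is easy to see if you try to evaluate the statement mechanically. In a purely illustrative Python sketch, where the statement's truth value is defined as the negation of itself, the evaluation never settles:

    # Naive mechanical evaluation of "This statement is false":
    # its truth value is the negation of its own truth value.
    def liar():
        return not liar()    # infinite regress (a RecursionError in practice)

Asking for liar() forces an endless regress, which is the computational face of a statement that is neither true nor false.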

Hofstadter believes that consciousness is composed of such self-referencing loops.

Also, consider this: if the brain modifies itself, then we might never be able to map it to a Turing machine, because the instruction table would keep changing. The self-modification aspect would be hard to replicate.

What does the reported discovery promise?

Though this discovery in itself is not enough to say whether the brain is or is not a Turing machine, it does bring us one step closer to a mapping between the brain and a Turing machine (albeit a very small step).

Though the researchers couldn't understand how the hippocampus worked, they could copy it (i.e. mimic its state table and instruction set). It's important to keep this in perspective: they haven't yet tested the device fully, but they are closer to this than anyone has been before. That's what makes this quite exciting. The road to replicating AI is much longer: we would have to finish successfully replicating the hippocampus (which is the easiest part), and then move on to the more complicated parts. Presumably, if we can find a method to replicate without understanding, we should be able to use this method on any part of the brain.
"No one understands how the hippocampus encodes information. So the team simply copied its behaviour. Slices of rat hippocampus were stimulated with electrical signals, millions of times over, until they could be sure which electrical input produces a corresponding output. Putting the information from various slices together gave the team a mathematical model of the entire hippocampus."

Also of interest is the following fact: the brain processes information in parallel, whereas a Turing machine is ultimately serial. To mimic a parallel machine on a serial one, you have to pay a substantial cost in extra states and steps (though the slowdown is polynomial rather than exponential). And yet they did it with computer technology available today!
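For a feel of how a serial machine mimics parallel hardware, the standard trick is time-slicing: on each tick, update every simulated unit in turn, computing all next states from the current states so the units appear to fire simultaneously. A toy Python sketch (the threshold units and connection matrix are invented):

    # Serial simulation of N "parallel" units: one loop pass per tick.
    def tick(states, weights):
        # Every next state is computed from the *current* states before
        # any update is applied, so all units appear to fire at once.
        return [sum(w * s for w, s in zip(row, states)) > 0.5
                for row in weights]

    states = [True, False, False, True]
    weights = [[0.3, 0.2, 0.1, 0.4],       # toy connection matrix
               [0.6, 0.0, 0.0, 0.0],
               [0.1, 0.1, 0.1, 0.1],
               [0.0, 0.9, 0.0, 0.0]]
    for _ in range(3):
        states = tick(states, weights)
    print(states)                          # -> [True, False, False, True]

The price of the trick is time: N units cost N sequential updates per tick.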

What about consciousness? The researchers haven't yet gotten that far. The experiment currently focuses on the hippocampus. But they will try:
"The researchers developing the brain prosthesis see it as a test case. `If you can't do it with the hippocampus you can't do it with anything,' says team leader Theodore Berger of the University of Southern California in Los Angeles. The hippocampus is the most ordered and structured part of the brain, and one of the most studied. Importantly, it is also relatively easy to test its function."

If they can do it to the hippocampus without understanding its functions, who knows what will happen with deeper levels!

Part 2: Ethics

What does this mean with regard to the value of human life? If we can be easily replicated, copied, or replaced, what will be left of our inherent humanness?

Would you support behavior modification?
Would you support implanted memories? (a la "Total Recall")

One other important ethical point is that of the "life" of the machine created via replication. Even if the machine is not living in the traditional sense, it is still aware of itself and has the functions of a human. Would it be ethical to modify its behavior? Could the machine feel pain as we do? If we create strange states for it, who experiences those states?

And if you argue for protecting this machine, then what about animal research today? Do you support that?

Some More Reading

About Turing Machines:
http://www.ams.org/new-in-math/cover/turing.html
http://www-csli.stanford.edu/hp/Turing1.html
http://grail.cba.csuohio.edu/~somos/bb.html
The Myth of the Turing Machine
An Alternative View of CS: Quantum Computing

AI- Is the Brain a Turing Machine?
A Google Query
Godel, Escher, Bach: An Eternal Golden Braid
The Singularity Institute
Creating Friendly AI

Is the Brain Equivalent to a Turing Machine? | 104 comments (77 topical, 27 editorial, 0 hidden)
Aren't you really asking..... (4.50 / 6) (#2)
by Blarney on Sat Mar 15, 2003 at 02:41:25 AM EST

If the brain relied entirely on cells which performed certain operations in a manner entirely dependent upon sensory input and signals from other cells, it would be simulable by a Turing Machine. The Newtonian deterministic billiard-ball universe is Turing Complete.

The only way that the brain could not be Turing complete would be if it were a chaotic system, influenced by even the smallest perturbations, such as those quantum mechanics would randomly produce. In other words, there would have to be randomness feeding our thought processes. But can we really call quantum indeterministic influences random? Perhaps they are - or perhaps they're simply part of a bigger system of which our observable Universe is only a small part. So if the brain is not Turing complete, we are guided by subatomic signals that cannot be characterized by science - maybe this is what believers would call a soul.

Now, just because the Newtonian deterministic universe is Turing complete doesn't mean you can get a computer big enough to actually simulate it.....

I will elaborate on this (1.00 / 1) (#4)
by ZeuS572 on Sat Mar 15, 2003 at 02:48:29 AM EST

You're right, I should talk about this. I previously had a section about the universe being deterministic, but it wasn't thorough enough.

[ Parent ]
exactly (none / 0) (#40)
by suntzu on Mon Mar 17, 2003 at 05:16:19 AM EST

you'd need a computer that could contain as much information as the universe to simulate the universe. so essentially, you'd need an outside universe, with the same amount of (or more) matter to perfectly simulate the universe.

approximations are another issue of course.

[ Parent ]

Chaos and QM have nothing to do with each other (4.00 / 1) (#68)
by duckl07 on Mon Mar 17, 2003 at 05:00:03 PM EST

While I won't dispute that there may (or may not) be some non-computable functions in the brain, you are mistaken to think that QM is required for chaotic behavior. Newton's laws of motion are perfectly capable of producing chaotic systems (e.g., a tilted spinning top). So are Turing machines (e.g., the Mandelbrot set). A chaotic system is not necessarily non-computable. And while a given QM system may be chaotic, the basic equations are quite linear, even if describing a probability distribution.

Someone else here said it best. We need to be able to even understand the Turing machine stuff in the brain prior to really being able to know if the brain is in fact a finite Turing machine. I personally feel that this is still a purely philosophical question, and will be until we gain some more insight into the brain's functioning. Not that that is a bad thing.

[ Parent ]

Anandamide (3.66 / 3) (#7)
by artsygeek on Sat Mar 15, 2003 at 04:27:17 AM EST

I'm just wondering if there's an electronic

Anandamide essentially acts as a filter, blocking the memorization of "extraneous" information.  It keeps us from remembering every little extraneous detail.  To quote the co-discoverer of anandamide Raphael Mechoulam "Would you want to remember every face you saw on the subway?"

One correction (3.00 / 1) (#8)
by artsygeek on Sat Mar 15, 2003 at 04:29:12 AM EST

I meant to say: "electronic analogue to anandamide." but I accidentally sent it off before I could correct my error.

[ Parent ]
you did that on purpose. (nt) (3.50 / 2) (#70)
by Wah on Mon Mar 17, 2003 at 06:38:44 PM EST

;-)
--
YAR
[ Parent ]
So what happens if (2.00 / 1) (#16)
by Ward57 on Sat Mar 15, 2003 at 02:48:38 PM EST

one day, we can replace the entire brain with silicon without changing the personality of the human being involved? I'm sure I've read some science fiction on exactly this subject. *They* claimed the result was immortality.

Heat (3.50 / 2) (#25)
by godix on Sun Mar 16, 2003 at 05:30:51 AM EST

Considering how hot processors run I hope there's some serious cooling going on then. I imagine a nation of people with fans in their foreheads, and god help them if the fan bearings ever go.

A question I do have, if we ever replace our entire brain with silicon, will it be required by law to accept interference like current electronics is?


Love - A temporary insanity curable by marriage.
- Ambrose Bierce, The Devil's Dictionary
[ Parent ]

Not all electronics... (none / 0) (#76)
by vectro on Mon Mar 17, 2003 at 11:51:27 PM EST

... are required to accept interference; only Class B computing devices are. There are other classes of computing devices, used in medical, military, and other life-or-death situations, which are required to operate under any reasonable interference.

“The problem with that definition is just that it's bullshit.” -- localroger
[ Parent ]
Physical immortality is impossible (4.00 / 1) (#94)
by nusuth on Tue Mar 18, 2003 at 05:04:09 PM EST

But I might not care after a trillion trillion years. The universe itself will someday be very cold (likely) or very hot (unlikely). In either case it won't be possible to transfer energy in a controlled manner, which is the crux of any conceivable machine.


[ Parent ]
-1: Bored with this topic (1.00 / 3) (#23)
by srn on Sun Mar 16, 2003 at 04:47:19 AM EST

There was good discussion on the last attempt to get an article up. Enough is enough.

Some scattered comments (4.90 / 11) (#27)
by jacob on Sun Mar 16, 2003 at 08:58:52 AM EST

First, self-modifying Turing machines have the same computational power as regular Turing machines. (Just consider what would happen to complexity theory if this weren't true!)

Second, your implications section is pure pie in the sky. People have been working on writing programs to simulate cognitive functions for 60 years now, and the results are depressingly bad -- the upshot is that we've got no idea how even very small subparts of the brain work. This has nothing to do with Turing-completeness: even if the brain computes functions a Turing machine can't, there are still other functions it computes that a Turing machine can, and we don't have those ones down yet; and the belief that there exists a Turing machine that somehow performs cognitive functions hasn't made things any easier.

Another point to be made is that Turing-machines are good for reasoning about decidability of decision problems (questions with yes or no answers) but not as good for reasoning about other kinds of problems. Brains don't get all their input at once, for example, and have the ability to choose what inputs they receive (you can turn your head, smell, close your eyes, touch things, and so on) and these capabilities don't necessarily have good representations in Turing-machine world. So if you want to meaningfully equate brains and Turing-machines, you need to describe a way of taking brain problems and posing them as decision problems.


--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

BrainFuck - a Turing-complete language (4.57 / 7) (#28)
by bdesham on Sun Mar 16, 2003 at 02:19:42 PM EST

A good example of a Turing-complete programming language is BrainFuck. Although pretty much any other language has a much larger set of features, BF shows how you have to do things on a "pure" Turing machine -- that is, one with only eight commands ([, ], <, >, ., ,, +, and -).

For example, adding two values together -- an operation usually done with a single "+" -- is accomplished with the command [<+>-]<. Multiplication? Take a look at >[-]>[-]<< <[>[>+>+<<-] >[<+>-] <-] >>>[<<<+>>>-]<<< . Heck, even the "Hello World" program is ungodly long. The prime number checker, then, can give you brain problems if you try to figure it out.

Sure, BF ain't pretty, but it'll give you a new appreciation of high-level programming languages and how much they simplify programming compared to a bare Turing machine (which could, theoretically, do anything that, say, C could do).
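As it happens, a BF interpreter is itself only a page of code. A minimal sketch in Python (assuming well-matched brackets; non-command characters are ignored, as BF specifies):

    def brainfuck(code, inp=""):
        tape, ptr, out, i, inp = [0] * 30000, 0, [], 0, list(inp)
        jump, stack = {}, []
        for pos, c in enumerate(code):          # precompute matching brackets
            if c == "[":
                stack.append(pos)
            elif c == "]":
                j = stack.pop()
                jump[j], jump[pos] = pos, j
        while i < len(code):
            c = code[i]
            if c == ">": ptr += 1
            elif c == "<": ptr -= 1
            elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
            elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
            elif c == ".": out.append(chr(tape[ptr]))
            elif c == ",": tape[ptr] = ord(inp.pop(0)) if inp else 0
            elif c == "[" and tape[ptr] == 0: i = jump[i]
            elif c == "]" and tape[ptr] != 0: i = jump[i]
            i += 1
        return "".join(out)

    # The addition snippet from the comment above: 3 + 4 leaves 7 in cell 0.
    print(ord(brainfuck("+++>++++[<+>-]<.")))   # -> 7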

--
"Scattered showers my ass." -- Noah
Real men... (5.00 / 1) (#29)
by Matt Oneiros on Sun Mar 16, 2003 at 04:36:17 PM EST

program in malbolge.

hello world:
(<`$9]7<5YXz7wT.3,+O/o'K%$H"'~D|#z@b`{^Lx8%$Xmrkpohm-kNi;gsedcba`_^][ZYX WVUTSRQPONMLKJIHGFEDCBA@?><;:9876543s+O<oLm

The author of http://www.acooke.org/andrew/writing/malbolge.html created it; as he says, it may not be Turing complete.

Lobstery is not real
signed the cow
when stating that life is merely an illusion
and that what you love is all that's real
[ Parent ]

I prefer... (none / 0) (#32)
by The Solitaire on Sun Mar 16, 2003 at 10:35:05 PM EST

A couple small rocks, some graph paper, and a really long roll of toilet paper.

I need a new sig.
[ Parent ]

I was going to ask... (none / 0) (#51)
by BerntB on Mon Mar 17, 2003 at 12:51:33 PM EST

(Please don't hate me for this joke, even though I probably deserve it...)
[I prefer...] A couple small rocks, some graph paper, and a really long roll of toilet paper.
I was going to ask if you didn't want a pen or something, too? Then I noticed the part with the toilet paper.

Note that my question about pen is unasked so please don't answer! I'm not going to read any answer anyway. :-)

[ Parent ]

bah. Program in Dis instead. (none / 0) (#34)
by pb on Sun Mar 16, 2003 at 11:25:06 PM EST

(=<;:^876Z43W10/.-,P*)('K%$H"!~}C{z@>}|{z]rwIu4V2qC/nQPO*<(gJeG#b!m_BAi>Z=<Wt9NqRpP3lGLjDI,Ad?D'O;$?\=<;kz876543210/.'&%$#G!~}C{z@xwvu;srqp6nm3kji/z
---
"See what the drooling, ravening, flesh-eating hordes^W^W^W^WKuro5hin.org readers have to say."
-- pwhysall
[ Parent ]
Hmm... (none / 0) (#77)
by vectro on Mon Mar 17, 2003 at 11:54:52 PM EST

... for bonus points, write a brainfuck backend for GCC. Then compile your C++ code for a Turing machine.

“The problem with that definition is just that it's bullshit.” -- localroger
[ Parent ]
Oh (4.00 / 4) (#30)
by Big Sexxy Joe on Sun Mar 16, 2003 at 06:22:37 PM EST

Is that why I have an infinite amount of tape coming out of both of my ears?

I'm like Jesus, only better.
Democracy Now! - your daily, uncensored, corporate-free grassroots news hour
A computer is not a Turing machine (3.30 / 13) (#35)
by acronos on Mon Mar 17, 2003 at 12:40:44 AM EST

Because so many people get this confused, I thought it important to make this distinction.

Computers can do several things a Turing machine cannot.

The first and most significant difference is that a computer can use interrupts.  Interrupts allow a computer to stop a currently running task and start a different one based on a trigger such as a timer or key press.

This means a computer can recover from the "halting problem" while a Turing machine cannot.  Use of a timer interrupt can determine when an algorithm has been running too long and stop or alter that algorithm.  An example of this in practice is that newer versions of Windows can survive the crashing of a single program as long as the crashing program is not running in the core levels of the OS.  Even low level crashes could be stopped if the OS were designed properly, which hasn't happened yet.

The second difference is in the design of inputs.  A traditional Turing machine is really just an algorithm that you put in motion.  After it starts, there are no additional inputs.  This means that the Turing machine as Turing modeled it would not be able to interact with its environment.  However, today's computers are not limited in this way.  They can have thousands of simultaneous inputs, from networking to cameras to keyboards, all changing the internal behavior of the computer.

The Universal Turing machine was intended to prove that extraordinarily complicated systems can be made from very simple designs.  It has been distorted by people with little imagination into saying that, since a Turing machine is simple and has problems, and a computer is essentially a Turing machine, a computer must be simple and have the same problems.  This does not follow.  We are not even close to finding the real limitations of our current computing hardware, much less our future hardware.  It is currently human complexity limitations and processor speed that are limiting computers, not their fundamental hardware design. The biggest problem is the very human limitation on how much complexity can be brought into a problem before everyone's minds are blown.  The second problem programmers run into is speed.  The speed component is what forces programmers to have finesse and greatly increases complexity, feeding back into the first problem.  The jury really is still out on whether we will achieve human-level AI.  Those who say differently on either side are selling snake oil.

However, to answer your question, the brain is definitely NOT a Turing machine, but my gut feel is that it could probably be emulated on something similar to one.


Inputs and Interrupts (4.33 / 3) (#36)
by iwnbap on Mon Mar 17, 2003 at 02:32:23 AM EST

Imagine you have a two-head Turing machine, one head read-write, the other head read-only and only able to move left-to-right.  This gives a pretty good model of inputs and is provably equivalent to a one-tape machine.  Similarly, output can be done with a write-only tape, where the head moves only left-to-right.

You're correct that interrupt modelling is harder/more dubious on a Turing machine, but consider: most processors/interrupt controllers have the option of masking interrupts. If the program decides to mask the timer interrupts, then you're back in a similar situation.  You can model a machine where at random intervals it gets thrown into some state and the head pushed back to some position. If the machine has decided to overwrite the instructions at this point, then its behaviour will be similar to a TM.

A more fundamental problem is that computers do not have infinite quantities of tape.  Nor do brains.


[ Parent ]

fundamental hardware changes (2.00 / 1) (#54)
by acronos on Mon Mar 17, 2003 at 01:36:20 PM EST

All of the things you suggest are modifications to the fundamental Turing machine design and therefore not a classical Turing machine. I agree that these differences can be solved, if for no other reason than that they have been solved in a modern computer. However, when you change the design of the Turing machine, I don't think we are still talking about exactly the same thing. My point was that there are differences between a computer and an unmodified Turing machine.

I do find a modern computer to be basically a much better designed and more useful Turing machine. A classical (meaning unmodified) Turing machine can emulate most of the functionality of a modern computer, and a modern computer can emulate much of the functionality of a classical Turing machine. I know it is somewhat of a technicality to bring up the differences, but I think the distinction is important because so many people forget or do not understand that many of the "impossible" things on a computer are fundamentally solvable when viewed from a different direction.

A more fundamental problem is that computers do not have infinite quantities of tape. Nor do brains.

I absolutely agree with you here. I say if we are going to modify the Turing machine, let's just give it infinite speed so that the two infinities cancel out the halting problem.

[ Parent ]

You have some funny ideas about Turing machines (5.00 / 2) (#56)
by jacob on Mon Mar 17, 2003 at 02:25:51 PM EST

If you want to go the literalist route, then TMs are nothing whatsoever like modern computers. There's no monitor, for one, nor is there byte-addressable RAM or reasonable CPU instructions or anything of the sort. Saying that something is equivalent to a Turing machine is not, and hasn't ever been, an attempt to say that with only a few cosmetic changes a Turing machine works the same way as what you're looking at. Instead, it's saying, "Any question that this thing can answer, a Turing machine could also answer if you found a proper way of asking, and vice versa." The point of describing a modified Turing machine with interrupts is just to point out that interrupts, while they might make programming more convenient, don't allow you to answer any questions that Turing machines can't answer.

This is the whole point of Turing machines, and the one application domain for which Turing machines are far superior to real computers: they're simple enough that things can be easily proven about them, and then more complicated machines (such as real computers) can be shown to have the same expressive power they have, so all the results about Turing machines carry over to machines you actually care about using. That's why we know, for instance, that an idealized infinite-memory Pentium IV can't solve the halting problem, even though proving it directly based on the specs would be intractable.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

technically (none / 0) (#58)
by suntzu on Mon Mar 17, 2003 at 02:36:41 PM EST

so all the results about Turing machines carry over to machines you actually care about using

This isn't proven (because it would have to be reproven for every minute architecture change), but whatever, it is generally accepted by the vast majority of computer scientists. I definitely believe it.



[ Parent ]
indeed. (nt) (none / 0) (#60)
by jacob on Mon Mar 17, 2003 at 02:48:23 PM EST



--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
practice vs theory (5.00 / 4) (#37)
by suntzu on Mon Mar 17, 2003 at 04:59:36 AM EST

In practice, nothing's a Turing machine (since nothing has infinite storage or memory). What is believed is that a Turing machine is an idealized version (i.e. one with infinite resources) of an electronic computer (but with the same basic capabilities: state transitions, storage/memory, and input/output, which may be real-time or simulated).

However, the objections you mentioned can be encoded into a Turing machine. Any sequence of commands or interrupts could be simulated if known beforehand. For instance, the interesting thing about the halting problem is not that it is practically applicable to crashed programs, but that it shows the fundamental limitations of what a program can and cannot determine algorithmically. If a computer could "solve" the halting problem, it could analyze the code beforehand, and for any sequence of inputs, determine whether the program would halt. It would need no timer. For instance, if a program hangs and becomes unresponsive in Windows, you don't get an error message saying "after analyzing this input, it is determined that this program will not stop, and so you might as well close it" (and likewise, you don't get a message saying "don't worry, this program will eventually halt; would you like to wait a little longer?"). That's because this is not possible. You can't determine beforehand whether an arbitrary program will halt. The reason that timing out isn't a solution is because, essentially, timing out makes the number of executed steps finite. Go past the limit, program dies. But this could be easily encoded into a TM, yet it still doesn't solve the halting problem.

The really interesting thing about the halting problem is not the narrow applicability to halting, but the mapping to other problems. It can be proved that if the halting problem is undecidable, then so are a host of others (infinitely many, actually), including whether a program will produce a certain output given arbitrary input, or whether it will reach a certain state given arbitrary input.
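The undecidability argument itself is short enough to sketch in code. Suppose, hypothetically, that someone handed us a halts() function that always answered correctly; the following Python sketch (all names invented for illustration) shows why no such function can exist:

    def halts(program, argument):
        ...        # hypothetical oracle; Turing proved no such total function exists

    def contrary(program):
        # Do the opposite of whatever halts() predicts about the
        # program applied to its own source.
        if halts(program, program):
            while True:          # predicted to halt? loop forever instead
                pass
        return "halted"          # predicted to loop? halt immediately

    # Does contrary(contrary) halt? If halts() says yes, contrary loops;
    # if halts() says no, contrary halts. Either answer is wrong, so the
    # assumed halts() cannot exist.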

Practically, yes, the limits of what we can usefully do with computers are still well within the realms of engineering practice and not CS theory. However, if you imagine that something is equivalent to a TM (or, as you pointed out with the brain, that you believe it can be simulated on one), then whatever you can prove about the TM, you should necessarily believe about that other thing. So if you believe that the brain maps to a TM, it follows that you believe the brain cannot decide an arbitrary case of the halting problem, and can only use nondeterministic heuristics such as timing.



[ Parent ]
You missed my point (1.75 / 4) (#52)
by acronos on Mon Mar 17, 2003 at 12:55:10 PM EST

Any sequence of commands or interrupts could be simulated if known beforehand.

That is precisely the problem. With a modern computer they do not have to be known beforehand. That is why a modern computer is different from a Turing machine.

or, as you pointed out with the brain, that you believe it can be simulated on one

I don't believe the brain can be simulated on a Turing machine. The brain responds to the environment; a Turing machine does not. However, a modern computer does. I don't know if current hardware designs are sufficient to the task of emulating human-level intelligence. However, I do know that current hardware can emulate all of the low-level processes that we are currently aware of in the brain. And it can emulate the currently known action of human neurons if they are sufficiently modeled, as will be proven if the experiment referenced in the article pans out. There is a fundamental problem at this time in that only a few hundred thousand simplified neurons can be modeled on my computer at brain speeds, due to processing-power constraints. Far fewer can be modeled using more complex neuron models. I believe I have read somewhere that the brain has around a hundred billion neurons. We are not even in the human brain's league yet. It should be interesting to see what we are able to do with the few hundred million neurons that a computer should be capable of modeling if we keep Moore's law up for another ten years.

I am not trying to imply that the solution to human level AI is only possible by modeling human neurons. That is just one solution that I think is very likely to work, but not possible yet. Emulation is always much slower than native code.

then whatever you can prove about the TM, you should necessarily believe about that other thing.

No. There is not a one-to-one correlation. As you say, a Turing machine exceeds today's computers by having infinite tape. And today's computers can take live input and a Turing machine cannot.

If a computer could "solve" the halting problem, it could analyze the code beforehand, and for any sequence of inputs, determine whether the program would halt.

You are implying that, since a human can see that an infinite loop will never end and a computer supposedly cannot, there are things a computer cannot do. However, I can write algorithms that detect infinite loops and therefore "solve" the halting problem. So, if I can write programs that solve simple halting problems, who is to say that someone smarter than me couldn't write an algorithm to solve complex or generic halting problems. I, as a human, write code that runs into the halting problem all the time. So, clearly I couldn't solve the halting problem in complex cases either. I think you, as a human, are in a halting problem of your own in that you are stuck on this idea that there are things a computer cannot do.

[ Parent ]

actually, you may have missed mine (5.00 / 1) (#57)
by suntzu on Mon Mar 17, 2003 at 02:29:40 PM EST

That is precisely the problem. With a modern computer they do not have to be known beforehand. That is why a modern computer is different from a Turing machine.

Hmm, this is the whole point I was trying to make, and it may not have been that clear. My point is, Turing machines are better for proving the theoretical limits of computers than they are for telling you what a computer you can build today can practically do for you. The point is that, if you know the inputs, you can (well, most people would agree, at least) simulate a computer on a Turing machine. It doesn't matter that with a real computer you don't have to know the inputs beforehand. In fact, the halting problem is predicated on not knowing the input till you get the tape you're going to get. The point of the halting problem is that there exists no algorithm general enough to take any TM and any input and determine whether it will halt. Sure, if you limit the set of possible inputs, then it may be possible (depending on the imposed limits) to determine whether the machine will halt. It may seem like a nitpicky distinction, but it's not. It means that you cannot write an operating system that will take any program and any input, and determine whether that combination will halt. Besides, once you get to the point of determining, based on input entered, whether a program will halt, you've essentially lost the gain from being able to take future input. All you care about is whether the program with the current input will halt, which puts you back in the same situation as the TM-based halting problem. The point you seem to be missing is that deciding (by some timing- or interrupt-based heuristic) that a specific program has been put into an infinite loop is not a solution to the halting problem. If you believe it is, you need to reread the actual description of the halting problem, not a high-level summary of it.

I am not trying to imply that the solution to human level AI is only possible by modeling human neurons. That is just one solution that I think is very likely to work, but not possible yet.

We agree here.

Emulation is always much slower than native code.

There's a line of reasoning that says that you wouldn't actually have to simulate neural activity, just the shuffling around of symbols that neural activity makes possible. This is another "nitpicky" distinction, but again an important one. At present, that's the line of reasoning I find most convincing.

No. There is not a one-to-one correlation. As you say, a Turing machine exceeds today's computers by having infinite tape. And today's computers can take live input and a Turing machine cannot.

I didn't claim there was a one-to-one correlation. I claimed that if you believe there's a one-to-one correlation, then what's provable about a TM follows for the equivalent. I didn't claim that you should believe that the brain or a computer was equivalent to a TM (though I claimed that I personally thought it was).

You are implying that, since a human can see that an infinite loop will never end and a computer supposedly cannot, there are things a computer cannot do.

I claimed no such thing. My claim was more along the lines of "a human cannot see any infinite loop." I don't believe a human can solve the halting problem in all cases. A human can spot some cases of it, but not all. This is the difference between a problem that is recursive and a problem that is recursively enumerable.

However, I can write algorithms that detect infinite loops and therefore "solve" the halting problem.

No, you can write algorithms that can detect some infinite loops, under a limited set of (likely platform-specific) circumstances. You cannot write an algorithm that can detect all infinite loops. You cannot solve the halting problem (notice the lack of quotes); you can "solve" halting issues for certain circumstances. But that is a far weaker claim than being able to solve the halting problem.

That's the thing. Turing machines can show that certain things cannot be done, even given a universal language and infinite resources. It makes no claims about, e.g., code written in C++ on x86. But it follows that if these interesting problems cannot be decided on this more powerful platform, they certainly can't be decided on a less powerful platform. You cannot write a program that solves the halting problem. You can write one that keeps your machine from locking up, but that's a totally different, much weaker claim.

So, if I can write programs that solve simple halting problems

There is no "simple" halting problem. There is the halting problem, and there are cases of determining whether very specific things will halt. Those are very different.

who is to say that someone smarter than me couldn't write an algorithm to solve complex or generic halting problems.

Any number of more elegant proofs than I can provide. Search google or everything2 or something.

I think you, as a human, are in a halting problem of your own in that you are stuck on this idea that there are things a computer cannot do.

<sarcasm>LOL.</sarcasm> There are provably things a computer can't do. Earlier in the comments thread, someone provided the name of a project that attempted to show that interrupts and such provided a machine that was more powerful than a TM. This claim was, according to the poster, proven false. You should look around for that one.



[ Parent ]
Thank you. (4.00 / 1) (#71)
by acronos on Mon Mar 17, 2003 at 07:02:18 PM EST

Thank you for answering respectfully even though I foolishly left a jab in the post.  After doing some more research, clearly I misunderstood the halting problem.  Thank you and the other poster for pointing this out.  I am learning.

My misunderstanding was that there were some problems that a Turing machine could not solve but that a human could.  That is the way it has been explained to me in the past, so I am not alone in my misunderstanding.  Now that I have read the actual proof, I understand that that is not what is being said.  Nowhere do they claim a human could solve the contradiction any better than the Turing machine did.

So here is my current understanding.  Please correct me if I am wrong.  Since language, mathematics, and logic continue to evolve, I believe they must not be fundamental properties of the universe - rather, they are models we use to model the universe as we experience it.  So when we use our models to examine our models, sometimes we get feedback, and this causes a mess of contradictions or infinite loops.  This property was demonstrated very clearly first by Gödel in mathematics, then by Turing in computer science.  Basically, there is no model that can model everything including itself.  A corollary is that there is no function that can predict infinite loops in all functions, because the analysis of itself can cause such a loop.

[ Parent ]

no problem (5.00 / 1) (#92)
by suntzu on Tue Mar 18, 2003 at 04:08:18 PM EST

Eh, I had a lot of programming to do, and a final to study for, so I had to have some way to procrastinate =).

so I am not alone in my misunderstanding

Yeah, there are definitely plenty of misconceptions about Turing machines. One is that it's proved that a TM is equivalent to any modern computer. It's not, because you'd have to reprove it for every little architecture change, but it's generally considered a very safe assumption to make. I'm not sure whether this has been proven for quantum computers also, but, from what I know about them (admittedly not a ton on the technical/mathematical side), I imagine it's the same.

Since language, mathematics, and logic continue to evolve, I believe they must not be fundamental properties of the universe - rather, they are models we use to model the universe as we experience it.

I think we're still not sure in a lot of cases what is a model in the approximate-still-need-to-refine-the-accuracy sense, and what's a model in the if-this-is-true-then-it's-actually-analogous sense. Limits of a TM would probably fall into the second category. The ideas of language, encoding, and even information are somewhat hard to really define, but for the ways they are usually thought about, the stuff we know about how we can shunt symbols around, so to speak, is very much a fundamental property of that information. Whether information (and by further extension, possibly intelligence and sentience, but that's a whole 'nother topic, and one that's even less well defined) is a fundamental emergent property of matter is very much another, very open question. Godel, Escher, Bach is a good starting point (or, if you've already learned about the individual topics it brings together, an interesting angle on how it all ties together) for learning about this stuff. But if you're really interested, you should probably check out more rigorous and thorough treatments of the subject.

So when we use our models to examine our models, sometimes we get feedback, and this causes a mess of contradictions or infinite loops.

Whether this is the cause for some of the things we see, I don't know. We can make a true statement that's subtly different, though. Certain ideas or concepts (including a machine that decides the halting problem) by their hypothetical existence prove their own nonexistence. So if they don't exist, they do exist, but even if they do exist, we can show that their existence causes a contradiction. This is the basis for every proof by contradiction, whether in calculus, number theory, or computer science. It's why any idea that takes advantage of a computer that can solve the halting problem is not just impractical, but meaningless. There is no way, no matter how much you try, to create a machine that does that, and so the concept of its hypothetical existence exists, but the concept itself doesn't actually exist. I'm pretty sure about those last few sentences; someone correct me if I'm wrong. That's what I've gathered so far.

Basically, there is no model that can model everything including itself.

This is true, but, I think, for a different reason. To model something completely accurately, you need to hold as much information as that system. But how can a system hold all of the information of the surrounding system (even if that's less information than in the simulator) plus its own information over again (you're asking it to hold all of its information twice over, and you can't have something hold more information than it holds)? This is sort of a fuzzy way of saying it, but I haven't come up with a better way yet. I'm sure someone's formally defined this sort of thing already, though, so there are probably better explanations.

A corollary is that there is no function that can predict infinite loops in all functions, because the analysis of itself can cause such a loop.

Hmm... I think that's slightly off. It's not so much that the self-analysis causes an infinite loop, but that it allows a situation with a paradox to arise. Most proofs I've seen of the halting problem don't rely so much on infinite loops but more on a "look, if this is possible, then it causes a paradox; the only thing we assumed was that it was possible, therefore it must be impossible." The thing is, you're not really supposed to think about the halting problem in terms of infinite loops (since there are things other than infinite loops that could cause a lack of halting). It's more like: having a decidable algorithm for that problem causes a paradox, therefore that algorithm can't exist.



[ Parent ]
Perhaps you should learn about computability (5.00 / 2) (#59)
by jacob on Mon Mar 17, 2003 at 02:47:35 PM EST

I'm just sayin'. A lot of the points you're bringing up have, in fact, already been pretty well nailed down. In particular:

"So, if I can write programs that solve simple halting problems, who is to say that someone smarter than me couldn't write an algorithm to solve complex or generic halting problems."

Alan Turing. This is one of the most significant and basic results of computability theory, and a result that you really have to understand in order to be able to reason with any kind of acuity about what computers can do and what they can't. Read this.


--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

You are right (none / 0) (#72)
by acronos on Mon Mar 17, 2003 at 07:03:15 PM EST

Thank you for the link. It was helpful.

[ Parent ]
That's not the halting problem (4.00 / 1) (#96)
by ph317 on Wed Mar 19, 2003 at 03:57:59 AM EST


"The Halting Problem" is not a matter of your code being able to detect and break loops within itself, as you seem to think.  The real halting problem (the one significant in this whole Turing discussion) can be paraphrased:

There is a Turing-Complete programming language L.  Write an algorithm which takes as input an arbitrary program P written in language L, and outputs whether P, if it was run, would terminate execution at some point in time, or be stuck in an infinite loop.

Of course, the problem is, any algorithm you write which can do so is doing essentially the computational equivalent of actually running program P to see if it ever halts or not.  If P has an infinite loop, then so do you, and your algorithm never returns an answer.

Believe me, you haven't solved the halting problem in your code, or you'd be very famous and very rich for it by now.

[ Parent ]

Flipside (4.50 / 2) (#42)
by gazbo on Mon Mar 17, 2003 at 07:02:47 AM EST

Computers don't have infinite storage and hence are provably less powerful than Turing machines. Computational power and real-world usefulness are not the same thing. Discuss.

-----
Topless, revealing, nude pics and vids of Zora Suleman! Upskirt and down blouse! Cleavage!
Hardcore ZORA SULEMAN pics!

[ Parent ]

was about to point this out (4.00 / 1) (#46)
by khallow on Mon Mar 17, 2003 at 12:20:28 PM EST

Infinite storage is a big difference IMHO. And it doesn't take that much to add interrupts or the ability to handle changing data to a Turing machine. For example, you can have multiple Turing machines writing to the same tape. And the interrupt problem has been solved: write the old state to tape, process the interrupt, load the old state back, and continue where you left off.

If the interrupt has to be noticed immediately (i.e., no moving down the tape to check a particular region), then internal states that correspond to the interrupt can be inserted into your machine (normally a transition depends only on the previous state of the machine - its prior internal state and the symbol it currently reads on the tape). I.e., the machine normally processes the "A" set of instructions. When an interrupt occurs, it gets shifted into the "B" set. After processing the interrupt, the machine reverts to the "A" set.
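That A/B scheme is easy to phrase as code. A toy Python sketch (the two instruction tables, the "svc" state, and the interrupt schedule are all invented for illustration):

    # Toy machine with two instruction sets: "A" (normal) and "B" (interrupt handler).
    TABLE_A = {("run", 0): ("run", 1, +1), ("run", 1): ("run", 0, +1)}
    TABLE_B = {("svc", 0): ("svc", 9, +1), ("svc", 1): ("svc", 9, +1)}

    def run(tape, interrupts):                 # interrupts: set of step numbers
        state, head = "run", 0
        for step_no in range(len(tape)):
            if step_no in interrupts:          # shift into the "B" set...
                state = "svc"
            table = TABLE_B if state == "svc" else TABLE_A
            state, tape[head], move = table[(state, tape[head])]
            if step_no in interrupts:          # ...and revert to "A" afterwards
                state = "run"
            head += move
        return tape

    print(run([0, 1, 0, 1], interrupts={2}))   # -> [1, 0, 9, 0]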

Stating the obvious since 1969.
[ Parent ]

You, me and every other old CS student :-) (none / 0) (#50)
by BerntB on Mon Mar 17, 2003 at 12:43:43 PM EST



[ Parent ]
A computer is less powerful than a Turing machine (4.66 / 3) (#55)
by zakalwe on Mon Mar 17, 2003 at 02:05:48 PM EST

The first and most significant difference is that a computer can use interrupts. Interrupts allow a computer to stop a currently running task and start a different one based on a trigger such as a timer or key press
This adds no computational power. You can easily model interrupt triggers as simple Turing machine input - given any Turing machine, just modify it so that there is an intermediate state between every state transition that checks for an interrupt. Interrupt triggers can then be signalled by the "odd" inputs, and the new TM can do everything the old one can do, plus do whatever it wants for interrupt events (such as switch to a new 'program').
This means a computer can recover from the "halting problem" while a Turing machine cannot. Use of a timer interrupt can determine when an algorithm has been running too long and stop or alter that algorithm.
You seem to misunderstand the halting problem. It is trivial to design a Turing machine that can be proven to halt - the issue is that there is no algorithm that can decide, for every Turing machine, whether it will halt. It's perfectly possible to design a Turing machine with the equivalent of a "timer" that is guaranteed to stop after X elements processed.

In actual fact, real-world computers do not have the halting problem, but this is because they are more limited than Turing machines - they are really just finite state machines, and so it is (theoretically) possible to finitely map all possible reachable states to determine if a given machine will halt.
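That observation can be made concrete: a finite machine that runs for more steps than it has distinct configurations must be revisiting one, and is therefore looping. A Python sketch of the resulting decision procedure (the toy step functions are invented; config = None stands for "halted"):

    # For a finite-state system, halting IS decidable: watch for a repeated
    # configuration, which proves the machine is in a loop.
    def halts_finite(step, config):
        seen = set()
        while config is not None:              # None means "halted"
            if config in seen:
                return False                   # configuration repeated: infinite loop
            seen.add(config)
            config = step(config)
        return True

    # Toy system that counts down to zero and halts...
    print(halts_finite(lambda n: None if n == 0 else n - 1, 5))   # -> True
    # ...versus one that cycles mod 3 and never halts.
    print(halts_finite(lambda n: (n + 1) % 3, 0))                 # -> False

No such table of configurations can be drawn up for a true Turing machine, whose infinite tape gives it infinitely many configurations.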

This means that a the Turing machine as Turing modeled it would not be able to interact with it's environment. However, today's computers are not limited in this way.
The "environment" is really irrelevant to the point of a turing machine - it is a description of what is computable, not what is doable. What the inputs and outputs correspond to, if anything, is irrelevant. For any input, you can design a turing machine that proccesses it and produces an output, but whether that input consists of an encoding of sound/visual/smell data, and the output of action/audio or text data is a matter of definition. We have devices that interpret the state of computer's memory as visual data, displayed on monitors. There is no reason why a turing machine can't output data to be interpreted the same way. The "environment" is essentially whatever you input, and the "interaction" is described by its output.

[ Parent ]
Input/Output (4.50 / 2) (#69)
by Simon Kinahan on Mon Mar 17, 2003 at 05:10:49 PM EST

Computers are both more and less limited than Turing machines. On the one hand, a Turing machine can run algorithms that require indefinitely large amounts of space, whereas computers cannot. On the other hand, computers can, as you say, interact with their environment.

The TM is intended as an abstraction for the purely computational things a computer can do. The issue of infinite space is not really relevant in this idealised world, because for any given computer that cannot solve a given problem due to lack of space, you can imagine a bigger one that can.

There are things computers can do, for instance if you add analogue components or noise generators to them, which TMs cannot do, but it is still an open question whether these add any computational ability over a TM.

You seem to have missed the point of the halting problem a bit: the point is that there are reasonable-seeming problems that cannot be solved by a TM program that always halts, not that Turing machines can sometimes go into states where they will never halt. Computers really cannot solve the general halting problem. Turing's proof is constructive, so if you give me a program that "solves" the halting problem, I can give you back a program for which it will not work, either because it never halts, or because it returns the wrong result. You can find Turing's proof on the web, I'm sure. It is possible to determine whether restricted classes of programs will halt, but these classes of programs cannot be Turing complete.

Simon

If you disagree, post, don't moderate
[ Parent ]

jesus, you don't recover from the halting problem! (4.00 / 1) (#81)
by delmoi on Tue Mar 18, 2003 at 12:27:04 AM EST

This means a computer can recover from the "halting problem" while a Turing machine cannot.

the halting problem is not something to recover from!!!!

God damn, the halting problem is a problem to be solved like the traveling salesman problem or the prime factoring problem.

Specifically the "problem" is wether another problem can be solved, or in other words, to solve the halting problem you would write a function that takes another function and inputs and returns true if the function would ever return. "halting" in the Turing machine world means the same thing as 'returning' in systems like C++ or Java or whatever. It means your done. It's a good thing when a turning machine halts, not something you're supposed to recover from.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Welcome to my sandbox (4.77 / 9) (#43)
by iGrrrl on Mon Mar 17, 2003 at 09:12:49 AM EST

I'd have to say that the brain is not any kind of Turing machine. It's too plastic, stochastic, and dependent upon metabolic states.
If the brain modifies itself, then we could possibly never map it to a Turing machine, because the instruction table would keep changing.
Guess what? Portions of the brain constantly modify themselves. Modifications come in short- and long-term changes in synaptic strength and in local changes in the "wiring diagram."

This is, I think, one of the biggest hurdles for the project. Researchers use the hippocampus for most of the synaptic studies on cellular models of learning and memory. The choice is made in part due to the regularity of the architecture, and in part due to the fact that the hippocampus is necessary for translating short-term memory into long-term memory. No one knows how this happens, but one standard underlying hypothesis is that long- and short-term synaptic plasticity are crucial for this function.

Each pyramidal cell in the hippocampus receives input from many different other neurons, both stimulatory and inhibitory. Specific synapses can be strengthened or inhibited. On top of that, the probability of neurotransmitter release in response to any given stimulus is well below 1. The network is governed by stochastic processes that can be modified by changing the probability of transmitter release. Given that activity changes the gene expression profile of a given neuron, one hypothesis states that each cell in the network may have an entirely unique profile of transmitter receptors and ion channels -- a profile that is plastic given activity. I've written more about this elsewhere.
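For a computational flavour of "stochastic and plastic," here is a toy synapse in Python: release is a coin flip with probability well below 1, and the probability itself drifts with activity. The numbers are invented and this is not a biophysical model:

    import random

    class ToySynapse:
        """Probabilistic release with activity-dependent drift (invented numbers)."""
        def __init__(self, p_release=0.3):
            self.p = p_release

        def fire(self):
            released = random.random() < self.p    # stochastic transmitter release
            # Crude "plasticity": a successful release slightly strengthens the
            # synapse; a failure slightly weakens it.
            self.p = min(0.9, self.p + 0.01) if released else max(0.05, self.p - 0.005)
            return released

    syn = ToySynapse()
    spikes = [syn.fire() for _ in range(1000)]
    print(sum(spikes), round(syn.p, 3))            # release count, drifted probability

Even this cartoon has the property described above: the instruction table is not fixed, because every event nudges the parameters that govern the next one.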

In sum, I think they're trying to do something very difficult, I salute the attempt, and I'll be stunned if they succeed.

I don't think it's impossible. People have done some remarkable work stimulating the first layer of processing in the olfactory system (sense of smell). In fact, they've done it well enough to make an artificial dog's nose. (Disclaimer: I'm talking about my husband.) However, I think that encoding discrete stimuli, even through the combinatorial means thought to prevail in olfaction, is easier than handling the kinds of networks in the hippocampus.

But I could be wrong.

The researcher mentioned in the New Scientist article, Dr. Theodore Berger, published something very interesting about their preliminary work. Here's the abstract:

A new type of biosensor, based on hippocampal slices cultured on multielectrode arrays, and using nonlinear systems analysis for the detection and classification of agents interfering with cognitive function is described. A new method for calculating first and second order kernel was applied for impulse input-spike output datasets and results are presented to show the reliability of the estimations of this parameter. We further decomposed second order kernels as a sum of nine exponentially decaying Laguerre base functions. The data indicate that the method also reliably estimates these nine parameters. Thus, the state of the system can now be described with a set of ten parameters (first order kernel plus nine coefficients of Laguerre base functions) that can be used for detection and classification purposes.
[Gholmieh G. Soussou W. Courellis S. Marmarelis V. Berger T. Baudry M. A biosensor for detecting changes in cognitive processing based on nonlinear systems analysis. Biosensors & Bioelectronics. 16(7-8):491-501, 2001 Sep.]
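
(To make the abstract's ten-parameter idea concrete, here is a rough numpy sketch; the lag axis, the decay constant, and the "measured" kernel are stand-ins of mine, not the paper's data or code:)

    import numpy as np
    from numpy.polynomial.laguerre import lagval

    t = np.linspace(0, 20, 200)                  # lag axis, arbitrary units
    # nine exponentially decaying Laguerre functions as the basis
    basis = np.array([lagval(t, [0] * n + [1]) * np.exp(-t / 2)
                      for n in range(9)])

    kernel = np.exp(-t) * np.sin(t)              # stand-in for a measured kernel
    coeffs, *_ = np.linalg.lstsq(basis.T, kernel, rcond=None)
    print(coeffs)                                # the nine "state" coefficients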

This paper is important to the so-called hippocampal prosthesis because they claim they are basing the design on input-output parameters measured in slices of hippocampus, and the best data would come from slices grown on microchips. However, in the paper they used only field potentials (rather than single cell measures), albeit with arrays of electrodes that could give spatio-temporal information. Given what I said above about the cell-specificity of synaptic plasticity, this muddies the data. The goal of the published experiments, however, was not to gather data in advance of a prosthesis. From the paper:

Our goal is to quantify accurately the nonlinear characteristics of neuronal activity over time and online for two general purposes: (1) to identify biologically-based nonlinear dynamics for incorporation into biomimetic systems designed for pattern recognition and (2) to reliably detect systematic changes caused by exposure to chemical-biological agents.
In other words, they were trying to make a tissue-based biosensor. The model would detect changes due to outside influence. The data from such an experiment are invaluable for the creation of a silicon-based bioimitator. My skepticism arises out of a sense that the mathematical model (based on this data and reported in the paper) has the limitation of not including self-modification. The analysis focused on finding changes based on outside influences (chemicals). Strictly speaking, one could call changes in input patterns from the natural hippocampus to the artificial one an outside influence. However, that doesn't account for local processing.

All that said, I didn't think much of the write-up above. As much as the popular press have tried to make of this coming experiment (which hasn't even succeeded in vitro, let alone in vivo), the article above stretches even further into the realm of arm waving.

--
You cannot have a reasonable conversation with someone who regards other people as toys to be played with. localroger
remove apostrophe for email.

Only engineering problems... :-) (4.00 / 2) (#47)
by BerntB on Mon Mar 17, 2003 at 12:29:58 PM EST

Guess what? Portions of the brain constantly modify themselves. Modifications come in short- and long-term changes in synaptic strength and in local changes in the "wiring diagram."
This is not really a problem (the article was wrong about that). Self-modifying code is certainly older than Lisp; probably older than the von Neumann computer architecture. (Changes in neurones could be modeled by storing state for each neurone. "Just" a question of implementation.)
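
(A minimal sketch of that point, with invented names: keep the rule table inside the simulated state, and the fixed simulator below happily hosts a "self-modifying" machine:)

    def run(rules, tape, state="A", pos=0, steps=10):
        for _ in range(steps):
            new_state, write, move, update = rules[(state, tape[pos])]
            tape[pos] = write
            if update:                        # the machine rewriting its own table
                key, new_rule = update
                rules[key] = new_rule
            state, pos = new_state, pos + move
        return tape

    # A rule that weakens itself once fired (cf. synaptic depression):
    rules = {
        ("A", 0): ("A", 1, 1, (("A", 0), ("A", 0, 1, None))),
        ("A", 1): ("A", 1, 1, None),
    }
    print(run(rules, [0] * 12))   # only the first cell ends up written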

Regarding new cells with connections... well, the work to find out when and how that happens (so it could be simulated in software) seems ridiculously hard to do experimentally without a full cell simulator.

But there are no theoretical reasons (in the mathematical sense) why everything couldn't be simulated on a Turing machine.

Of course, I'm not qualified to have an opinion on the biology part. And it was quite a while ago I studied computer science, too. :-)

[ Parent ]

Oh, by the way, thank you! (3.00 / 1) (#49)
by BerntB on Mon Mar 17, 2003 at 12:34:55 PM EST

(I fscked up with Preview/Post button and posted too early, hence an appendix.)

Thanks for a very well written and interesting comment!

If you ever write a book on the subject, please spam me about it!

[ Parent ]

Huh? (3.00 / 1) (#53)
by jmzero on Mon Mar 17, 2003 at 01:23:12 PM EST

I'd have to say that the brain is not any kind of Turing machine

Based on what?

I agree that it's a difficult proposition to reconcile brain function with something that works like a modern computer - but that's very different than saying the brain is not any kind of Turing machine.  

The brain's state may have a lot of different information in different forms (ie, we can't just model it based on connections or just this or just that), and changes over time may make it difficult to determine how one state moves to another - but, again, this doesn't mean it's not a Turing machine.  It means it's a complex Turing machine.

In order for it not to be conceivable as a Turing machine, I think we'd need to show states for which the next state wasn't predictable - and I, for one, think that's a silly enterprise.
.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

Probably a philosophical difference (5.00 / 3) (#62)
by iGrrrl on Mon Mar 17, 2003 at 03:19:44 PM EST

...but, again, this doesn't mean it's not a Turing machine. It means it's a complex Turing machine.
I don't know whether infinite complexity counts.

From the Philosophy of Mind pages, found at the top of the author's Google search, referring directly to the assumptions that underlie calling the brain a Turing machine:

First there is the assumption that we are computational. Second, is the assumption that we have a finite number of brain states.
There are neurobiologists who operate under the first assumption. In many cases, they make the point quite well. Modeling of cable properties can give a very good approximation of neuronal output (to spike or not to spike) in response to input. The problem for me, philosophically, is the stochastic nature of whether the spike will result in neurotransmitter release. In the hippocampus, the structure in question, there are very good data to suggest that the probability of an action potential resulting in transmitter release is around 0.3 in the basal state. The trick, to my mind, is that this number is not absolute and will be influenced by a long list of factors. Many of those factors don't actually reside in the brain itself.
In order for it not to be conceivable as a Turing machine, I think we'd need to show states for which the next state wasn't predictable - and I, for one, think that's a silly enterprise.
Why? Just because something can be modeled within a very defined set of parameters, under circumstances different from its native environment, can we say with certainty what the possible states are in situ?

I don't dispute that the models are useful, or that the process of doing this kind of experiment is immensely valuable. My view of functionalism is a bit jaded. We can learn a lot from reductionist work, but I don't think it is the be-all, end-all of our attempts to understand the brain. Even philosophy of mind specialists don't seem to think that the Turing machine model is at all useful. Again from the Washington University pages:

However, what the Turing machine really provides for cognitive scientists is not a notion of equivalence and a means for justifying purely functional descriptions of the mind, but rather a very general notion of computation. The Turing machine describes the whole space of possible computational systems, so anything computational lies in the Turing machine delimited space. Since any given real system is subject to many, possibly infinite, computational descriptions, any given real system will lie at many points in this space. Thus, the notion of Turing machine equivalence provides little information concerning what is 'the same' about any two real systems - it simply notes the fact the systems share one point in Turing machine space. Functionalists provide no means of distinguishing one computational description of the brain from another. It is highly unlikely that they are all important to having a mind. Thus, we need a better way of understanding the relationship between real computational systems than that provided by the Turing machine.

--
You cannot have a reasonable conversation with someone who regards other people as toys to be played with. localroger
remove apostrophe for email.
[ Parent ]

Indeed (4.00 / 1) (#63)
by jmzero on Mon Mar 17, 2003 at 04:00:17 PM EST

The problem for me, philosophically, is the stochastic nature of whether the spike will result in neurotransmitter release...

If we know all the factors, will we be able to predict the next state?  If yes, then I think we're looking at a Turing machine.  If no, then is the deciding factor a probability (say, a quantum-scale event)?  I wouldn't guess so, but I also wouldn't really claim to know.  In any case, a Turing machine with a Quantum Magic 8-Ball decider isn't that much more interesting than a Turing machine.  And if we can't predict the state, and the next state isn't decided by a coin toss - then what decides it?
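
(To put the 8-Ball point another way, with invented names: hand the machine a recorded stream of coin flips, and the run becomes an ordinary deterministic computation over its input:)

    import random

    def step(state, coin):            # an "unpredictable" transition
        return "fire" if coin else "rest"

    flips = [random.random() < 0.5 for _ in range(8)]   # the Magic 8-Ball
    print([step("ready", c) for c in flips])            # replayable given flips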

Of course this is all impractical - I'm fairly confident we'll never know all the states.  In addition, our limited access to the state information may mean the whole model is useless.

I don't know whether infinite complexity counts

Well, a body can only have a certain number of states.  It's only so big, although mine seems to expand over time.  The number of states could be "practically infinite" though, again potentially making the model useless.

Why? Just because something can be modeled within a very defined set of parameters and under circumstances different from its native environment, we can say with certainty what the possible states are in situ?

Yeah, I think we're largely talking about different things.  I have no idea of the practicalities here.

Thus, we need a better way of understanding the relationship between real computational systems than that provided by the Turing machine.

I agree with this quote without really knowing enough about the subject to agree.  Turing machines often provide an interesting way to talk about things, but are seldom useful in dealing with those things (even computing things).
.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

"Brain is essentially a Turing machine" (none / 0) (#93)
by nusuth on Tue Mar 18, 2003 at 04:55:38 PM EST

vs. "Brain can be modeled on a Turing machine."

These two are distinct things. You can model a glass of water very well on a Turing machine-like computer, but you can't drink your computer.

Of course the situation with Turing machines and brains is a bit more complicated, because we don't care whether the Turing machine consumes oxygen and produces CO2. You just might care whether you can drink that perfect water replica. If the machine can control a body which acts in an "intelligent manner" (whatever that means), it can be said that that particular Turing machine is essentially a brain.

But even then, the question of whether the brain is essentially a Turing machine would remain unanswered. A glass of water is not a generic computer, even if it may be modeled on one such that you can drink the computer.

I'm not saying the brain is not "essentially" a Turing machine. There would have to be a rigorous definition of "essentially" to reject such an argument. However, I do think the brain's internal workings are not best understood as symbol manipulation, and a Turing machine is "essentially" a symbol-manipulating machine.


[ Parent ]

Let me guess (2.00 / 1) (#87)
by medham on Tue Mar 18, 2003 at 08:34:02 AM EST

You're an eliminative materialist.

The real 'medham' has userid 6831.
[ Parent ]

Re: Let me guess (none / 0) (#89)
by iGrrrl on Tue Mar 18, 2003 at 08:55:02 AM EST

Nope.

Most neuroscientists get a little nervous and edge away when someone starts talking about consciousness. I cop out of the whole question by taking the same view toward philosophies of mind as I do toward religion: we're blind men feeling different parts of the elephant and making pronouncements. No one has the right answer, yet no one is entirely wrong.

--
You cannot have a reasonable conversation with someone who regards other people as toys to be played with. localroger
remove apostrophe for email.
[ Parent ]

I read (none / 0) (#90)
by medham on Tue Mar 18, 2003 at 08:59:55 AM EST

About half of that before I realized that the author was some sort of Wiroid. What are you going to do to give me my time back? I'd strongly suggest Jerry Fodor's essays on the hubris of some (not all, some) MRIists; several are available from the LRB web site.

The real 'medham' has userid 6831.
[ Parent ]

More Edge than Wired (none / 0) (#91)
by iGrrrl on Tue Mar 18, 2003 at 09:21:44 AM EST

I like Jaron Lanier, but he does suffer a bit from having been a wunderkind. Sometimes I think he should heed his own advice.

--
You cannot have a reasonable conversation with someone who regards other people as toys to be played with. localroger
remove apostrophe for email.
[ Parent ]

The brain is not a turing machine (3.00 / 1) (#80)
by delmoi on Tue Mar 18, 2003 at 12:18:24 AM EST

A Turing machine is a very specific thing. A Turing machine can almost certainly do anything a brain can do, though.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Question from one biology person to another.... (5.00 / 1) (#83)
by n3uropil on Tue Mar 18, 2003 at 01:31:02 AM EST

However, in the paper they used only field potentials (rather than single cell measures), albeit with arrays of electrodes that could give spatio-temporal information.

I'm just starting out in graduate study in neuroscience as a neurophysiologist at the University of Illinois at Urbana-Champaign. I'm very skeptical of this project (as a scientist) but I'm very excited about it, and truly hope they get good results. I wonder why they are using ERPs instead of MEMS microelectrodes in a massive array, such as those Bruce McNaughton's lab has used for years (~128 max recording sites, I think).

You could get higher temporal and spatial resolution and really match the integrating software to the pyramidal cell APs, and perhaps even investigate the function of regular oscillations such as the theta rhythm. Since you seem to have more experience than I in this arena, I'm interested in your opinions. I think they are making the right choice in using the hippocampus, due to its structure and robust behavioral assays, but I'm somewhat surprised they aren't going intracellular...

Sincerely,

n3uropil

[ Parent ]

opinions (5.00 / 1) (#88)
by iGrrrl on Tue Mar 18, 2003 at 08:45:46 AM EST

McNaughton's work is pretty amazing, and when he started it was as audacious as the project in question. This group is using both acute slices and organotypic cultures grown on microchips. I don't think they could go intracellular with that technique, not and still have live cells. Still, one would hope for at least single unit recording, which McNaughton can do. It may not be technically feasible, though, since the researchers using slices can't control where the cells will be in relation to the electrodes.

One other issue is that people forget the full 3-dimensional structure of the hippocampus. Working with a slice is different from McNaughton's embedded electrodes in awake behaving rats. His spatial problems are different, because he's looking at several areas of the hippocampus simultaneously. It could be a little more difficult for him to see network properties. Slices lend themselves a bit better to that, because of the architecture. The inputs and outputs are in the same general plane, once you cut the thing.

I'm with you, though: highly skeptical and very interested. In some ways I tend to be too skeptical. They succeeded with Dolly the Sheep about ten years before I thought they would.

--
You cannot have a reasonable conversation with someone who regards other people as toys to be played with. localroger
remove apostrophe for email.
[ Parent ]

I should know (3.00 / 5) (#45)
by 8 out of 10 doctors on Mon Mar 17, 2003 at 11:28:25 AM EST

As someone who studies brains for a living and Turing machines for a hobby, I'm glad k5 is talking about this. The piece is generally well supported and cited (even considering the challenger posts) and raises interesting questions without trying to answer them definitively.

In short, I like it.

Turing machine, or Chinese room II (3.00 / 1) (#48)
by ChaosD on Mon Mar 17, 2003 at 12:33:17 PM EST

I still think that any discussion of Turing machines and brain function should include a reference to John Searle's Chinese Room thought experiment.
-----------------------------
There are no stupid questions
i cannot agree that it belongs even near. (5.00 / 1) (#66)
by Prophet themusicgod1 on Mon Mar 17, 2003 at 04:33:32 PM EST

until you can prove to me that a single one of your nerve cells in your ear can understand english. i'm almost convinced the entire reason i'm here is to debunk each and every person who mentions your referenced material as anything other than humour. if that be my only use in this here world, fine...but the inference is *not* valid.
"I suspect the best way to deal with procrastination is to put off the procrastination itself until later. I've been meaning to try this, but haven't gotten around to it yet."swr
[ Parent ]
Hmm ... (5.00 / 1) (#67)
by Simon Kinahan on Mon Mar 17, 2003 at 04:49:41 PM EST

I agree that Searle's constant, repetitious assertion that the man in the room does not understand Chinese is not really of any relevance. However, the point he was trying to make - that there's no reason to believe that syntax is sufficient or even necessary for semantics - is valid.

Simon

If you disagree, post, don't moderate
[ Parent ]
Clarification (none / 0) (#97)
by ChaosD on Wed Mar 19, 2003 at 04:15:16 AM EST

The Chinese room argument is a philosophical one, not a technological one. The Chinese room represents a machine; the man, a process (not a single cell). The question is: can an emulation of a process be said to be intelligent? This point, I think, helps to define a philosophical boundary between strong and weak AI.
However, your comment got me thinking: you could look at the Chinese room as a description of a blind process. The man (or process) can have no comprehension of the "bigger picture", but seems to generate valid output. This would seem to support the view that intelligence is, or at least can be, a process.
Either way, the idea gave me an alternative view to process (or algorithm) based AI, which is why I posted the link.
-----------------------------
There are no stupid questions
[ Parent ]
Searle's axioms (none / 0) (#95)
by Znork on Tue Mar 18, 2003 at 05:09:40 PM EST

Reading through the referenced page I come across the axioms that Searle wishes to use to prove that strong AI is impossible.

(A1) Programs are formal (syntactic).
(A2) Minds have mental contents (semantics).
(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.

Programs are indeed formal, but so are the laws of physics and chemistry that govern the behaviour of human minds. I can agree that any attempt at strong AI will fail if implemented as a set of rules for generating specific output from specific input. But the formality of programs is irrelevant if the program is instead used merely to guide the assimilation and symbolization of input into an artificial neural network, as the output then becomes a response based on experience rather than on any formal guiding rules.
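
(A toy sketch of that division of labour, under assumptions of my own: the program below is a handful of fixed formal rules, but the response is determined by learned weights, i.e. by experience:)

    import random

    w = [random.uniform(-1, 1) for _ in range(2)]   # the "experience"

    def respond(x):                  # fixed formal rule: weighted sum + threshold
        return 1 if x[0] * w[0] + x[1] * w[1] > 0 else 0

    def learn(x, target, rate=0.1):  # fixed formal rule: nudge weights by error
        err = target - respond(x)
        for i in range(2):
            w[i] += rate * err * x[i]

    for _ in range(50):              # experience: examples of logical OR
        for x, t in [([0, 1], 1), ([1, 0], 1), ([1, 1], 1), ([0, 0], 0)]:
            learn(x, t)
    print([respond(x) for x in ([0, 1], [1, 0], [1, 1], [0, 0])])   # [1, 1, 1, 0]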

Searle's approach, and indeed many approaches claiming that strong AI is impossible, makes the grave error of thinking that program logic is what makes the AI. In a viable design it would merely lay the foundation.

Of course, a strong AI implemented in such a fashion would have the same problems that a human mind does: erroneous encoding of experience, flawed experience generating flawed output, and so on. Of course, it would also mean it would have no more problem with logical errors or paradoxes (so beloved of sci-fi authors) than a human does.

It would be possible to add subprocessing ANN units that could perform more rigorous logical analysis and evaluation of various inputs or outputs, to make such an AI less flawed than the human mind, though.

[ Parent ]

Othersitish (4.80 / 5) (#64)
by jforan on Mon Mar 17, 2003 at 04:00:42 PM EST

"If an algorithm is described as 'Turing Complete,' then it meets the requirements of a Turing Machine."

Actually, only systems can be Turing Complete.  In algorithms theory, the term completeness kind of implies "if and only if" (iff); a Turing Complete machine can implement all "functionality" implementable by a Turing Machine, and vice versa.

The definition of a Turing Machine, however, has nothing to do with how quickly algorithms can process various inputs (except that, given certain inputs, a Turing Machine must produce the result (halt) in a finite amount of time); many algorithmically fast designs of various computation systems are Turing Complete, including various parallel Turing Machine designs.  This should be mentioned alongside the parallel algorithm argument you made:

"Also, of interest is the following fact: The brain processes information in parallel, whereas a Turing machine is ultimately serial."

To be Turing Complete, this doesn't really matter, unless the level of parallelism is defined by the size of the input, which would probably not be the case with the brain.

Additionally, the algorithms used to implement various functionality on a Turing Machine could alter their instruction set by first creating an instruction set separate from the "left right write read" instruction set that is intrinsically available to the Turing Machine.  It doesn't matter that "the self-modification aspect would be hard to replicate", but rather that it can exist on a Turing Machine.  This is computation >theory<, not practice, after all.

My point in general is that the actual implementation of the mapping is not relevant to the concept of Turing Completeness; only the provability of the existence of that mapping.
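
(For concreteness, a minimal single-tape machine in Python -- an invented three-state example, just to underline that "machine" here means nothing more than a state table plus a tape:)

    def run(table, state="q0", halt="H"):
        tape, pos = {}, 0                      # blank cells default to 0
        while state != halt:
            state, write, move = table[(state, tape.get(pos, 0))]
            tape[pos] = write
            pos += move
        return tape

    # write three 1s, then halt
    table = {("q0", 0): ("q1", 1, 1),
             ("q1", 0): ("q2", 1, 1),
             ("q2", 0): ("H", 1, 1)}
    print(run(table))                          # {0: 1, 1: 1, 2: 1}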

I guess I wouldn't be so picky, except that
"the key question that this article will focus on [is '] Is the Brain a Turing Complete Machine? [']".

Jeff

I hops to be barley workin'.

and it has to use decimal, right? (none / 0) (#75)
by Fen on Mon Mar 17, 2003 at 09:49:22 PM EST

There has been nothing not infected with decimal yet.
--Self.
[ Parent ]
Some additional notes (none / 0) (#101)
by lukme on Fri Mar 21, 2003 at 07:32:26 PM EST

"Also, of interest is the following fact: The brain processes information in parallel, whereas a Turing machine is ultimately serial."

You are 100% correct; however, you forgot to mention that a multi-tape Turing machine is computationally equivalent to a single-tape Turing machine, so from a mathematical standpoint it is easier to consider only the single-tape machine. Furthermore, I believe the same is true for multi-head Turing machines (i.e., networked Turing machines).
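
(The standard construction behind that equivalence is easy to sketch: interleave the tapes, so simulated tape A lives in the even cells and simulated tape B in the odd cells of one real tape. Illustrative code only -- the full proof also tracks the head positions:)

    def read(tape, which, i):          # which: 0 = tape A, 1 = tape B
        return tape[2 * i + which]

    def write(tape, which, i, sym):
        tape[2 * i + which] = sym

    one_tape = [0] * 10                # one tape holding both simulated tapes
    write(one_tape, 0, 2, "a")         # tape A, cell 2 -> index 4
    write(one_tape, 1, 2, "b")         # tape B, cell 2 -> index 5
    print(one_tape)                    # [0, 0, 0, 0, 'a', 'b', 0, 0, 0, 0]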

Additionally, the algorithms used to implement various functionality on a Turing Machine could alter their instruction set by first creating an instruction set separate from the "left right write read" instruction set that is intrinsically available to the Turing Machine.

Let us also bring into consideration the universal Turing machine, which has the states of the machine encoded on a tape that it reads. This tape is usually read-only; however, if you really wanted to, you could have the TM modify it. Ugly, yes. Important for a TM to do from a computation point of view? Probably not.

A good reference for this stuff: Introduction to Automata Theory, Languages, and Computation by Hopcroft and Ullman.


-----------------------------------
It's awfully hard to fly with eagles when you're a turkey.
[ Parent ]
I prefer... (none / 0) (#103)
by jforan on Wed May 21, 2003 at 04:57:43 PM EST

Michael Sipser's Theory of Computation book.

Of course, I took his class, so I might be a bit biased.

Jeff

I hops to be barley workin'.
[ Parent ]

What! (3.50 / 2) (#65)
by miah on Mon Mar 17, 2003 at 04:08:46 PM EST

But is the brain resistant to halting problems? ;)

Religion is not the opiate of the masses. It is the biker grade crystal meth of the masses.
SLAVEWAGE
Sure... (none / 0) (#74)
by Repton on Mon Mar 17, 2003 at 09:10:05 PM EST

Every brain process will halt --- it's called "death".

--
Repton.
They say that only an experienced wizard can do the tengu shuffle..
[ Parent ]

Death by Chinese Water Torture (none / 0) (#78)
by hymie on Mon Mar 17, 2003 at 11:57:00 PM EST

Exactly. That's why the brain, humans, and in fact the entire universe are all less powerful than a Turing machine. None of them have potentially unlimited capacity.

Oh, and the Chinese Room argument is classic begging the question. The rules understand Chinese.

[ Parent ]

Uh, no it's not (none / 0) (#79)
by delmoi on Tue Mar 18, 2003 at 12:11:35 AM EST

But is the brain resistant to halting problems?

Just letting you know, you don't actually know what the halting problem (singular) actually is.
--
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Rats (3.50 / 2) (#73)
by bartok on Mon Mar 17, 2003 at 08:02:33 PM EST

The prosthesis will first be tested on tissue from rats' brains, and then on live animals. Last time I checked, rats were still considered animals.

Not totally in vivo... (none / 0) (#82)
by n3uropil on Tue Mar 18, 2003 at 01:16:57 AM EST

They first plan on testing the prosthesis on hippocampal brain slices maintained in artificial cerebrospinal fluid. They then plan on moving to intact rats, followed by monkeys.

n3uropil

[ Parent ]

nowadays... (none / 0) (#86)
by mikelist on Tue Mar 18, 2003 at 06:35:59 AM EST

they often remove the tissue from a specimen; preliminary tests may not require a live subject.
Obviously, sooner or later one will have to be installed in a living creature in order to test its effectiveness.

[ Parent ]
Why Turing machine? (4.33 / 3) (#85)
by olethros on Tue Mar 18, 2003 at 05:52:17 AM EST

Why does it have to be a Turing machine? And why model parallelism with a single Turing machine, and not with a number of TMs in parallel?

Also, there are other types of TMs that don't suffer the same limitations as the TM itself, such as the GTM. And as someone already mentioned, computers are not Turing Machines.

The biggest problem with this prosthetic is not whether the brain is a "TM" or not. It is that they model a limited interface (taking into account only electrical impulses, not neuromodulators) - and that the modelling, however fine the simulation, might be inaccurate because of the lack of plasticity.

If it works it will show that lack of plasticity is not detrimental to the short-term function of the brain. However I doubt that it will work since neuromodulators do have a strong effect on brain function... including the function of the hippocampus.

-- Homepage| Music
I miss my rubber keyboard.

consciousness (3.00 / 1) (#98)
by manmanman on Wed Mar 19, 2003 at 11:35:42 AM EST

here is a very interesting article about what could be consciousness.


Computer scientist, Montpellier, south of France.
Penrose (5.00 / 1) (#99)
by RandomAction on Thu Mar 20, 2003 at 08:35:09 AM EST

Roger Penrose has a take on this. For him, the brain's apparent ability to see the answer to an uncomputable problem is central to the difference between a Turing machine and an organic mind. He gives the example of aperiodic tiling: a Turing machine will never halt when determining whether a set of aperiodic tiles will cover an infinite plane, yet a human can see at once that they can. From this he concludes that there must be some non-computable physical law that the brain leverages to achieve this. He theorises that this ability, and in fact consciousness, stem from quantum-level events taking place in brain cells.

Lecture on Turing and Penrose, and their ideas

Personally, I think it might be possible to create a neural net complex enough to simulate this ability, through recognition of these problems and learning an appropriate response.

Some thoughts on AI (none / 0) (#100)
by freality on Thu Mar 20, 2003 at 04:12:13 PM EST

I'm trying to make one. I've got some thoughts at an SF project page. Freality Machine Intelligence.

Allen Newell's book Unified Theories of Cognition is a good one to look into for anyone serious about this.

Where does nat. lang. fall on the Chomsky hierarchy (none / 0) (#102)
by lukme on Fri Mar 21, 2003 at 09:53:09 PM EST

Where does natural language fall on the Chomsky hierarchy?

My guess is that natural language is at least recursively enumerable. We can still communicate and know what the other person means, even though the language is not grammatically correct.


-----------------------------------
It's awfully hard to fly with eagles when you're a turkey.
The brain (none / 0) (#104)
by frijolito on Wed Jun 18, 2003 at 07:46:30 PM EST

...will someday be hacked into. Mark my words.
