I have some niggles. To wit:
If you assume that computers and brains aren't the same kind of thing (that is, a brain as a system isn't Turing-based, but rather some superset, perhaps), then hopefully you'd be able to describe what kind of system the brain actually is. Otherwise, you'd be able to simulate a brain on a computer.
I don't assume either that the brain is or is not Turing machine equivalent, and I'm not sure which is more likely. To my mind the key to the question is another question: how does the brain give rise to subjective experience? Once we know that, we'll know whether or not that phenomenon can arise in a digital computer. Until then, we can only guess.
One point that does need to be made: even everyday digital computers are not straightforwardly TM equivalent. They only have the required unbounded amount of storage once you take the rest of the world into account (I believe the usual analogy is a factory capable of turning all the matter in the universe into paper tape), and once you do that you have to consider that real computers have I/O peripherals, which Turing machines don't. I'm not sure anyone has ever thought through the philosophical consequences of this.
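The storage point can be made concrete. In the toy sketch below (the three-state machine and all names are mine, purely for illustration), the tape is a Python dict, so it is "unbounded" only in the sense that the interpreter will keep allocating memory until the host machine runs out -- which is exactly the gap between a real computer and the formal machine:

```python
def run_tm(rules, tape, state="start", head=0, max_steps=1000):
    """Minimal Turing-machine step loop.

    rules maps (state, symbol) -> (new_state, new_symbol, move).
    The tape is a dict: unbounded in principle, bounded in practice
    by the memory of the machine running this program.
    """
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, 0)  # unwritten cells read as blank (0)
        state, tape[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    return tape

# Toy machine: write three 1s moving right, then halt.
rules = {
    ("start", 0): ("one", 1, "R"),
    ("one",   0): ("two", 1, "R"),
    ("two",   0): ("halt", 1, "R"),
}
print(sorted(run_tm(rules, {}).items()))  # [(0, 1), (1, 1), (2, 1)]
```

Note that the sketch has no I/O at all: the formal machine starts with its whole input on the tape and runs to completion, which is the other disanalogy with real computers mentioned above.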
If the brain does turn out to be TM equivalent, it'll have to be the brain+world combination, in some form, that actually has the formal equivalence, because of the need for unbounded storage, and we'll have to have pretty good formal models of both the brain and the world as it interacts with the brain. That knowledge is some way off. If the brain turns out not to be TM equivalent, my suspicion is that the key will be that it is an analogue, not digital, device. There are systems, such as some chaotic systems, for which you cannot produce faithful digital models. My hunch is that the brain falls into this category. This is, however, just a hunch.
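On the chaos point, a standard illustration (of the difficulty, not a claim about the brain) is the logistic map x -> 4x(1-x): run the same orbit in 64-bit floats and again in 60-digit decimals, and the two copies drift apart within a few dozen steps, because rounding error roughly doubles each iteration until it swamps the signal:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # 60 significant digits: a stand-in for the "true" orbit

x_float = 0.2            # 0.2 is not exactly representable in binary floating point
x_dec = Decimal("0.2")   # but it is exact in decimal
for _ in range(60):
    x_float = 4.0 * x_float * (1.0 - x_float)
    x_dec = 4 * x_dec * (1 - x_dec)

# Initial discrepancy ~1e-17, roughly doubling per step: after 60
# iterations the two orbits have essentially decorrelated.
print(abs(x_float - float(x_dec)))
```

Any fixed digital precision just postpones the divergence; it never eliminates it, which is why finite-precision simulation of such systems only tracks the real trajectory for a bounded time.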
Consider for the moment how much you actually have memorized. I know pi to at least 12 places; I don't multiply two-digit numbers by adding numbers, but rather, I look them up. I can touch-type without thinking about it. I can form entire sentences without really thinking about which words I want--they just appear to me. I can say them just as easily; in fact, when I think of them, I essentially am saying them to myself.
This is true. Most of our brain works below the level of consciousness. Bits of experience, memory and fantasy only seem to become conscious once they reach some level of intensity or significance. Consciousness almost seems to be a kind of monitoring system. For instance, as I type this, I form the ideas into sentences which fit into the predetermined structure, as they occur to me. While I'm certainly conscious of what I'm doing, and consciously checking for errors and inconsistencies, I can't really claim to be consciously determining what I type.
Actually this is one of the things that makes me most suspicious about the idea of equivalence. We've built computers to simulate some of the highest-level, most verbal activities of the human mind. They do logic, arithmetic and certain aspects of language perfectly, whereas we do them deeply imperfectly. However, we then expect to be able to build on top of that base something like the lowest level, the instinctual, semi-conscious level, of our minds. I see no particularly good reason why this should be possible, and, indeed, I suspect the idea would never have been thought realistic were mathematicians and engineers not such verbal, logical people.
As to the rest of it, well, I'm sure we'll find out more about how it actually works in the future, but this is one example of why computer AIs aren't virtual brains yet: not enough RAM to hold everything they might need, even if they knew it all. The closest thing we have to *that* is the CYC project, and that's been going on forever, but at least they're working on it.
Personally, I think CYC is silly :) If such a project ever really does produce an AI, it's going to be an utterly different kind of mind to a human one. Human minds are resolutely fixed in their environment, whereas all CYC seems to be is a big collection of data and rules of inference, with next to no environment.
And yes, of course this question is interesting because it raises even more questions. My point was only that assuming that the answer is either way raises so many more questions that there's no net gain in knowledge. Better, I think, to observe both possibilities and the issues they raise than to jump either way.
If you disagree, post, don't moderate