Last night, New Year’s Eve, I was hanging out at my older brother’s house in Magnolia, where I met a friend of his who is working toward a Ph.D. in neuroscience. His particular field involves building neuro-computational models of the brain, which I found fascinating.

Though I’ve written a lot on this blog about eliminative materialism, which is an anti-reductionist view, much of what contemporary neuroscience actually does is quasi-reductionist. The computational theory of mind is basically a modern version of functionalism: it argues that minds are fundamentally information-processing machines. The eliminativist view, however, is not incompatible with computationalism. A lot of mind theorists (and I say mind theorists because this is not a formal pronouncement of science) think that computationalism will end up being eliminativist. Eliminativism is, in my opinion, just a few theoretical steps ahead of the scientific pronouncements, though critics say eliminativism is “premature” at this point. Neuroscience is nowhere near a complete model of the brain. Instead there is a kind of model pluralism going on, where various models explain different processes, with no overarching, philosophically satisfying picture of what the mind really does.

But the real question that computational neuroscience is fascinated with is whether and when a physically realizable computational model could match the human brain; that is to say, whether multiple realizability is possible. John Searle of Berkeley argued against this in his essay “Minds, Brains, and Programs”: the Chinese Room is supposed to show that a machine running a program, no matter how sophisticated, could never understand the way humans do. There is supposed to be something incredibly unique and exceptional about the way the human brain “secretes” understanding, according to Searle. Without some essential ingredient, like human brain milk or what have you, understanding is not possible.
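To make the shape of the thought experiment concrete, here is a toy sketch of my own (nothing from Searle’s paper, and the rule book is invented): the operator in the room does nothing but match incoming symbols against a rule book and copy out the paired response, with no access to what any of it means.

```python
# A toy "Chinese Room": purely syntactic symbol shuffling with no semantics.
# The rule book and dialogue are made up for illustration; Searle's point is
# that following such rules, at any scale, never amounts to understanding.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你懂中文吗？": "当然懂。",     # "Do you understand Chinese?" -> "Of course."
}

def room_operator(symbols: str) -> str:
    """Match the incoming squiggles against the rule book and copy out
    whatever squiggles the book pairs them with. No meaning is consulted."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "Please say that again."

if __name__ == "__main__":
    for question in ("你好吗？", "你懂中文吗？"):
        print(question, "->", room_operator(question))
```

From the outside the room seems to “speak Chinese”; on the inside there is only lookup and copying, which is exactly the intuition Searle wants us to generalize from.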

I think Searle’s argument is short-sighted because he assumes a relatively low level of information processing. He also builds an internal “understanding process” into his model and then insists that this process is really external to the model. And, as with most of my objections to theories like this, the problem is that he relies on a semiotic theory that places “understanding” outside the plane of signifiers, thereby disallowing any causal connection from taking place.

But recall the argument Hans Moravec, the absent-minded genius of AI robotics, made in his essay “When Will Computer Hardware Match the Human Brain?” If the brain’s information processing operates at such a high order of magnitude, it stands to reason that a comparable amount of processing would be required to match the brain’s capacities, which, at this point, is not possible. Searle’s argument is an unimaginative one, since it assumes (or rhetorically asks us to intuit) that a process can be simulated with far less information than the original involves.
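For a sense of the magnitudes Moravec has in mind, his retina-scaling estimate comes down to a single multiplication. The figures below are his order-of-magnitude numbers as I recall them, so treat the sketch as approximate:

```python
# Moravec's order-of-magnitude estimate, roughly: matching the retina's image
# processing takes about 1,000 MIPS, and the whole brain has on the order of
# 100,000 times the retina's neural volume, so matching the brain takes
# roughly 10^8 MIPS. All figures are approximate.

retina_mips = 1_000              # estimated compute to match the retina
brain_to_retina_ratio = 100_000  # rough scale-up from retina to whole brain

brain_mips = retina_mips * brain_to_retina_ratio
print(f"Estimated whole-brain equivalent: {brain_mips:.0e} MIPS")
# -> roughly 1e+08 MIPS, i.e. Moravec's ~100 million MIPS figure
```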

Searle relies on analog systems for his analogy, using phrases like “water pipes” and “valves.” What AI researchers have in mind is not some clunky Frankenstein, but complex systems capable of high-magnitude information processing and content management. Searle can only think of “syntax” as the closest approximation of understanding an AI system can achieve. Yet an advanced matrix for assigning truth values to syntactical arrangements, with the possibility of confirming those values and associating them with other values, seems to be a better approximation of understanding than Searle allows.
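As a gesture at what I mean, here is a minimal sketch of my own (a toy, not any actual cognitive architecture from the literature): a system that takes a simple syntactic arrangement, assigns it a truth value by confirming it against a small “world,” and then associates that statement with every term it mentions.

```python
# A toy system (my own sketch, not a real architecture) that does more than
# shuffle symbols: it assigns truth values to simple syntactic arrangements,
# confirms them against a small "world," and associates facts with the terms
# they mention.

world = {("snow", "is", "white"), ("grass", "is", "green")}

facts: dict[tuple[str, str, str], bool] = {}              # statement -> truth value
associations: dict[str, set[tuple[str, str, str]]] = {}   # term -> related statements

def evaluate(statement: tuple[str, str, str]) -> bool:
    """Assign a truth value by confirming the statement against the world,
    then associate it with every term it mentions."""
    value = statement in world
    facts[statement] = value
    for term in statement:
        associations.setdefault(term, set()).add(statement)
    return value

evaluate(("snow", "is", "white"))   # True, confirmed
evaluate(("snow", "is", "green"))   # False, disconfirmed
print(facts)
print(associations["snow"])         # every statement about snow, linked together
```

Even this trivial loop of check-and-link does something the rule book in the Chinese Room never does: its outputs are answerable to something beyond the symbols themselves.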

These kinds of processes are certainly realizable. Moravec points out that research within semiconductor companies makes it quite clear that existing techniques can be pushed to memory chips holding tens of billions of bits and multiprocessor chips running at over 100,000 MIPS; machines built from such chips, he argues, bring his estimate of roughly 100 million MIPS for whole-brain equivalence within reach. Circuitry is also incorporating a growing number of quantum interference components. Hence the development of the quantum computer. As production techniques for those tiny components are perfected, they will begin to take over the chips, and the pace of computer progress may steepen further.
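Just to put the pace of progress in perspective, here is a back-of-the-envelope extrapolation. The doubling time is my own illustrative assumption, not Moravec’s figure:

```python
import math

# Illustrative assumptions, not Moravec's exact figures: start from a
# 100,000 MIPS multiprocessor chip and assume performance doubles every
# 18 months. How long until the ~100 million MIPS brain estimate?

start_mips = 1e5        # a 100,000 MIPS chip
target_mips = 1e8       # Moravec's rough whole-brain figure
doubling_years = 1.5    # assumed doubling time

doublings = math.log2(target_mips / start_mips)
print(f"{doublings:.1f} doublings ≈ {doublings * doubling_years:.0f} years")
# -> about 10 doublings, i.e. roughly 15 years at this pace
```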

Searle, of course, would still say that no matter how much information goes into the system, it is not capable of understanding. But it seems rather ridiculous to deny the system any means of defining its variables through some confirmation method other than the copying it already does. Understanding is relational and associative; it’s not something that happens when copying and pasting. And it seems highly likely that neuroscience will eventually produce models complex enough to be used in artificial systems to simulate the very same processes that take place in the human brain.