Opinion

Whenever one confesses to having studied philosophy, the inevitable reaction is, “Why would you bother with that?” The implication being that philosophy is an entirely useless field of study, on a par with the (unbelievably) not-at-all-mythical “Feminist Dance Therapy”. But, as philosopher Stephen Law points out, philosophy grads excel at in-demand skills such as analytical reasoning and verbal and written communication.

Philosophy, in a nutshell, teaches you how to think and how to communicate that thinking clearly. No doubt you associate philosophy with the sort of convoluted, obfuscating, jargon-loaded waffle of the post-modernists and Marxists. In fact, that’s bad pseudo-philosophy. A good philosophy education equips you with the mental tools to cut right through such prolix garbage.

Even specialised areas of philosophy, such as Philosophy of Mind, can prove unusually useful. How so, I hear you scoff.

Well, consider such heavily contested areas as animal rights. Philosophy of Mind is crucial, here: can other animals reason, like human beings, for instance? Is there a critical difference between the minds of humans and non-humans? The answers weigh heavily on how we treat other animals.

An even more pressing issue is so-called “AI”.

Is “AI” really “Artificial Intelligence”? Are machines capable of true thinking? Are they capable of consciousness? (To answer that one, of course, we first have to define what “consciousness” actually is – and that’s a far harder task than most people realise.)

Another current issue is neural implants such as Elon Musk’s Neuralink.

According to the popular media, these implants are “mind-reading”. “AI makes mind-reading possible.” “Neuralink Allows Man to Control Computer with his Thoughts!” “This mind-reading AI can see what you’re thinking.”

Needless to say, none of that is true.

The only reason it sounds so plausible to so many people is that, at least to some degree, they’ve imbibed the mind-brain identity model of consciousness: the view that the mind is the brain and its processes, and nothing but.

(Although I suspect most people harbour an unspoken mash-up of two distinct theories: mind-brain identity, or Physicalism, and Dualism, the idea that the mind is somehow separate from the body and brain. I rather suspect most people are right, even if for muddled reasons. While the mind is undeniably related to a great degree to the brain, there still seems to be “something else” – and that “something else” is, I believe, some dualist phenomenon we cannot yet explain.)

Mind-brain identity, or Physicalism, first came into vogue in the 1950s. Advances in neuroscience seemed to make it a slam-dunk that the mind was just the brain. But, philosophers being what they are (argumentative smart-arses), they quickly pointed out problems with Physicalism. If thoughts are just brain states, then a thought of a unicorn must be identical to a particular brain state. But how can a thought about a unicorn be identical to a brain state? And if two or more people think of a unicorn, they would presumably have identical brain states to one another. But neuroscience doesn’t seem to show anything like that: people can have similar thoughts with quite different brain states – what philosophers call the “multiple realisability” of mental states.

Furthermore, all thoughts are about something – what philosophers call “intentionality”. But brain states do not seem to be “about” anything: they just “are”.

Then there’s qualia. OK, I derided jargon before, but “qualia” is a pretty simple concept: it’s the “what-it-is-like” of experience. The classic thought experiment is Frank Jackson’s Mary, raised in a black-and-white room. Mary has never seen colour. She can read a (monochrome) book describing a rose: its shape, its texture, its scent, and above all its colour.

Then, one day, Mary wakes up to find a rose in her room. “So, that’s what it’s like!”

It seems impossible to describe qualia — subjective experience — in terms of brain states.

Pain is another example. We can describe the behaviour of nerve cells, and so on, but that does nothing to convey what pain actually feels like. We might wince, imagining what someone in terrible pain is feeling — but we cannot feel it.

So, what has all this to do with Neuralink?

Well, first off, thoughts are not simple codes – not even in terms of brain states. Thoughts are complex interrelationships between multiple simultaneous processes in multiple regions of the brain.

Certainly, we can map brain states correlated with thoughts to some degree. But these are second-hand reports of thinking. Mapping them is no more reading thoughts than observing the movements of cars in peak-hour traffic is reading the minds of the drivers – or than watching me tap keys on my keyboard to spell out words and sentences is reading my mind.

Of course, these sequences of keystrokes are strongly correlated with what I’m thinking, but they come nowhere near capturing the cacophony of the Cartesian theatre going on in my head while I write: not just thinking of what to write next, but hearing the music playing while I do so (“Grantchester Meadows” by Pink Floyd) and the plethora of memories, emotions, and thoughts that triggers; glancing sideways at what’s happening outside my library window; what it is like to be sitting in this chair.

Let’s consider some of the Brain-Computer Interface (BCI) devices creating so much hype. One such device was implanted in a quadriplegic, non-verbal patient. While she cannot speak, she can make facial expressions: the “AI” is trained by having the patient “read” different phonemes (the basic sound units of speech) over and over, until the computer learns to correlate her brain activity with those phonemes.
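
To see how mundane this “training” really is, here is a minimal sketch of that kind of supervised correlation-learning. Everything in it (feature counts, phoneme labels, the data itself) is hypothetical, purely for illustration: the system fits a statistical mapping from recorded activity to prompted labels, and nothing more.

```python
# Hypothetical sketch: supervised phoneme decoding. None of the names,
# shapes, or labels here come from any real device; they illustrate the
# training loop described above and nothing else.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# One row of recorded neural features per attempt, labelled with the
# phoneme the patient was prompted to "read" at the time.
n_attempts, n_features = 500, 64
X_train = rng.normal(size=(n_attempts, n_features))
y_train = rng.choice(["AH", "EE", "OO", "SS"], size=n_attempts)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)  # learn the statistical correlation; no "reading"

# At use time, the classifier emits its best statistical guess.
new_activity = rng.normal(size=(1, n_features))
print(clf.predict(new_activity))  # e.g. ['EE']: a correlation, not a thought
```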

A second device is implanted in another non-verbal patient. This one detects the pulses sent to his vocal cords and combines them with predictive, “auto-correct”-style text to produce a limited vocabulary. I imagine there’s a lot of “ducking” going on.
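
The “auto-correct” step is worth making concrete. In this toy sketch (vocabulary, scores, and priors all invented), the decoder’s per-word guesses are simply reweighted by a language-model-style prior, exactly as predictive text on a phone does. Hence the “ducking”.

```python
# Toy sketch of decoder output combined with a predictive-text prior.
# Vocabulary, scores, and priors are all invented for illustration.
decoder_scores = {"duck": 0.40, "dock": 0.35, "deck": 0.25}  # implant's guesses
language_prior = {"duck": 0.05, "dock": 0.15, "deck": 0.80}  # context model

# Reweight the neural decoder's guesses by the language model, Bayes-style.
posterior = {w: decoder_scores[w] * language_prior[w] for w in decoder_scores}
print(max(posterior, key=posterior.get))  # 'deck': the prior can override the signal
```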

It must be noted, then, that both these devices need extensive training to decode the pulses sent to the vocal cords. They cannot “read minds” out of the box.

A third device is implanted in a patient who can talk and move his head and shoulders. To use the device, he has to try to move his hand as if moving a computer mouse. The nerve pulses are picked up by the computer, which, with sufficient training, learns to correlate them to the desired movements.
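
Again, a minimal sketch of what “learning to correlate” amounts to here, under wholly hypothetical assumptions (channel counts, calibration trials, a simple linear decoder): a least-squares fit from recorded neural features to intended cursor velocity.

```python
# Hypothetical sketch: motor decoding as a least-squares fit from neural
# features to intended 2-D cursor velocity. Channel counts, trial counts,
# and the data are invented; real decoders are fancier but no less blind.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_channels = 200, 96

X = rng.normal(size=(n_trials, n_channels))              # recorded activity
true_map = rng.normal(size=(n_channels, 2))              # unknown "tuning"
Y = X @ true_map + 0.1 * rng.normal(size=(n_trials, 2))  # prompted (vx, vy)

# Calibration: find the linear map W that best predicts velocity from activity.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Decoding a new sample yields a movement command, not a "thought".
print(rng.normal(size=(1, n_channels)) @ W)  # e.g. [[ 0.31 -1.18]]
```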

None of this is “mind reading”, much less the “telekinesis” some have claimed, any more than my actually moving a mouse with my hand is. None of these devices has direct access to its subject’s thoughts. All they are doing is picking up motor impulses and, with much training, correlating them with the desired movement. The computer no more reads their minds than my hand reads my mind when I desire to pick up a coffee cup.

This is not to belittle the technology, by the way. Clearly, these are marvellous advances in neural interfacing. As they mature, no doubt they will vastly improve the lives of many people who are currently helpless.

But they’re not mind-reading machines.

Philosopher of Science VN Alexander’s “We Are Not Machines” webinar series critiques such evangelistic notions of AI and BCIs, in part by underscoring just how vastly more complex biological processes are than computer processes. This is true, but it leaves open at least the conceptual possibility that sufficiently complex computers could “think” and “read minds”.

To put this notion to rest, we only need to consider John Searle’s classic, “Minds, Brains, and Programs” thought experiment. To be sure, Searle was writing in the 1980s, at the dawn of the computer revolution, but his paper has stood the test of time.

Searle proposed a putative “Chinese Room”. In this room, sealed off from the outside world apart from an “In” slot and an “Out” slot, sits an operator who understands no Chinese. Outside the room, a Chinese speaker writes something down in Chinese on a slip of paper and passes it through the “In” slot. The operator looks at the symbols, consults a table, and writes down the appropriate response symbols, which are passed out again.

To the person outside, it looks as if they have asked a question in Chinese, which the room has answered in Chinese.

But note that the person inside the room still has no knowledge of Chinese. They have simply received a symbol and responded with whatever symbol the data table instructs them to return. Conceivably, these data tables could reach any level of complexity and the principle would still hold. Entire conversations could take place with the room, but the person inside will have no idea what they are “about” at all. Much less are they reading the mind of the person outside.
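
The Chinese Room translates into code almost embarrassingly directly. A toy sketch, with an invented rule book: however large the table grew, the program would still be shuffling symbols it does not understand.

```python
# The Chinese Room as a lookup table. The rule book is invented; however
# large it grew, the program would still manipulate symbols it does not
# understand.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "Fine, thanks."
    "今天天气如何？": "今天天气很好。",  # "How's the weather?" -> "It's fine."
}

def chinese_room(slip: str) -> str:
    """Return whatever the rule book dictates; no understanding involved."""
    return RULE_BOOK.get(slip, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # fluent-looking output, zero comprehension
```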

This is what is happening in the Neuralink case. The operator in the Chinese Room (the computer) is receiving inputs and responding with the outputs it has been trained to produce. At no point in the process does the operator “understand” what it is doing.

Much less read anyone’s mind.

Even if the AI really were a genuine Artificial Intelligence, it still wouldn’t be reading minds.

To grasp this point, consider any interaction you have with another person. You don’t just listen to their verbal reports (speech); you closely observe their body language and facial expressions. Each little twitch of the mouth, movement of the eye. Blinking, twitching, gesturing – all of it is registered and interpreted. You learn to correlate those signals with what others report (by speech or action) that they’re thinking. Like a poker player, you read a hundred “tells” in every conversation. This is what, in their electronic way, neural interface devices are doing.

But are you reading their mind? Of course you aren’t. Not even you, the genuinely conscious, intelligent being.

So, what makes you think a computer can possibly do what you cannot?

Punk rock philosopher. Liberalist contrarian. Grumpy old bastard. I grew up in a generational-Labor-voting family. I kept the faith long after the political left had abandoned it. In the last decade...