When carmakers hack brains

You have got to see this YouTube video! Hectically cut sequences of busy young scientists in high-tech laboratories: lab-coated, nerdy-looking guys solder electronic circuits and stare into oscilloscopes, and we are taken on a roller-coaster ride through an animated brain chock-full of tangled nerve cells. In between all this, on stage at the California Academy of Sciences, car and rocket manufacturer Elon Musk announces his latest vision in a messianic pose: the symbiosis of the human brain with artificial intelligence (AI)! This time his plan to save mankind does not involve mass evacuation to Mars, but will be realized by a revolutionary Brain Machine Interface (BMI), designed and manufactured by his company Neuralink. You may have guessed it: this has caused a tremendous media hype all over the world. The verdict in the press and on the net was: “Musk at his best, a bit over the edge, but if HE announces a breakthrough like that, there must be something to it.” The more cautious asked: “But couldn’t this be dangerous for mankind? Do we need a new ethics for stuff like this?”

Well, I don’t think so. Because all this is pure hype. And maybe a bit of bad science. The issue is not that it will just take a bit longer until we can use BMIs to download our thoughts, upload new content to the brain, and thus become hyperintelligent. Nor is the point that Musk, as usual, grossly exaggerated what he will achieve with his BMI. No, brain-AI symbiosis via BMI is a pipe dream because it rests on three fundamental errors. One is a misunderstanding of the principles of BMIs, another the prevalent but false notion of the brain as a computer, and the third a misconception about what AI is. These errors are unfortunately very popular, even among scientists. All the more reason to take a closer look.

Mr. Musk writes a paper

The video, which was viewed more than 3 million times, gave us colorful images, beautiful people, and spectacular announcements. Even a neurosurgeon came on stage, in full surgical gown. On the downside, the information content of the presentation was very limited. Fortunately, for technical details Mr. Musk refers us to a ‘scientific article’ on bioRxiv. The paper describes components of a BMI, such as electrodes implanted surgically in the brain to record its electrical activity. After training, the brain may use the electrodes to communicate with a computer and thus control a ‘machine’, such as a robot arm or a mouse cursor. The article sketches some elements of a non-functional BMI prototype: electrodes to record brain activity, a custom chip to process and transmit the signals, and a surgical robot to insert the electrodes into the brain. However, almost everything else needed for a BMI is missing, most importantly something the brain could actually control. Mr. Musk describes only parts of a BMI!

Remarkably, he published the paper as single author, although it is obvious that he could not have written it by himself. Given the other flaws of the paper, this is a minor problem; nevertheless, it is a clear violation of internationally accepted publication ethics. It should also be pointed out that the animal experiments mentioned (but not described) in the paper were ‘approved’ only by an internal review board of the Neuralink company. This may be OK in the US, but would fortunately be unthinkable in Germany. That the article raises unfounded hopes in desperate paraplegic patients is yet another matter of ethical concern.

An ad disguised as a scientific paper

The basic structure and functional principle of the BMI components described in the Neuralink paper are anything but new. Various groups worldwide have already developed them in similar form and used them on selected patients, for example with spinal cord injuries. After long training, some of these paraplegic patients were enabled by the BMI to perform very simple functions, such as grabbing a cup. The article does describe a number of technical improvements that could potentially enhance the functionality of a BMI: very thin electrodes, the ability to record from several thousand (instead of several hundred) electrodes, and a surgical robot for electrode implantation. Nevertheless, it remains nothing more than a promise that all this will actually improve the functionality of the BMI, as this is nowhere demonstrated. Instead, we read phrases like “It is plausible to imagine…”. Throughout the article, the claim that the BMI can fuse the human brain with a computer, let alone with AI, remains unproven speculation.

BMIs don’t decode brain code

The article and the video deliberately generate hype and suggest potential uses of BMIs that no serious BMI researcher would consider possible. Spikes of neurons, even if recorded from several thousand locations in the brain, do not allow the readout of thoughts, ideas, and feelings. A BMI controls a machine because the brain is trained to generate electrical activity in large ensembles of nerve cells; activity patterns that have probably never occurred at the location of the electrode in this way before. The brain learns to trigger a specific action that was previously unknown and foreign to it. The success of a BMI thus depends on the plasticity of the brain, its capacity to learn such tricks. This is why it takes so long until even the simplest functions (e.g. cursor up, cursor down) can be executed, and why in a substantial proportion of patients it does not work at all. Whether an increase in the number of electrodes can significantly improve brain control over a machine, as Mr. Musk claims without proof, is unclear; researchers in the field doubt it.

The brain is not a computer

Mr. Musk’s misunderstanding of how his BMI works rests essentially on the belief that the brain (or mind) works like a computer. The computer metaphor of the brain is almost ubiquitous, and the initial plan of the billion-dollar Human Brain Project was based on it. But that doesn’t make it any truer, it just makes it more expensive. A computer is an automaton that uses programmed instructions to transform input into strictly determined output (which of course could be made random). From the user (input) level down to binary machine code, the instructions manipulate symbols. The instructions are completely abstract; they have no content or physical reference to the task the computer is executing for the user. Only our brains assign content to these symbols (signs). The idea of a program code in the brain is untenable because a code is nothing other than a set of mapping rules: one sign is assigned to another, i.e. symbolic representation. Symbolic representation cannot explain consciousness or thinking, which deal with the real world, that is, with content. As an explanation for higher brain functions such as reasoning or intelligence, symbolic representation only shifts the problem: it fails to give content to the activities of the mind. The signs and symbols need meaning, but fortunately the brain has no problem providing it without programs and codes. Feelings, thoughts, intentions etc. are the coordinated activity of billions of nerve cells and fantasillions of connections between them. Cognition is ‘embodied’. The thought of a tree is the electrical activity and neuronal connectivity that occur when looking at that tree. A memory of this tree is a reenactment of this complex spatial electrochemical state.

Surely the brain is able to handle code; not only externally, when programming a computer, but also internally, when speaking and writing. Language is such a code, i.e. symbolic representation. But language is not required for feeling, thinking, acting; just think of a dog chasing a squirrel. Language is only a means for these faculties. Since cognition proper does not use codes or programs, there is nothing to read out of or import into the brain. One could try, for example, to measure the activity of each of the 80 billion nerve cells simultaneously while looking at a tree. Such a recording would need to include the state of the hundreds of trillions of connections between them. Even if this were technically possible (which it isn’t), the effort would be pointless, because one would only have an image of the electrical thunderstorm of this particular brain when looking at this particular tree. Another person’s nerve cells produce different connections and different activities when looking at the same tree. One reason for this is anatomical (individual brains are different), but more importantly, different brains have different histories (experience) going back many years. This sensory and cognitive history has contributed to, and is materialized in, the specific connectivity and activity when looking at the tree. One would have to know a brain’s lifetime history to make sense of the thunderstorm, that is, to detect a tree in it. Even if you punctured the brain with electrodes until there was nothing left of it, downloads and uploads of content will not be possible. There is no symbiosis of brain and AI.

Artificial intelligence (AI) is not intelligent at all

If it hadn’t been around for more than 60 years, the term ‘artificial intelligence’ could be a brilliant marketing stunt of Mr. Musk’s. It is almost Orwellian newspeak, because AI in its now widely used form has nothing to do with intelligence. On the contrary, AI (like other computer software) is used where complex activities must be carried out for humans that do not require intelligence: identifying cats or tumors in digital pictures, for example, or translating languages, predicting pizza orders in a particular area of town, or driving a car autonomously. Human intelligence has given all these tasks content and context, created rules for them, and derived a specific job for the AI to execute; a job on which the AI must be trained. The AI may then recognize patterns in the data without knowing what is at stake, or what the content or meaning of the analyzed data is. It is ignorant of the task it is solving, like any other old-fashioned computer software. If we claimed that the cats in the photos were canaries, the AI would find ‘canaries’. The tumors could be sausages. One might confuse this with intelligence because AI is often applied to activities that indeed require intelligence, such as speaking a language or being an expert in tumor pathology. Unfortunately, ‘machine learning’, which is a much more rational term, may also encourage misunderstanding. Isn’t learning an intelligent activity?
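The blindness of AI to the meaning of its labels is easy to demonstrate in code. Here is a minimal sketch (pure Python, invented toy data, a nearest-centroid classifier standing in for a real image classifier): the label strings are arbitrary symbols, and renaming every ‘cat’ to ‘canary’ changes nothing about what the model has learnt.

```python
def train(samples):
    """Compute one centroid (mean feature vector) per label from (features, label) pairs."""
    sums, counts = {}, {}
    for x, label in samples:
        s = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def predict(centroids, x):
    """Return the label whose centroid is nearest (squared distance)."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(centroids[lab], x)))

# Toy 'images' reduced to 2-D feature vectors (hypothetical numbers)
data = [([1.0, 1.0], 'cat'), ([1.2, 0.9], 'cat'),
        ([5.0, 5.0], 'dog'), ([4.8, 5.2], 'dog')]
model = train(data)
print(predict(model, [1.1, 1.0]))   # -> 'cat'

# Rename every 'cat' to 'canary': the learnt geometry is identical,
# only the meaningless symbol attached to it changes.
renamed = [(x, 'canary' if lab == 'cat' else lab) for x, lab in data]
model2 = train(renamed)
print(predict(model2, [1.1, 1.0]))  # -> 'canary'
```

The model never ‘knows’ what a cat is; it only maps regions of feature space to whatever token we supplied during training.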

By training with data sets in which input and output are given a priori, AI generates a statistical data model. If all goes well, the model will generate reliable output for any input. The AI has ‘learnt’, but without any concept, without a spark of intelligence. But aren’t there ‘neural networks’ at work here, an artificial brain after all? Again the analogy leads us astray. Although the ‘neural network’ of the software has some structural similarities with the interconnection of nerve cells in the brain (many units are connected in layers, there are threshold values and amplification factors for the transmission of a signal), it does not function like a brain. And if it did, we wouldn’t even be able to tell, simply because we don’t know how the brain works: how it produces consciousness, feelings, and memory; how it learns, generalizes, and produces a concept of the world and of itself. If we replace the suggestive terms for the structural elements of an artificial neural network (neuron, synapse, etc.) with terms like threshold function, weights, biases, gradients, hidden layers, and backpropagation, it becomes much clearer that we are not talking about the brain. The faulty logic of the neural network analogy culminates in the approach of some colleagues who use the ‘neural networks’ of a computer program to understand how the brain functions. This is completely circular: you would need to know how the brain works to create a computer program that works like a brain, which could then tell you how the brain works.
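To make the point concrete, here is a complete ‘neural network’ in its sober vocabulary, a toy sketch in pure Python with invented XOR training data: weights, biases, a threshold (sigmoid) function, gradients, one hidden layer, and backpropagation. It fits a statistical model to four input-output pairs; no concept of anything appears anywhere.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    """The 'threshold function': squashes a weighted sum into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Four a-priori given input/output pairs (XOR): the 'training data'
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# Weights and biases: 2 inputs -> 2 hidden units -> 1 output, random start
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [random.uniform(-1, 1) for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = random.uniform(-1, 1)

def forward(x):
    """A 'neuron' is just a threshold function applied to a weighted sum."""
    h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
    return h, sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)

def loss():
    """Summed squared error over the training set."""
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

start = loss()
for _ in range(10000):                       # 'learning' = gradient descent
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)          # gradient at the output unit
        for j in range(2):
            d_h = d_o * w_o[j] * h[j] * (1 - h[j])   # backpropagated gradient
            w_o[j] -= 0.5 * d_o * h[j]
            b_h[j] -= 0.5 * d_h
            w_h[j][0] -= 0.5 * d_h * x[0]
            w_h[j][1] -= 0.5 * d_h * x[1]
        b_o -= 0.5 * d_o

print(round(start, 3), round(loss(), 3))     # squared error before vs. after
```

Whether the network escapes a local minimum depends on the random starting weights; the only thing guaranteed is that the squared error shrinks. ‘Learning’ here is nothing but curve fitting by repeated subtraction.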

Mr. Musk’s advertising campaign is very reminiscent of a TED talk by a certain Henry Markram in July 2009, almost exactly 10 years ago to the day. Mr. Markram is the spiritual father of the Human Brain Project, which is funded by the EU with over one billion euros. In his TED talk, he announced that the Human Brain Project would make it possible to simulate the human brain in a computer. This would enable us to understand perception and thought, perhaps even our physical reality. He concluded that in 10 years (i.e. today) a hologram would give his TED talk.

Further reading:

Ari N. Schulman. Why Minds Are Not Like Computers. https://www.thenewatlantis.com/publications/why-minds-are-not-like-computers

Robert Epstein. The empty brain. https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

Ed Yong. The Human Brain Project Hasn’t Lived Up to Its Promise. https://www.theatlantic.com/science/archive/2019/07/ten-years-human-brain-project-simulation-markram-ted-talk/594493/

A German version of this post has been published as part of my monthly column in Laborjournal: http://www.laborjournal-archiv.de/epaper/LJ_19_09/22/index.html
