Artificial intelligence: Critique of chatty reasoning

Why AI is not intelligent at all, and therefore can’t speak, reason, hallucinate, or make errors
(This is a slightly extended version of my June column as “der Wissenschaftsnarr” in the German Laborjournal: “Kritik der schwätzenden Vernunft”)
The ongoing debate over whether ChatGPT et al. are a blessing for mankind or the beginning of the reign of the machines is riddled with metaphors and comparisons. The products of Artificial Intelligence (AI) are “humanized” by means of analogy: they are intelligent, learn, speak, think, reason, judge, infer, decide, generalize, feel, hallucinate, are creative and (self-)conscious, make errors, are based on neuron-like structures, and so on. At the same time, functions of the human brain are described in terms like computer, memory, storage, code, and algorithm, and we are reminded that electric currents flow in the brain, just as in a computer. Befuddled by the astounding achievements of chatting and painting bots, many now argue that generative AI displays features of “real” intelligence, and that it is just a matter of more programming and time until AI surpasses human cognition.
The camp of those who believe AI is intelligent proves its point with a long list of things AI can do, all of which look pretty intelligent. The doubters, however, are not convinced; they complain that AI still lacks certain “functionalities” of intelligence, in line with Tesler’s theorem: “AI is whatever hasn’t been done yet.”
In the following, I will argue that the current AI debate is missing the point completely. Instead of simply marveling at AI’s putative intelligence, we should first ask what intelligence, thinking, language, consciousness, and the like actually are, and only then measure AI against them.