Artificial intelligence has conquered every field, from healthcare to finance, education, and creativity. But behind this revolutionary force lies an anomaly as fascinating as it is dangerous: so-called AI hallucinations. These are the moments when a model, with the confidence of someone who thinks they know everything, invents facts, distorts data, or proposes absurd or misleading solutions. And it does so without the slightest doubt, as if the truth were always on its side.
This is no small detail. When it comes to medical diagnosis, financial analysis, or public data, such a mistake can turn into a disaster. It is like a surgeon, in the middle of an operation, relying on a faulty map drawn by someone who has never set foot in an operating room.
The beauty of it, and at the same time the limitation, is that we humans, even as children, have an extraordinary ability: guessing the meaning of things from very few clues.
AI models, on the other hand, often stumble when they need to abstract, generalize, or simply "get it." Even the most advanced models, such as GPT-4 or Claude 2, can fall into these AI hallucinations. Some perform better: Claude 2, for example, showed only a 5 percent incorrect response rate in the most recent tests. But switch to less refined models like GPT-3.5 or LLaMA 13B, and the error rate can spike up to 21 percent. And it is not just a question of data: it is a question of structure, of cognitive architecture, of what distinguishes intelligence from pure computational power.
Research has begun to respond. How do we solve the problem of AI hallucinations? Some propose grounding AI in verifiable external sources, others work on the way we formulate questions, and still others build self-critical mechanisms into the model itself. But the truth is that, today, the risk remains. And understanding it is the first step to not suffering from it.
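To make the first of those ideas concrete, here is a minimal Python sketch of retrieval-grounded prompting: instead of asking the model a bare question, you first retrieve supporting passages and instruct it to answer only from them. Everything here is illustrative: the toy retriever, the sample documents, and the prompt wording are assumptions, not any specific product's API.

```python
# Minimal sketch of retrieval-grounded prompting (the "verifiable external
# sources" approach). The prompt it produces would be sent to whatever
# model API you actually use; no real API is called here.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(query: str, sources: list[str]) -> str:
    """Build a prompt that confines the model to the retrieved sources."""
    context = "\n".join(f"- {s}" for s in sources)
    return (
        "Answer using ONLY the sources below. "
        "If they do not contain the answer, say 'I don't know.'\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

# Hypothetical knowledge base for illustration only.
documents = [
    "Aspirin is contraindicated in patients with active peptic ulcers.",
    "The 2023 audit reported quarterly revenue of 4.2 million euros.",
]

query = "Is aspirin safe for ulcer patients?"
prompt = grounded_prompt(query, retrieve(query, documents))
print(prompt)  # send this to the model instead of the bare question
```

The design point is simple: by giving the model permission to say "I don't know," you trade a confident fabrication for an honest gap, which is usually the safer failure mode.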
If you want to learn more, the report published by Visual Capitalist offers a detailed analysis of the most reliable AI models and of current techniques to mitigate the phenomenon of AI hallucinations.
Read IBM's thoughts on AI hallucinations.
And as artificial intelligence evolves, we must stay as alert as ever. Because behind every perfect answer may lie a well-constructed lie: a perfect AI hallucination.
If you are curious about the limits of AI, read this post.