Hallucinations
What are AI hallucinations, and how can they be avoided? This article gives an overview of the topic.
What are hallucinations?
Hallucinations are outputs of a generative AI system that sound fluent and plausible but are factually incorrect or entirely fabricated. The model presents them with the same confidence as correct answers, which makes them easy to overlook.
Possible causes
Hallucinations result from the way generative models work: they predict the next output token by token, based on probabilities learned from patterns in the training data, rather than retrieving facts from a knowledge base. If a query concerns something for which the model has no reliable learned knowledge, it "hallucinates" an answer that fits structurally but may be factually wrong. Typical causes include, for example, gaps or errors in the training data and the fact that the model is designed to always produce a fluent-sounding answer, even when it lacks the relevant knowledge.
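The mechanism can be illustrated with a deliberately simplified sketch. The toy probability table and the next_token helper below are invented for illustration and are not taken from any real model; the point is only that the sampling step scores continuations by how well they fit learned patterns, not by whether the resulting statement is true.

```python
import random

# Toy "language model": for each context, a distribution over possible
# next tokens. The weights reflect pattern fit, not factual accuracy.
pattern_probs = {
    ("Marie", "Curie", "won", "the"): {"Nobel": 0.85, "Turing": 0.10, "Fields": 0.05},
    ("Jane", "Doe", "won", "the"):    {"Nobel": 0.60, "Pulitzer": 0.25, "Oscar": 0.15},
}

def next_token(context):
    """Sample the next token purely from the learned pattern probabilities."""
    probs = pattern_probs[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# For a well-documented person, the most likely continuation happens to be factual.
print("Marie Curie won the", next_token(("Marie", "Curie", "won", "the")), "Prize")

# For an unknown person, the very same mechanism still produces a fluent,
# confident-sounding continuation -- a hallucination.
print("Jane Doe won the", next_token(("Jane", "Doe", "won", "the")), "Prize")
```

In both calls the sampling code is identical; nothing in it checks whether the generated claim corresponds to reality, which is exactly why structurally plausible but false statements can appear.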
Examples and risks
Example 1: A language model is asked about a person who does not appear in its training data. It may fabricate a biography, freely inventing awards, dates of birth and death, and so on.
Example 2: An AI system for medical advice "hallucinates" a non-existent study to support a recommendation because it cannot otherwise answer the question.
Such hallucinations are problematic because users often trust the AI's answers. In creative applications (e.g. writing stories), invented details may be harmless. In scientific, medical or legal contexts, however, hallucinated facts can lead to serious errors of judgment. In one well-known case, for example, an AI tool cited several invented court rulings in a legal brief, which seriously damaged the lawyer's credibility.