A hallucination occurs when an AI language model generates text that is convincingly wrong or misleading, confidently presenting false information as if it were true.
For example, an LLM hallucinates when it generates a description of a fictional city called ‘Centauri’, complete with detailed maps and population statistics, despite no such place existing.