Definition
An AI hallucination occurs when a language model generates false information and presents it as fact with confidence. LLMs do not 'know' what is true: they generate statistically probable text. For businesses, this means every AI output must be verified, and retrieval-augmented generation (RAG) systems are often preferred because they anchor responses to real documents.
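To make the RAG idea concrete, here is a minimal sketch of the pattern: retrieve the documents most relevant to a question, then constrain the model's prompt to those sources. The document store, the keyword-overlap retriever, and the prompt wording are all illustrative assumptions, not any particular vendor's API.

```python
# Minimal sketch of the RAG pattern described above. The documents,
# scoring function, and prompt format are illustrative assumptions.

DOCUMENTS = [
    "Acme Corp was founded in 2009 and is headquartered in Berlin.",
    "Acme's flagship product, WidgetPro, launched in 2015.",
    "Acme reported 1,200 employees at the end of 2023.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Anchor the model to retrieved text, with an explicit 'I don't know' exit."""
    sources = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the sources below. "
        "If the answer is not in the sources, say 'I don't know.'\n"
        f"Sources:\n{sources}\n"
        f"Question: {query}\nAnswer:"
    )

query = "When was Acme Corp founded?"
prompt = build_prompt(query, retrieve(query, DOCUMENTS))
print(prompt)  # this grounded prompt would then be sent to the LLM
```

A production system would swap the keyword scorer for embedding similarity, but the essential step is the same: the model is handed verifiable source text and an explicit way to decline, which is what reduces confident fabrication.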