
Hallucination

When an LLM produces output that sounds confident but is factually wrong. The fundamental challenge of using LLMs for anything with stakes. Mitigated by RAG, source citations, and human-in-the-loop review. GPT-5.5 reportedly cut hallucinations by 60% relative to GPT-5.4, but no model eliminates them.

How it works

Hallucinations stem from how the model is trained: the objective rewards producing plausible-sounding text, not verifying facts. When asked about topics underrepresented in its training data, the model fills the gap with plausible but wrong details. Common mitigations are to ground answers in retrieved context (RAG), require source citations, run automated fact-checks, and route low-confidence answers to human reviewers, as in the sketch below.
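
A minimal sketch of how these mitigations fit together. `retrieve`, `call_llm`, the `Answer` fields, and the 0.7 confidence threshold are hypothetical placeholders, not any particular library's API; a real system would swap in an actual retriever, model call, and calibrated confidence score.

```python
from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    citation: str | None  # source document the answer claims to be grounded in
    confidence: float     # estimated confidence in [0, 1]


def retrieve(question: str) -> list[str]:
    """Placeholder retriever: a real system would query a document store."""
    return ["v2.5 release notes: adds SSO support and an audit log."]


def call_llm(prompt: str) -> Answer:
    """Placeholder LLM call: a real system would call a model API."""
    return Answer(
        text="v2.5 adds SSO support and an audit log.",
        citation="v2.5 release notes",
        confidence=0.92,
    )


def answer_with_guardrails(question: str) -> str:
    # RAG: ground the prompt in retrieved documents rather than model memory.
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer only from the context below and cite your source. "
        "If the context is insufficient, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    answer = call_llm(prompt)

    # Require a citation and route low-confidence answers to human review.
    if answer.citation is None or answer.confidence < 0.7:
        return "Escalated to human review: missing citation or low confidence."
    return f"{answer.text} (source: {answer.citation})"


if __name__ == "__main__":
    print(answer_with_guardrails("What's the new feature in our v2.5 release?"))
```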

Example

An agent without RAG, asked "What's the new feature in our v2.5 release?", might invent a feature based on what a "v2.5 release" typically contains. The same agent with RAG retrieves the actual release notes and answers correctly, with a citation.
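
A rough illustration of the difference between the two prompts. The question comes from the example above; the release-notes text and the instruction wording are made up for the sketch.

```python
question = "What's the new feature in our v2.5 release?"

# Without RAG: the model sees only the question and must rely on whatever
# "v2.5 release" patterns it absorbed in training, so it may invent a feature.
prompt_without_rag = question

# With RAG: the retrieved release notes are placed in the prompt, and the
# model is instructed to answer from them and cite the source.
release_notes = "v2.5 release notes: adds SSO support and an audit log."
prompt_with_rag = (
    "Answer only from the context below and cite your source.\n\n"
    f"Context: {release_notes}\n\n"
    f"Question: {question}"
)
```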
