Overview
This page gives the atomic definition; the mitigation tooling is covered under evaluation and rag.
Definition
A hallucination is a confident model output that is not grounded in the input or in verifiable facts. Examples: a fabricated citation, a wrong historical date, a non-existent API method, an invented function signature, or a confident contradiction of the source document the model was told to summarize. Hallucinations stem from the model's autoregressive nature: it generates the most probable next token, not the most truthful one. Mitigations include grounding with retrieval-augmented-generation, requiring citations, constraining output with structured-output, and evaluating against a golden-set.
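The grounding idea behind these mitigations can be sketched as a crude lexical-overlap check: flag any output sentence that shares too few content words with the retrieved source passages. This is a heuristic illustration only, not a production groundedness scorer; the `min_overlap` threshold and the word filtering are arbitrary choices.

```python
def grounded(sentence: str, sources: list[str], min_overlap: int = 3) -> bool:
    """Heuristic: True if at least `min_overlap` content words of
    `sentence` also appear in some source passage."""
    # Keep words longer than 3 chars as rough "content" words.
    words = {w.lower().strip(".,") for w in sentence.split() if len(w) > 3}
    for src in sources:
        src_words = {w.lower().strip(".,") for w in src.split()}
        if len(words & src_words) >= min_overlap:
            return True
    return False

sources = ["The Eiffel Tower was completed in 1889 for the World's Fair."]
print(grounded("The Eiffel Tower was completed in 1889.", sources))   # True
print(grounded("The tower was designed by Smith in 1901.", sources))  # False
```

Real systems use embedding similarity or an llm-as-judge rather than word overlap, but the shape is the same: every claim must trace back to a source or be flagged.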
When it applies
Plan for hallucinations in any model output that an end user might trust. Stakes scale with domain: medical, legal, and financial outputs need stricter grounding and verification than chitchat.
Example
A model asked to summarize a paper cites “Smith et al. (2023)” with a plausible title and journal, but the paper does not exist. The user copies the citation into a literature review; the error is caught only on submission.
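A check that would have caught this citation can be sketched as follows: extract "Author (Year)" patterns from the output and flag any not present in a known reference list. The regex and the bibliography format are illustrative assumptions, not a standard.

```python
import re

# Matches "Smith (2023)" and "Smith et al. (2023)"; a simplification of
# real citation formats.
CITATION = re.compile(r"([A-Z][a-z]+(?: et al\.)?) \((\d{4})\)")

def unverified_citations(text: str, bibliography: set[tuple[str, str]]):
    """Return (author, year) pairs cited in `text` but absent from
    the trusted bibliography."""
    return [c for c in CITATION.findall(text) if c not in bibliography]

bib = {("Jones", "2021")}
summary = "As Jones (2021) showed, ... Smith et al. (2023) report ..."
print(unverified_citations(summary, bib))  # [('Smith et al.', '2023')]
```

Flagged citations are not necessarily wrong, only unverified; the point is to make every fabricated reference visible before a human trusts it.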
Related concepts
- retrieval-augmented-generation - the primary mitigation for factual hallucinations.
- evaluation - the discipline that catches hallucinations before they ship.
- llm-as-judge - one technique for scoring groundedness at scale.
- rag-citations - citation requirements that make hallucinations visible.
- structured-output - schemas that constrain the space of fabricable outputs.
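The structured-output entry above can be illustrated with a minimal guard that rejects any model reply not matching an expected JSON shape, so free-form fabricated prose never reaches the user. The `answer`/`source_ids` schema here is a hypothetical example, not a fixed convention.

```python
import json

# Hypothetical schema: an answer plus the IDs of the retrieved sources
# that support it.
EXPECTED_KEYS = {"answer", "source_ids"}

def parse_constrained(reply: str):
    """Return the parsed reply if it is a JSON object with exactly the
    expected keys; otherwise None (reject)."""
    try:
        obj = json.loads(reply)
    except json.JSONDecodeError:
        return None
    if not isinstance(obj, dict) or set(obj) != EXPECTED_KEYS:
        return None
    return obj

good = '{"answer": "1889", "source_ids": [3]}'
bad = "The tower was completed in 1850, as everyone knows."
print(parse_constrained(good))  # {'answer': '1889', 'source_ids': [3]}
print(parse_constrained(bad))   # None
```

Forcing the model to name its sources in a fixed field does not prevent hallucination by itself, but it shrinks the space of fabricable outputs and makes ungrounded answers machine-checkable.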
Citing this term
See Hallucination (llmbestpractices.com/glossary/hallucination).