Overview

This page is the atomic definition; mitigation tooling is covered under evaluation and rag.

Definition

A hallucination is a confident model output that is not grounded in the input or in verifiable facts. Examples: a fabricated citation, a wrong historical date, a non-existent API method, an invented function signature, or a confident contradiction of the source document the model was told to summarize. Hallucinations are a byproduct of autoregressive decoding: the model emits a plausible next token given its context, with no built-in check that the result is true. Mitigations include grounding with retrieval-augmented-generation, requiring citations, constraining output with structured-output, and evaluation against a golden-set.
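The golden-set evaluation mentioned above reduces to a simple loop: run the model over prompts with verified answers and measure how often the output stays grounded. A minimal sketch follows; the dataset shape, the substring grading rule, and the stubbed model call are all illustrative assumptions, not a prescribed API (real evaluations use exact-match scoring, NLI models, or human review).

```python
from dataclasses import dataclass

@dataclass
class GoldenExample:
    prompt: str           # input sent to the model
    grounded_answer: str  # answer verified against a trusted source

def is_grounded(output: str, reference: str) -> bool:
    # Naive grading rule: the verified answer must appear in the output.
    # Swap in a stricter checker for production evaluations.
    return reference.lower() in output.lower()

def evaluate(model_fn, golden_set: list[GoldenExample]) -> float:
    """Return the fraction of outputs that match their grounded answer."""
    hits = 0
    for example in golden_set:
        output = model_fn(example.prompt)
        if is_grounded(output, example.grounded_answer):
            hits += 1
    return hits / len(golden_set)

# Usage with a stubbed model:
golden = [GoldenExample("Capital of France?", "Paris")]
print(evaluate(lambda p: "The capital of France is Paris.", golden))  # 1.0
```

Tracking this score over time catches regressions: a prompt or model change that raises the hallucination rate shows up as a drop before users see it.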

When it applies

Plan for hallucinations in any model output that an end user might trust. Stakes scale with domain: medical, legal, and financial outputs need stricter grounding and verification than chitchat.

Example

A model asked to summarize a paper cites “Smith et al. (2023)” with a plausible title and journal, but the paper does not exist. The user copies the citation into a literature review; the error is caught only on submission.
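This class of error can be caught before submission by checking each extracted citation against a bibliographic index. Below is a minimal sketch using Crossref's public works API; the similarity threshold and matching logic are illustrative assumptions, not a verification standard.

```python
import requests
from difflib import SequenceMatcher

def citation_exists(title: str, threshold: float = 0.85) -> bool:
    """Return True if Crossref has a work whose title closely matches."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    # Compare the queried title against each candidate title returned.
    for item in resp.json()["message"]["items"]:
        for candidate in item.get("title", []):
            if SequenceMatcher(None, title.lower(), candidate.lower()).ratio() >= threshold:
                return True
    return False
```

A fabricated "Smith et al. (2023)" title would typically return False here, surfacing the hallucination during drafting rather than on submission.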

Citing this term

See Hallucination (llmbestpractices.com/glossary/hallucination).