An AI hallucination occurs when a language model generates information that sounds confident and plausible but is factually incorrect or entirely fabricated. LLMs don't 'know' facts — they predict likely text sequences, which sometimes produces convincing nonsense.

How Hallucination Works

Common hallucinations: citing non-existent research papers, generating plausible but wrong code, inventing API endpoints that don't exist, or confidently stating incorrect dates or statistics. The model has no concept of truth — it generates probable-sounding text.

Mitigation strategies: RAG (ground responses in real documents), temperature reduction (less creative = fewer hallucinations), chain-of-thought prompting, citation requirements, and human review. No approach eliminates hallucinations entirely.
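The grounding idea behind RAG can be sketched in a few lines. The document store, keyword retrieval, and prompt template below are illustrative assumptions, not any particular library's API; real systems use embedding-based retrieval, but the structure is the same.

```python
# Minimal RAG sketch: ground the model's answer in retrieved documents.
# The documents, retrieval logic, and prompt wording are made up for illustration.

DOCUMENTS = {
    "pricing": "The Pro plan costs $20/month and includes 5 seats.",
    "limits": "API requests are capped at 100 per minute on the free tier.",
}

def retrieve(query: str, docs: dict) -> list:
    """Naive keyword retrieval: return docs sharing any word with the query."""
    query_words = set(query.lower().split())
    return [text for text in docs.values()
            if query_words & set(text.lower().split())]

def build_grounded_prompt(query: str) -> str:
    """Embed retrieved passages so the model answers from them, not from memory."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return (
        "Answer ONLY from the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

prompt = build_grounded_prompt("How much does the Pro plan cost?")
```

The instruction to refuse when the context lacks the answer is as important as the retrieval itself: it gives the model a sanctioned alternative to inventing one.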

Key Concepts

  • Confabulation — the model fills gaps in its knowledge with plausible but fabricated information; this is a core hallucination mechanism
  • Grounding — supplying real documents or data in the prompt so the model bases its responses on facts rather than on imagination
  • Uncertainty Calibration — a well-calibrated model expresses uncertainty when it is unsure; a poorly calibrated one states fabrications confidently
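Calibration can be quantified with expected calibration error (ECE): bucket predictions by stated confidence and compare each bucket's average confidence to its actual accuracy. The sketch below uses made-up prediction data purely for illustration.

```python
# Expected calibration error (ECE) sketch: a well-calibrated model's stated
# confidence should match its actual accuracy in each confidence bucket.
# The confidences and correctness flags below are made-up illustration data.

def expected_calibration_error(confidences, correct, n_bins=5):
    """Weighted average of |accuracy - mean confidence| across confidence bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into last bin
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / total) * abs(accuracy - avg_conf)
    return ece

confs = [0.9, 0.9, 0.8, 0.5, 0.5, 0.3]           # model's stated confidences
right = [True, True, True, True, False, False]   # whether each answer was correct
score = expected_calibration_error(confs, right)
```

An ECE of 0 means confidence tracks accuracy perfectly; a model that says "90% sure" while being right only half the time scores poorly, which is exactly the confident-fabrication failure mode described above.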


Frequently Asked Questions

Can hallucinations be eliminated?

Not completely with current technology. They can be significantly reduced through RAG, better prompting, temperature tuning, and human review — but LLMs can always generate incorrect information.

Why do LLMs hallucinate?

LLMs are trained to predict the next likely token, not to determine truth. They don't have a fact database — they have statistical patterns. When the training data doesn't cover a topic well, the model extrapolates plausibly but incorrectly.
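Next-token prediction, and the temperature knob mentioned under mitigation, come down to a softmax over token scores. The logits below are made-up illustration values; lowering the temperature sharpens the distribution so the model sticks to its highest-probability token.

```python
import math

# Softmax with temperature: how the sampling temperature reshapes the
# next-token probability distribution. Logits are made-up illustration values.

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw token scores into probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                     # hypothetical next-token scores
sharp = softmax_with_temperature(logits, temperature=0.2)  # near-greedy
flat = softmax_with_temperature(logits, temperature=2.0)   # more exploratory
```

Lower temperature concentrates probability mass on the top-scoring token, which is why it reduces (but cannot eliminate) fabrication: the model still has no notion of truth, it just samples less adventurously.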
