What Is Hallucination?
AI Hallucination
An AI hallucination occurs when a language model generates information that sounds confident and plausible but is factually incorrect or entirely fabricated. LLMs don't 'know' facts — they predict likely text sequences, which sometimes produces convincing nonsense.
How Hallucination Works
Common hallucinations include citing non-existent research papers, generating plausible but broken code, inventing API endpoints that don't exist, and confidently stating incorrect dates or statistics. The model has no concept of truth; it generates probable-sounding text.
Mitigation strategies include RAG (grounding responses in real documents), temperature reduction (less randomness means fewer off-distribution outputs), chain-of-thought prompting, citation requirements, and human review. No approach eliminates hallucinations entirely.
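The grounding idea behind RAG can be sketched in a few lines. This is a toy illustration, not a real retrieval pipeline: the documents, the keyword-overlap retriever, and the prompt wording are all hypothetical stand-ins (production systems use embedding-based search).

```python
# Minimal sketch of RAG-style grounding. The documents and the
# keyword-overlap retriever below are hypothetical examples.

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_grounded_prompt(query, documents):
    """Stuff retrieved passages into the prompt so the model answers
    from the provided text rather than its parametric memory."""
    context = "\n".join(retrieve(query, documents))
    return (f"Answer using ONLY the context below. "
            f"If the answer is not in the context, say so.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

docs = [
    "The API rate limit is 60 requests per minute.",
    "Billing runs on the first day of each month.",
    "Support is available Monday through Friday.",
]
prompt = build_grounded_prompt("What is the API rate limit?", docs)
```

The key design choice is the instruction to answer only from the supplied context and to admit when the answer is absent, which turns "I don't know" into an acceptable output instead of an invitation to fabricate.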
Key Concepts
- Confabulation — When the model fills gaps in knowledge with plausible but fabricated information — a core hallucination mechanism
- Grounding — Providing real documents or data in the prompt so the model bases responses on facts rather than imagination
- Uncertainty Calibration — Well-calibrated models express uncertainty when they're unsure — poorly calibrated ones state fabrications confidently
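The calibration concept above can be made concrete with a toy metric. The predictions below are invented for illustration; the "gap" here is a simplified stand-in for proper calibration measures such as expected calibration error.

```python
# Toy illustration of uncertainty calibration (hypothetical predictions,
# not outputs of a real model).

def calibration_gap(predictions):
    """Average |confidence - correctness| over (confidence, correct) pairs.
    0.0 would be perfect calibration; large gaps mean over- or
    under-confidence."""
    return sum(abs(conf - float(correct))
               for conf, correct in predictions) / len(predictions)

# A well-calibrated model: confident when right, uncertain when wrong.
calibrated = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]

# An overconfident model: states 0.95 confidence even when it fabricates.
overconfident = [(0.95, True), (0.95, False), (0.95, False), (0.95, True)]
```

Comparing the two gaps shows why overconfidence is dangerous: the fabrications arrive with the same confident tone as the correct answers, so readers cannot use the model's tone as a truth signal.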
Hallucination Educators
@leilagharani
Excel. Power Query. Copilot. ChatGPT. Power BI. PowerPoint. You use them every day to automate Excel and your work - s...
@openai
OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity.
@academind
There's always something to learn! We create courses and tutorials on tech-related topics since 2016! We teach develop...
Frequently Asked Questions
Can hallucinations be eliminated?
Not completely with current technology. They can be significantly reduced through RAG, better prompting, temperature tuning, and human review — but LLMs can always generate incorrect information.
Why do LLMs hallucinate?
LLMs are trained to predict the next likely token, not to determine truth. They don't have a fact database — they have statistical patterns. When the training data doesn't cover a topic well, the model extrapolates plausibly but incorrectly.
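Next-token prediction and the effect of temperature can be shown with a small numeric sketch. The logits and token labels are invented for illustration; real models produce logits over tens of thousands of tokens.

```python
import math

# Sketch of temperature-scaled next-token probabilities (toy logits,
# hypothetical tokens). Lower temperature sharpens the distribution,
# reducing the chance of sampling an unlikely continuation.

def softmax_with_temperature(logits, temperature=1.0):
    """Convert logits to probabilities; temperature < 1 sharpens the peak."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # e.g. candidate tokens "Paris", "Lyon", "Berlin"
hot = softmax_with_temperature(logits, temperature=1.5)
cold = softmax_with_temperature(logits, temperature=0.3)
# cold[0] > hot[0]: the most likely token dominates at low temperature.
```

This is why temperature reduction is listed as a mitigation: it biases sampling toward the model's highest-probability continuation. It does not make that continuation true, which is why grounding and review are still needed.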