Glossary - Model Hallucination
AI can do some amazing stuff, but sometimes it just makes things up. That’s what people call hallucinations, and knowing how they work helps you figure out when to trust the output—and when to double-check.
What is Model Hallucination?
Model hallucination is when an AI confidently gives you an answer that looks right but isn’t. The answer might be flat-out false, completely invented, or simply not backed by the information the model was given.
How Does Model Hallucination Work?
AI models are trained to predict what words come next, not to double-check the truth. When they don’t know something—or the prompt nudges them the wrong way—they often “fill in the gaps” with guesses that sound convincing but aren’t accurate.
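To make that concrete, here’s a tiny sketch in Python of what “predicting the next word” looks like. The candidate words, the scores, and the prompt are all made up for illustration; a real model scores tens of thousands of tokens with a neural network. The key point is the same, though: the code picks the most plausible continuation, and nothing in it checks whether that continuation is true.

import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Toy scores a model might assign to candidate next words after the prompt
# "The capital of Australia is". The numbers are invented for illustration.
candidate_scores = {
    "Canberra": 2.1,        # correct, but only slightly favored
    "Sydney": 1.9,          # plausible-sounding and wrong
    "Melbourne": 1.2,
    "I'm not sure": -1.0,   # admitting ignorance rarely scores highest
}

probs = softmax(candidate_scores)
print({tok: round(p, 2) for tok, p in probs.items()})
print("Model continues with:", max(probs, key=probs.get))

Notice how close “Sydney” ends up to “Canberra”: if the training data had leaned the other way, the confident-sounding wrong answer would simply win.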
Key Features
Hallucinations usually show up as confident, well-written statements that are wrong. Sometimes they contradict the source, sometimes they add details that don’t exist, and other times they just can’t be verified at all.
Benefits
When accuracy doesn’t matter—like writing fiction, brainstorming wild ideas, or coming up with creative taglines—hallucination can actually be useful. It gives you unexpected suggestions you might not have thought of yourself.
Use Cases
In serious areas like healthcare, finance, or law, hallucinations are dangerous because wrong information can cause real harm. They’re also a big risk when summarizing documents or pulling out structured data, where every word needs to match the source.
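For extraction-style work, one cheap guard is to check that every value the model hands back actually appears word-for-word in the source. The sketch below is a hypothetical Python example; the field names and the “extracted” dictionary stand in for whatever your model actually produced.

def find_unsupported_fields(extracted, source):
    """Return the fields whose values don't appear verbatim in the source text."""
    return [field for field, value in extracted.items() if str(value) not in source]

source = "Invoice 4821 was issued on 12 March 2024 for a total of 950 EUR."

# Pretend this came back from a model asked to pull structured data.
extracted = {
    "invoice_number": "4821",
    "date": "12 March 2024",
    "total": "1950 EUR",   # an extra digit the source never contained
}

print(find_unsupported_fields(extracted, source))  # ['total']

Verbatim matching is a blunt check, since it misses paraphrases and legitimate reformatting, but it catches the classic invented-digit or made-up-value hallucination for almost no effort.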
Types of Model Hallucination
Hallucinations don’t all look the same, and researchers usually group them into a few categories. Some are about faithfulness, meaning whether the AI sticks to its source or drifts away from it. Others are about factuality, meaning whether what it says is true in the real world or not. On top of that, mistakes can come from gaps in knowledge, sloppy reasoning, mixing up names or numbers, or simply losing track of the context. A quick example of the main categories follows right after the list.
Intrinsic
The model directly goes against the source material.
Extrinsic
The model adds extra details that weren’t in the source.
Non-factual
Claims something that’s just plain false in the real world.
Unverifiable
Shares information you can’t actually check or prove.
Knowledge gap
The model doesn’t know the answer, so it makes one up.
Reasoning error
It trips up on logic and draws the wrong conclusion.
Entity confusion
Mixes up names, dates, or numbers.
Context drift
Misinterprets the input or wanders away from the given context.
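To make the first few categories concrete, here’s one invented source sentence and a handful of model outputs, each labeled with the type of hallucination it illustrates. The sentences are made up purely for illustration.

source = "The report was published in 2021 and covers sales in Europe."

examples = {
    # Intrinsic: directly contradicts the source.
    "intrinsic": "The report was published in 2019.",
    # Extrinsic: adds a detail the source never mentions.
    "extrinsic": "The report also covers sales in Asia.",
    # Non-factual: false about the real world, regardless of the source.
    "non-factual": "Europe has exactly 12 countries.",
    # Unverifiable: impossible to check against the source or public records.
    "unverifiable": "The report's lead author prefers writing on Tuesdays.",
}

for kind, claim in examples.items():
    print(f"{kind:>12}: {claim}")

The remaining types (knowledge gap, reasoning error, entity confusion, context drift) describe why the model slipped rather than how the output relates to the source, so a single wrong answer can carry more than one label.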
How to Choose the Right One
You don’t really choose a “kind” of hallucination; you decide how much risk you’re okay with. If you’re working on something creative, a bit of guesswork is fine. If the stakes are high, you’ll want tighter guardrails, like giving the model reliable data to work from and letting it say “I don’t know.”

There isn’t a perfect cure for hallucinations; what usually works is mixing a few small habits. Give the model good sources so it doesn’t have to guess. If it doesn’t know, let it say so. For tasks where accuracy really matters, you can fence in its answers with rules or fixed formats. And at the end of the day, treat it like any other draft: check it yourself or have someone else look it over before you use it.
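To show what “give it good sources and let it say I don’t know” can look like in practice, here’s a minimal sketch. The prompt wording, the snippets, and the build_grounded_prompt helper are placeholders rather than any particular vendor’s API; the shape of the guardrail is the point: answer only from the supplied context, or decline.

def build_grounded_prompt(question, snippets):
    """Assemble a prompt that restricts the model to the supplied sources."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources don't contain the answer, reply exactly: I don't know.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical retrieved snippets; in a real system these would come from
# your own documents or a search step.
snippets = [
    "The warranty on model X200 lasts 24 months from the date of purchase.",
    "Warranty claims must be filed through the online support portal.",
]

prompt = build_grounded_prompt("How long is the X200 warranty?", snippets)
print(prompt)
# call_model(prompt) would go here; it stands in for whatever model client you use.

Putting the decline option directly in the prompt gives the model a graceful exit, so it doesn’t have to invent an answer just to produce something. Whatever comes back still deserves a human read before it ships.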