Hallucinations
You ask a question. The AI responds. It sounds confident, clear, maybe even helpful. But… it’s made the whole thing up.
That’s what we call a hallucination.
In simple terms, it’s when an LLM (e.g. ChatGPT) gives you an answer that’s factually wrong but sounds totally believable.
Why does this happen?
Because models like ChatGPT don’t “know” things. They’re not pulling facts from a database or verifying sources. As discussed in my previous post, they are predicting the next likely word in a sentence based on patterns in the data they were trained on.
Most of the time, that’s enough. But when the data’s thin, or the prompt is vague, the model starts to guess. And those guesses can turn into convincing fiction: fake citations, imaginary laws, people who don’t exist.
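To make that concrete, here’s a deliberately simplified sketch in Python. The prompts, names, and probability numbers are invented for illustration; a real model works over a huge vocabulary with a neural network, but the failure mode is the same: when no continuation is clearly supported by the training data, the model still picks one and states it with the same fluency.

```python
import random

# Toy next-token distributions (the numbers and prompts are made up for illustration).
next_token_probs = {
    "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "Nice": 0.03},
    "The 2019 case of Smith v.": {"Jones": 0.21, "Baker": 0.20, "Allen": 0.20, "State": 0.20, "Doe": 0.19},
}

def predict_next(prompt: str) -> str:
    """Sample the next token in proportion to its probability."""
    probs = next_token_probs[prompt]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

# Well-covered fact: one continuation dominates, so the answer is usually right.
print(predict_next("The capital of France is"))

# Thin coverage: the distribution is nearly flat, so the model "picks" a name
# just as fluently, even though none of the options is grounded in anything.
print(predict_next("The 2019 case of Smith v."))
```

Notice that the second call never signals uncertainty. It just returns a name, which is exactly how a fake citation gets born.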
The scary part? The more articulate the model and the more conviction in its output... the harder the hallucination is to spot.
In real-world settings, like legal work, medical advice, or enterprise search, that’s a big problem. Hallucinations can lead to bad calls, wasted time, or worse.
So how can we mitigate this?
That’s why building with AI models isn’t just about what the model can do. It’s about knowing where it breaks and putting the right guardrails in place. That might mean grounding the model in trusted data, setting clear prompt limits, citing the sources it used, or simply teaching it to say “I don’t know.”
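As a rough sketch of what “grounding plus an escape hatch” can look like, here’s a minimal example using the OpenAI Python SDK. The policy text, question, and model name are placeholders, and the same pattern works with any chat-style API: give the model only trusted context, ask it to quote what it relied on, and give it explicit permission to say it doesn’t know.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Trusted context retrieved from your own data (hard-coded here as a stand-in;
# in practice this would come from a search or retrieval step).
context = """
Policy 4.2: Employees may carry over up to five unused vacation days
into the next calendar year. Carried-over days expire on 31 March.
"""

question = "Can I carry over ten vacation days into next year?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; swap in whichever chat model you use
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using ONLY the context provided. "
                "Quote the part of the context you relied on. "
                "If the context does not contain the answer, reply exactly: "
                "\"I don't know based on the information I have.\""
            ),
        },
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)

print(response.choices[0].message.content)
```

This won’t eliminate hallucinations, but it narrows what the model is allowed to invent and makes the gaps visible instead of letting them pass as confident answers.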