Safety, Alignment & Ethics
Hallucination
When an AI confidently states false information that it has invented.
Definition
Hallucination is arguably the most commercially significant limitation of current LLMs. A hallucinating model states fabricated facts with the same confidence as accurate ones: it invents statistics, cites non-existent papers, makes up names, or misattributes quotes. This happens because language models generate statistically plausible text, predicted token by token from patterns in their training data, rather than retrieving verified facts. Mitigations include retrieval-augmented generation, grounding outputs in authoritative sources, and requiring human review of any factual claims.
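To make the retrieval-augmented generation and grounding mitigations concrete, here is a minimal sketch in Python. It is illustrative only: it uses naive keyword-overlap retrieval over an in-memory list of passages and stops at building a grounded prompt rather than calling a real LLM; the function names, the example documents, and the prompt wording are all assumptions, and a production system would use a proper retriever (for example, a vector store) and an actual model client.

```python
# Illustrative sketch of retrieval-augmented generation (RAG).
# Retrieval here is a naive keyword-overlap ranking over an in-memory list;
# the grounded prompt would then be sent to an LLM (call omitted).

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query words they share, highest first."""
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(doc.lower().split())), doc)
        for doc in documents
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the sources below. "
        "If the sources do not contain the answer, say you do not know.\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    knowledge_base = [
        "Retrieval-augmented generation supplies an LLM with looked-up passages.",
        "Grounding ties model outputs to verified source material.",
    ]
    question = "What does retrieval-augmented generation supply to an LLM?"
    passages = retrieve(question, knowledge_base)
    print(build_grounded_prompt(question, passages))
```

The design point is that the model is asked to answer from supplied sources, and to decline when they are insufficient, rather than relying on whatever plausible text it would otherwise generate; human review of the final answer remains a separate safeguard.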
Related Terms
Confabulation
Another term for hallucination — the AI fills gaps in knowledge with plausible-sounding fiction.
RAG (Retrieval-Augmented Generation)
Combining an LLM with a search system so it can look up current or specific information before responding.
Grounding
Connecting AI outputs to verified real-world information to reduce hallucination.
AI Safety
The field focused on preventing AI from causing harm — intentional or unintentional.
Disclaimer
This definition is provided for educational and informational purposes only. It represents a general explanation of a technical concept and does not constitute professional, technical, or investment advice. Artificial intelligence is a rapidly evolving field; terminology, techniques, and capabilities change frequently. Coaley Peak Ltd makes no warranty as to the accuracy, completeness, or currency of the information provided. Nothing on this page should be relied upon as the sole basis for commercial, technical, legal, or investment decisions without independent professional advice.
Document reference: ISO_webpage_knowledge-base_glossary_v1
Last modified: 29 March 2026