Safety, Alignment & Ethics

Hallucination

When an AI confidently presents invented information as fact.

Definition

Hallucination is arguably the most commercially significant limitation of current LLMs. A hallucinating AI states fabricated facts with the same confidence as accurate ones — inventing statistics, citing non-existent papers, making up names, or misattributing quotes. This happens because language models generate plausible-sounding text by predicting likely next words, rather than retrieving verified facts. Mitigations include retrieval-augmented generation, grounding responses in authoritative sources, and requiring human review of any factual claims.
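To make the retrieval-augmented generation (RAG) mitigation concrete, here is a minimal sketch in Python: relevant passages are retrieved from a trusted document store and placed in the prompt, and the model is instructed to answer only from those sources. The keyword-overlap scoring, the example document store, and all function names are illustrative assumptions, not a production retrieval system.

```python
import re

# Toy document store standing in for an authoritative knowledge base
# (illustrative content only).
STORE = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm.",
    "Shipping is free on orders over 50 GBP.",
]

def score(query: str, passage: str) -> int:
    """Toy relevance score: count query words that appear in the passage."""
    query_words = set(re.findall(r"\w+", query.lower()))
    return sum(1 for w in re.findall(r"\w+", passage.lower()) if w in query_words)

def retrieve(query: str, store: list[str], k: int = 2) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(store, key=lambda p: score(query, p), reverse=True)[:k]

def build_grounded_prompt(query: str, store: list[str]) -> str:
    """Assemble a prompt that restricts the model to the retrieved sources."""
    sources = retrieve(query, store)
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(sources))
    return (
        "Answer using ONLY the numbered sources below. "
        "If the answer is not in the sources, say you do not know.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

prompt = build_grounded_prompt("What is the refund policy?", STORE)
print(prompt)
```

In a real deployment the keyword scorer would be replaced by embedding-based similarity search over a vector database, but the principle is the same: the model is constrained to verifiable source material instead of its own parametric memory, which reduces (though does not eliminate) hallucination.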

Why this matters for your business

In any customer-facing, legal, financial, or compliance context, all AI-generated factual claims must be verified before use. This is not optional — hallucinations can cause real reputational and legal harm.

Heard enough terminology — ready to talk outcomes?

We translate AI concepts into measurable business results. No upfront fees — you pay only when independently verified results are delivered.


Disclaimer

This definition is provided for educational and informational purposes only. It represents a general explanation of a technical concept and does not constitute professional, technical, or investment advice. Artificial intelligence is a rapidly evolving field; terminology, techniques, and capabilities change frequently. Coaley Peak Ltd makes no warranty as to the accuracy, completeness, or currency of the information provided. Nothing on this page should be relied upon as the sole basis for commercial, technical, legal, or investment decisions without independent professional advice.

Document reference: ISO_webpage_knowledge-base_glossary_v1

Last modified: 29 March 2026
