September 5, 2025

Why language models hallucinate

Language models hallucinate because standard training emphasizes accuracy over admitting uncertainty, which encourages guessing. Hallucinations stem from statistical pressure during next-word prediction and persist because evaluation methods reward confident errors.

Despite increasing capabilities, language models still hallucinate, confidently producing plausible but false statements, because current training and evaluation systems prioritize accuracy over uncertainty.

When models are assessed only on right answers, they are incentivized to guess rather than say “I don’t know,” as abstention yields no points. The research shows hallucinations naturally arise during next-word prediction, especially for low-frequency facts, due to statistical learning dynamics.
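The incentive problem can be made concrete with a small expected-value sketch (the numbers and function are illustrative, not from the original post): under accuracy-only scoring, any nonzero chance of being right makes guessing score better than abstaining.

```python
def expected_score_accuracy_only(p_correct: float, abstain: bool) -> float:
    """Expected score when a correct answer earns 1 and everything else earns 0.

    Under this (accuracy-only) rubric, abstaining ("I don't know") is scored
    exactly like a wrong answer, so it can never beat guessing.
    """
    if abstain:
        return 0.0          # abstention earns no points
    return p_correct        # guessing: right with probability p_correct

# Even a 10% shot at being right beats abstaining under accuracy-only scoring.
print(expected_score_accuracy_only(0.1, abstain=False))  # 0.1
print(expected_score_accuracy_only(0.1, abstain=True))   # 0.0
```

This is why, on benchmarks scored purely by accuracy, a model that always guesses will outscore an otherwise identical model that honestly abstains when unsure.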

To curb this, OpenAI argues for reforming evaluation metrics: penalize confident wrong answers more and reward uncertainty or partial credit. Changing how benchmarks are scored may realign models towards more trustworthy behavior.
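One way to see how rescoring changes the incentive (a hypothetical rubric, with a penalty parameter chosen for illustration): once confident wrong answers cost points, guessing only pays when the model is sufficiently sure, so abstention becomes the rational choice under uncertainty.

```python
def expected_score_penalized(p_correct: float, abstain: bool,
                             penalty: float = 1.0) -> float:
    """Expected score under a rubric that penalizes confident errors.

    Correct answer: +1; wrong answer: -penalty; abstention: 0.
    With penalty = 1, guessing has positive expected value only
    when p_correct > 0.5.
    """
    if abstain:
        return 0.0
    return p_correct * 1.0 - (1.0 - p_correct) * penalty

# An uncertain model (30% confident) now does better by abstaining...
print(expected_score_penalized(0.3, abstain=False))  # -0.4
print(expected_score_penalized(0.3, abstain=True))   # 0.0
# ...while a confident model (70%) is still rewarded for answering.
print(expected_score_penalized(0.7, abstain=False))  # 0.4
```

Raising `penalty` raises the confidence threshold at which answering beats abstaining, which is the realignment the post argues benchmark designers should aim for.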
