Models
September 5, 2025

Why language models hallucinate

Language models hallucinate because standard training rewards accuracy over admitting uncertainty, which encourages guessing. Hallucinations originate in the statistical pressures of next-word prediction and persist because evaluation methods reward confident errors.

Despite increasing capabilities, language models still hallucinate, confidently producing plausible but false statements, because current training and evaluation systems prioritize accuracy over uncertainty.

When models are assessed only on right answers, they are incentivized to guess rather than say “I don’t know,” as abstention yields no points. The research shows hallucinations naturally arise during next-word prediction, especially for low-frequency facts, due to statistical learning dynamics.

To curb this, OpenAI argues for reforming evaluation metrics: penalize confident wrong answers more heavily and give partial credit for appropriately expressed uncertainty. Changing how benchmarks are scored could realign models toward more trustworthy behavior.
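To make the scoring change concrete, here is a minimal sketch of an uncertainty-aware grading rule. The reward and penalty values, the abstention phrases, and the function names are illustrative assumptions, not values or an API proposed by OpenAI; the point is only that once wrong answers carry an explicit penalty, abstaining can beat guessing.

```python
# Hypothetical scoring rule sketching the evaluation change the post argues for:
# confident wrong answers cost more than abstentions, which earn partial credit.
# The weights below (correct=1.0, abstain=0.3, wrong=-1.0) are illustrative
# assumptions, not values proposed by OpenAI.

from typing import Iterable

CORRECT_REWARD = 1.0    # full credit for a right answer
ABSTAIN_REWARD = 0.3    # partial credit for saying "I don't know"
WRONG_PENALTY = -1.0    # explicit penalty for a confident wrong answer


def score_answer(prediction: str, reference: str) -> float:
    """Score one model answer under the uncertainty-aware rubric."""
    answer = prediction.strip().lower()
    if answer in {"i don't know", "unsure", "cannot answer"}:
        return ABSTAIN_REWARD
    if answer == reference.strip().lower():
        return CORRECT_REWARD
    return WRONG_PENALTY


def benchmark_score(predictions: Iterable[str], references: Iterable[str]) -> float:
    """Average score over a benchmark; guessing no longer dominates abstaining."""
    pairs = list(zip(predictions, references))
    return sum(score_answer(p, r) for p, r in pairs) / len(pairs)
```

Under accuracy-only scoring, a guess with confidence p has expected value p, so guessing always beats abstaining. Under this rubric a guess is worth p(1.0) + (1 - p)(-1.0) = 2p - 1, so abstaining (0.3) is the better strategy whenever the model's confidence falls below roughly 0.65.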

OpenAI
