September 5, 2025

Why language models hallucinate

Language models hallucinate because standard training emphasizes accuracy over admitting uncertainty, encouraging guessing. Hallucinations stem from statistical pressure during next-word prediction and persist because evaluation methods reward confident errors.

Despite increasing capabilities, language models still hallucinate, confidently producing plausible but false statements, because current training and evaluation systems prioritize accuracy over uncertainty.

When models are assessed only on right answers, they are incentivized to guess rather than say “I don’t know,” as abstention yields no points. The research shows hallucinations naturally arise during next-word prediction, especially for low-frequency facts, due to statistical learning dynamics.
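To see why guessing wins under accuracy-only grading, here is a minimal sketch (not from the OpenAI paper; the function names and numbers are illustrative) comparing the expected score of a low-confidence guess with an abstention:

```python
# Under accuracy-only grading, a correct answer scores 1 and everything
# else (wrong answer or "I don't know") scores 0, so any nonzero chance
# of being right makes guessing the higher-expected-value choice.

def expected_score_accuracy(p_correct: float, abstain: bool) -> float:
    """Expected score under accuracy-only grading."""
    return 0.0 if abstain else p_correct

# Even a long-shot guess (10% chance of being right) beats abstaining,
# which earns nothing.
print(expected_score_accuracy(0.10, abstain=False))  # 0.10
print(expected_score_accuracy(0.10, abstain=True))   # 0.00
```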

To curb this, OpenAI argues for reforming evaluation metrics: penalize confident wrong answers more heavily and give credit, or partial credit, for appropriate expressions of uncertainty. Changing how benchmarks are scored may realign models toward more trustworthy behavior.
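As an illustration of that direction, here is a hypothetical penalized scoring rule (an assumption for this sketch, not the specific scheme OpenAI proposes) in which wrong answers cost points, so abstaining becomes rational below a confidence threshold:

```python
# A hypothetical scoring rule: +1 for a correct answer, -penalty for a
# wrong answer, 0 for abstaining. With a penalty, guessing only pays off
# above a confidence threshold.

def expected_score_penalized(p_correct: float, penalty: float = 1.0) -> float:
    """Expected score when wrong answers cost `penalty` points."""
    return p_correct * 1.0 - (1.0 - p_correct) * penalty

def should_answer(p_correct: float, penalty: float = 1.0) -> bool:
    """Answer only when the expected score beats abstaining (0 points)."""
    return expected_score_penalized(p_correct, penalty) > 0.0

# With a 1-point penalty, the break-even confidence is 50%:
print(should_answer(0.40))  # False -> abstain
print(should_answer(0.60))  # True  -> answer
```

With the penalty set to 1, a model is only rewarded for answering when it is more likely right than wrong; raising the penalty pushes the break-even confidence higher.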
