Despite increasing capabilities, language models still hallucinate: they confidently produce plausible but false statements, because current training and evaluation pipelines reward accuracy alone rather than calibrated expressions of uncertainty.
When models are assessed only on whether they produce the right answer, they are incentivized to guess rather than say “I don’t know,” since abstention earns no points. The research further shows that hallucinations arise naturally from next-word prediction, especially for facts that appear rarely in the training data, as a consequence of the statistics of learning.
To curb this, OpenAI argues for reforming evaluation metrics: penalize confident wrong answers more heavily than abstentions, and give credit for appropriately expressing uncertainty. Rescoring benchmarks along these lines could realign models toward more trustworthy behavior.
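As a concrete illustration of such a scoring change, here is a minimal sketch of one possible rule (hypothetical Python, not OpenAI’s benchmark code): a correct answer earns one point, an abstention earns zero, and a wrong answer costs t/(1−t) points for a chosen confidence threshold t, so guessing only pays off in expectation when the model’s confidence exceeds t.

```python
# Illustrative sketch of a confidence-thresholded scoring rule.
# The threshold t and the penalty t / (1 - t) are chosen so that guessing
# below confidence t has negative expected value; names here are hypothetical.

def score_response(is_correct: bool | None, t: float = 0.75) -> float:
    """Score one benchmark response.

    is_correct: True/False for an attempted answer, None for an abstention
                (e.g. the model answered "I don't know").
    t:          confidence threshold announced to the model in the prompt.
    """
    if is_correct is None:        # abstaining earns zero, never a penalty
        return 0.0
    if is_correct:                # correct answers earn full credit
        return 1.0
    return -t / (1.0 - t)         # confident wrong answers are penalized


def expected_value_of_guessing(p: float, t: float = 0.75) -> float:
    """Expected score of answering with true confidence p: p*1 + (1-p)*(-t/(1-t))."""
    return p * 1.0 + (1.0 - p) * (-t / (1.0 - t))


if __name__ == "__main__":
    for p in (0.5, 0.75, 0.9):
        print(f"confidence {p:.2f}: EV of guessing = {expected_value_of_guessing(p):+.2f}")
```

Under this kind of rule, a model guessing at 50% confidence loses points on average while abstaining never does, which removes the incentive to bluff that pure right/wrong accuracy scoring creates.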