December 4, 2024

Claude 3 models raise safety concerns over hallucinations and factual accuracy

Users reported Claude 3 producing convincing but false legal citations, raising concerns about its reliability in legal-tech. Anthropic acknowledged the issue, prompting calls for stricter safeguards and human oversight.

Users have reported that Claude 3, Anthropic's advanced AI model, generates highly realistic yet entirely fabricated legal citations, a phenomenon known as "hallucination." This issue has raised significant concerns about the reliability of AI in legal-tech applications, where accuracy is paramount.

Anthropic has acknowledged the problem, emphasizing the need for improved safeguards and transparency. Experts recommend implementing retrieval-augmented generation (RAG) techniques and maintaining human oversight to mitigate such risks.
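To make the RAG recommendation concrete, here is a minimal sketch of how a citation-verification step might ground model output against a trusted source before it reaches a user. The `TRUSTED_CITATIONS` corpus, the `verify_citation` helper, and the 0.85 similarity threshold are illustrative assumptions for this sketch, not Anthropic's implementation.

```python
# Sketch: gate model-produced citations through a trusted corpus instead of
# trusting them on the model's say-so. All names and data are illustrative.

from difflib import SequenceMatcher

# Hypothetical trusted corpus of verified case citations (stand-in data;
# a real system would query an authoritative legal database).
TRUSTED_CITATIONS = [
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Miranda v. Arizona, 384 U.S. 436 (1966)",
    "Marbury v. Madison, 5 U.S. 137 (1803)",
]

def retrieve_best_match(citation: str) -> tuple[str, float]:
    """Return the closest trusted citation and its similarity score."""
    scored = [
        (ref, SequenceMatcher(None, citation.lower(), ref.lower()).ratio())
        for ref in TRUSTED_CITATIONS
    ]
    return max(scored, key=lambda pair: pair[1])

def verify_citation(citation: str, threshold: float = 0.85) -> bool:
    """Accept a citation only if it closely matches the trusted corpus."""
    _, score = retrieve_best_match(citation)
    return score >= threshold

if __name__ == "__main__":
    # A fabricated citation in the style users reported, plus a genuine one.
    hallucinated = "Smith v. Dataworks, 512 U.S. 901 (1994)"
    genuine = "Miranda v. Arizona, 384 U.S. 436 (1966)"

    for cite in (hallucinated, genuine):
        verdict = "verified" if verify_citation(cite) else "needs human review"
        print(f"{cite} -> {verdict}")
```

In practice the string-similarity lookup would be replaced by retrieval against a real legal database, but the gating principle is the same: an unmatched citation is routed to a human reviewer rather than presented as fact.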

The incident underscores the importance of cautious integration of AI tools in legal settings, ensuring that technological advancements do not compromise the integrity of legal processes.

