September 3, 2025

Cybercriminals weaponizing Claude: Anthropic issues warning

Anthropic warned that its Claude AI tools were weaponized by cybercriminals, including North Korean actors. Misuse ranged from ransomware creation to attacks on healthcare systems. Experts caution this highlights AI’s growing role in sophisticated threats.

Anthropic disclosed that its Claude AI models have been weaponized in advanced cybercrime campaigns.

Threat actors, including North Korean groups, exploited Claude to fraudulently secure tech jobs, generate working ransomware code, and conduct automated cyberattacks against healthcare and government systems. Although Anthropic swiftly banned the malicious accounts and reinforced safeguards, cybersecurity experts warn this is a sobering sign of how rapidly AI is amplifying cyber threats.

The incident underscores the dual-use nature of AI technology: while enabling innovation, it can also empower malicious actors, raising urgent questions about safety, governance, and international controls.
