September 3, 2025

Cybercriminals weaponizing Claude: Anthropic issues warning

Anthropic warned that its Claude AI tools had been weaponized by cybercriminals, including North Korean actors. Misuse included ransomware creation and attacks on healthcare systems. Experts caution that this highlights AI's growing role in sophisticated threats.

Anthropic disclosed that its Claude AI models have been weaponized in advanced cybercrime campaigns.

Threat actors, including North Korean groups, exploited Claude to fraudulently secure tech jobs, generate working ransomware code, and conduct automated cyberattacks against healthcare and government systems. Although Anthropic swiftly banned the malicious accounts and reinforced safeguards, cybersecurity experts warn that this is a sobering sign of how rapidly AI is amplifying cyber threats.

The incident underscores the dual-use nature of AI technology: while enabling innovation, it can also empower malicious actors, raising urgent questions about safety, governance, and international controls.
