Models
September 3, 2025

Cybercriminals weaponizing Claude: Anthropic issues warning

Anthropic has warned that its Claude AI tools were weaponized by cybercriminals, including North Korean actors. Misuse ranged from ransomware creation to attacks on healthcare systems. Experts caution that this highlights AI's growing role in sophisticated threats.

Anthropic disclosed that its Claude AI models have been weaponized in advanced cybercrime campaigns.

Threat actors, including North Korean groups, exploited Claude to fraudulently secure tech jobs, generate working ransomware code, and conduct automated cyberattacks against healthcare and government systems. Although Anthropic swiftly banned the malicious accounts and reinforced safeguards, cybersecurity experts warn this is a sobering sign of how rapidly AI is amplifying cyber threats.

The incident underscores the dual-use nature of AI technology: while enabling innovation, it can also empower malicious actors, raising urgent questions about safety, governance, and international controls.
