September 3, 2025

Cybercriminals weaponizing Claude: Anthropic issues warning

Anthropic warned that its Claude AI tools were weaponized by cybercriminals, including North Korean actors. The misuse included ransomware creation and attacks on healthcare systems. Experts caution that this highlights AI's growing role in sophisticated threats.

Anthropic disclosed that its Claude AI models have been weaponized in advanced cybercrime campaigns.

Threat actors, including North Korean groups, exploited Claude to fraudulently secure tech jobs, generate working ransomware code, and conduct automated cyberattacks against healthcare and government systems. Although Anthropic swiftly banned the malicious accounts and reinforced its safeguards, cybersecurity experts warn that the incident is a sobering sign of how rapidly AI is amplifying cyber threats.

The episode underscores the dual-use nature of AI technology: the same capabilities that enable innovation can also empower malicious actors, raising urgent questions about safety, governance, and international controls.
