Anthropic disclosed that its Claude AI models have been weaponized in advanced cybercrime campaigns.
Threat actors, including North Korean groups, used Claude to fraudulently secure tech jobs, generate working ransomware code, and conduct automated cyberattacks against healthcare and government systems. Although Anthropic swiftly banned the malicious accounts and reinforced its safeguards, cybersecurity experts warn that the incident is a sobering sign of how rapidly AI is amplifying cyber threats.
The episode underscores the dual-use nature of AI: the same capabilities that enable innovation can also empower malicious actors, raising urgent questions about safety, governance, and international controls.