DeepSeek publicly warned that its open-source AI models are at significant risk of jailbreak attacks, in which users bypass built-in safeguards to elicit unsafe or malicious content.
The disclosure highlights a growing tension in AI: while open-source models democratize innovation, they also pose distinct safety and security challenges. Cybersecurity experts fear such vulnerabilities could be exploited to spread disinformation, enable fraud, or produce politically sensitive content.
For enterprises, the warning reinforces hesitation about adopting DeepSeek's models despite their cost efficiency. It also underscores how questions of safety, trust, and governance remain unresolved in the race to scale generative AI globally.