AI Safety and Regulation
September 16, 2025

DeepSeek evaluates AI models for ‘frontier risks’

DeepSeek has conducted internal assessments of “frontier risks” in its AI models, such as self-replication or cyber-offensive capacities, as Beijing pushes for more awareness of potential safety threats.

DeepSeek, Hangzhou’s fast-rising AI company, has carried out internal evaluations of its models for “frontier risks,” including capabilities such as self-replication and the potential for cyber-offensive behavior.

These evaluations are not publicly disclosed in detail. The move comes as the Chinese government emphasizes the importance of assessing risks AI might pose to public safety and social stability.

While companies like OpenAI and Anthropic publish their safety evaluations, DeepSeek and other Chinese firms have been more opaque about their findings. The timing suggests growing regulatory and public scrutiny of AI safety in China.
