DeepSeek, Hangzhou’s fast-rising AI company, carried out internal evaluations of its models for “frontier risks,” including capabilities such as self-replication and the potential for cyber-offensive behavior.
These evaluations have not been publicly disclosed in detail. The move comes as the Chinese government emphasizes the importance of assessing the risks AI might pose to public safety and social stability.
While companies like OpenAI and Anthropic publish their evaluations, DeepSeek and other Chinese firms have been more opaque about their findings. The timing suggests growing regulatory and public scrutiny of AI safety in China.




