March 23, 2026

Creating with Sora

OpenAI emphasizes safe creation with Sora through red teaming, content moderation, and safeguards against harmful outputs, aiming for responsible video generation while addressing risks such as misinformation, bias, and misuse.

OpenAI highlights a strong focus on safety while developing Sora, its AI video generation model. The company works with red teamers and domain experts to test risks such as misinformation, bias, and harmful content before wider release. It also builds safeguards to prevent misuse, including content moderation and restrictions on sensitive outputs.

By collaborating with artists, designers, and researchers, OpenAI gathers feedback to improve both usability and safety.

This approach is intended to let Sora support creative use cases while reducing the risks posed by realistic AI-generated videos to trust and authenticity in digital content.
