April 6, 2026

Introducing the OpenAI Safety Fellowship

OpenAI’s Safety Fellowship is a research program that funds external experts to study AI safety and alignment. Fellows receive mentorship, compute resources, and stipends to produce impactful safety research.

The fellowship is a pilot program running from September 2026 to February 2027. It provides fellows with financial support, mentorship from OpenAI researchers, and access to significant computing resources.

Participants are expected to produce meaningful outputs such as research papers, datasets, or benchmarks.

Key focus areas include robustness, misuse prevention, privacy, and scalable safety methods. The program aims to expand collaboration beyond OpenAI and to strengthen global efforts to ensure that advanced AI systems are developed and deployed safely.
