OpenAI’s Safety Fellowship is a pilot program designed to support external researchers working on AI safety and alignment challenges. Running from September 2026 to February 2027, it provides fellows with financial support, mentorship from OpenAI researchers, and access to significant computing resources.
Fellows are expected to produce concrete outputs such as research papers, datasets, or benchmarks.
Key focus areas include robustness, misuse prevention, privacy, and scalable safety methods. By expanding collaboration beyond OpenAI, the program aims to strengthen global efforts to ensure advanced AI systems are developed and deployed safely.
