Models
March 25, 2026

Introducing the OpenAI Safety Bug Bounty program

OpenAI’s Safety Bug Bounty program rewards researchers for identifying AI safety risks like prompt injection and data leaks, aiming to prevent misuse and improve system reliability through community-driven vulnerability reporting.

OpenAI’s Safety Bug Bounty program focuses on identifying real-world risks in AI systems beyond traditional security flaws. It invites researchers and ethical hackers to report issues such as prompt injection, data exfiltration, and harmful agent behavior.

Submissions must demonstrate reproducible impact, and rewards are based on severity, with top payouts reaching up to $100,000. The program excludes low-impact jailbreaks and prioritizes vulnerabilities that could lead to misuse or user harm.

By collaborating with the broader security community, OpenAI aims to proactively detect risks, strengthen safeguards, and ensure safer deployment of AI technologies across its products.

#
OpenAI

Read Our Content

See All Blogs
Gen AI

The Arm AGI CPU for agentic AI infrastructure just launched

Deveshi Dabbawala

March 31, 2026
Read more
Uncategorized

Stanford and MIT research reveals that "Agents of Chaos" are compromising scalable autonomous AI

Siddharth Menon

March 31, 2026
Read more