March 25, 2026

Introducing the OpenAI Safety Bug Bounty program

OpenAI’s Safety Bug Bounty program rewards researchers for identifying AI safety risks such as prompt injection and data leaks, aiming to prevent misuse and improve system reliability through community-driven vulnerability reporting.

OpenAI’s Safety Bug Bounty program focuses on identifying real-world risks in AI systems beyond traditional security flaws. It invites researchers and ethical hackers to report issues such as prompt injection, data exfiltration, and harmful agent behavior.

Submissions must demonstrate reproducible impact, and rewards scale with severity, with top payouts of up to $100,000. The program excludes low-impact jailbreaks, prioritizing vulnerabilities that could lead to misuse or user harm.

By collaborating with the broader security community, OpenAI aims to proactively detect risks, strengthen safeguards, and ensure safer deployment of AI technologies across its products.

