March 25, 2026

Introducing the OpenAI Safety Bug Bounty program

OpenAI’s Safety Bug Bounty program rewards researchers for identifying AI safety risks like prompt injection and data leaks, aiming to prevent misuse and improve system reliability through community-driven vulnerability reporting.

OpenAI’s Safety Bug Bounty program focuses on identifying real-world risks in AI systems beyond traditional security flaws. It invites researchers and ethical hackers to report issues such as prompt injection, data exfiltration, and harmful agent behavior.

Submissions must demonstrate reproducible impact, and rewards scale with severity, with top payouts of up to $100,000. The program excludes low-impact jailbreaks and prioritizes vulnerabilities that could lead to misuse or user harm.

By collaborating with the broader security community, OpenAI aims to proactively detect risks, strengthen safeguards, and ensure safer deployment of AI technologies across its products.

