April 23, 2026

OpenAI launches GPT 5.5 Bio Bug Bounty

OpenAI launched a Bio Bug Bounty for GPT 5.5, inviting researchers to test safety systems. The program offers rewards up to $25,000 for identifying jailbreaks in sensitive biology scenarios.

OpenAI introduced a bio bug bounty program focused on improving the safety of its GPT 5.5 model by inviting external researchers to test its safeguards. The initiative targets vulnerabilities related to biological and chemical risks, encouraging participants to find “universal jailbreak” prompts that can bypass protections.

Rewards can reach up to $25,000 for successful findings, with participation limited and governed by strict agreements.

The program reflects a shift toward proactive safety testing, where companies rely on external experts to identify weaknesses before real-world misuse occurs, especially in high-risk domains like biosecurity and advanced AI capabilities.
