OpenAI introduced a bio bug bounty program aimed at improving the safety of its GPT-5.5 model by inviting external researchers to test its safeguards. The initiative targets vulnerabilities related to biological and chemical risks, encouraging participants to find “universal jailbreak” prompts that can bypass its protections.
Rewards reach up to $25,000 for successful findings, and participation is typically limited and governed by strict agreements.
The program reflects a shift toward proactive safety testing, in which companies rely on external experts to identify weaknesses before real-world misuse occurs, especially in high-risk domains such as biosecurity and advanced AI capabilities.