Shared from twixb · openai.com

GPT-5.5 Bio Bug Bounty

openai.com·Apr 23, 2026

OpenAI has launched a Bio Bug Bounty program for GPT-5.5, inviting researchers to identify a universal jailbreak that can bypass its bio safety challenge. The program offers a $25,000 reward for the first successful jailbreak; it runs from April 23 to June 22, 2026, with testing from April 28 to July 27, 2026.

The program specifically targets researchers with expertise in AI red teaming, security, or biosecurity, reflecting a growing focus on safety in biology-related AI applications. Researchers with relevant backgrounds are encouraged to apply.
