In a significant move to strengthen safeguards around advanced AI systems, OpenAI has introduced the GPT-5.5 Bio Bug Bounty, a programme offering security researchers a $25,000 reward for bypassing the safety guardrails of its latest AI model, GPT-5.5.
OpenAI has unveiled GPT-5.5 as its most capable and intuitive AI model yet, and the restricted bounty programme invites vetted researchers to test whether the model can be pushed past safeguards designed to block dangerous biological content. The full $25,000 reward is reserved for a single 'universal jailbreak' that can bypass all five questions in the biosafety challenge.