Introducing the OpenAI Safety Bug Bounty program

OpenAI Blog / 3/25/2026


Key Points

  • OpenAI has launched a Safety Bug Bounty program aimed at discovering and reporting AI safety issues and potential abuses.
  • The program targets multiple risk areas, including agentic vulnerabilities, prompt injection, and data exfiltration.
  • It is designed to surface real-world failure modes in AI systems by enabling external researchers to test and disclose findings.
  • The initiative signals OpenAI’s emphasis on proactive vulnerability research and community involvement in AI safety mitigation.

OpenAI launches a Safety Bug Bounty program to identify AI abuse and safety risks, including agentic vulnerabilities, prompt injection, and data exfiltration.