
From now on, the reward for “critical findings” can reach $100,000. The company is also launching a short-term incentive running until the end of April: doubled rewards for reports of IDOR (Insecure Direct Object Reference) vulnerabilities.
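To make the doubled-bounty category concrete, here is a minimal, hypothetical sketch of an IDOR flaw and its fix. All names and data are illustrative, not OpenAI code: an IDOR occurs when an endpoint looks up an object by a client-supplied identifier without checking that the requester actually owns it.

```python
# Hypothetical illustration of an IDOR (Insecure Direct Object Reference).
# All identifiers and data below are invented for this example.

RECORDS = {
    1: {"owner": "alice", "data": "alice's invoice"},
    2: {"owner": "bob", "data": "bob's invoice"},
}

def get_invoice_vulnerable(user: str, invoice_id: int) -> str:
    # IDOR: any authenticated user can read any invoice
    # simply by guessing its numeric id -- ownership is never checked.
    return RECORDS[invoice_id]["data"]

def get_invoice_fixed(user: str, invoice_id: int) -> str:
    # Fix: authorize access to the object, not just the request.
    record = RECORDS.get(invoice_id)
    if record is None or record["owner"] != user:
        raise PermissionError("not your invoice")
    return record["data"]
```

In this sketch, "alice" can fetch Bob's invoice through the vulnerable path but gets a `PermissionError` from the fixed one; real IDOR reports follow the same pattern against web APIs, just with session tokens and database rows instead of a dictionary.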
At the same time, OpenAI updated its cyber grants program with new priorities: AI agent security, model confidentiality, using AI to eliminate software flaws, and defense against sophisticated APT attacks. For the first time, the program offers micro-grants in the form of API credits, intended to speed up research. The company also signed a partnership agreement with SpecterOps, which will conduct realistic “red team” attacks across OpenAI’s infrastructure. The goal is to build security into the architecture and models from the earliest stages of development.
OpenAI’s bug bounty program was launched in April 2023, and the cyber grants program has been in operation for two years. During this period, 28 research initiatives have been funded — from prompt injection protection to secure code generation. New directions, such as agentic security, have emerged in response to the launch of the Operator and Deep Research agents.
OpenAI is sending a clear signal: the future of AI is impossible without strong cybersecurity. Involving hackers, academia, and startups in creating protection is not just a strategy, but a new standard for responsible AI development.