OpenAI Codex Security Plugin Launches with Five Major AppSec Workflows
The OpenAI team has launched the Codex Security plugin, integrating five application security workflows: Security Scan, Threat Model, Finding Discovery, Validation, and Attack Path Analysis.
The plugin can scan PRs, commits, branches, and entire repositories; build project-specific threat models; discover vulnerabilities such as authorization bypass, SSRF, and injection; validate findings through proof-of-concept (PoC) exploits and debugging traces; and generate attack path reports.
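To make concrete the kind of bug such context-aware scanning targets, here is a minimal SSRF example (my own illustration; the endpoint and parameter names are invented and do not come from Codex Security or any real codebase). A rule-based scanner sees only a generic URL fetch; an agent with repository context can trace the user-controlled parameter to the outbound request and confirm exploitability with a PoC request.

```python
# Hypothetical SSRF pattern for illustration; names are invented,
# not taken from Codex Security or any real project.
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/fetch-avatar")
def fetch_avatar():
    # VULNERABLE: 'url' is fully attacker-controlled, so a request like
    # /fetch-avatar?url=http://169.254.169.254/latest/meta-data/ can pull
    # cloud metadata from inside the server's network (classic SSRF).
    url = request.args.get("url", "")
    resp = requests.get(url, timeout=5)  # no allow-list, no scheme/host validation
    return resp.content

# A safer variant would validate the resolved host against an allow-list
# (e.g. a single known CDN domain) before making the outbound request.
```

A validated finding would pair the flagged line with a working PoC request, which is the kind of output the Validation workflow is described as producing.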
Developers and security teams are accelerating adoption of AI security tools, shifting budgets from traditional static scanning to context-aware agents. The OpenAI Codex ecosystem and the repositories it integrates with stand to benefit, while traditional AppSec tool vendors and triage workflows built around high false-positive rates come under pressure.
Source: Public Information
ABAB AI Insight
OpenAI previously tested Codex Security internally under the codename Aardvark, and this plugin release continues the evolution from a general-purpose code agent to a dedicated security agent. During internal testing in 2025, the tool discovered SSRF and cross-tenant vulnerabilities that were quickly fixed, and early precision results showed a significantly lower false-positive rate.
On the commercial side, OpenAI bundles Codex Security access with ChatGPT Pro/Enterprise subscriptions, selling AI compute and security scanning as a package. The strategic motive is to cut enterprise security labor costs through automated threat modeling and validation while locking developer traffic and data into the OpenAI platform ecosystem.
As with GitHub Copilot's code review extensions and Snyk's AI-enhanced scanning, AI-driven AppSec is in the mid-to-late stage of its transition from rule-based detection to context-aware agents, and large-model companies are rapidly tightening their control over developer security toolchains.
Essentially, this is a technological substitution: AI agents with full repository context replace manual threat modeling and validation. The mechanism pairs large-model inference with execution in isolated environments, which sharply reduces false positives. That shifts security pricing power from traditional scanning tools to AI platforms with full-stack understanding, and accelerates the concentration of industry capital toward infrastructure providers like OpenAI.
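As a rough sketch of that mechanism (entirely my own illustration; OpenAI has not published Codex Security's internals, and every name below is a hypothetical stand-in), the agent loop can be read as: propose candidate findings from repository context, attempt to reproduce each in a sandbox, and report only findings with a working PoC.

```python
# Illustrative sketch only; all names are hypothetical stand-ins,
# not the Codex Security API, which OpenAI has not published.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    kind: str   # e.g. "SSRF", "authz-bypass", "injection"
    poc: str    # candidate proof-of-concept payload

def propose_findings(repo_path: str) -> list[Finding]:
    # Stand-in for the LLM pass: in a real system this would read the
    # repository and its threat model, then emit candidate findings.
    return [
        Finding("app/avatar.py", "SSRF", "url=http://169.254.169.254/"),
        Finding("app/search.py", "injection", "q=' OR '1'='1"),
    ]

def reproduces_in_sandbox(finding: Finding) -> bool:
    # Stand-in for isolated execution: run the PoC against a sandboxed
    # copy of the app and report whether it actually triggers the bug.
    return finding.kind == "SSRF"  # toy result for demonstration

def scan(repo_path: str) -> list[Finding]:
    # Report only findings whose PoC reproduces; this validation step is
    # what suppresses the false positives of purely static scanners.
    return [f for f in propose_findings(repo_path) if reproduces_in_sandbox(f)]

if __name__ == "__main__":
    for f in scan("./repo"):
        print(f"CONFIRMED {f.kind} in {f.file} (PoC: {f.poc})")
```

The design point is the filter at the end: execution-backed validation, not better pattern matching, is what separates this class of agent from rule-based scanning.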
ABAB News · Cognitive Law
The more security scanning relies on context, the closer false positives approach zero, and manual review becomes the exception rather than the rule. When threat models are generated automatically by AI, human effort shifts from finding vulnerabilities to teaching the AI where the boundaries are. The earlier a team turns its security work into agents, the later it falls victim to vulnerabilities.