OpenAI CEO Calls Anthropic's Marketing of New Safety Model Claude Mythos 'Fear-Based Marketing'
OpenAI CEO Sam Altman, in a rare move, directly criticized competitor Anthropic in an episode of tech journalist Ashlee Vance's Core Memory podcast, calling the marketing surrounding its new safety model Claude Mythos 'fear-based marketing.' He suggested that the true intent behind such narratives is to find the most effective justification for the idea that 'AI should only be in the hands of a select few trusted individuals.' He used the metaphor, 'We built a bomb that's about to drop on your head, but we'll sell you a $100 million bunker, as long as we choose you as a customer,' alluding to Mythos's closed and selective deployment path in high-risk cybersecurity scenarios.
Altman also acknowledged that cutting-edge models do pose 'reasonable and serious safety concerns,' but emphasized that safety risks should not be turned into a marketing tool for monopolizing access. He reiterated that OpenAI follows its own product and release rhythm on the question of 'how to keep powerful models relatively open.' In contrast, Anthropic has repeatedly emphasized in its system card and technical blog that Claude Mythos has surpassed most human experts at automated vulnerability discovery and at constructing attack chains against browsers and operating systems. On that basis, it is offering trial access only through Project Glasswing to a select few companies such as Apple, Amazon, and Microsoft, while calling on enterprises to strengthen their infrastructure ahead of the coming 'AI-driven wave of vulnerability discovery.'
Source: Public Information
ABAB AI Insight
Altman and Anthropic's debate over Mythos essentially represents a clash between two approaches to governing and commercializing cutting-edge models. One route (represented by Anthropic) emphasizes 'extremely high capability + extremely narrow distribution,' rationalizing a highly closed, whitelist-based customer supply model by describing the model as a kind of 'cyber weapon.' The other route (represented by OpenAI) attempts to hold a middle ground of 'high capability + relatively broad accessibility,' seeking a balance between safety controls and market scale. Altman calls the former narrative 'fear marketing,' essentially warning that once 'safety' is framed as a proprietary tool, it becomes a source of legitimacy for a few companies and governments to expand their technological control.
From an industry-structure perspective, the issue Mythos exposes is not that 'the model is too powerful,' but 'how safety dividends are distributed.' Anthropic's public materials show that Mythos has achieved near- or superhuman performance on benchmarks such as SWE-bench, USAMO, and Terminal-Bench in vulnerability discovery and exploitation, and has uncovered numerous high-risk flaws in complex codebases like Firefox. If this capability is held only by a very few laboratories and their selected corporate clients, then 'whoever first possesses cutting-edge models will first grasp systemic vulnerability intelligence and repair windows,' and safety capability itself begins to layer power and information asymmetry on top of the tech stack.
Altman's 'bomb + bunker' metaphor highlights an economic issue: the current cybersecurity AI route tends to evolve into a closed loop of creating risks, then selling protection. When companies are told that 'models capable of automatically discovering vulnerabilities at scale are imminent,' the only viable strategy is to purchase detection and repair services from the same camp, whose pricing power in turn depends on the laboratory's exclusive control over model capabilities and distribution rhythm. This transforms what should be part of public defense capability into highly profitable proprietary security products.
On a deeper level, this is an institutional struggle within the AI industry over whether general capabilities should be monopolized by a few. In its Mythos system card, Anthropic emphasizes 'initially small-scale defensive deployment, then exploring paths for safety openness,' echoing governance logic closer to that of nuclear technology and offensive cyber weapons; OpenAI, by contrast, tries to maintain a platform positioning of 'strong models + broad APIs,' avoiding locking itself into a narrative of 'only I can safely control this.' Whichever route gains the upper hand will determine whether future AI capabilities look more like global public infrastructure or like strategic assets tightly controlled by large companies and nations in the name of safety. This is far more than a public relations battle between two companies; it is a contest to pre-define the power boundaries of the entire AI era.