Flash News

Anthropic's Proposal to Expand Mythos Usage Rejected by White House

Anthropic proposed adding roughly 70 companies and institutions to the roster of organizations permitted to use its Mythos vulnerability detection tool, which, combined with the existing roughly 50, would bring the total to about 120. The White House rejected the proposal, citing security and computing power concerns.

Mythos was released in early April and is currently limited to testing by critical infrastructure companies, with no public release plans.

The White House is concerned that Anthropic lacks sufficient computing capacity and that adding commercial users could encroach on government resources. There is also dissatisfaction over Anthropic's hiring of several former Biden administration officials and its ties to liberal organizations; former researcher Collin Burns was removed outright from a government AI assessment position.

AI security and computing power companies are increasing their compliance investments with the government, shifting funding from commercial expansion to restricted critical-infrastructure testing. With its government collaboration stalled, Anthropic's setback benefits competitors with influence in the White House.

Source: Public Information

ABAB AI Insight

Anthropic had previously confined Mythos strictly to critical-infrastructure testing. The rejection of this expansion proposal continues its trajectory under the Trump administration: from being seen as "close to the Biden era" to struggling to rebuild political trust. Earlier, it signed computing power agreements with Amazon, Google, and Broadcom, but the new capacity has not yet come online.

In capital-strategy terms, Anthropic is attempting to grow enterprise customers and revenue by expanding Mythos authorization, but the White House prioritizes its own computing power needs and harbors doubts about personnel backgrounds, blocking the commercial expansion. The company is forced to divert resources to restoring government trust and retaining existing key clients.

As in earlier cases where OpenAI and Google faced restrictions on government access to their AI tools, Anthropic's AI security tooling is in a transition from "commercial testing expansion" to "strict government control plus computing power binding." Advisors such as David Sacks have publicly questioned its computing capacity multiple times.

At bottom, this reflects a regulatory shift: the traditional mechanism that let AI companies decide the commercialization scope of their own tools on their own is now subject to direct White House intervention in the name of national security and computing power priorities. Anthropic's political connections and current capacity have become core variables in the approval process, restructuring access to AI security tools from "enterprise-led promotion" to "dual approval of government computing power and trust."

ABAB News · 14d ago