Flash News

NSA Uses Anthropic Claude Mythos Model Despite Pentagon Listing as Supply Chain Risk

The U.S. National Security Agency is using Anthropic's advanced AI model Claude Mythos Preview, even though Pentagon officials have designated Anthropic a supply chain risk. Sources say the NSA primarily uses the model to scan its own infrastructure for vulnerabilities, with indications of broader use across the Department of Defense.

The use contrasts with the Pentagon's formal designation of Anthropic, which arose from a breakdown in negotiations over usage restrictions, including limits on mass surveillance and autonomous weapons systems. Anthropic has previously sued over the supply-chain-risk label, and some court rulings have temporarily blocked the designation, but internal use continues.

Source: Public Information

ABAB AI Insight

This internal use exposes how national security agencies weigh advanced AI capabilities against institutional constraints in practice. Claude Mythos's performance in vulnerability discovery and automated attack-chain generation makes it a practical tool for scanning critical infrastructure, even amid political and contractual disputes at the official level. The NSA's choice indicates that, under pressure to strengthen cyber defense, technical efficacy can override formal risk labels, widening the gap between policy and practice across government departments.

From a broader structural perspective, this reflects the dual-positioning dilemma of cutting-edge AI models in the national security supply chain. Anthropic's model has been used for network classification and intelligence analysis; its cyber capabilities pose potential offensive risks while also conferring defensive advantages. The Pentagon wields the supply-chain-risk designation as negotiating leverage even as actual usage continues, highlighting the elastic limits of institutional constraints when set against productivity gains. Similar dynamics have historically been common in the militarization phase of new technologies, where the redistribution of power and capital over technological control tends to occur through internal exceptions rather than public policy.

In the long term, such events accelerate the shift in AI governance from contractual terms toward capability-based assessment. Differentiated control of model access by platform providers, combined with pragmatism inside government, will reshape how risk is priced and resources are allocated. Regardless of the ultimate legal outcome, the NSA's continued reliance signals that, under the pressure of geopolitical technology competition, institutional inertia is unlikely to fully block the absorption of high-performance tools, further reinforcing the central position of AI capabilities in the reassessment of the global power structure.

ABAB News