Flash News

Trump Administration Shifts to Proposed AI Pre-Review Mechanism

The Trump administration is shifting from its non-intervention stance on AI: it is discussing an executive order that would establish an AI task force to implement a government review mechanism for new AI models before their release.

The White House has briefed executives from Anthropic, Google, and OpenAI on the plans. The move was directly prompted by Anthropic's launch of the Mythos model last month, whose strong ability to identify software security vulnerabilities could trigger a major cybersecurity "cleansing."

After assessing the risks, Anthropic chose not to release the model publicly. The government plans to require "first access rights" to new models but would not block their eventual release. The policy is being championed by White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent.

Source: Public Information

ABAB AI Insight

The Trump administration previously kept intervention in AI to a minimum, and this shift continues the trajectory from complete laissez-faire toward national security-driven regulation. Earlier, the Pentagon cut off services with Anthropic over a $200 million contract dispute, leading to litigation; yet the NSA and the U.S. military's Maven system continue to use Anthropic technology, indicating that the government's actual reliance on cutting-edge AI far exceeds its public stance.

On the capital front, the White House is attempting to restore comprehensive cooperation by meeting with Dario Amodei while promoting the pre-review mechanism, embedding national security review into the commercial release process. The strategic motive is to mitigate the political risk of AI-driven cyberattacks while maintaining a supply-demand balance with leading labs and avoiding outright confrontation.

As with the Biden administration's AI security executive order and export-control pathways, or the EU AI Act's pre-assessment framework, U.S. AI regulation is in the early stages of moving from non-intervention to selective national security review, significantly strengthening the large labs' negotiating position with the government.

Essentially, this represents a regulatory regime change: cybersecurity risk will shift part of the control over AI releases from companies to government pre-review. The mechanism is driven by the potential destructiveness of high-capability models, which forces executive intervention. Pricing power shifts from pure market speed to a dual threshold of compliance and safety, accelerating the concentration of industry capital toward the few players that combine government relations with technological advantage.

ABAB News · Cognitive Law

The stronger a model's vulnerability-finding capability, the sooner government pre-review arrives; national security is always the ultimate accelerator of regulation. The longer non-intervention lasts, the more urgent the review becomes once the shift occurs: reliance and fear are two sides of the same coin. The more the government needs the labs, the harder their policies are to block outright; cooperation becomes the shared channel for both risk and benefit.
