White House Floats Government Review Mechanism for AI Model Releases Amid Rapidly Shifting Signals
Kevin Hassett, Director of the National Economic Council, stated on Fox Business on Wednesday that the administration is exploring an executive order that would require AI models to undergo government review, similar to FDA drug approval, before going live.
Just two days ago, The New York Times reported that such a mechanism was "under consideration". The move toward stricter pre-release review marks a reversal: after taking office, Trump rescinded Biden's AI safety executive order. Yet within 24 hours the White House walked the comments back, saying Hassett's remarks had been taken out of context and emphasizing "partnership" rather than regulation.
Despite the softened tone, the White House is still pushing for intelligence agencies to conduct pre-release assessments of models. This week, the Commerce Department's Center for AI Standards and Innovation (CAISI) will expand its voluntary assessment agreements to include Google DeepMind, Microsoft, and xAI.
Source: Public Information
ABAB AI Insight
Trump previously signed an executive order banning federal agencies from using Anthropic products and publicly criticized its executives. The current review discussion was triggered by Anthropic's disclosure last month of Mythos's powerful vulnerability-discovery capabilities. It continues the "America First + National Security First" AI policy line, a sharp contrast with the voluntary agreements of the Biden era.
On the capital front, by expanding the CAISI voluntary assessment agreements and adding pre-release reviews by intelligence agencies, the White House is steering computing power and data resources toward approved companies, shifting funding from pure market competition onto a "safety compliance premium" track. The motivation is twofold: under competitive pressure from China and Russia, U.S. intelligence agencies gain priority access to new model capabilities, while approved entities such as OpenAI and Anthropic receive federal contracts and an endorsement of legitimacy.
There are precedents: the FDA's lengthy drug-approval process produced market-access disparities, and early export controls hit Huawei hard. The U.S. AI industry now sits at a critical transition from "innovation first" to "national security control".
Essentially, this is a regulatory regime change: pre-release assessments transfer pricing power from pure market competition to a national-security screening mechanism. Intelligence agencies get to study new capabilities in advance, creating an asymmetric advantage, while the language of "partnership" blunts industry backlash. Capital concentrates faster in the few AI platforms that meet government security standards, forming new barriers to entry.
ABAB News · Cognitive Law
The more regulation is justified in the name of security, the more valuable the pricing power of approved companies becomes.
When the government wants to regulate but not take responsibility, the ones who truly profit are always the first to get approved.