Musk's Computing Power Strategy Faces Doubts: Excess Capacity After Hoarding
Harrison Rolfes, a senior research analyst at PitchBook, told Axios that Musk has repeatedly overbuilt AI computing infrastructure: constructing at maximum scale while product-side demand fell short, leaving the excess capacity to be absorbed by competitors.
In 2024, xAI negotiated with Oracle for roughly $10 billion in server leases to train Grok 3. After the talks broke down over dissatisfaction with the pace of progress, xAI instead built the Colossus 1 data center in Memphis, and the Oracle GPU capacity freed up by the collapse was subsequently signed by OpenAI.
Additionally, Musk asked Nvidia to divert 12,000 H100 chips originally earmarked for Tesla to xAI and the X platform, delaying Tesla's autonomous-driving and Optimus projects by several months.
Source: Public Information
ABAB AI Insight
Musk has hoarded GPUs at an extremely aggressive pace since 2024: Colossus 1 was originally planned to exceed 100,000 cards, and chips were also diverted from Tesla to xAI. Yet Rolfes notes that actual utilization is only 11%, with Grok's user base far too small to fill the capacity, continuing Musk's "build first, optimize later" style.
On the capital side, Musk has highly centralized the computing resources of Tesla, xAI, and SpaceX, prioritizing Grok and xAI training. But repeated internal resource competition and capacity waste have allowed some GPUs to flow to competitors such as Anthropic through leasing and other channels.
This mirrors the overcapacity of Google's and Meta's early data-center buildouts, and Amazon AWS's early self-built centers that later opened for leasing: Musk's ecosystem is now transitioning from a "computing power frenzy" toward "capacity matching and external monetization."
Essentially, this is a story of capital concentration: through inter-company resource mobilization, Musk concentrated capital heavily on AI training, but with product growth lagging, part of the capacity has overflowed and been absorbed by outside competitors. Mechanically, the aggressive buildout bought short-term leadership while also handing competitors a low-cost expansion window.
ABAB News · Cognitive Law
The more aggressively computing power is built out, the greater the risk that the product side cannot absorb it. Hoarding chips before demand exists often turns the surplus into ammunition for competitors. The real contest is not who grabs computing power first, but who actually puts it to full use.