Codex Founder Tibo Resets Rate Limits for All Paid Plans
Codex founder Tibo announced that, to celebrate a week of strong progress, he has reset the rate limits for all paid Codex plans.
The move is meant to let users make fuller use of GPT-5.5 for building. Although the reset incurs additional costs, the team believes the "vibes are good" and that it is worth it.
Through market mechanisms, accelerating Codex usage among OpenAI's paid users concentrates developer and enterprise subscription spending on high-intensity use cases. OpenAI trades short-term compute consumption for user stickiness and feedback, while traditional coding tools and low-tier free plans come under pressure. Heavy AI developers and OpenAI both benefit from the increased product activity.
Source: Public Information
ABAB AI Insight
As the head of Codex engineering, Tibo has previously tested user feedback through minor rate-limit adjustments. Resetting limits across all paid plans continues his product philosophy of "letting developers build freely" and fits the growth strategy following Codex reaching "escape velocity" in early 2026, prioritizing long-term user loyalty over short-term marginal cost.
On the capital side, OpenAI is directing additional compute to paid users: temporarily lifting limits stimulates GPT-5.5 usage and task generation. Resources are shifting from conservative quota management toward high-activity conversion, with the strategic goal of accelerating Codex's penetration from a supporting tool to a primary development platform, while collecting more real-world usage data to optimize the next generation of models.
Similar precedents include GitHub Copilot's explosive paid-user growth after its early free period, and Anthropic's repeated temporary limit increases for Claude to stimulate enterprise adoption. Codex is currently in a rapid-growth phase, transitioning from rate-limited use to high-intensity autonomous agent usage.
At its core, this is capital concentration: AI coding infrastructure achieves user lock-in through temporarily opened compute. The mechanism is that, with the marginal cost of GPT-5.5-class model calls kept under control, a short-term "free lunch" strategy significantly boosts stickiness. Pricing power thus shifts from conservative resource allocation toward heavy paying users and the OpenAI platform, while the entire developer ecosystem's migration to AI-native workflows accelerates.