Solana Labs Co-founder Anatoly Yakovenko: AGI Has Not Arrived Yet, But Its Capabilities Are Approaching
Anatoly Yakovenko, co-founder of Solana Labs, said that Artificial General Intelligence (AGI) "has not arrived yet, but is very close," describing the current moment as a liminal state: the system is not yet universally recognized as AGI, yet it is approaching human-level capability in many practical respects. Similar views have surfaced in recent English-language research and investment reports; Sequoia, for example, published an analysis of the next generation of AI agents under the blunt title "This is AGI," arguing that long-horizon, multi-step autonomous agents are functionally close to a working definition of general intelligence.
Institutional timelines have also moved markedly earlier: AIMultiple's aggregation of nearly 9,800 expert predictions shows, in its latest update, roughly a 10% probability of achieving AGI by 2026 and a 50% probability by 2041, far earlier than mainstream expectations a decade ago; some entrepreneurs and researchers (including Elon Musk and certain lab founders) are betting more explicitly on the 2026–2030 window. Meanwhile, outlets such as The Atlantic have reported that some labs have already written "AGI trigger clauses" into their internal contracts and governance frameworks, even as outside observers remain deeply divided over whether current systems already qualify as AGI.
Source: Public Information
ABAB AI Insight
Yakovenko's statement highlights a key shift in industry consensus from "AGI is far away" to "AGI may be at the door, but we have not agreed on a definition." On one side, Sequoia and some entrepreneurs and investors openly claim that AGI has functionally arrived; on the other, aggregate forecasts such as MIT roadmaps and AIMultiple's surveys still assign more cautious medium- to long-term probability distributions. This divergence in perception will directly affect capital allocation: when a significant share of decision-makers believe AGI is very close, they will price in extreme scenarios in advance, including workforce restructuring, competition for computing power, and the risk of sudden regulatory halts.
From the perspective of the technological trajectory, what is "approaching" is not abstract intelligence but the commodifiable ability to execute general tasks: multimodal capability, tool invocation, long-horizon agents, and self-improvement architectures are rapidly being combined, nearing the threshold of being cross-domain, transferable, and self-iterating. Once these abilities are packaged as scalable "intelligent infrastructure," whether AGI is philosophically valid becomes a secondary question: economic and power structures will reorganize ahead of the concept, which is why some venture capitalists simply label the 2026–2030 investment window "AGI."
At the financial and institutional level, AGI expectations are already repricing assets: computing power, semiconductors, robotics, data centers, and security-related stocks are commanding premiums, while labor-intensive industries exposed to cognitive automation face downward pressure on long-term valuation multiples. Meanwhile, competition among sovereign states and large platforms for control over models, computing power, and data is accelerating, and regulators have begun embedding "AGI-related risk" language into their frameworks, treating it as a variable that could affect financial stability and national security rather than a purely technical issue.
Historically, the judgment that "AGI is approaching" suggests the endpoint of a new long technology cycle is being pulled nearer: each previous wave of general-purpose technology (steam, electricity, computing, the internet) reshuffled social classes and national orders by raising productivity, but AGI compresses that reshuffling into a far shorter timeframe. If the "intuitive timelines" of frontline technologists such as Yakovenko prove closer to reality, then the truly important question in the coming years will no longer be "Has AGI arrived?" but "What irreversible bets have capital, institutions, and individuals placed before its arrival is consensually recognized?"