Flash News

"AI 2027" Forecast Review: Real Progress at About 65%, AI Self-Acceleration Severely Lagging

Steve Newman, co-founder of Google Docs and chairman of the AI think tank Golden Gate Institute, has published an item-by-item review of last year's widely discussed "AI 2027" forecast report.

His conclusion: the report's quantitative predictions are roughly 65% on track overall, driven mainly by benchmark progress and data-center construction. However, the most critical metric, the degree to which AI accelerates its own research and development, has reached only 17% of the predicted value. Newman argues that AI remains a "normal technology" for now, with the familiar problems of slow adoption and difficult enterprise integration still unresolved.

On the positive side: Anthropic's annual revenue is on track to grow nearly tenfold in 2025 and to triple again to $30 billion within the first three months of 2026; Claude Mythos was withheld from release because it could carry out attacks by exploiting vulnerabilities in mainstream operating systems and browsers; and GPT-5.5 can now operate autonomously for several days at a time.

Newman predicts that even under aggressive estimates, AI models will not begin to significantly accelerate their own research and development before 2026; under more conservative assumptions, the timeline of the "AI 2027" scenario could slip by a few years.

Interestingly, the "AI 2027" authors themselves have moved their timeline up: lead author Kokotajlo has brought forward his median prediction for the "automated programmer" milestone (AI fully replacing top software engineers) from the end of 2029 to mid-2028, and the team still considers the emergence of superintelligence within 18 months entirely possible.

Newman closes with the observation that today's dizzying pace of change is likely the slowest we will experience in our lifetimes.

Source: Public Information

ABAB AI Insight

As a long-time observer of AI deployment, Steve Newman in this review squarely identifies the divergence between surging benchmarks and compute on one side and lagging commercial deployment on the other. The explosive revenue growth at Anthropic and OpenAI, along with the attack capabilities of frontier models, shows that laboratory capability has significantly exceeded expectations, while the complexity of enterprise-grade deployment remains the main bottleneck.

On the capital side, leading laboratories are rapidly reinvesting huge revenues into a self-accelerating cycle (data, compute, RL), while traditional enterprises are still slowly digesting and absorbing the technology, creating a clear stratification: "frontier surge vs. traditional climb."

Essentially, this is a story of technological substitution: AI is advancing rapidly in benchmark and laboratory settings, but its substitution rate in complex real-world enterprise scenarios remains low. Capital is shifting from general compute expansion toward solving the last mile of deployment (integration, governance, security), driving AI's structural transformation from "showy tool" to "scalable productivity."

ABAB News · Cognitive Law

No matter how good the benchmark scores, they are just fireworks in the laboratory; enterprise deployment is the real battlefield. That the current pace of change is likely the slowest we will experience in our lifetimes may be scarier than any prediction. The true turning point for AI self-acceleration is not when models become smarter, but when they begin to efficiently help make themselves smarter.

Source: ABAB News