Menlo Ventures Partner Recommends In-Depth Analysis of World Models
Menlo Ventures partner Deedy Das described a post as the best reading on world models and one of the most important reads in AI.
Over the past 18 months, $10 billion has flowed into the "world models" field, with participation from figures like Yann LeCun and Fei-Fei Li. World models are expected to supply training data at scale, much as text corpora did for LLMs, supporting the scaling of robotics foundation models and addressing core challenges in robotics.
The post breaks down the five main features of world models, compares competing approaches, discusses applications both within and beyond robotics, identifies investment opportunities, and cites supporting research. The companies covered range from major products such as Google's Genie, Tesla's Optimus, and Nvidia's DreamDojo to pure-play world model and robotics foundation model companies like World Labs, Runway, and Physical Intelligence.
Market Mechanism: VCs and tech giants are the main parties injecting capital into world model startups, driven by demand for data to train robotics foundation models. Capital flows toward synthetic data generation, simulation platforms, and robotics infrastructure; world model companies and robotics developers benefit, while traditional approaches that rely on real-world data for robot training come under pressure.
Source: Public Information
ABAB AI Insight
Deedy Das, as a partner at Menlo Ventures, previously led early investments in Anthropic and in AI infrastructure projects like Goodfire, and has long focused on the intersection of generative models and embodied intelligence. He has frequently posted analyses on the role of Gaussian Splatting and world models in 3D scene reconstruction, lending an investor's perspective to this recommendation.
In terms of capital flow, the $10 billion moves primarily through VC funds into companies like World Labs (Fei-Fei Li), Runway, and Physical Intelligence, to build video generation systems, simulation environments, and embodied intelligence datasets. The motivation is to address the scarcity and high cost of real data in robot training: synthetic world models enable effectively unlimited parallel experiments, accelerating the transfer of LLM-style scaling laws to robotics.
Similar precedents include early investments in synthetic data and self-supervised learning during the LLM era (such as OpenAI's early work) and DeepMind's papers on using Veo-generated world models to train robots. Robotic AI currently sits at a critical transition point, moving from data scarcity toward simulation-driven training built on world models.
Structural Judgment: This is essentially a reconstruction of the industry value chain driven by technological substitution. World models shift pricing power from scarce real-world robot interaction data to infinitely scalable synthetic data platforms. The mechanism: video/3D generation technology reuses the LLM scaling playbook, reducing the cost of physical trial and error and shifting foundation model training from hardware dependence to compute and data dominance, thereby kicking off an exponential growth cycle for embodied intelligence.
ABAB News · Cognitive Law
The scarcer the data, the more valuable the simulation.
LLMs consume text, world models consume reality.
The next decade of technology will first buy its world with capital.