Flash News

Figure AI Demonstrates Two F.03 Robots Cleaning a Bedroom and Collaborating to Make a Bed

Figure AI released a demonstration in which two F.03 humanoid robots running the Helix-02 system autonomously tidied a bedroom — cleaning, opening doors, hanging clothes, and putting items away — and collaborated to make a bed in under 2 minutes.

The robots run a single vision-language-action (VLA) policy with no shared planner and no explicit communication channel: each robot infers the other's intent solely from its observed movements, allowing the pair to jointly handle large deformable objects such as sheets and blankets.

With Helix-02, Figure is advancing multi-robot coordination and pushing humanoid robots from staged demonstrations toward practical deployment in shared spaces such as homes and warehouses, reducing reliance on human labor and improving efficiency on complex tasks.

Source: Public Information

ABAB AI Insight

Figure AI previously demonstrated two robots collaborating to put away groceries in February 2025. The Helix-02 version extends this with longer task sequences, full-body dual-arm actions, and dynamic coordination in a bedroom scenario, continuing an iterative path from single-robot operation to multi-robot collaboration.

In terms of capital strategy, Figure concentrates its funding on end-to-end neural policy training and hardware optimization, rapidly improving the Helix model through real-world data feedback. This has attracted strategic partners such as BMW and paves the way for household service scenarios, with the aim of quickly bringing the cost of humanoid robots down to practical levels.

Much as Boston Dynamics moved from Atlas demonstrations toward commercialization and Tesla's Optimus staged early household demos, Figure is in the early stages of expanding humanoid robots from laboratory coordination to deployment in real shared spaces.

Structural judgment: Essentially a technological substitution. Helix-02 lets a single neural policy directly control multi-robot collaboration on complex household tasks; the mechanism — implicit communication through observed actions — replaces traditional planners and explicit messaging modules, significantly reducing deployment complexity and shifting human labor from repetitive household chores and light physical work toward AI supervisory roles.

ABAB News · Cognitive Law

Single-robot demos sell a concept; multi-robot collaboration sells a real replacement.
The physical world runs on no API; inferring intent from visible action is the ultimate interface.
The better robots understand tacit cooperation, the further humans can withdraw from repetitive labor.
