Figure AI, the robotics startup founded four years ago by American entrepreneur Brett Adcock, has introduced Helix 02, a new software architecture that extends neural-network control to the entire mechanical structure of its humanoid robots. Unlike the previous system, which managed only upper-body manipulation (which is why Figure's robots appeared stationary in promotional videos), the update unifies walking, balance, and manual dexterity into a single AI-governed control flow, eliminating the need for separate controllers for locomotion and interaction.
To showcase Helix 02’s capabilities, Figure AI released a video of a Figure 03 unit loading a dishwasher while maneuvering between the cabinets and the appliance. Unlike a similar video released for the commercial launch of 1X’s Neo, which featured a teleoperated robot, Figure AI’s robot operates autonomously, according to Adcock’s company. The structural novelty is a new foundational layer called “System 0,” a foundation model for physical control trained on more than a thousand hours of human biomechanical data and advanced simulations.
Operating at 1 kHz, System 0 manages stability, ground contact, and body coordination, replacing more than one hundred thousand lines of C++ code with a single learned neural policy. In essence, this learned layer acts as a form of “digital muscle memory”: the robot instinctively “knows” how to move, drawing on a statistical model built from observation of human motion. Figure AI has abandoned the rigid control logic hand-written by engineers in favor of a fluid system that learns its actions rather than being explicitly programmed to execute them.
The System 0 layer works alongside the two modules already present in the first Helix release: System 1, which translates perception into motor commands at 200 Hz, and System 2, which handles semantic reasoning and natural-language understanding. The resulting architecture lets the robot process complex intentions and translate them instantly into fluid movements, maintaining balance even during dynamic tasks.
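Figure AI has not published Helix 02’s internals, so the sketch below is purely illustrative: the class names (System0, System1, System2), method signatures, and placeholder outputs are assumptions, used only to show how a three-layer stack running at different rates can be scheduled in one control loop, with the slow layer setting a goal, the 200 Hz layer refreshing a motor command, and the 1 kHz layer stabilizing the whole body on every tick.

```python
# Illustrative sketch only: Figure AI has not released Helix 02's code or APIs.
# All class names, signatures, and dimensions below are invented for this example.

import time
from dataclasses import dataclass

SYSTEM0_HZ = 1000  # fast whole-body loop (reported 1 kHz)
SYSTEM1_HZ = 200   # perception-to-motor-command loop (reported 200 Hz)


@dataclass
class Observation:
    vision: list
    proprioception: list
    touch: list


class System2:
    """Slow layer: semantic reasoning; turns a language instruction into a goal."""
    def plan(self, instruction: str) -> str:
        return f"goal:{instruction}"


class System1:
    """Mid-rate layer: maps observations plus the goal to a task-level motor command."""
    def act(self, obs: Observation, goal: str) -> list:
        return [0.0] * 4  # placeholder end-effector / torso targets


class System0:
    """Fast layer: learned whole-body policy; tracks the command while keeping balance."""
    def stabilize(self, obs: Observation, command: list) -> list:
        return [0.0] * 38  # placeholder joint targets for the whole body


def control_loop(robot_read, robot_write, duration_s=0.1):
    s2, s1, s0 = System2(), System1(), System0()
    goal = s2.plan("load the dishwasher")    # language-level intent, computed rarely
    command = None
    ticks_per_s1 = SYSTEM0_HZ // SYSTEM1_HZ  # refresh the mid-rate command every 5 ticks
    t_end = time.monotonic() + duration_s

    step = 0
    while time.monotonic() < t_end:
        obs = robot_read()                   # vision, proprioception, touch
        if step % ticks_per_s1 == 0:
            command = s1.act(obs, goal)      # 200 Hz: perception -> motor command
        joints = s0.stabilize(obs, command)  # 1 kHz: balance, contact, coordination
        robot_write(joints)
        step += 1
        time.sleep(1.0 / SYSTEM0_HZ)


if __name__ == "__main__":
    dummy_obs = Observation(vision=[], proprioception=[], touch=[])
    control_loop(lambda: dummy_obs, lambda joints: None)
```

The point of such a split, whatever Figure’s actual implementation looks like, is that the expensive reasoning runs rarely while the cheap learned stabilizer runs constantly, so balance never has to wait on language understanding.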
In practice, Helix 02 simultaneously processes visual, proprioceptive (joint angles, “muscle” tension, and movement speed), and tactile inputs to coordinate every joint, from the legs down to individual fingers. Leveraging the Figure 03 hardware, which includes fingertip tactile sensors and palm cameras, Helix 02 achieves an unprecedented level of fine manipulation, demonstrated in tasks such as extracting individual pills from disordered containers, dosing liquids precisely with a syringe, and handling objects occluded from view.
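Again, none of the following reflects code Figure AI has released; it is a hypothetical Python sketch of the fusion step described above, in which each modality (camera frames, joint state, fingertip touch) is encoded separately and concatenated into a single observation vector consumed by one whole-body policy. The encoder functions, array shapes, and sensor counts are invented for illustration.

```python
# Hypothetical sketch of multimodal fusion: every function, shape, and sensor count
# here is an assumption used only to illustrate one policy consuming all modalities.

import numpy as np

def encode_vision(rgb_frames: np.ndarray) -> np.ndarray:
    """Stand-in for a learned image encoder (e.g. a small CNN or ViT)."""
    return rgb_frames.reshape(-1)[:128].astype(np.float32)

def encode_proprioception(angles, velocities, torques) -> np.ndarray:
    """Joint angles, movement speed, and 'muscle' tension (torque) as one vector."""
    return np.concatenate([angles, velocities, torques]).astype(np.float32)

def encode_touch(fingertip_pressures: np.ndarray) -> np.ndarray:
    """Tactile readings from fingertip sensors."""
    return fingertip_pressures.reshape(-1).astype(np.float32)

def fuse_observation(rgb, angles, velocities, torques, touch) -> np.ndarray:
    """One flat observation vector fed to a single policy that outputs all joint targets."""
    return np.concatenate([
        encode_vision(rgb),
        encode_proprioception(angles, velocities, torques),
        encode_touch(touch),
    ])

# Example with made-up shapes: 2 palm cameras, 38 joints, 5 fingertip sensors per hand.
obs = fuse_observation(
    rgb=np.zeros((2, 64, 64, 3)),
    angles=np.zeros(38), velocities=np.zeros(38), torques=np.zeros(38),
    touch=np.zeros((2, 5)),
)
print(obs.shape)  # a single vector covering vision, proprioception, and touch
```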
These capabilities appear in Figure’s dishwasher-loading video, where the system navigates and manipulates seamlessly, using its entire body, including hips and feet, to close doors or stabilize the load, and corrects errors and unforeseen events in real time without human intervention. According to Figure AI, this mode of operation brings the platform closer to real autonomy in unstructured environments.
However, how Figure 03 running Helix 02 performs in a cluttered kitchen, or with unfamiliar appliances and layouts, remains unknown, a reminder that robotic autonomy and adaptability in everyday settings are still very much works in progress.
